---
abstract: 'In this paper, a novel linear method for shape reconstruction is proposed based on the generalized multiple measurement vectors (GMMV) model. Finite difference frequency domain (FDFD) is applied to discretize Maxwell’s equations, and the contrast sources are solved for iteratively by exploiting the joint sparsity as a regularization constraint. The cross validation (CV) technique is used to terminate the iterations, such that the otherwise required estimation of the noise level is circumvented. The validity is demonstrated with transverse magnetic (TM) experimental data, and it is observed that, in terms of focusing performance, the GMMV-based linear method outperforms the extensively used linear sampling method (LSM).'
author:
- 'Shilong Sun[^1]'
- Bert Jan Kooij
- 'Alexander G. Yarovoy'
- 'Tian Jin[^2]'
bibliography:
- 'mybib.bib'
title: A Linear Method for Shape Reconstruction based on the Generalized Multiple Measurement Vectors Model
---
Introduction {#sec.Intro}
============
Inverse electromagnetic (EM) scattering is the procedure of recovering the characteristics of unknown objects from the scattered fields probed at a number of positions. In many real applications, such as geophysical surveys [@kuroda2007full; @ernst2007full; @virieux2009overview; @bleistein2013mathematics], it is of great importance to retrieve the geometrical features of a system of unknown targets.
For solving this problem, a wealth of reconstruction methods have been proposed over recent decades. Due to their high efficiency, linear focusing methods have been extensively used in real applications, among which are Kirchhoff migration [@schneider1978integral], the back-projection method [@munson1983tomographic], the time-reversal (TR) technique [@fink1993time; @fink2000time; @micolau2003dort; @yavuz2005frequency; @liu2007electromagnetic; @yavuz2008space; @fouda2012imaging; @bahrami2014ultrawideband; @fouda2014statistical], and so forth. However, as is well known, the imaging resolutions of the linear focusing algorithms are bounded by the diffraction limit [@zhang2013comparison]. As a variant of the TR technique, time-reversal multiple signal classification (TR MUSIC) [@devaney2005time; @marengo2006subspace; @marengo2007time; @ciuonzo2015performance] is capable of achieving a resolution much finer than the diffraction limit by exploiting the orthogonality of the signal and noise sub-spaces. The linear sampling method (LSM) [@colton1996simple; @colton1997simple] is a non-iterative inversion technique that finds an indicator function for each position in the region of interest (ROI) by first defining a far-field (or near-field [@fata2004linear]) mapping operator and then solving a linear system of equations. LSM has been proven to be effective for impenetrable scatterers and, in some cases, also applicable to dielectric scatterers [@arens2003linear]. As a matter of fact, LSM can also be reinterpreted, apart from very peculiar cases, as a “synthetic focusing” problem [@catapano2007simple] and, more interestingly, as an extension of the MUSIC algorithm [@cheney2001linear]. There is another group of iterative surface-based inversion methods, which first parametrize the shape of the scatterer and then optimize the parameters by minimizing a cost functional iteratively [@roger1981Newton]. The drawbacks of these methods are obvious.
Firstly, they require *a priori* information about the position and the quantity of the scatterers. More research on this point can be found in [@qing2003electromagnetic; @qing2004electromagnetic]. Secondly, it is intractable for them to deal with complicated non-convex objects. Quantitative inversion methods, such as contrast source inversion (CSI) [@kleinman1992modified; @kleinman1993extended; @kleinman1994two; @van1997contrast] and the (distorted) Born iterative methods (BIM and DBIM) [@wang1989iterative; @chew1990reconstruction; @li2004three; @gilmore2009comparison], can also be used for shape reconstruction. However, they are very time-consuming because the forward scattering problem needs to be solved in every iteration.
In this paper, a novel linear method using generalized multiple measurement vectors (GMMV) model [@van2010theoretical; @heckel2012joint] is proposed for solving the problem of shape reconstruction. Specifically, as the objects are illuminated by EM waves from various incident angles at different frequencies, the contrast sources, i.e., the multiplication of the contrast and the total fields, are distributed in the same region with the objects. Therefore, the problem is consequently formulated as a GMMV model, and the contrast sources can be retrieved by solving multiple systems of linear equations simultaneously. In our method, the sum-of-norm of the contrast sources is used as a regularization constraint to address the ill-posedness. Finite difference frequency domain (FDFD) [@W.Shin2013] is used to construct the scattering operator which enables simple incorporation of complicated background media, and the spectral projected gradient method, referred to as SPGL1 [@BergFriedlander:2008; @van2011sparse], is selected to estimate the contrast sources by solving a sum-of-norm minimization problem. Sparse scatterer imaging has been studied in [@oliveri2011bayesian], in which the single measurement vector (SMV) model was used, but the joint sparsity was not considered. The application of joint sparsity in the field of medical imaging has been reported in [@lee2011compressive], which is actually a hybridization of compressive sensing (CS) [@candes2006robust] and MUSIC based on a so-called generalized MUSIC criterion. In the aforementioned work, sparse targets (original or equivalently transformed) and their sparsest solutions are considered, and the problem of defining the best discretization grid and target number is critical for ensuring a level of sparsity that is recoverable. Equivalence principles have been considered in [@bevacqua2017shape] for reconstructing the boundary of dielectric scatterers. 
In this paper, we use the sum-of-norm as a regularization constraint and demonstrate that a regularized solution of the contrast sources is sufficient to recover the spatial profile of non-sparse targets. We consider only the transverse magnetic (TM) EM scattering problem, and we verify the validity of the proposed method with 2-D experimental data provided by the Institut Fresnel, France [@0266-5611-17-6-301; @geffrin2005free], for three distinct cases: metallic objects, dielectric objects, and a hybrid of both. Since the noise level is unknown in real applications, the cross validation (CV) technique [@ward2009compressed] is used to terminate the optimization process. Comparison of the inverted results indicates that the proposed method has a higher resolving ability than LSM.
The remainder of the paper is organized as follows: In Section \[sec.ProSta\], the problem statement is given. In Section \[sec.GMMVLinMethod\], the proposed GMMV-based linear method[^3] is introduced in detail. The validation of this method with experimental data is given in Section \[sec.ExpData\]. Finally, Section \[sec.Conclusion\] ends this paper with our conclusions.
Problem Statement {#sec.ProSta}
=================
For the sake of simplicity, we consider the 2-D TM-polarized EM scattering problem. A bounded, simply connected, inhomogeneous background domain $\mcD$ contains the unknown objects. The domain $\mcS$ contains the sources and receivers. The sources are denoted by the subscript $p$ with $p\in\{1,2,\dots,P\}$, and the receivers are denoted by the subscript $q$ with $q\in\{1,2,\dots,Q\}$. We use a right-handed coordinate system, and the unit vector in the invariant direction points out of the paper.
Assume the background is known to a reasonable accuracy beforehand, and the permeability of the background and unknown objects is a constant, denoted by $\mu_0$. The contrast corresponding to the $i$-th frequency, $\chi_i$, is defined as $\chi_i = \epsilon_i-\epsilon^{\text{bg}}_i$, where $\epsilon_i=\varepsilon-\rmi\sigma/\omega_i$ and $\epsilon^{\text{bg}}_i=\varepsilon^{\text{bg}}-\rmi\sigma^{\text{bg}}/\omega_i$ are the complex permittivity of the inversion domain with and without the presence of the targets, respectively. Here, $\varepsilon$ and $\varepsilon^{\text{bg}}$ are the permittivity of the inversion domain with and without the presence of the targets, respectively; $\sigma$ and $\sigma^{\text{bg}}$ are the conductivity of the inversion domain with and without the presence of the targets, respectively; $\omega_i$ is the $i$-th angular frequency; $\rmi$ represents the imaginary unit. The time factor used in this paper is $\exp(\rmi\omega_i t)$. For 2-D TM-polarized scattering problems, the electric field is a scalar and the scattering wave equation with respect to the scattered fields can be easily derived from Maxwell’s equations, which is given by $$\label{eq.CSI.E}
-\nabla^2 E_{p,i}^{{\text{sct}}}-k_i^2 E_{p,i}^{{\text{sct}}}=\omega_i^2\mu_0 J_{p,i},\quad p=1,2,\dots,P, \quad i = 1,2,\dots,I,$$ where, $\nabla^2$ is the Laplace operator, $k_i=\omega_i\sqrt{\epsilon^{\text{bg}}_i\mu_0}$ is the $i$-th background wavenumber, $J_{p,i} = \chi_i E_{p,i}^{{\text{tot}}}$ is the $p$-th contrast source at the $i$-th frequency, and $E_{p,i}^{{\text{sct}}}$ and $E_{p,i}^{{\text{tot}}}$ are the scattered electric field and the total electric field at the $i$-th frequency, respectively. The inverse scattering problems discussed in this paper aim to retrieve the geometrical features of the scatterers from a set of scattered field measurements.
The GMMV-based Linear Method {#sec.GMMVLinMethod}
============================
The GMMV Formulation {#subsec.GMMVFor}
--------------------
Following the vector form of the FDFD scheme in [@W.Shin2013], we discretize the 2-D inversion space with $N$ grids and recast the scattering wave equation into the following matrix formalism $$\label{eq.FD-CSI.eq}
\mA_i\ve_{p,i}^{\text{sct}}=\omega_i^2\vj_{p,i}, \quad p=1,2,\dots,P, \quad i = 1,2,\dots,I,$$ where $\mA_i\in\mbC^{N\times N}$ is the FDFD stiffness matrix of the $i$-th frequency, which is highly sparse; $\ve_{p,i}^{\text{sct}}\in\mbC^{N}$ and $\vj_{p,i}\in\mbC^{N}$ are the scattered electric field and the contrast source in the form of a column vector, respectively. Obviously, the solution to Eq. can be obtained by inverting the stiffness matrix $\mA_i$, i.e., $\ve_{p,i}^{\text{sct}}=\mA_i^{-1}\omega_i^2\vj_{p,i}$. For the inverse scattering problems discussed in this paper, the scattered fields are measured with a number of receivers at specified positions, yielding the data equations given by $$\label{eq.data}
\vy_{p,i}=\bm{\Phi}_{p,i} \vj^{\text{ic}}_{p,i},\quad p=1,2,\dots,P, \quad i = 1,2,\dots,I,$$ where, $\bm{\Phi}_{p,i} = \mcM^\mcS_p \mA_i^{-1}\omega_i\in\mbC^{Q\times N}$ is the sensing matrix for the measurement $\vy_{p,i}$, $\vj^{\text{ic}}_{p,i}=\omega_i\vj_{p,i}$ is the normalized contrast source proportional to the induced current $\rmi\omega_i\mu_0\vj_{p,i}$. Here, $\mcM^{\mcS}_p$ is a measurement matrix selecting the values of the $p$-th scattered field at the positions of the receivers.
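To make the discretized data equation concrete, the following Python sketch assembles a toy FDFD-style system and evaluates $\vy_{p,i}=\bm{\Phi}_{p,i}\vj^{\text{ic}}_{p,i}$ for a single source and frequency. It is only illustrative: the grid, frequency, and receiver layout are hypothetical, the background is free space, and Dirichlet walls stand in for the absorbing boundary (PML) of a real FDFD solver.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fdfd_stiffness(nx, ny, h, k):
    """Assemble -laplacian - k^2 I with a 5-point stencil on an nx-by-ny
    grid of spacing h (Dirichlet walls; a real FDFD code would use a PML)."""
    def lap1d(n):
        return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    lap = sp.kron(sp.eye(ny), lap1d(nx)) + sp.kron(lap1d(ny), sp.eye(nx))
    return (-lap - k**2 * sp.eye(nx * ny)).tocsc()

nx = ny = 20                      # hypothetical 20 x 20 inversion grid
h = 2.5e-3                        # 2.5 mm cells, as in the experiments
omega = 2 * np.pi * 2e9           # one frequency (2 GHz)
k = omega / 3e8                   # free-space background wavenumber
A = fdfd_stiffness(nx, ny, h, k)

# Measurement matrix selecting the field at Q receiver cells.
rx = np.arange(0, nx * ny, nx)    # hypothetical receiver positions
Q = len(rx)
M = sp.csr_matrix((np.ones(Q), (np.arange(Q), rx)), shape=(Q, nx * ny))

# A point-like contrast source in the middle of the grid.
j = np.zeros(nx * ny)
j[nx * (ny // 2) + nx // 2] = 1.0
j_ic = omega * j                  # normalized contrast source

# Data equation: y = M A^{-1} omega^2 j = Phi j_ic with Phi = M A^{-1} omega.
y = M @ spla.spsolve(A, omega * j_ic)
print(y.shape)
```

Note that $\bm{\Phi}_{p,i}$ is never formed explicitly here; applying it amounts to one sparse solve with the stiffness matrix followed by the receiver selection.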
In the rest of this subsection, a GMMV model [@heckel2012joint] is constructed and solved by exploiting the joint sparsity of the normalized contrast sources. In doing so, the contrast sources can be well estimated by solving a sum-of-norm minimization problem and consequently be used to indicate the shape of the scatterers. To this end, we reformulate the data equations, Eq. , as $$\label{eq.linearmodel}
\mY = \bm{\Phi} \cdot \mJ + \mU$$ where $$\mY =
\begin{bmatrix}
\vy_{1,1} & \vy_{2,1} & \dots & \vy_{P,1} & \vy_{1,2} & \dots & \vy_{P,I}
\end{bmatrix},$$ $$\mJ =
\begin{bmatrix}
\vj^{\text{ic}}_{1,1} & \vj^{\text{ic}}_{2,1} & \dots & \vj^{\text{ic}}_{P,1} & \vj^{\text{ic}}_{1,2} & \dots & \vj^{\text{ic}}_{P,I}
\end{bmatrix},$$ and $\bm{\Phi} \cdot \mJ$ is defined by $$\bm{\Phi} \cdot \mJ =
\begin{bmatrix}
\bm{\Phi}_{1,1}\vj^{\text{ic}}_{1,1} & \bm{\Phi}_{2,1}\vj^{\text{ic}}_{2,1} & \dots & \bm{\Phi}_{P,I}\vj^{\text{ic}}_{P,I}
\end{bmatrix},$$ and, correspondingly, $\bm{\Phi}^H \cdot \mY$ is defined as $$\bm{\Phi}^H\cdot\mY=
\begin{bmatrix}
\bm{\Phi}^H_{1,1}\vy_{1,1} & \bm{\Phi}^H_{2,1}\vy_{2,1} & \dots & \bm{\Phi}^H_{P,I}\vy_{P,I}
\end{bmatrix}.$$ Here, $\mY\in\mbC^{Q\times PI}$ is the measurement data matrix, and the columns of $\mJ\in\mbC^{N\times PI}$ are the multiple vectors to be solved for. $\mU\in\mbC^{Q\times PI}$ represents complex additive noise satisfying a certain probability distribution. It is worth noting that, for a single-frequency inverse scattering problem, if the positions of the receivers are fixed, i.e., $\bm{\Phi}_{1,1} = \bm{\Phi}_{2,1} = \cdots=\bm{\Phi}_{P,1}$, Eq.  reduces to the standard multiple measurement vectors (MMV) model [@van2010theoretical].
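The column-wise products $\bm{\Phi}\cdot\mJ$ and $\bm{\Phi}^H\cdot\mY$ defined above can be transcribed directly; in the sketch below the sizes and the random complex matrices are placeholders for the actual FDFD sensing matrices.

```python
import numpy as np

def gmmv_forward(phis, J):
    """Phi . J: the k-th column of the result is Phi_k applied to the
    k-th column of J (one sensing matrix per source/frequency pair)."""
    return np.stack([phi @ J[:, k] for k, phi in enumerate(phis)], axis=1)

def gmmv_adjoint(phis, Y):
    """Phi^H . Y: the matching column-wise adjoint."""
    return np.stack([phi.conj().T @ Y[:, k] for k, phi in enumerate(phis)],
                    axis=1)

rng = np.random.default_rng(0)
N, Q, P, I = 50, 10, 4, 2                      # hypothetical sizes
phis = [rng.standard_normal((Q, N)) + 1j * rng.standard_normal((Q, N))
        for _ in range(P * I)]                 # one matrix per column
J = rng.standard_normal((N, P * I)) + 1j * rng.standard_normal((N, P * I))

Y = gmmv_forward(phis, J)                      # Q x PI data matrix
G = gmmv_adjoint(phis, Y)                      # N x PI back-projection
print(Y.shape, G.shape)
```

When all sensing matrices coincide, `gmmv_forward` reduces to the ordinary matrix product of the MMV model.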
Guideline of the Measurement Configuration {#subsec.GuiMeaConf}
------------------------------------------
Although the joint sparsity is used in this paper as a regularization constraint, an investigation of the uniqueness condition is still of great importance for two reasons: 1) it is of great interest to know how much we can benefit from a joint recovery; 2) it provides a guideline for the measurement configuration so that we can make the most of the joint processing.
According to the work of Chen and Huo [@chen2006theoretical] and Davies and Eldar [@davies2012rank], a necessary and sufficient condition for the measurements $\mY=\mA\mX$ to uniquely determine the row-sparse matrix $\mX$ is that $$|\text{supp}(\mX)|<\frac{{\text{spark}}(\mA)-1+{\text{rank}}(\mX)}{2},$$ where, ${\text{supp}}(\mX)$ denotes the index set corresponding to the non-zero rows of matrix $\mX$, $|{\text{supp}}(\mX)|$ denotes the cardinality of ${\text{supp}}(\mX)$, and the spark of a given matrix is defined as the smallest number of columns that are linearly dependent. Thereafter, Heckel and Bölcskei studied the GMMV problem and showed that having different measurement matrices can lead to a performance improvement over the standard MMV case [@heckel2012joint]. The above work on the uniqueness condition implies, specifically for our method, that in order to make the most of the joint processing, the number of columns of matrix $\mJ$ should be larger than the number of measurements, i.e., $P\times I>Q$. Moreover, with the same measurement configuration, the inversion performance can be further improved by exploiting the frequency diversity even for the case of $P<Q$. The latter is further demonstrated in Subsection \[subsec.Die\].
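The uniqueness condition can be checked numerically on a toy example. The brute-force `spark` below is only feasible for very small matrices and the random matrices are hypothetical, but the example illustrates how joint recovery (rank larger than one) relaxes the bound compared with a single measurement vector:

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (brute force,
    exponential cost: only for tiny illustrative matrices)."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1

def uniquely_determined(A, X):
    """Chen-Huo / Davies-Eldar condition:
    |supp(X)| < (spark(A) - 1 + rank(X)) / 2."""
    support = np.flatnonzero(np.linalg.norm(X, axis=1) > 0)
    return len(support) < (spark(A) - 1 + np.linalg.matrix_rank(X)) / 2

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 8))                 # generic 4 x 8: spark = 5
X = np.zeros((8, 3))
X[[1, 4, 6], :] = rng.standard_normal((3, 3))   # 3 nonzero rows, rank 3

print(uniquely_determined(A, X))                # joint recovery: bound holds
print(uniquely_determined(A, X[:, :1]))         # single vector: bound fails
```

With three nonzero rows, the joint (rank-3) case satisfies $3 < (5-1+3)/2$, while the single-vector (rank-1) case violates $3 < (5-1+1)/2$, which is exactly the benefit of joint recovery quantified by the condition.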
Spectral Projected Gradient L1 method (SPGL1) {#subsec.SPGL1}
---------------------------------------------
### GMMV basis pursuit denoise (BP$_\sigma$) problem
Supposing the noise level is known beforehand, the approach to finding the multiple vectors is based on solving a convex optimization problem (referred to as the GMMV (BP$_\sigma$) problem), which can be written as follows $$\text{minimize}\quad \kappa(\mJ)\quad \text{subject to}\quad \|\bm{\Phi} \cdot \mJ - \mY\|_F\leq \tilde\sigma,$$ where, $\tilde\sigma$ represents the noise level and $\kappa(\mJ)$ is the mixed ($\alpha,\beta$)-norm defined as $$\|\mJ\|_{\alpha,\beta}:=\left(\sum_{n=1}^N\left\|\mJ_{n,:}^T\right\|_\beta^\alpha\right)^{1/\alpha},$$ where, $\mJ_{n,:}$ denotes the $n$-th row of $\mJ$; $\|\cdot\|_\beta$ is the conventional $\beta$-norm; $(\cdot)^T$ is the transpose operator; and $\|\cdot\|_F$ is the Frobenius norm, which is equivalent to the mixed (2,2)-norm $\|\cdot\|_{2,2}$. In this paper, we select the mixed norm $\|\cdot\|_{1,2}$ as the regularization constraint. Although $\|\cdot\|_{1,2}$ tends to enforce the row sparsity of the matrix $\mJ$, sparsity is not a premise for this approach. The key point is the utilization of the joint structure for improving the focusing ability. As demonstrated in the following experiments, this approach is able to image objects that are not sparse by exploiting the frequency diversity.
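The mixed $(\alpha,\beta)$-norm transcribes directly into NumPy; the toy data below also checks that the mixed (2,2)-norm coincides with the Frobenius norm, as stated above.

```python
import numpy as np

def mixed_norm(J, alpha, beta):
    """Mixed (alpha, beta)-norm: the beta-norm of every row of J, then
    the alpha-norm of the resulting vector of row norms."""
    row_norms = np.linalg.norm(J, ord=beta, axis=1)
    return np.linalg.norm(row_norms, ord=alpha)

rng = np.random.default_rng(2)
J = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))

frob = np.linalg.norm(J, 'fro')
print(np.isclose(mixed_norm(J, 2, 2), frob))   # (2,2)-norm = Frobenius norm
print(mixed_norm(J, 1, 2) >= frob)             # the (1,2) regularizer dominates it
```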
### Multiple GMMV Lasso (LS$_\tau$) problems
Since it is not straightforward to solve the GMMV (BP$_\sigma$) problem, we consider the GMMV (LS$_\tau$) problem formulated as [@BergFriedlander:2008] $$\label{eq.LSMMV}
\text{minimize}\quad \left\|\bm{\Phi}\cdot\mJ - \mY\right\|_F\quad \text{subject to}\quad \left\|\mJ\right\|_{1,2} \leq \tau.$$ The GMMV (LS$_\tau$) problem is equivalent to the GMMV (BP$_\sigma$) problem when $\tau = \tau_{\tilde\sigma}$. As the exact value of $\tau_{\tilde\sigma}$ is not available, a series of GMMV (LS$_\tau$) problems with different values of $\tau$ must be solved. Let us first introduce the Pareto curve, defined as $$\phi_{\text{GMMV}}(\tau) = \left\|\bm{\Phi}\cdot\mJ_{\tau}^{\text{opt}} - \mY\right\|_F,$$ where, $\mJ_{\tau}^{\text{opt}}$ is the optimal solution to the LS$_\tau$ problem given by Eq. . When the optimal solution $\mJ_{\tau_l}^{\text{opt}}$ to the GMMV (LS$_\tau$) problem is found, $\tau_l$ is updated to $\tau_{l+1}$ by probing the Pareto curve. The searching procedure is terminated when $\phi_{\text{GMMV}}(\tau)=\tilde\sigma$, at which point $\tau$ has reached $\tau_{\tilde\sigma}$.
### Updating the parameter $\tau$
As the Pareto curve is proven to be a non-increasing convex function, Newton iteration is used for updating the parameter $\tau$.
![Probing the Pareto curve: the update of parameter $\tau$.[]{data-label="fig:Pareto"}](Pareto_curve.pdf){height="0.45\linewidth"}
Specifically, $\tau$ is updated by $$\label{eq.tau.update}
\tau_{l+1} = \tau_l+\frac{\tilde\sigma-\phi_{{\text{GMMV}}}(\tau_l)}{\phi_{{\text{GMMV}}}'(\tau_l)},$$ where, $$\phi_{{\text{GMMV}}}'(\tau_l)=-\frac{\left\|\bm{\Phi}^H\cdot(\bm{\Phi}\cdot\mJ_{\tau_l}^{\text{opt}}-\mY)\right\|_{\infty,2}}{\left\|\bm{\Phi}\cdot\mJ_{\tau_l}^{\text{opt}}-\mY\right\|_F}.$$ Here, $\|\cdot\|_{\infty,2}$ is the dual norm of $\|\cdot\|_{1,2}$. The searching procedure is illustrated in Fig. \[fig:Pareto\]. Unless a good estimate of $\tau_{\tilde\sigma}$ can be obtained, we set $\tau_0=0$, yielding $\phi(0) = \|\mY\|_F$ and $\phi'(0) = -\|\bm{\Phi}^H\cdot\mY\|_{\infty,2}/\|\mY\|_F$. With Eq. , it follows immediately that the first Newton iterate is $$\tau_1=\frac{\left(\|\mY\|_F-\tilde\sigma\right)\|\mY\|_F}{\|\bm{\Phi}^H\cdot\mY\|_{\infty,2}}.$$ We refer to [@BergFriedlander:2008; @van2011sparse] for more details about SPGL1 and to [@sun2017ALinearModel] for its application to inverse scattering problems.
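The Newton probing of the Pareto curve can be illustrated on a synthetic curve. In practice every evaluation of $\phi_{\text{GMMV}}$ requires solving an LS$_\tau$ problem; here a hypothetical convex, non-increasing curve with a known root stands in for it:

```python
# Hypothetical Pareto curve standing in for phi_GMMV(tau): convex and
# non-increasing, with phi(19) = 0.5 as the exact root.
phi = lambda tau: 10.0 / (1.0 + tau)
dphi = lambda tau: -10.0 / (1.0 + tau) ** 2     # phi' < 0, as in the text

sigma = 0.5       # target noise level: we seek phi(tau) = sigma
tau = 0.0         # tau_0 = 0 when no estimate of tau_sigma is available
for _ in range(50):
    tau += (sigma - phi(tau)) / dphi(tau)       # the Newton update above
print(tau, phi(tau))
```

Because the curve is convex and non-increasing, the iterates approach the root monotonically from the left and converge quadratically once close, which is why the root finding needs only a handful of LS$_\tau$ solves in practice.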
CV-based Modified SPGL1 {#subsec.CVSPGL1}
-----------------------
In real applications, the termination condition $\phi_{\text{GMMV}}(\tau)=\tilde\sigma$ is not applicable, because the noise level, i.e., the parameter $\tilde\sigma$, is unknown in general. To deal with this problem, we modify the SPGL1 method based on the CV technique [@ward2009compressed]. In doing so, the estimation of the noise level is circumvented.
Specifically, we separate the original scattering matrix into a reconstruction matrix $\bm{\Phi}_{p,i,r}\in\mbC^{Q_r\times N}$ and a CV matrix $\bm{\Phi}_{p,i,\text{CV}}\in\mbC^{Q_{\text{CV}}\times N}$ with $Q = Q_r+Q_{\text{CV}}$. The measurement vector $\vy_{p,i}$ is separated accordingly into a reconstruction measurement vector $\vy_{p,i,r}\in\mbC^{Q_r}$ and a CV measurement vector $\vy_{p,i,\text{CV}}\in\mbC^{Q_{\text{CV}}}$. The reconstruction residual and the CV residual are defined as $$r_{\text{rec}} = \left(\sum_{i=1}^{I}\sum_{p=1}^{P}\left\|\vy_{p,i,r}-\bm{\Phi}_{p,i,r}\vj_{p,i}\right\|_2^2\right)^{1/2}$$ and $$r_{\text{CV}} = \left(\sum_{i=1}^{I}\sum_{p=1}^{P}\left\|\vy_{p,i,\text{CV}}-\bm{\Phi}_{p,i,\text{CV}}\vj_{p,i}\right\|_2^2\right)^{1/2},$$ respectively. In doing so, every iteration can be viewed as two separate parts: reconstructing the contrast sources by SPGL1 and evaluating the outcome by the CV technique. The CV residual curve starts to increase when the reconstructed signal starts to overfit the noise. The reconstructed contrast sources are selected as the output on the criterion that their CV residual is the smallest. To find the smallest CV residual, we initialize $\tilde\sigma$ as 0 and terminate the iteration when $$\label{eq.TerCond}
N_{\text{Iter}} > N_{\text{opt}} + \Delta N$$ is satisfied. Here, $N_{\text{Iter}}$ is the current iteration number, and $N_{\text{opt}}$ is the iteration index corresponding to the smallest CV residual, i.e., the optimal solution. Namely, the CV residual is identified as the smallest one if it keeps increasing monotonically for $\Delta N$ iterations. In the following experimental examples, we set $\Delta N = 30$.
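The stopping criterion above amounts to tracking the running minimum of the CV residual; a minimal sketch with a synthetic residual curve (the curve shape is illustrative only):

```python
import numpy as np

def should_stop(cv_residuals, delta_n=30):
    """Stop once delta_n iterations have passed since the smallest CV
    residual observed so far; also return that optimal index."""
    n_opt = int(np.argmin(cv_residuals))
    return len(cv_residuals) - 1 > n_opt + delta_n, n_opt

# Synthetic CV residual curve: it decreases while the reconstruction
# improves and turns upward once the iterate starts to fit the noise.
curve = [np.exp(-0.1 * n) + 0.002 * max(0, n - 52) for n in range(200)]

for n_iter in range(len(curve)):
    stop, n_opt = should_stop(curve[: n_iter + 1], delta_n=30)
    if stop:
        break
print(n_iter, n_opt)    # stops delta_n + 1 iterations after the optimum
```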
Once the normalized contrast sources are recovered, one can obtain the shape of the scatterers, defined as $$\label{eq.GMMVimage}
\vgamma_{{\text{GMMV}},n} = \sum_{i=1}^I\sum_{p=1}^P\left|\vj^{\text{ic}}_{p,i,n}\right|^2,\quad n=1,2,\dots,N,$$ where $\vj^{\text{ic}}_{p,i,n}$ and $\vgamma_{{\text{GMMV}},n}$ represent the $n$-th element of vector $\vj^{\text{ic}}_{p,i}$ and $\vgamma_{{\text{GMMV}}}$, respectively.
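The shape indicator is simply the squared row norms of the recovered $\mJ$; a minimal sketch with a synthetic row-sparse solution (support cells chosen arbitrarily):

```python
import numpy as np

def gmmv_image(J):
    """gamma_n = sum over all P*I columns of |j^ic_{p,i,n}|^2, i.e. the
    squared 2-norm of the n-th row of J."""
    return np.sum(np.abs(J) ** 2, axis=1)

rng = np.random.default_rng(3)
N, PI = 100, 12                          # hypothetical grid and column count
J = np.zeros((N, PI), dtype=complex)
support = [10, 11, 12, 40]               # cells occupied by the scatterer
J[support] = rng.standard_normal((4, PI)) + 1j * rng.standard_normal((4, PI))

gamma = gmmv_image(J)
print(np.sort(np.argsort(gamma)[-4:]))   # the occupied cells dominate the image
```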
Validation with Experimental Data {#sec.ExpData}
=================================
In order to validate the proposed GMMV-based linear method, we applied it to the experimental databases provided by the Remote Sensing and Microwave Experiments Team at the Institut Fresnel, France, in 2001 [@0266-5611-17-6-301] and 2005 [@geffrin2005free]. Three different cases were considered: dielectric scatterers, metallic scatterers (convex and non-convex), and a hybrid of both. In order to guarantee the accuracy of the FDFD scheme, the inversion domain is discretized with square grid cells of size $\Delta\times\Delta$ satisfying $$\label{eq.meshinv}
\Delta\leq\frac{\min\{\lambda_i\}}{15},\quad i = 1,2,\dots,I,$$ where, $\lambda_i$ is the wavelength of the $i$-th frequency.
We have also processed the same data with LSM for comparison. Since the background of the experiments is free space and only TM waves are considered, the LSM method consists in solving the integral equation for the indicator function $g_i(\vx_s,\vx_t)$ at the $i$-th frequency $$\label{eq.LSM}
\int E_i(\vx_r,\vx_t)g_i(\vx_s,\vx_t)d\vx_t = \frac{\omega_i\mu_0}{4}H_0^{(1)}(-k_i\|\vx_s-\vx_r\|_2),$$ where, $E_i(\vx_r,\vx_t)$ is the scattered field probed at $\vx_r$ corresponding to the transmitter at $\vx_t$ and the $i$-th frequency. Here, $\vx_s$ is the sampling point in the inversion domain, $H_0^{(1)}(\cdot)$ is the Hankel function of the first kind, $k_i$ is the wavenumber of the $i$-th frequency. Eq. can be reformulated as a set of systems of linear equations $$\label{eq.LSMEq}
\mF_i\vg_{i,\vx_s}=\vf_{i,\vx_s}, \quad i = 1,2,\dots,I,$$ where, $\mF_i$ is the measurement data matrix, $\vg_{i,\vx_s}$ is the indicator function of the sampling point $\vx_s$ in the form of a column vector, $\vf_{i,\vx_s}$ is the right-hand side of Eq.  in the form of a column vector, and the index $i$ represents the $i$-th frequency. Following the same approach of solving Eq.  as in [@catapano2007simple; @crocco2012linear], the indicator function $\vg_{i,\vx_s}$ is computed as $$\|\vg_{i,\vx_s}\|^2 = \sum_{d=1}^D\left(\frac{s_{i,d}}{s_{i,d}^2+a_i^2}\right)^2\left|\vu_d^H\vf_{i,\vx_s}\right|^2,$$ where, $s_{i,d}$ represents the singular value of matrix $\mF_i$ corresponding to the singular vector $\vu_d$; $(\cdot)^H$ is the conjugate transpose operator; $D=\min\{P,Q\}$; $a_i=0.01\times\underset{d}{\max}\{s_{i,d}\}$. The shape of the scatterers is defined by $$\label{eq.LSMimage}
\vgamma_{{\text{LSM}}}(\vx_s) = \frac{1}{\|\vg^{\text{MF}}_{\vx_s}\|^2},$$ where, $\|\vg^{\text{MF}}_{\vx_s}\|^2$ is a multi-frequency modified indicator defined as the average of the normalized modified ones computed at each frequency [@catapano2008improved] $$\left\|\vg^{\text{MF}}_{\vx_s}\right\|^2 = \frac{1}{I}\sum_{i=1}^I\frac{\left\|\vg_{i,\vx_s}\right\|^2}{\underset{\vx_s\in\mcD}{\max}\left(\|\vg_{i,\vx_s}\|^2\right)}.$$
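The filtered-SVD computation of $\|\vg_{i,\vx_s}\|^2$ for one sampling point can be sketched as follows; the data matrix and right-hand side are random placeholders for $\mF_i$ and $\vf_{i,\vx_s}$:

```python
import numpy as np

def lsm_indicator_sq(F, f):
    """||g||^2 for one sampling point via the filtered SVD of F, with
    the regularization a = 0.01 * (largest singular value) from the text."""
    U, s, _ = np.linalg.svd(F, full_matrices=False)   # D = min(P, Q) modes
    a = 0.01 * s.max()
    weights = (s / (s**2 + a**2)) ** 2
    return float(np.sum(weights * np.abs(U.conj().T @ f) ** 2))

rng = np.random.default_rng(4)
Q, P = 36, 18                                         # hypothetical sizes
F = rng.standard_normal((Q, P)) + 1j * rng.standard_normal((Q, P))
f = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)

g_sq = lsm_indicator_sq(F, f)
print(g_sq > 0)
```

Repeating this for every sampling point and frequency, averaging the normalized indicators, and taking the reciprocal yields the LSM image $\vgamma_{\text{LSM}}$ defined above.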
It is worth mentioning that both the normalized contrast sources and the indicator functions are proportional to the amplitude of the electric field. According to the definitions in Eq. and Eq. , $\vgamma_{{\text{GMMV}}}$ and $\vgamma_{{\text{LSM}}}$ are proportional and inversely proportional to the power of electric fields, respectively. Therefore, the dB scaling shown in the following examples is defined as follows $$\vgamma_{{\text{dB}}}=10\times \log_{10}\left(\frac{\vgamma}{\max\{\vgamma\}}\right).$$
Dielectric Scatterers {#subsec.Die}
---------------------
### Example 1
![Measurement configuration of the data-sets: *twodielTM\_8f*, *rectTM\_dece*, and *uTM\_shaped*. Blue: emitter; Green: reconstruction measurements; Red: CV measurements.[]{data-label="fig:DieConf"}](TAP2_FresnelConf1.pdf){height="0.45\linewidth"}
In the first example, we consider the *twodielTM\_8f* data-set provided in the first opus of the Institut Fresnel’s database [@0266-5611-17-6-301]. The targets consist of two identical circular cylinders, which are shown in Fig. \[fig:Die1\] (a). Both cylinders have a radius of 1.5 cm and a relative permittivity of $3\pm 0.3$. The emitter is placed at a fixed position on a circular rail of radius $720\pm3$ mm, while a receiver rotates around the center point of the vertical cylindrical target. The targets were rotated from 0$^{\circ}$ to 350$^{\circ}$ in steps of 10$^{\circ}$, and the receiver was rotated from 60$^{\circ}$ to 300$^{\circ}$ in steps of 5$^{\circ}$ on a rail of radius $760\pm3$ mm. Namely, we have $49$ $\times$ $36$ measurements at each frequency when all the measurements are finished. The measurement configuration is shown in Fig. \[fig:DieConf\], in which the 9 red circles represent the CV measurements and the 40 green ones represent the reconstruction measurements. The inversion domain is restricted to \[$-75$, 75\] $\times$ \[$-75$, 75\] mm$^2$, and the size of the discretization grids is 2.5 $\times$ 2.5 mm$^2$.
Let us first process the single-frequency data at 4 GHz by the GMMV-based linear method and the LSM method. The data matrix $\mF_i$ for LSM is a 72 $\times$ 36 matrix in which the data entries that are not available are replaced with zeros. The reconstruction residual curve and the CV residual curve are shown in Fig. \[fig:Die1CV\] (a), from which we see that the CV residual decreases before the 52nd iteration and starts to increase thereafter. The solutions at the turning point correspond to the optimal ones. In addition, the reconstruction residual corresponding to the turning point gives an estimate of the noise level, $\tilde\sigma \approx 0.05\|\mY\|_F$. Fig. \[fig:Die1\] (b) and Fig. \[fig:Die1\] (c) show the images achieved by the two methods at 4 GHz in a dynamic range of \[$-25$, $0$\] dB. As we can see, the GMMV image is clearer than the LSM image. However, there is obvious shape distortion in the former. Note that since $Q = 49$, $P=36$, and $I = 1$, we have $P\times I<Q$. Recalling the guideline of the measurement configuration discussed in Subsection \[subsec.GuiMeaConf\], the reconstruction performance can be further improved by exploiting the frequency diversity. It is worth mentioning that an obvious position mismatch between the true objects and the reconstructed result can be observed. The reason is very likely the minor displacement and tilt that occurred in the placement of the objects during this measurement, because the same phenomenon can also be observed in the inverted results reported in [@bloemenkamp2001inversion].
Now let us process the data at 2 GHz, 4 GHz, 6 GHz, and 8 GHz, simultaneously. The residual curves are shown in Fig. \[fig:Die1CV\] (b) and the reconstructed images are shown in Fig. \[fig:Die1\] (d) and Fig. \[fig:Die1\] (e). By comparison of Fig. \[fig:Die1\] (b) and Fig. \[fig:Die1\] (d), one can see that the reconstruction performance of the proposed GMMV-based linear method is improved by exploiting the frequency diversity. One can also observe that the GMMV-based linear method achieves lower sidelobes than LSM in the case of dielectric scatterers.
### Example 2
![Measurement configuration of the data-sets: *FoamDieIntTM* and *FoamMetExtTM*. Blue: emitter; Green: reconstruction measurements; Red: CV measurements.[]{data-label="fig:DieMetConf"}](TAP2_FresnelConf2.pdf){height="0.45\linewidth"}
![Normalized reconstruction residual curve and CV residual curve in Example 2, Subsection \[subsec.Die\]. The *FoamDieIntTM* data at 2 GHz, 4 GHz, 6 GHz, 8 GHz, and 10 GHz are jointly processed.[]{data-label="fig:Die2Err"}](FoamDielIntTMErr.pdf){height="0.375\linewidth"}
In the second example, we consider the *FoamDieIntTM* data-set provided in the second opus of the Institut Fresnel’s database. The targets consist of a circular dielectric cylinder with a diameter of 30 mm embedded in another circular dielectric cylinder with a diameter of 80 mm. The smaller cylinder has a relative permittivity of $\varepsilon_r = 3\pm 0.3$, while the larger cylinder has a relative permittivity of $\varepsilon_r = 1.45\pm 0.15$. Fig. \[fig:Die2\] (a) shows the true objects, and we refer to [@geffrin2005free] for a more detailed description of the targets. The experiment was carried out in 2005; the receiver stays in the azimuthal plane ($xoy$) and is rotated along two-thirds of a circle from 60$^\circ$ to 300$^\circ$ in steps of 1$^\circ$. The source antenna stays at a fixed location ($\theta = 0^\circ$), and the object is rotated to obtain different illumination incidences from 0$^\circ$ to 315$^\circ$ in steps of 45$^\circ$. Namely, we have $241 \times 8$ measurements at each frequency. The distance from the transmitter/receiver to the centre of the targets is increased to 1.67 m. The measurement configuration is shown in Fig. \[fig:DieMetConf\], in which the blue circle represents the emitter, the $4 \times 9$ red ones represent the CV measurements, and the green ones are the reconstruction measurements.
The inversion domain is restricted to \[$-60$, $60$\] $\times$ \[$-60$, $60$\] mm$^2$, and the discretization grid size is 2.5 $\times$ 2.5 mm$^2$. Let us process the multi-frequency data at 2 GHz, 4 GHz, 6 GHz, 8 GHz, and 10 GHz simultaneously by the GMMV-based linear method and the LSM method. The data matrix $\mF_i$ for LSM is a $360 \times 8$ matrix in which the data entries that are not available are replaced with zeros. The reconstruction residual curve and the CV residual curve are shown in Fig. \[fig:Die2Err\], from which we see that the CV residual decreases during the first 62 iterations and starts to increase thereafter. Fig. \[fig:Die2\] (b) and Fig. \[fig:Die2\] (c) show the reconstructed images by the GMMV-based linear method and LSM, respectively. One can observe that the profile of the objects is reconstructed by the proposed method with high resolution, while in the LSM image the objects cannot be distinguished at all.
Metallic Scatterers {#subsec.Met}
-------------------
In this subsection, we applied the proposed method to the *rectTM\_dece* and *uTM\_shaped* data-sets provided in the first opus of the Institut Fresnel’s database [@0266-5611-17-6-301], which correspond to a convex scatterer (a rectangular metallic cylinder) and a non-convex scatterer (a “U-shaped” metallic cylinder), respectively. The dimensions of the rectangular cross section are 24.5 $\times$ 12.7 mm$^2$, while those of the “U-shaped” cylinder are about 80 $\times$ 50 mm$^2$. The measurement configuration is the same as that in Subsection \[subsec.Die\]. More details about the targets can be found in [@0266-5611-17-6-301].
For the rectangular cylinder, the inversion domain is restricted to \[$-25$, 25\] $\times$ \[15, 65\] mm$^2$, and the multi-frequency data at 10 GHz, 12 GHz, 14 GHz, and 16 GHz are processed simultaneously. For the larger “U-shaped” cylinder, the inversion domain is restricted to \[$-70$, 70\] $\times$ \[$-70$, 70\] mm$^2$, and the multi-frequency data at 4 GHz, 8 GHz, 12 GHz, and 16 GHz are processed simultaneously. The size of the discretization grids is 1.3 $\times$ 1.3 mm$^2$. Fig. \[fig:MetCV\] (a,b) give the residual curves, and the reconstructed images are shown in Fig. \[fig:Met1\] and Fig. \[fig:Met2\], respectively. One can see that the focusing performance of LSM is poor in the rectangular-cylinder case and even worse in retrieving the shape of the non-convex “U-shaped” cylinder, while both the rectangular shape and the “U” shape are well reconstructed by the proposed GMMV-based linear method. This indicates that the latter has a higher resolving ability than the former for both the convex and the non-convex metallic target.
Hybrid Scatterers {#subsec.DieMet}
-----------------
![Normalized reconstruction residual curve and the CV residual curve in Subsection \[subsec.DieMet\]. The *FoamMetExtTM* data-set at 2 GHz, 3 GHz, …, 8 GHz is processed.[]{data-label="fig:DieMetCV"}](FoamMetExtTMErr.pdf){height="0.375\linewidth"}
In this subsection, we applied the proposed method to hybrid scatterers consisting of a foam circular cylinder (diameter = 80 mm, $\varepsilon_r = 1.45 \pm 0.15$) and a copper tube (diameter = 28.5 mm, thickness = 2 mm), which were considered in the *FoamMetExtTM* data-set provided in the second opus of the Institut Fresnel’s database. We refer to [@geffrin2005free] for a more detailed description of the targets. The measurement configuration is the same as the one shown in Fig. \[fig:DieMetConf\]. In contrast to the *FoamDieIntTM* data-set, this data-set is obtained using 18 transmitters, while the other settings are kept the same. Specifically, the source antenna stays at the fixed location ($\theta = 0^\circ$), and the object is rotated to obtain different illumination incidences from 0$^\circ$ to 340$^\circ$ in steps of 20$^\circ$.
Let us first restrict the inversion domain to \[$-90$, 60\] $\times$ \[$-75$, 75\] mm$^2$ and discretize this domain with 2.5 $\times$ 2.5 mm$^2$ grids. The multi-frequency data at 7 frequencies, 2 GHz, 3 GHz, $\dots$, and 8 GHz, are jointly processed. The data matrix $\mF_i$ for LSM is a $360 \times 18$ matrix in which the data entries that are not available are replaced with zeros. Fig. \[fig:DieMetCV\] gives the normalized residual curves of the GMMV-based linear method, and the reconstructed images by both methods are shown in Fig. \[fig:DieMet\]. As we can see, both the metallic tube and the circular foam cylinder are well reconstructed by the GMMV-based linear method with high resolution, except for a small part lost in between. In addition, one can also see from the GMMV image that the metallic tube has an obviously larger intensity than the foam cylinder, showing a potential ability to distinguish dielectric objects from metallic ones. In contrast, LSM shows a poor focusing ability in the hybrid scatterer case, indicating once again that the proposed GMMV-based linear method achieves a higher-resolution image than LSM in this case.
Computation Time
----------------
  Data-set            Frequency number   GMMV /s   LSM /s
  ------------------- ------------------ --------- --------
  *twodielTM\_8f*     1                  2.5       0.0145
  *twodielTM\_8f*     4                  12.7      0.0270
  *FoamDieIntTM*      5                  3.8       0.0693
  *rectTM\_dece*      4                  2.7       0.0225
  *uTM\_shaped*       4                  41.0      0.0498
  *FoamMetExtTM*      7                  15.6      0.0911

  : \[tab.Num\] Running times of the GMMV-based linear method and LSM.
In this subsection, we discuss the computational cost of the GMMV-based linear method. Since the sensing matrices can be computed (or given analytically for experiments in homogeneous backgrounds) and stored beforehand, the GMMV-based linear method only involves a number of matrix-vector multiplications. The codes for reconstructing the contrast sources are written in MATLAB. We ran the codes on a desktop with one Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz, without parallel computing. The running times of the GMMV-based linear method and LSM are listed in Table \[tab.Num\]. On the one hand, all the reconstructions by the GMMV-based linear method require less than one minute (or even a couple of seconds for some examples); on the other hand, LSM shows overwhelmingly high efficiency in comparison, because the singular value decomposition (SVD) in LSM is done only once, after which all of the indicator functions are obtained simultaneously by several matrix-matrix multiplications. However, in view of the higher resolving ability of the proposed method, the extra computational cost is worth paying. It is also worth mentioning that the proposed method is faster than iterative shape reconstruction methods, which solve a forward scattering problem in each iteration. In addition, parallel computing can be applied straightforwardly to accelerate the proposed method.
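The efficiency of LSM can be seen from a minimal LSM-style sketch, assuming plain truncated-SVD regularization (the actual regularization, right-hand sides, and sizes used in practice differ): the SVD of the data matrix is computed once, and the indicator values for all sampling points then follow from a few matrix products.

```python
import numpy as np

def lsm_indicator(F, rhs, n_sv):
    """Truncated-SVD LSM-style indicator: solve F g ~= rhs_z in the
    least-squares sense for every sampling point z, return 1/||g_z||."""
    U, s, Vh = np.linalg.svd(F, full_matrices=False)   # SVD done only once
    s_inv = np.where(np.arange(s.size) < n_sv, 1.0 / s, 0.0)
    G = (Vh.conj().T * s_inv) @ (U.conj().T @ rhs)     # all g_z at once
    return 1.0 / np.linalg.norm(G, axis=0)

# Illustrative sizes: 360 receiver samples, 18 sources, 100 sampling points.
rng = np.random.default_rng(1)
F = rng.standard_normal((360, 18)) + 1j * rng.standard_normal((360, 18))
rhs = rng.standard_normal((360, 100)) + 1j * rng.standard_normal((360, 100))
indicator = lsm_indicator(F, rhs, n_sv=10)             # one value per point
```

Large indicator values (small $\|g_z\|$) mark sampling points inside the scatterer support; the whole map costs one SVD plus two matrix products, which explains the millisecond timings in Table \[tab.Num\].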
Conclusion {#sec.Conclusion}
==========
In this paper, a novel linear method for shape reconstruction based on the generalized multiple measurement vectors (GMMV) model has been proposed. The sum-of-norm of the contrast sources at multiple frequencies is used for the first time as a regularization constraint in solving electromagnetic inverse scattering problems. We applied this method to 2-D transverse magnetic (TM) experimental data, and the results demonstrate that a solution of the contrast sources regularized by the sum-of-norm constraint is sufficient to recover the spatial profile of non-sparse targets. Comparison results indicate that the GMMV-based linear method outperforms LSM, in terms of both shape reconstruction quality and sidelobe level, in all three cases: dielectric scatterers, convex and non-convex metallic scatterers, and hybrid scatterers. In view of its resolving ability and computational efficiency, the proposed method looks very promising for application to three-dimensional imaging problems. Besides, the outcome of the GMMV-based linear method, i.e., the contrast sources, can be directly used for quantitative imaging when the incident fields are known with reasonable accuracy.
[^1]: S. Sun, B. J. Kooij, and A. G. Yarovoy are with the Delft University of Technology, 2628 Delft, The Netherlands (e-mail: [email protected]; [email protected]; [email protected]).
[^2]: T. Jin is with the College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, China (e-mail: [email protected]).
[^3]: The GMMV-LIM package is available at <https://github.com/TUDsun/GMMV-LIM>.
---
abstract: 'Sound velocities in classical single-component fluids with Yukawa (screened Coulomb) interactions are systematically evaluated and analyzed in one-, two-, and three spatial dimensions (${\mathcal D}=1,2,3$). In the strongly coupled regime the convenient sound velocity scale is given by $\sqrt{Q^2/\Delta m}$, where $Q$ is the particle charge, $m$ is the particle mass, $n$ is the particle density, and $\Delta=n^{-1/{\mathcal D}}$ is the unified interparticle distance. The sound velocity can be expressed as a product of this scaling factor and a dimension-dependent function of the screening parameter, $\kappa=\Delta/\lambda$, where $\lambda$ is the screening length. A unified approach is used to derive explicit expressions for these dimension-dependent functions in the weakly screened regime ($\kappa\lesssim 3$). It is also demonstrated that for stronger screening ($\kappa\gtrsim 3$), the effect of spatial dimensionality virtually disappears, the longitudinal sound velocities approach a common asymptote, and a one-dimensional nearest-neighbor approximation provides a relatively good estimate for this asymptote. This result is not specific to the Yukawa potential, but equally applies to other classical systems with steep repulsive interactions. An emerging relation to a popular simple freezing indicator is briefly discussed. Overall, the results can be useful when Yukawa interactions are relevant, in particular, in the context of complex (dusty) plasmas and colloidal suspensions.'
author:
- 'Sergey A. Khrapak'
bibliography:
- 'SoundDim\_References.bib'
title: Unified description of sound velocities in strongly coupled Yukawa systems of different spatial dimensionality
---
Introduction
============
The investigation of linear and non-linear waves in complex (dusty) plasmas – systems of charged macroscopic particles immersed in a plasma environment – is an active research area with many interesting topics, such as e.g. sound (dust-acoustic) waves, instabilities, Mach cones, shocks, solitons, and turbulence. [@MerlinoPoP1998; @FortovPR; @ShuklaRMP2009; @Merlino2014; @ThomasPPCF2018] In experiments, sufficiently long wavelengths, which considerably exceed the characteristic interparticle separation, are usually easily accessible for investigation. At these wavelengths collective excitations exhibit acoustic-like dispersion, and the sound velocities play a central role in characterizing the system.
The particle charge in complex plasmas is typically very high ($10^3-10^4$ elementary charges for micron-sized particles). Due to the strong electrical repulsion between the particles, they usually form condensed liquid and solid phases. It is well understood that the dispersion properties of strongly coupled complex plasmas deviate significantly from those characteristic of an ideal gaseous plasma. [@FortovUFN; @FortovPR; @Bonitz2010; @DonkoJPCM2008] Strong coupling effects affect the magnitudes of sound velocities. [@KalmanPRL2000; @KhrapakPPCF2015] Strongly coupled complex plasma fluids in two and three dimensions can support transverse excitations at finite (sufficiently short) wavelengths. [@OhtaPRL2000; @PramanikPRL2002; @NosenkoPRL2006] Instability thresholds (e.g. of the ion current instability) are shifted at strong coupling. [@RosenbergPRE2014]
Waves in complex plasmas are investigated in one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D) configurations. 1D linear particle arrangements as well as 1D and quasi-1D particle rings are formed by creating appropriate confining potential configurations above the negatively charged surface (electrode) responsible for particle levitation. [@PetersPLA1996; @HomannPRE1997; @SheridanPoP2016] 2D and quasi-2D layers are extensively studied in laboratory experiments with radio-frequency (rf) discharges, where the levitating particles form horizontal layer(s) in the plasma sheath above the lower rf electrode. [@Zuzic1996; @PieperPRL1996; @NunomuraPRL2002; @NunomuraPRE2002; @NunomuraPRL2005] Waves in large 3D particle clouds were initially observed in a Q-machine, [@BarkanPoP1995] and then in dusty plasmas formed in a positive column (sometimes stratified) of direct-current glow discharges, [@MolotkovJETP1999; @FortovPoP2000; @RatynskaiaPRL2004; @KhrapakPRE2005; @EThomasCPP2009] as well as in various experiments under microgravity conditions. [@KhrapakPoP2003; @YaroshenkoPRE2004; @PielPRL2006; @MenzelPRL2010; @HimpelPoP2014; @YaroshenkoPoP2019]
Sound velocities can be measured relatively easily and accurately in experiments [@NunomuraPRE2002; @NunomuraPRL2005; @KhrapakPoP2003; @SaitouPRL2012] and contain important information about the systems investigated.
The purpose of this paper is to provide a unified description of sound velocities in strongly coupled complex plasmas in 1D, 2D, and 3D geometries. It is assumed that the particles are interacting via the isotropic pairwise Yukawa (screened Coulomb) potential. Simple practical formulas are obtained, which are applicable to condensed fluid and solid phases. In particular, it is demonstrated that the sound velocities are given by the product of the relevant velocity scale $\sqrt{Q^2/\Delta m}$ and the screening function $f(\kappa)$, where $Q$ is the particle charge, $\Delta=n^{-1/{\mathcal D}}$ is the characteristic interparticle separation, $n$ is the density, ${\mathcal D}$ is the dimensionality, $m$ is the particle mass, and $\kappa$ is the screening parameter defined as the ratio of the interparticle separation to the screening length $\lambda$, that is $\kappa=\Delta/\lambda$. The properties of $f(\kappa)$ in 1D, 2D, and 3D cases are investigated. In particular, the two regimes of weakly screened ($\kappa\ll 1$) and strongly screened interactions ($\kappa\gg 1$) are considered in detail. Important consequences and relations are discussed.
Yukawa systems are characterized by the repulsive interaction potential of the form $\phi(r)=(Q^2/r)\exp(-r/\lambda)$. Regardless of dimensionality, the phase state of the system is conventionally described by the two dimensionless parameters, which are the (Coulomb) coupling parameter $\Gamma=Q^2/\Delta T$, and the screening parameter $\kappa$, where $T$ is the system temperature (in energy units, so that $k_{\rm B}=1$). It is important to note that very often the Wigner-Seitz radius is used as a length unit, instead of $\Delta$. The Wigner-Seitz radius is defined from $4\pi n a^3/3=1$ in 3D, $\pi a^2 n =1$ in 2D, and $na=1$ in 1D (that is only in 1D we have $\Delta=a$). Correspondingly, $\Gamma$ and $\kappa$ are also often defined in terms of $a$ and one should pay attention to this. In this paper $\Delta$ is exclusively used as a length unit.
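For reference, the conversion between the two conventions is simple to write down; the helper below (with hypothetical names) maps the $\Delta$-based $(\kappa,\Gamma)$ to the Wigner-Seitz-based $(\kappa',\Gamma')$ using $a=\Delta$ in 1D, $a=\Delta/\sqrt{\pi}$ in 2D, and $a=(3/4\pi)^{1/3}\Delta$ in 3D.

```python
import numpy as np

# Ratio a / Delta implied by n a = 1 (1D), pi a^2 n = 1 (2D),
# and 4 pi a^3 n / 3 = 1 (3D).
def a_over_delta(D):
    return {1: 1.0,
            2: np.pi ** -0.5,
            3: (3.0 / (4.0 * np.pi)) ** (1.0 / 3.0)}[D]

def delta_to_ws(kappa, gamma, D):
    """Map Delta-based (kappa, Gamma) to Wigner-Seitz (kappa', Gamma')."""
    r = a_over_delta(D)
    # kappa' = a / lambda = kappa * r,  Gamma' = Q^2 / (a T) = Gamma / r
    return kappa * r, gamma / r
```

Note that the product $\kappa\Gamma = Q^2/\lambda T$ is invariant under this change of length unit, which is a quick sanity check on any conversion.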
The Yukawa potential is considered as a reasonable starting point to model interactions in complex (dusty) plasmas and colloidal dispersions, [@FortovPR; @IvlevBook] although in many cases the actual interactions (in particular, their long-range asymptotes) are much more complex. [@TsytovichUFN1997; @FortovPR; @KhrapakCPP2009; @RatynskaiaPoP2006; @KhrapakPRL2008; @KhrapakPoP2010; @ChaudhuriIEEE2010; @ChaudhuriSM2011; @LampePoP2015] This is particularly true in cases when electric fields and ion drifts are present, resulting in plasma wakes and wake-mediated interactions. [@VladimirovPRE1995; @KompaneetsPoP2009; @HutchinsonPoP2011; @LudwigNJP2012; @KompaneetsPRE2016] The sound velocities will certainly be affected by deviations from the assumed Yukawa potential, but we do not attempt to discuss this issue here. Recently, the effect of long-range deviations from the pure Yukawa potential on the dispersion relations of the longitudinal waves in isotropic complex plasmas has been investigated. [@Fingerprints] The behavior of waves in a 1D dusty plasma lattice where the dust particles interact via Yukawa plus electric dipole interactions has been theoretically studied in Refs. .
The paper is organized as follows. In Section \[sec\_sound\] the unified approach to the calculation of sound velocities in strongly coupled Yukawa systems in 1D, 2D, and 3D is presented. Main results are summarized in Section \[MainResults\]. Here the weakly screened regime is analyzed in detail. Approximate expressions for the sound velocities in systems with steeply repulsive potentials are derived, and it is explained why spatial dimensionality does not considerably affect the magnitude of sound velocities in this regime. This is followed by the conclusion in Sec. \[Concl\]. The relation to a simple freezing indicator of classical 3D fluids proposed earlier is briefly discussed in Appendix \[freezing\].
Sound velocities in different spatial dimensions {#sec_sound}
================================================
Strongly coupled Yukawa systems support one longitudinal mode in 1D case, one longitudinal and one transverse mode in 2D case, and one longitudinal and two transverse modes in 3D case.
The longitudinal sound velocities can be obtained from the conventional hydrodynamic (fluid) approach. [@LL_hydrodynamics] This requires knowledge of an appropriate equation of state. The standard adiabatic sound velocity is $c_{\rm s}=\sqrt{(1/m)(\partial P/{\partial n})_s}$, where $P$ is the pressure of a [*single component*]{} Yukawa system and the subscript $s$ denotes that the derivative with respect to density is taken at constant entropy. Note that $(\partial P/{\partial n})_s=\gamma(\partial P/{\partial n})_T$, where $\gamma = c_{\rm p}/c_{\rm v}$ is the adiabatic index. For strongly coupled Yukawa systems we have $\gamma\simeq 1$, which is a general property of soft repulsive interactions. [@KhrapakPRE2015_Sound; @SemenovPoP2015; @FengPoP2018] This fluid approach has been exploited previously for Yukawa systems in 3D case [@KhrapakPPCF2015; @KhrapakPRE2015_Sound] as well as in 2D case. [@SemenovPoP2015] Generalization to 1D case is trivial.
The sound velocities of strongly coupled Yukawa systems can also be obtained from infinite-frequency (instantaneous) elastic moduli, directly related to the instantaneous normal modes. [@Stratt1997; @KhrapakSciRep2017; @WangPRE2019] This approach is applicable to fluids and solids and allows one to calculate both the longitudinal and transverse sound velocities in a universal manner, and hence is adopted here.
The elastic wave modes (instantaneous normal modes) in strongly coupled plasma-related fluids are rather well described by the quasilocalized charge approximation (QLCA), [@GoldenPoP2000; @KalmanPRL2000; @DonkoJPCM2008] also known as the quasi-crystalline approximation (QCA). [@Hubbard1969; @Takeno1971; @KhrapakSciRep2017] This approximation relates wave dispersion relations to the interparticle interaction potential $\phi(r)$ and the equilibrium radial distribution function (RDF) $g(r)$, characterizing structural properties of the system. It can be considered as either a generalization of the random phase approximation or as a generalization of the phonon theory of solids. [@Hubbard1969] The latter point of view is particularly relevant, because in the special case of a cold crystalline solid the QCA dispersion reduces to the ordinary phonon dispersion relation, [@Hubbard1969] justifying the approach name. It is known that for 2D Yukawa systems, the angularly averaged lattice dispersions are remarkably similar to the isotropic QCA fluid dispersions. [@SullivanJPA2006; @HartmannIEEE2007] It is not unreasonable to expect similar behavior in the 3D case.
The long-wavelength limits of the QCA dispersion relations can be used to define the elastic longitudinal and transverse sound velocities, $c_l$ and $c_t$, as explained in detail below. The relation to the thermodynamic (adiabatic $\simeq$ isothermal) sound velocity is then $c_s^2\simeq c_l^2-(4/3)c_t^2$ in 3D and $c_s^2=c_l^2-c_t^2$ in 2D. For Yukawa interactions (as well as for other soft long-ranged repulsive interaction potentials) the strong inequality $c_l^2\gg c_t^2$ holds at strong coupling. This implies that we have approximately $c_s\simeq c_l$. The accuracy of this relation has been tested extensively for strongly coupled Yukawa fluids, [@KhrapakPRE2015_Sound; @KhrapakPPCF2015; @KhrapakPoP2016_Relations] as well as for other soft interactions, [@KhrapakSciRep2017; @GoldenPRE2010; @KhrapakPoP2016_Log] both in 3D and 2D cases.
The general QCA (QLCA) expressions for the longitudinal and transverse dispersion relations are $$\label{omegaL}
\omega_{l}^2=\frac{n}{m}\int\frac{\partial^2 \phi(r)}{\partial z^2}g(r)\left[1-\cos(\bf{kz})\right]d{\bf r},$$ and $$\label{omegaT}
\omega_{t}^2=\frac{n}{m}\int\frac{\partial^2 \phi(r)}{\partial x^2}g(r)\left[1-\cos(\bf{kz})\right]d{\bf r},$$ where $\omega$ is the frequency and ${\bf k}$ is the wave vector. It is worth mentioning at this point that $\omega_l^2$ and $\omega_t^2$ can be identified as the potential (excess) contributions to the normalized second frequency moments of the longitudinal and transverse current spectra, $C_{l/t}(k,\omega)$. [@BulacaniBook] Kinetic terms, which are absent in the QCA approach \[$3(T/m)k^2$ for the longitudinal branch and $(T/m)k^2$ for the transverse one\], are relatively small at strong coupling. Thus, the formal essence of the QCA approach is just to approximate the actual dispersion relations by the excess contributions to the second frequency moments of the corresponding current spectra.
We proceed further as follows. The derivatives of the pair interaction potential in Eqs. (\[omegaL\]) and (\[omegaT\]) are evaluated from $$\frac{\partial^2 \phi(r)}{\partial x_{\alpha}^2} = \phi''(r)\frac{x_{\alpha}^2}{r^2}+\frac{\phi(r)'}{r}\left(1-\frac{x_{\alpha}^2}{r^2}\right),$$ where $x_{\alpha}=x,y,z$ in 3D, $x_{\alpha}=x,z$ in 2D, $x_{\alpha}=z$ in 1D, and $r=\sqrt{\sum_{\alpha}x_{\alpha}^2}$. Note also that from symmetry $$\frac{\partial^2 \phi(r)}{\partial x^2}=\frac{\partial^2 \phi(r)}{\partial y^2}=\frac{1}{2}\left[\Delta \phi(r) -\frac{\partial^2 \phi(r)}{\partial z^2}\right]$$ in 3D, and $$\frac{\partial^2 \phi(r)}{\partial x^2}=\Delta \phi(r) -\frac{\partial^2 \phi(r)}{\partial z^2}$$ in 2D.
Let us consider isotropic fluids with pairwise interactions of the form $$\phi(r)=\epsilon f (r/\sigma),$$ where $\epsilon$ is the energy scale and $\sigma$ is the length scale. Except for some special cases (in the present context this corresponds to the unscreened Coulomb interaction limit, which will not be considered explicitly), the long-wavelength dispersion is acoustic: $$\lim_{k\rightarrow 0}\frac{\omega_{l}^2}{k^2}=c_{l}^2, \quad\quad \lim_{k\rightarrow 0}\frac{\omega_{t}^2}{k^2}=c_{t}^2.$$ The emerging elastic longitudinal and transverse sound velocities can be presented in a universal form [@KhrapakPoP2016_Relations] $$\label{disp_gen_3D}
c_{{l}/{t}}^2=\omega_{{\mathcal D}}^2\sigma^2\int_0^{\infty}dx x^{{\mathcal D}+1} g(x) \left[{\mathcal A}\frac{f'(x)}{x}+{\mathcal B}f''(x)\right],$$ where $x=r/\sigma$ is the reduced distance. The ${\mathcal D}$-dimensional effective frequencies $\omega_{\mathcal D}$ and the coefficients ${\mathcal A}$ and ${\mathcal B}$ are summarized in Table \[Tab1\]. The last line in Table \[Tab1\] simply reflects the fact that the transverse mode is absent in 1D case and the integration over the positive and negative parts of $z$-axis is equivalent to the doubled integration over the positive part.
An important remark about the transverse dispersion relation in fluids should be made at this point. Although strongly coupled (dense) fluids do support transverse wave propagation, their dispersion is somewhat different from that in a solid. The existence of transverse modes in fluids is a consequence of the fact that their response to high-frequency short-wavelength perturbations is similar to that of a solid. [@ZwanzigJCP1965] However, shear waves in fluids cannot exist for arbitrarily long wavelengths. A minimum threshold wave number $k_*$ emerges, below which transverse waves cannot propagate. This phenomenon, often referred to as the $k$-gap in the transverse mode, is a very well known property of the fluid state. [@HansenBook; @Trachenko2015] Locating $k_*$ for various simple fluids in different parameter regimes and investigating $k$-gap consequences on the liquid state properties is an active area of research. [@GoreePRE2012; @YangPRL2017; @KhrapakJCP2018; @KhrapakJCP2019; @KryuchkovSciRep2019] For our present purpose it is important that the slope of the dispersion curve $\partial \omega_t/\partial k$ near the onset of the transverse mode at $k>k_*$ can be well approximated by $c_t$. Thus, the latter is a meaningful quantity both in solid and strongly coupled fluid states.
$\mathcal D$ $\omega_{\mathcal D}^2$ ${\mathcal C}_{\mathcal D}$ ${\mathcal A}_{l}$ ${\mathcal B}_{l}$ ${\mathcal A}_{t}$ ${\mathcal B}_{t}$
-------------- -------------------------- ----------------------------- -------------------- -------------------- -------------------- --------------------
3D $4\pi n\epsilon\sigma/m$ $4\pi$ $\frac{1}{15}$ $\frac{1}{10}$ $\frac{2}{15}$ $\frac{1}{30}$
2D $2\pi\epsilon n/m$ $2\pi$ $\frac{1}{16}$ $\frac{3}{16}$ $\frac{3}{16}$ $\frac{1}{16}$
1D $2\epsilon n/m\sigma $ $2$ 0 $\frac{1}{2}$ 0 0
: \[Tab1\] The coefficients ${\mathcal A}_{l/t}$ and ${\mathcal B}_{l/t}$ appearing in Eq. (\[disp\_gen\_3D\]) for the longitudinal ($l$) and transverse ($t$) sound velocities, as well as ${\mathcal D}$-dimensional nominal frequencies and the coefficients ${\mathcal C}_{\mathcal D}$ in 3D, 2D, and 1D spatial dimensions.
Next we take $\sigma=\Delta$ and assume Yukawa interaction potential between the particles. This implies $\epsilon=Q^2/\Delta$ and $f(x)=\exp(-\kappa x)/x$. The expressions for the longitudinal and transverse sound velocities become $$\begin{split}
c_{{l}/{t}}^2={\mathcal C}_{{\mathcal D}}\left(\frac{Q^2}{\Delta m}\right)\int_0^{\infty}dx x^{{\mathcal D}-2}\exp(-\kappa x) g(x) \\ \left[{\mathcal B}_{l/t}\kappa^2 x^2 +(2{\mathcal B}_{l/t}-{\mathcal A}_{l/t})(1+\kappa x)\right].
\end{split}$$ The numerical coefficients ${\mathcal C}_{\mathcal D}$ are provided in Table \[Tab1\]. At this point it is also useful to introduce the universal velocity scale $c_0=\sqrt{Q^2/\Delta m}$. Note that $c_0=\sqrt{\Gamma}v_T$, where $v_T=\sqrt{T/m}$ is the thermal velocity.
The excess internal (potential) energy can also be expressed using the RDF and the pair interaction potential. The expression for the excess energy per particle in units of temperature is [@HansenBook] $$\label{energy}
u_{\rm ex}= \frac{n}{2T}\int d{\bf r} \phi(r)g(r).$$ For the Yukawa interaction potential in ${\mathcal D}$ dimensions this yields $$u_{\rm ex}={\mathcal C}_{\mathcal D}\frac{\Gamma}{2}\int_0^{\infty}dx x^{{\mathcal D}-2}\exp(-\kappa x)g(x),$$ where we have used the identity $\epsilon/T=Q^2/\Delta T\equiv \Gamma$.
Finally, the following line of arguments is used. In the special case of a cold crystalline solid, the RDF represents a series of delta-correlated peaks corresponding to a given lattice structure. Assuming that the lattice structure is fixed (in fact, the equilibrium lattice structure changes from bcc to fcc when $\kappa$ increases [@RobbinsJCP1988; @HamaguchiPRE1997; @VaulinaPRE2002] in 3D case, but this is not important for our present purpose), the RDF is a universal function of $x$: $g(x;\Gamma,\kappa)=g(x)$ (for simplicity we keep isotropic notation). Independence of $g(x)$ of $\kappa$ allows us to make use of the following identities: $${\mathcal C}_{\mathcal D}\Gamma\int_0^{\infty}dx x^{{\mathcal D}-2}\exp(-\kappa x) g(x) = 2u_{\rm ex},$$ $${\mathcal C}_{\mathcal D}\Gamma\int_0^{\infty}dx x^{{\mathcal D}-2}\kappa x\exp(-\kappa x) g(x) = -2\kappa \frac{\partial u_{\rm ex}}{\partial \kappa},$$ $${\mathcal C}_{\mathcal D}\Gamma\int_0^{\infty}dx x^{{\mathcal D}-2}\kappa^2 x^2\exp(-\kappa x) g(x) = 2\kappa^2 \frac{\partial^2 u_{\rm ex}}{\partial \kappa^2}.$$ These expressions are exact for crystalline lattices, but remain good approximations in the strongly coupled fluid regime. In particular, the dependence $g(x;\Gamma,\kappa)$ on $\kappa$ is known to be very weak for weakly screened ($\kappa$ is not much larger than unity) Yukawa fluids. [@FaroukiJCP1994; @RosenbergPRE1997; @KhrapakPoP2018] The excess energy at strong coupling can be very accurately approximated as $u_{\rm ex}\simeq M_{\rm fl}\Gamma \simeq M_{\rm cr}\Gamma$, where $M_{\rm fl}$ and $M_{\rm cr}$ can be referred to as the fluid and crystalline Madelung constants ($M_{\rm fl}\sim M_{\rm cr}$). [@KhrapakISM] This reflects the fact that for soft repulsive interactions the dominant contribution to the excess energy comes from static correlations. [@KhrapakJCP2015] One can understand this as follows. For soft long-ranged interactions the integral in Eq.
(\[energy\]) is dominated by long distances, where $g(x)$ exhibits relatively small oscillations around unity (for finite temperatures). The ratio $u_{\rm ex}/\Gamma$ is then not very sensitive to the exact shape of $g(x)$ at small $x$ (provided the correlation hole radius [@KhrapakPoP2016] is properly accounted for) and, hence, to the phase state of the system.
The consideration above implies that if $u_{\rm ex}$ (and its dependence on $\kappa$) is known, the integrals appearing in the expressions for sound velocities can be evaluated. Below we demonstrate how this works in practice in 1D, 2D, and 3D cases.
1D case
-------
The excess energy of an equidistant chain of particles is $$u_{\rm ex}=\Gamma\sum_{j=1}^{\infty}\frac{e^{-\kappa j}}{j}=\Gamma\left[\kappa-\ln(e^{\kappa}-1)\right].$$ After simple algebra we get $$c_{l}^2=c_0^2\left\{\frac{\kappa e^{\kappa}[\kappa-2+2e^{\kappa}]}{(e^{\kappa}-1)^2}-2\ln(e^{\kappa}-1)\right\}.$$ This result has been previously reported in Ref. . It can be also obtained by direct summation $$\begin{split}
c_{l}^2=c_0^2\int_0^{\infty}dxg(x)e^{-\kappa x}(2+2\kappa x+\kappa^2x^2)/x \\
=c_0^2\sum_{j=1}^{\infty}e^{-\kappa j}(2+2\kappa j+\kappa^2 j^2)/j.
\end{split}$$ If only contribution from the two nearest neighbor particles is retained ($j=1$), the conventional dust lattice wave (DLW) sound velocity scale is obtained, [@MelandsoPoP1996] $$\label{DLW}
c_{\rm DLW}^2=c_0^2\exp(-\kappa)(2+2\kappa+\kappa^2).$$
Of course, transverse mode does not exist in truly 1D case.
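These 1D expressions are straightforward to check numerically; the sketch below (all quantities in units of $c_0^2$) compares the closed form, the direct lattice sum, and the nearest-neighbor DLW approximation.

```python
import numpy as np

def cl2_closed(kappa):
    # Closed-form 1D longitudinal sound velocity squared, in units of c0^2.
    e = np.exp(kappa)
    return (kappa * e * (kappa - 2.0 + 2.0 * e) / (e - 1.0) ** 2
            - 2.0 * np.log(e - 1.0))

def cl2_sum(kappa, jmax=2000):
    # Direct summation over lattice neighbors j = 1, 2, ...
    j = np.arange(1, jmax + 1, dtype=float)
    return np.sum(np.exp(-kappa * j)
                  * (2.0 + 2.0 * kappa * j + kappa ** 2 * j ** 2) / j)

def cl2_dlw(kappa):
    # Nearest-neighbor (dust lattice wave) approximation, j = 1 only.
    return np.exp(-kappa) * (2.0 + 2.0 * kappa + kappa ** 2)
```

The sum converges to the closed form to machine precision, and already at $\kappa\simeq 5$ the nearest-neighbor DLW term reproduces the full result to within about one percent, anticipating the strong-screening behavior discussed later.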
![Reduced longitudinal sound velocities versus the screening parameter $\kappa=\Delta/\lambda$ for Yukawa systems in different spatial dimensions. The three solid curves from top to bottom correspond to 3D, 2D, and 1D cases, respectively. The dashed curve corresponds to the conventional DLW scale of Eq. (\[DLW\]). []{data-label="Fig1"}](Figure1.pdf){width="7.5cm"}
2D case
-------
Combining expressions for the sound velocities and reduced excess energy and denoting $M=u_{\rm ex}/\Gamma$ we get $$c_{l}^2=\frac{c_0^2}{8}\left[3\kappa^2\frac{\partial^2 M}{\partial \kappa^2}-5\kappa\frac{\partial M}{\partial \kappa}+5M\right],$$ $$c_{t}^2=\frac{c_0^2}{8}\left[\kappa^2\frac{\partial^2 M}{\partial \kappa^2}+\kappa\frac{\partial M}{\partial \kappa}-M\right].$$ The Madelung constant for the triangular lattice can be well represented by [@KryuchkovJCP2017] $$\label{M2D}
M=-1.9605 +0.5038\kappa -0.06236\kappa^2+0.00308\kappa^3+\frac{\pi}{\kappa}.$$ In Eq. (\[M2D\]) it is taken into account that $\kappa=\sqrt{\pi}a/\lambda$ and $\Gamma=(1/\sqrt{\pi})(Q^2/aT)$. The explicit expressions for the sound velocities are then $$c_{l}^2=c_0^2\left(\frac{6.2832}{\kappa}-1.2253-0.0078\kappa^2+0.00308\kappa^3\right),$$ $$c_{t}^2=c_0^2\left(0.2451-0.0234\kappa^2+0.00308\kappa^3\right).$$ The longitudinal sound velocity diverges as $\kappa^{-1/2}$ on approaching the one-component plasma (OCP) limit, while the transverse sound velocity remains finite.
3D case
-------
The relations between the longitudinal and transverse sound velocities and the Madelung constant in 3D case are $$c_{l}^2=\frac{c_0^2}{15}\left[3\kappa^2\frac{\partial^2 M}{\partial \kappa^2}-4\kappa\frac{\partial M}{\partial \kappa}+4M\right],$$ $$c_{t}^2=\frac{c_0^2}{15}\left[\kappa^2\frac{\partial^2 M}{\partial \kappa^2}+2\kappa\frac{\partial M}{\partial \kappa}-2M\right].$$ The excess energy can be very well represented by the ion sphere model (ISM) [@KhrapakISM; @RosenfeldMolPhys1998] resulting in $$\label{ISM}
M=\frac{\kappa'(\kappa'+1)}{(\kappa'+1)+(\kappa'-1)e^{2\kappa'}}\left(\frac{4\pi}{3}\right)^{1/3},$$ where $\kappa'=a/\lambda=\kappa(4\pi/3)^{-1/3}$ and the last factor in (\[ISM\]) arises from $\Gamma=(Q^2/aT)(4\pi/3)^{-1/3}$ in the present notation. The explicit expressions for the longitudinal and transverse sound velocities become $$c_{l/t}^2=\frac{1}{15}\left(\frac{4\pi}{3}\right)^{1/3}c_0^2{\mathcal F}_{l/t}(\kappa'),$$ where, after some algebra, we obtain $${\mathcal F}_{l}(x)=\frac{x^4\left[(4+3x^2)\sinh(x)-4x\cosh(x)\right]}{\left[x\cosh(x)-\sinh(x)\right]^3},$$ and $${\mathcal F}_{t}(x)=\frac{x^4\left[(3+x^2)\sinh(x)-3x\cosh(x)\right]}{\left[x\cosh(x)-\sinh(x)\right]^3}.$$ It will be shown below that $c_l$ diverges as $\kappa^{-1}$ when the OCP limit is approached, while $c_t$ remains finite.
![Reduced transverse sound velocities of strongly coupled Yukawa systems versus the screening parameter $\kappa=\Delta/\lambda$. Velocities are denoted by solid curves. Dashed curves show the ratios of transverse-to-longitudinal sound velocities, $c_t/c_l$. The blue (upper) curves correspond to the 2D case, the red curves to the 3D case. []{data-label="Fig2"}](Figure2.pdf){width="7.5cm"}
Main Results {#MainResults}
============
General trends
--------------
The calculated sound velocities are plotted in Figs. \[Fig1\] and \[Fig2\].
Figure \[Fig1\] shows the longitudinal velocities for the 3D, 2D, and 1D cases. In the weakly screened regime with $\kappa\lesssim 3$, the sound velocities are well separated: the highest velocity corresponds to the 3D case, the lowest one to the 1D case. Note that the sound velocities diverge as $\kappa\rightarrow 0$. This will be discussed in Sec. \[OCP\]. For stronger screening with $\kappa\gtrsim 3$ the longitudinal sound velocities are virtually independent of the dimensionality. They approach the common 1D DLW result with only nearest-neighbor interactions retained, Eq. (\[DLW\]). This tendency is related to the increasing steepness of the interaction potential with increasing $\kappa$. It is a general property of steep repulsive interactions, not specific to the Yukawa potential, and we will discuss this in more detail in Sec. \[steep\].
The transverse sound velocities plotted in Fig. \[Fig2\] are finite in the Coulomb limit and slowly decrease as $\kappa$ increases. The transverse velocity is somewhat higher in 2D than in 3D. The ratios $c_t/c_l$ start from zero at $\kappa=0$ and approach $\simeq 0.5$ as $\kappa$ increases to 5. This is yet another illustration of the strong inequality $c_l^2\gg c_t^2$ on the side of soft interactions, which has important implications in a broad physical context. [@Melting2D; @KhrapakMolPhys2019]
Weakly screened limit {#OCP}
---------------------
In the limit of the Coulomb gas, the longitudinal dispersion relations do not exhibit acoustic asymptotes as $k\rightarrow 0$. The dispersion relation in the absence of correlations (random phase approximation) can be obtained by simply substituting $g(r)=1$ in Eq. (\[omegaL\]). This yields the conventional plasmon dispersion $\omega^2=\omega_p^2=4\pi Q^2n/m$ in the 3D case. In the 2D case the frequency grows as the square root of the wave vector, $\omega^2\propto k$. In the 1D case the random phase approximation produces an integral which diverges logarithmically at small $r$. This indicates that the longitudinal sound velocities should diverge on approaching the $\kappa\rightarrow 0$ limit, as already observed. The functional form of this divergence will be established below.
In the weakly screening limit $\kappa\ll 1$ the following series expansions of the sound velocities emerge: In 1D case we have $$c_{l}=c_0\sqrt{3-2\ln \kappa};$$ In 2D case we get $$\begin{split}
c_{l}=c_0\left(\frac{2.5066}{\sqrt{\kappa}}-0.2444\sqrt{\kappa}-0.0119\kappa^{3/2}\right), \\
c_{t}=c_0\left(0.4951-0.0236\kappa^2+0.00311\kappa^3\right);
\end{split}$$ And, finally, in 3D case the sound velocities are $$\begin{split}
c_{l}=c_0\left(\frac{3.545}{\kappa}-0.0546\kappa-0.001620\kappa^{3}\right), \\
c_{t}=c_0\left(0.4398-0.0193\kappa^2+0.00055\kappa^4\right).
\end{split}$$ Alternative fits for the sound velocities in the 3D weakly screening regime have been previously suggested in Ref. .
![Reduced longitudinal sound velocity versus the screening parameter $\kappa=\Delta/\lambda$. The panels from top to bottom correspond to 1D, 2D, and 3D cases, respectively. Solid curves denote the weakly screened asymptotes, symbols correspond to the full calculation. The dashed curve for the 3D case is the fit from Ref. . []{data-label="Fig3"}](Figure3.pdf){width="7.8cm"}
The weakly screened asymptotes for the longitudinal mode (solid curves) are compared with the full calculation (symbols) in Fig. \[Fig3\]. As the Coulomb $\kappa\rightarrow 0$ limit is approached, the longitudinal sound velocity scales as $c_{l}/c_0\sim \sqrt{-2\ln \kappa}$ (${\mathcal D}=1$), $2.5066/\sqrt{\kappa}$ (${\mathcal D}=2$), and $3.545/\kappa$ (${\mathcal D}=3$). The last two coefficients are not just fitting parameters. It is known that in the weak screening regime (and only in this regime) the longitudinal sound velocity does not depend on the coupling strength and tends to the conventional dust acoustic wave (DAW) velocity. [@RaoDAW] The details can be found in Refs. ; here we just reproduce the scalings. In the 3D case we have $$c_{\rm DAW} = \omega_{p}\lambda=\sqrt{\frac{4\pi Q^2 n}{m}}\lambda=c_0\sqrt{\frac{4\pi}{\kappa^2}}\simeq c_0\frac{3.545}{\kappa}.$$ Similarly, in the 2D case we get [@PielPoP2006] $$c_{\rm DAW} = \omega_p\sqrt{\lambda}=c_0\sqrt{\frac{2\pi}{\kappa}}\simeq c_0\frac{2.5066}{\sqrt{\kappa}}.$$
It is observed that the weakly screened asymptotes work quite well even outside the range of applicability, i.e. even at $\kappa\gtrsim 1$. The dashed curve in the bottom panel of Fig. \[Fig3\] corresponds to the fit proposed in Ref. . The agreement is excellent for $\kappa\lesssim 4$.
![Reduced transverse sound velocity versus the screening parameter $\kappa=\Delta/\lambda$. The top (blue) curve and symbols correspond to the 2D case, the lower (red) curves and symbols are for the 3D cases. Solid curves denote the weakly screened asymptotes, symbols correspond to the full calculation. The dashed curve for the 3D case is the fit from Ref. . []{data-label="Fig4"}](Figure4.pdf){width="7.5cm"}
The results for the transverse sound velocity in 2D and 3D are plotted in Figure \[Fig4\]. The solid curves denote the weakly screened asymptotes, symbols correspond to the full calculation, and the dashed curve is the 3D fit from Ref. . We observe that the weakly screened asymptotes are appropriate only for $\kappa\lesssim 2$ in this case. The transverse velocities do not vary much in the considered range of $\kappa$ and remain finite in the limit $\kappa\rightarrow 0$. We have $c_{t}/c_0\simeq 0.495$ (${\mathcal D}=2$) and $0.440$ (${\mathcal D}=3$). How does this compare with the known results for the one-component plasma (OCP) systems with Coulomb interactions in 2D and 3D? For the OCP systems the transverse sound velocities are directly related to the thermal velocity and the reduced excess energy. [@GoldenPoP2000] In the 2D case we have $$c_{t}^2=-\frac{1}{8}v_{\rm T}^2 u_{\rm ex}.$$ Combining this with the strong coupling asymptote, [@KhrapakCPP2016] $u_{\rm ex}\simeq -1.106103(Q^2/aT)$, we get $c_{t}/c_0\simeq 0.495$, in excellent agreement with the result above. Similarly, in the 3D case we have $$c_{t}^2=-\frac{2}{15}v_{\rm T}^2 u_{\rm ex}.$$ Using the ISM estimation of the OCP excess energy, [@KhrapakPoP2014; @DubinRMP1999] $u_{\rm ex}\simeq -\tfrac{9}{10} (Q^2/aT)$, we get $c_{t}/c_0\simeq 0.440$, again in excellent agreement with the result above. The dashed curve in the 3D case corresponds to the fit from Ref. . For $\kappa\lesssim 2$ all the data shown nearly coincide.
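The OCP consistency check above is easy to reproduce numerically. The sketch below assumes $c_0^2=Q^2/(m\Delta)$, consistent with the reduced units used in this paper, so that $(c_t/c_0)^2$ equals the numerical coefficient times $\Delta/a$; the fluid OCP excess-energy values ($\simeq -1.106103\,Q^2/aT$ in 2D and the Madelung-like value $\simeq -0.9\,Q^2/aT$ in 3D) are the strong-coupling estimates:

```python
import math

# Delta/a in 2D (a = (pi n)^(-1/2), Delta = n^(-1/2)) and
# in 3D (a = (4 pi n/3)^(-1/3), Delta = n^(-1/3))
DELTA_OVER_A = {2: math.sqrt(math.pi), 3: (4*math.pi/3)**(1/3)}

def ct_over_c0(D):
    """Transverse sound velocity of the Coulomb OCP in units of c_0."""
    if D == 2:
        # c_t^2 = -(1/8) v_T^2 u_ex with u_ex ~ -1.106103 (Q^2/aT)
        return math.sqrt((1/8) * 1.106103 * DELTA_OVER_A[2])
    # c_t^2 = -(2/15) v_T^2 u_ex with u_ex ~ -(9/10) (Q^2/aT)
    return math.sqrt((2/15) * 0.9 * DELTA_OVER_A[3])
```

This yields $c_t/c_0\simeq 0.495$ in 2D and $\simeq 0.440$ in 3D, matching the weakly screened Yukawa values.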
Sound velocities for steep repulsive potentials {#steep}
-----------------------------------------------
For steep repulsive potentials we should have $|f'(x)/x|\ll |f''(x)|$. Then the main contribution to the sound velocities comes from the second derivative of the potential. This main contribution to the longitudinal sound velocity can be evaluated from $$c_{l}^2=c_0{\mathcal B}_l{\mathcal C}_{\mathcal D}\int_0^{\infty}dx x^{{\mathcal D}+1} g(x) f''(x),$$ where, as usual in this paper, $x=r/\Delta$. Further, for steep interactions the main contribution to the integral above comes from the first shell of neighbors at $x\simeq 1$. We can therefore approximate $x^2f''(x)$ by $f''(1)$ under the integral. Such a substitution is exact only for a long-range logarithmic potential, but should provide a good estimate for quickly decaying potentials and an RDF $g(x)$ that has a strong peak near $x\simeq 1$. The remainder of the integral can be related to the number of nearest neighbors $N_{\rm nn}$ using $$\begin{split}
{\mathcal C}_{\mathcal D}\int_0^{\infty}x^{{\mathcal D}+1}g(x)f''(x)dx\simeq \\ {\mathcal C}_{\mathcal D}\int_0^{x_{\min}}x^{{\mathcal D}-1}g(x)f''(1)dx\simeq f''(1)N_{\rm nn},
\end{split}$$ where $x_{\min}>1$ is roughly the position of the first non-zero minimum of $g(x)$ (in the considered situation the value of the integral is not sensitive to $x_{\min}$, because the main contribution comes from the immediate vicinity of $x=1$). Taking into account that at strong coupling $N_{\rm nn}\simeq 12$ (${\mathcal D}=3$), 6 (${\mathcal D}=2$), and 2 (${\mathcal D}=1$), we get $$\begin{split}
c_{l}^2=\frac{\epsilon}{m}f''(1), \quad\quad ({\rm 1D}) \\
c_{l}^2=\frac{18}{16}\frac{\epsilon}{m}f''(1), \quad\quad ({\rm 2D}) \\
c_{l}^2=\frac{12}{10}\frac{\epsilon}{m}f''(1). \quad\quad ({\rm 3D})
\end{split}$$ Thus, the longitudinal sound velocities are all proportional to $\sqrt{(\epsilon/m)f''(1)}$, multiplied by a coefficient of order unity. This coefficient has the following scaling with the dimensionality: 3D:2D:1D$\simeq \sqrt{1.2}:\sqrt{1.13}:1$. The difference in the coefficients is insignificant taking into account the simplifications involved. This explains why all the curves approach the common asymptote as $\kappa$ increases in Fig. \[Fig1\]. This common asymptote is just the DLW nearest neighbor result of Eq. (\[DLW\]).
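For the Yukawa potential $f(x)=e^{-\kappa x}/x$ one has $f''(1)=e^{-\kappa}(\kappa^2+2\kappa+2)$, so the nearest-neighbor estimates above are easy to evaluate; a short sketch (velocities in units of $\sqrt{\epsilon/m}$, coefficients as derived above):

```python
import math

def ypp(kappa):
    # f''(1) for the Yukawa potential f(x) = exp(-kappa*x)/x
    return math.exp(-kappa) * (kappa**2 + 2*kappa + 2)

# Nearest-neighbor coefficients: 1 (1D), 18/16 (2D), 12/10 (3D)
COEF = {1: 1.0, 2: 18/16, 3: 12/10}

def c_l_nn(kappa, D):
    """Longitudinal sound velocity estimate, in units of sqrt(eps/m)."""
    return math.sqrt(COEF[D] * ypp(kappa))
```

The dimensionality ratios are $\sqrt{1.2}:\sqrt{1.125}:1\approx 1.10:1.06:1$, independent of $\kappa$, consistent with the common large-$\kappa$ asymptote in Fig. \[Fig1\].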
Note that within this approximation the ratio of the transverse to longitudinal sound velocities is $c_{t}/c_{l}=1/\sqrt{3}\simeq 0.58$, independently of dimensionality. The dashed curves in Fig. \[Fig2\] should approach this asymptote as $\kappa$ increases further. Note, however, that the QCA approach itself cannot be applied for arbitrarily large $\kappa$. It loses its applicability when approaching the hard sphere interaction limit. [@KhrapakJCP2016; @KhrapakSciRep2017]
In Appendix \[freezing\] we discuss how the considerations in this section lead to a simple freezing indicator, which was previously applied to various classical 3D fluids and, particularly successfully, to the 3D Yukawa fluid.
Conclusion {#Concl}
==========
The effect of spatial dimensionality on the magnitude of sound velocities in strongly coupled Yukawa systems has been investigated. A unified approach, based on the infinite frequency (instantaneous) elastic moduli of fluids and isotropic solids, has been formulated. In this approach, the sound velocities are expressed in terms of the excess internal energy, which is a very well known quantity for Yukawa systems. Physically motivated expressions, convenient for practical applications, have been derived and analyzed. Relations to the dust-acoustic wave (DAW) and dust-lattice wave (DLW) velocities have been explored. The regimes of weak and strong screening have been analyzed separately. It has been demonstrated that at weak screening ($\kappa\lesssim 3$) the longitudinal sound velocities in different spatial dimensions are well separated and their amplitude increases with dimensionality. For stronger screening ($\kappa\gtrsim 3$), the longitudinal sound velocities in different dimensions all approach the same DLW asymptote, which can be a very useful observation for practical applications. An explanation of this tendency has been provided.
I would like to thank Viktoria Yaroshenko for reading the manuscript.
Related freezing indicator {#freezing}
==========================
To the same level of accuracy as in Sec. \[steep\] we can estimate the Einstein frequency in 3D systems with steep interparticle interactions as $$\label{efreq}
\Omega_{\rm E}^2 = \frac{n}{3m}\int d{\bf r}\Delta\phi(r) g(r) \simeq \frac{\epsilon N_{\rm nn}}{3m\Delta^2}f''(1)\propto \frac{\phi''(\Delta)}{m}.$$ The celebrated Lindemann melting criterion [@Lindemann] states that melting occurs when the particle root-mean-square vibrational amplitude around the equilibrium position reaches a threshold value of about $0.1$ of the interparticle distance. Its simplest version (assuming the Einstein approximation for particle vibrations in the solid state) may be cast in the form $$\label{Lindemann}
\langle\xi^2\rangle \simeq \frac{3T}{m\Omega_{\rm E}^2} \simeq L^2\Delta^2,$$ where $L$ is the Lindemann parameter. Combining Eqs. (\[efreq\]) and (\[Lindemann\]) we immediately see that at the fluid-solid phase transition one may expect $$\label{indicator}
\frac{\phi''(\Delta)\Delta^2}{T}\simeq {\rm const}.$$ This kind of criterion was first applied to Yukawa systems, [@VaulinaJETP2000; @VaulinaPRE2002; @FortovPRL2003] in which case it works very well for $\kappa\lesssim 5$. It was also applied with some success to Lennard-Jones (LJ) systems [@KhrapakPRB2010; @KhrapakMorfJCP2011] and LJ-type systems, [@KhrapakJCP2011] where it is able to approximately predict the liquid boundary of the liquid-solid coexistence region (freezing transition). For potentials exhibiting anomalous re-entrant melting behavior, such as the exp-6 and Gaussian Core Model, the agreement with numerical data is merely qualitative and its application is limited to the low-density region. [@KhrapakMolPhys2011] From the derivation, it is expected that the freezing indicator (\[indicator\]) is more appropriate for steep interactions. Why it works so well for soft weakly screened Yukawa systems (including the OCP) remains to some extent mysterious. Note, however, that an alternative derivation of the freezing indicator (\[indicator\]) for Yukawa systems, based on the isomorph theory approach, has been recently discussed. [@VeldhorstPoP2015]
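For the Yukawa potential $\phi(r)=(Q^2/r)e^{-r/\lambda}$, the indicator (\[indicator\]) takes the explicit form $\phi''(\Delta)\Delta^2/T=\Gamma e^{-\kappa}(\kappa^2+2\kappa+2)$ with $\Gamma=Q^2/\Delta T$ in the present notation, i.e. exactly twice the modified coupling parameter $\Gamma^{*}=\Gamma(1+\kappa+\kappa^2/2)e^{-\kappa}$ often used to parametrize the Yukawa melting line (a melting value $\Gamma^{*}\simeq 106$ is commonly quoted in the literature; this connection is an observation added here, not part of the original text). A minimal sketch of the identity:

```python
import math

def indicator(gamma, kappa):
    # phi''(Delta) Delta^2 / T for phi(r) = (Q^2/r) exp(-r/lambda),
    # with Gamma = Q^2/(Delta T)
    return gamma * math.exp(-kappa) * (kappa**2 + 2*kappa + 2)

def gamma_star(gamma, kappa):
    # Modified coupling parameter Gamma* = Gamma (1 + kappa + kappa^2/2) exp(-kappa)
    return gamma * (1 + kappa + 0.5 * kappa**2) * math.exp(-kappa)
```

Since $\kappa^2+2\kappa+2=2(1+\kappa+\kappa^2/2)$, the indicator equals $2\Gamma^{*}$ identically.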
Application of this freezing indicator to 2D and 1D systems is not possible in view of the predicted divergence of $\langle \xi^2 \rangle$ in these spatial dimensions due to long-wavelengths density fluctuations. [@Landau1937; @JancoviciPRL1967]
---
abstract: 'Since the seminal work of Grossglauser and Tse [@key-1], the two-hop relay algorithm and its variants have been attractive for mobile ad hoc networks (MANETs) due to their simplicity and efficiency. However, most literature assumed an infinite buffer size for each node, which is obviously not applicable to a realistic MANET. In this paper, we focus on the exact throughput capacity study of two-hop relay MANETs under the practical finite relay buffer scenario. The arrival process and departure process of the relay queue are fully characterized, and an ergodic Markov chain-based framework is also provided. With this framework, we obtain the limiting distribution of the relay queue and derive the throughput capacity under any relay buffer size. Extensive simulation results are provided to validate our theoretical framework and explore the relationship among the throughput capacity, the relay buffer size and the number of nodes.'
author:
- |
Jia Liu$^{1\text{,2}}$, Min Sheng$^{1}$, [*Member, IEEE*]{}, Yang Xu$^{1}$, Hongguang Sun$^{1}$, Xijun Wang$^{1}$, [*Member, IEEE*]{}\
and Xiaohong Jiang$^{2}$, [*Senior Member, IEEE*]{}\
title: 'Throughput Capacity of Two-Hop Relay MANETs under Finite Buffers'
---
Mobile ad hoc networks, finite buffers, two-hop relay, throughput capacity, queuing analysis.
Introduction
============
The self-autonomous and inherently highly dynamic nature of mobile ad hoc networks (MANETs) has led to the wide adoption of routing schemes based on opportunistic transmission. Among them, the two-hop relay algorithm first proposed by Grossglauser and Tse [@key-1] and its variants [@key-2] have become an attractive class of routing protocols due to their simplicity and efficiency [@key-3]. The basic idea of this protocol is that the source node can transmit packets to the destination directly, or to a node (serving as a relay) it encounters; later, the relay node carrying the packet forwards it to the destination node. Thus, each packet travels at most two hops to reach the destination.
By now, a lot of work has been devoted to studying the throughput capacity of the two-hop relay algorithm in MANETs [@key-1; @key-2], [@key-4]-[@key-6]. Grossglauser and Tse (2002) [@key-1] studied the order sense of per node capacity and showed that it is possible to achieve a $\Theta(1)$ throughput by employing node mobility, which is a substantial improvement over the static networks considered by Gupta and Kumar (2000) [@key-7]. Neely *et al.* (2005) [@key-2] computed the exact capacity and the end-to-end queuing delay for a cell-partitioned MANET, and developed a fundamental tradeoff between throughput capacity and delay. Sharma *et al.* (2006) [@key-4] proposed a global perspective on the delay and capacity tradeoff in MANETs. They considered a general mobility model and related the nature of the delay-capacity tradeoff to the nature of node motion. The throughput capacity with packet redundancy has been studied in [@key-5], while the throughput capacity under power control has been examined in [@key-6]. For a detailed survey, please see [@key-5] and the references therein.
It is notable that all available work mentioned above assumed the node buffer is infinite. However, this assumption will not hold for a realistic MANET obviously. In [@key-8], J. Herdtner and E. Chong have studied the throughput and storage tradeoff, and they showed that the limited relay buffer will degrade the throughput capacity. Since they only provided a scaling law relationship, the exact throughput capacity of two-hop relay MANETs under finite relay buffer size remains largely unknown by now.
As a first step towards this end, in this paper, we analytically study the exact throughput capacity of two-hop relay MANETs with the consideration that the relay buffer of each node, which is used for storing other nodes’ packets, is strictly bounded. The main contributions of this paper are summarized as follows.
- Considering the source-to-relay transmission under the finite relay buffer scenario, we carefully compute the arrival rate at the relay queue. By utilizing the *occupancy probability* technique, we exactly characterize the departure process of the relay queue.
- Based on the queuing process of the relay node, a finite-state *ergodic Markov chain* model is constructed to obtain the limiting distribution of the relay queue. With this framework, we proceed to derive the exact throughput capacity under any relay buffer size.
- Extensive simulation results are provided to validate our new analysis model and explore how the throughput capacity varies with the relay buffer size and the number of nodes. The results indicate that the throughput capacity under a finite relay buffer cannot stay constant as the network size grows, which is quite different from the infinite buffer scenario.
The remainder of this paper is outlined as follows. The system models and a modified two-hop relay scheme are introduced in Section \[section:preliminaries\]. In Section \[section:throughput\], we develop the ergodic Markov chain-based framework to fully characterize the queuing process of relay nodes and proceed to obtain the exact throughput capacity. The simulation results are provided in Section \[section:simulation\]. Finally, we conclude this paper in Section \[section:conclusion\].
Preliminaries {#section:preliminaries}
=============
System Models
-------------
*Network model*: The *cell partitioned* network model [@key-2] is adopted. The network is partitioned into $C$ non-overlapping cells of equal size. $N$ mobile nodes roam from cell to cell over the network according to the independent and identically distributed (i.i.d) mobility model [@key-1; @key-2]. With the i.i.d mobility model, at the beginning of each time slot, each node selects a cell uniformly and independently over the network, then stays in this cell during this time slot. Thus, the i.i.d mobility model can be regarded as the limit of infinite mobility. Further, we assume that time is slotted with a fixed length, and during each time slot, only one node in each cell can transfer exactly one packet to another node in the same cell. Nodes located in different cells cannot communicate with each other. We assume each node has a local queue and a relay queue. The local queue is used to store the self-generated packets and there is no constraint on it, while the relay queue is used to store the packets from other nodes and its buffer size is set to $B$ (packets) [@key-8]. The reason for this buffer assumption is elaborated in [@key-9], where the ingress buffer and the internal buffer correspond to the local buffer and the relay buffer, respectively. The node model is illustrated in Fig. \[fig:queue\_structure\].
![The local queue and relay queue in a node.[]{data-label="fig:queue_structure"}](queue_structure){width="3.0in"}
*Traffic model*: We consider the traffic pattern widely adopted in previous studies [@key-2; @key-10], where $N$ is even and the source-destination pairs are composed as follows: $1\leftrightarrow2$, $3\leftrightarrow4$, $\cdots$, $(N-1)\leftrightarrow N$. Thus, there are in total $N$ distinct unicast traffic flows, each node is the source of a traffic flow and meanwhile the destination of another traffic flow. The exogenous packet arrival at each node is a Bernoulli process with rate $\lambda$ packets/slot.
A Modified Two-Hop Relay Algorithm
----------------------------------
In this section, we make a modification to the traditional two-hop relay algorithm so that it is applicable under the finite relay buffer scenario. Consider a source node that encounters a relay node and tries to send a packet: if the relay queue is full, this transmission fails, leading to packet loss and wasted energy. To avoid this phenomenon, we introduce a handshake mechanism before each source-to-relay transmission to confirm the relay buffer occupancy state; if the relay queue is not full, the transmission can be conducted, else the source node remains idle. At any time slot, each cell containing at least two nodes executes the modified two-hop relay (M2HR) algorithm, which is summarized in Algorithm \[algorithm:M2HR\].
The M2HR algorithm proceeds as follows:

- *If the cell contains at least one source-destination pair*: with equal probability, randomly choose such a pair to do a source-to-destination transmission. If the source has a new packet, it transmits the packet to the destination; otherwise, the source remains idle.

- *Otherwise*: with equal probability, randomly designate a node within the cell as the sender, and independently choose another node among the remaining nodes within the cell as the receiver. Flip an unbiased coin. If heads, the sender conducts a source-to-relay transmission with the receiver: if the sender has a packet in its local queue and the handshake confirms that the receiver's relay queue is not full, the sender transmits that packet to the receiver; otherwise, the sender remains idle. If tails, the sender conducts a relay-to-destination transmission with the receiver: if the sender holds a relay packet destined for the receiver, it transmits such a packet to the receiver; otherwise, the sender remains idle.
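The per-cell decision logic of M2HR can be condensed into a short sketch. The Python below is illustrative only; the data structures (`partner`, `local_q`, `relay_q`) are hypothetical stand-ins, not part of the original algorithm description:

```python
import random

def m2hr_cell_step(nodes, partner, local_q, relay_q, B):
    """One M2HR scheduling decision for a cell with at least two nodes.

    nodes      : ids of the nodes currently in the cell
    partner[i] : destination of node i's traffic flow
    local_q[i] : number of self-generated packets queued at node i
    relay_q[i] : list of (src, dst) packets relayed by node i (len <= B)
    Returns ('sd'|'sr'|'rd', sender, receiver) or None (idle slot).
    """
    pairs = [(s, partner[s]) for s in nodes if partner[s] in nodes]
    if pairs:                                  # source-to-destination
        s, d = random.choice(pairs)
        return ('sd', s, d) if local_q[s] > 0 else None
    snd = random.choice(nodes)
    rcv = random.choice([n for n in nodes if n != snd])
    if random.random() < 0.5:                  # coin: source-to-relay
        # handshake: transmit only if the relay queue of rcv is not full
        if local_q[snd] > 0 and len(relay_q[rcv]) < B:
            return ('sr', snd, rcv)
        return None
    # coin: relay-to-destination
    if any(dst == rcv for _, dst in relay_q[snd]):
        return ('rd', snd, rcv)
    return None
```

A full simulator would call this function once per non-empty cell per slot and then move the corresponding packets between queues.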
Throughput Capacity Analysis {#section:throughput}
============================
In this section, we first introduce some basic probabilities. Then, an *irreducible ergodic* Markov chain-based framework is established to explore the occupancy distribution on the relay queue. In order to solve the Markov chain, we further investigate the departure process of the relay queue. Based on this framework, we proceed to derive the exact throughput capacity.
Some Basic Probabilities
------------------------
For a given time slot and a particular cell, we denote by $p$ and $q$ the probability that there are at least two nodes and at least one source-destination pair in a cell, respectively. Then, the same as [@key-2], we have $$\begin{aligned}
& p=1-\left(1-\frac{1}{C}\right)^{N}-\frac{N}{C}\left(1-\frac{1}{C}\right)^{N-1}, \\
& q=1-\left(1-\frac{1}{C^{2}}\right)^{N/2}.\end{aligned}$$
Under the M2HR algorithm, we denote by $p_{sd}$, $p_{sr}$ and $p_{rd}$ the probabilities that a given node has a chance to conduct a source-to-destination transmission, source-to-relay transmission, and relay-to-destination transmission at a given time slot, respectively. Then we have $$\begin{aligned}
& p_{sd}=\frac{C}{N}q, \\
& p_{sr}=p_{rd}=\frac{C(p-q)}{2N}.\end{aligned}$$ The derivations of the probabilities $p_{sd}$, $p_{sr}$ and $p_{rd}$ are omitted here due to space limitations; please refer to Appendix B in [@key-2] for details.
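For concreteness, these probabilities are easy to evaluate numerically; a short Python sketch (the example parameters $N=72$, $C=36$ are those used in the simulation section):

```python
def basic_probs(N, C):
    """Transmission-opportunity probabilities of the cell-partitioned model."""
    p = 1 - (1 - 1/C)**N - (N/C) * (1 - 1/C)**(N - 1)
    q = 1 - (1 - 1/C**2)**(N / 2)
    p_sd = (C / N) * q
    p_sr = C * (p - q) / (2 * N)   # = p_rd
    return p, q, p_sd, p_sr
```

For $N=72$, $C=36$ this gives $p\approx 0.598$ and $q\approx 0.027$. Note also the identity $p_{sd}+2p_{sr}=Cp/N$: every transmission opportunity of a node is either a source-to-destination, source-to-relay, or relay-to-destination one.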
Analysis of the Local Queue and the Relay Queue
-----------------------------------------------
Under the M2HR algorithm, each packet experiences at most two queuing processes, i.e., the packet dispatching process at the local queue and the packet forwarding process at the relay queue (if the packet cannot be transmitted to the destination directly).
*Local Queue*: Due to the i.i.d mobility, the local queue can be represented as a Bernoulli/Bernoulli queue, where in every time slot a new packet will arrive with probability $\lambda$, and a service opportunity will arise with a corresponding probability $\mu_S$ which is determined as $$\mu_{S}(\lambda)=p_{sd}+p_{sr}\left(1-P_{B}\right), \label{eq:mu_S}$$ where $P_B$ denotes the probability that the relay queue is full. Note that the Bernoulli/Bernoulli queue is reversible [@key-11], so the output process is also a Bernoulli flow with rate $\lambda$.
*Relay Queue*: A specific packet from the output process of a local queue is transmitted to a relay node with probability $\frac{p_{sr}\left(1-P_{B}\right)}{\mu_S}$, and each of the $N-2$ relay nodes is equally likely to encounter the source; hence, for each relay node there are in total $N-2$ independent local-queue output processes that may feed its relay queue. We denote by $\tilde{\lambda}$ the packet arrival rate at a relay queue when it is not full (when a relay queue is full, its input rate is $0$); then we have $$\tilde{\lambda}\cdot(1-P_{B})+0\cdot P_{B}=(N-2)\lambda\cdot\frac{p_{sr}\left(1-P_{B}\right)}{\mu_{S}(\lambda)}/(N-2), \nonumber$$ $$\Rightarrow\tilde{\lambda}=\frac{\lambda p_{sr}}{\mu_{S}(\lambda)}.\label{eq:lambda_r}$$
We denote by $\mu_R(k)$ the service rate when the relay queue contains $k$ packets; clearly, $\mu_{R}(k)>0$ for $1\leq k\leq B$. Thus, the relay queue can be modeled as a discrete-time Markov chain, as illustrated in Fig. \[fig:state\_machine\], and the corresponding one-step transition matrix of the relay queue occupancy process is given by
![State transition diagram for the relay queue occupancy process.[]{data-label="fig:state_machine"}](state_machine){width="3.0in"}
$$\mathbf{P}\!=\!\left[
\begin{array}{ccccc}
\!1\!-\!\tilde{\lambda} \!&\! \tilde{\lambda} \!&\! \!&\! \!&\! \\
\!\ddots \!&\! \ddots \!&\! \ddots \!&\! \!&\! \\
\! \!&\! \mu_R(k) \!&\! 1\!-\!\tilde{\lambda}\!-\!\mu_R(k) \!&\! \tilde{\lambda} \!&\! \\
\! \!&\! \!&\! \ddots \!&\! \ddots \!&\! \ddots \\
\! \!&\! \!&\! \!&\! \mu_R(B) \!&\! 1\!-\!\mu_R(B)
\end{array}
\right],$$
For this Markov chain we have the following lemma.
\[lemma:ergodic\] The limiting distribution of the relay queue occupancy process exists and is unique, and is equal to the stationary distribution.
For any state $i,j\in\mathbf{S}$, $i<j$, there exist $m$ and $\tilde{m}$, such that $p_{ij}^{(m)}>0$ and $p_{ji}^{(\tilde{m})}>0$. For example, $p_{ij}^{(j-i)}=\tilde{\lambda}^{(j-i)}>0$ and $p_{ji}^{(j-i)}=\mu_{R}(j)\times\mu_{R}(j-1)\times\cdots\times\mu_{R}(i+1)>0$. Thus, any two states $i,j\in\mathbf{S}$ can *communicate* with each other, which is denoted by $i\leftrightarrow j$. This type of Markov chain is called *irreducible* (see, for example, Definition 4.1 in [@key-12] or [13, ch. 4, p. 168]), and in this Markov chain all the states are in the same class. From [@key-13] it follows that all the states are *recurrent*.
We denote by $d(i)$ the period of state $i$. Note that $p_{00}^{(1)}=1-\tilde{\lambda}$, $p_{BB}^{(1)}=1-\mu_{R}(B)$, and $p_{kk}^{(1)}=1-\tilde{\lambda}-\mu_{R}(k)$, for $0<k<B$. Thus, for any state $i\in\mathbf{S}$, we have $d(i)=1$. The state $i$ and the Markov chain are called *aperiodic* [@key-12]. Since all the states are *recurrent* and *aperiodic*, the relay queue occupancy process is an *irreducible* *ergodic* Markov chain. Referring to [12, ch. 5], the limiting distribution of the relay queue occupancy process exists and is unique, and is equal to the stationary distribution.
We use $\Pi=\left\{ \pi(0),\pi(1),\cdots\pi(B)\right\} $ to denote the limiting distribution of the relay queue. By lemma \[lemma:ergodic\] we have $$\Pi\cdot\mathbf{P}=\Pi,$$ $$\Rightarrow\begin{cases}
\tilde{\lambda}\pi(0)=\mu_{R}(1)\pi(1),\\
\tilde{\lambda}\pi(1)=\mu_{R}(2)\pi(2),\\
...\\
\tilde{\lambda}\pi(B-1)=\mu_{R}(B)\pi(B).
\end{cases}$$ Combining with the normalization equation $\sum_{k=0}^{B}\pi(k)=1$, the limiting distribution of the relay queue is given by $$\begin{aligned}
& \pi(0)=\left(1+\sum_{j=1}^{B}\frac{\tilde{\lambda}^{j}}{\mu_{R}(j)!}\right)^{-1}, \label{eq:pi_0} \\
& \pi(k)=\frac{\tilde{\lambda}^{k}}{\mu_{R}(k)!}\left(1+\sum_{j=1}^{B}\frac{\tilde{\lambda}^{j}}{\mu_{R}(j)!}\right)^{-1}, \label{eq:pi_k}\end{aligned}$$ where $0<k\leq B$, $\mu_{R}(k)!=\mu_{R}(k)\times\mu_{R}(k-1)\times\cdots\times\mu_{R}(1)$. From (\[eq:pi\_0\]) and (\[eq:pi\_k\]), we can see that in order to derive the limiting distribution of the relay queue, we need to compute the service rate $\mu_R(k)$ when the relay queue is in state $k$.
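The closed-form limiting distribution can be cross-checked against a direct numerical computation for the birth-death chain of Fig. \[fig:state\_machine\]; a small sketch with arbitrary illustrative rates:

```python
def limiting_distribution(lam_t, mu):
    """Closed-form limiting distribution; mu = [mu_R(1), ..., mu_R(B)]."""
    w, cum = [1.0], 1.0
    for m in mu:
        cum *= lam_t / m          # detailed balance: lam_t*pi(k-1) = mu_R(k)*pi(k)
        w.append(cum)
    s = sum(w)
    return [x / s for x in w]

def stationary_by_iteration(lam_t, mu, iters=20000):
    """Power-iterate pi <- pi P on the one-step transition matrix."""
    B = len(mu)
    pi = [1.0 / (B + 1)] * (B + 1)
    for _ in range(iters):
        new = [0.0] * (B + 1)
        for k in range(B + 1):
            up = lam_t if k < B else 0.0
            down = mu[k - 1] if k > 0 else 0.0   # mu[k-1] is mu_R(k)
            if k < B:
                new[k + 1] += pi[k] * up
            if k > 0:
                new[k - 1] += pi[k] * down
            new[k] += pi[k] * (1.0 - up - down)
        pi = new
    return pi
```

Both computations agree to numerical precision, confirming the ergodicity argument of the lemma.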
Computation of the Service Rate at Relay Queue
----------------------------------------------
We denote by $p_{k}^{(i)}$ the probability that the relay queue has $k$ packets and these packets are destined for $i$ different destination nodes, $1\leq i\leq k$. Due to the i.i.d mobility model, we have $$\mu_{R}(k)=\sum_{i=1}^{k}p_{k}^{(i)}\cdot i\cdot\frac{p_{rd}}{N-2}. \label{eq:mu_R_k}$$
From (\[eq:mu\_R\_k\]), it is clear that in order to compute $\mu_{R}(k)$, we need to derive $p_{k}^{(i)}$. To address this issue, we utilize the *occupancy* technique (see, for example, [14, ch. 1]). We represent the packets by 'stars' and the $N-2$ destination nodes by the spaces between $N-1$ 'bars'. For example, Fig. \[fig:occupancy\] represents the case where one packet is destined for the first node, no packet is destined for the second and third nodes, and two packets are destined for the fourth.
![Examples of occupancy.[]{data-label="fig:occupancy"}](occupancy){width="3.0in"}
Notice that representing $N-2$ nodes requires $N-1$ 'bars'; since the first and last symbols must be 'bars', only $N-3$ 'bars' and $k$ 'stars' can appear in any order. The number of all possible permutations $E_{k}$ is given by $$E_{k}=\binom{N-3+k}{k}.$$ Now consider the case where these packets are destined for exactly $i$ different destination nodes. The number of such permutations $E_{k}^{(i)}$ is given by $$E_{k}^{(i)}=\binom{N-2}{i}\cdot \binom{i-1+k-i}{k-i}.$$ By classical probability, we have $$p_k^{(i)}=\frac{E_{k}^{(i)}}{E_{k}}=\frac{\binom{N-2}{i} \cdot \binom{k-1}{k-i}}{\binom{N-3+k}{k}}.\label{eq:p_k^i}$$ Substituting (\[eq:p\_k\^i\]) into (\[eq:mu\_R\_k\]) we have $$\mu_R(k)=p_{rd}\cdot\left\{ \sum\limits_{i=1}^k \frac{\binom{N-2}{i} \cdot \binom{k-1}{k-i}}{\binom{N-3+k}{k}} \cdot \frac{i}{N-2}\right\} .\label{eq:mu_r_k_1}$$
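Equation (\[eq:p\_k\^i\]) and the resulting service rate (\[eq:mu\_r\_k\_1\]) are easy to check numerically. In fact, the summation collapses to the compact form $\mu_R(k)=p_{rd}\,k/(N-3+k)$, since the expected number of distinct destinations under this occupancy distribution is $(N-2)k/(N-3+k)$; this simplification is derived here as a check, not stated in the text. A short sketch:

```python
from math import comb

def p_k_i(N, k, i):
    """Probability that k relay packets target exactly i distinct destinations."""
    return comb(N - 2, i) * comb(k - 1, k - i) / comb(N - 3 + k, k)

def mu_R(N, k, p_rd):
    """Service rate of a relay queue holding k packets, Eq. (mu_r_k_1)."""
    return p_rd * sum(p_k_i(N, k, i) * i / (N - 2) for i in range(1, k + 1))
```

The probabilities sum to one by the Vandermonde identity, and the full sum matches the compact form for all tested $(N,k)$.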
Throughput Capacity
-------------------
*Definition: Throughput Capacity*: For a MANET under M2HR and with a given packet arrival rate $\lambda$, the network is called stable if the queue length in each node (thus the average delay) is bounded. The throughput capacity $T_{c}$ of the network is then defined as the maximum value of $\lambda$ the network can stably support.
For a MANET with finite relay buffers, the throughput capacity under M2HR satisfies $$T_{c}=max\{\lambda:\lambda<\mu_{S}(\lambda)\}.$$
It is notable that $P_{B}=\pi(B)$; combining (\[eq:mu\_S\]), (\[eq:lambda\_r\]) and (\[eq:pi\_k\]) then yields an equation containing the single unknown quantity $P_{B}$. Thus, given any packet arrival rate $\lambda$, by solving this equation we can obtain $P_{B}$, and proceed to derive $\mu_{S}(\lambda)$ by (\[eq:mu\_S\]).
Since the relay buffer size is strictly bounded by $B$ (packets), then no matter what the input rate $\lambda$ is, the relay queue is always stable. For $\lambda\in\{\lambda:\lambda<\mu_{S}(\lambda)\}$, the average delay in the local queue is given by [@key-11] $$E\left[D_{S}\right]=\frac{1-\lambda}{\mu_{S}(\lambda)-\lambda}<\infty,$$ thus the network is stable. When $\lambda\notin\{\lambda:\lambda<\mu_{S}(\lambda)\}$, for the Bernoulli/Bernoulli queue, the queue length will tend to infinity. Thus, the network cannot support the input rate stably. Since $T_{c}=max\{\lambda:\lambda<\mu_{S}(\lambda)\}$, then $T_{c}$ is the throughput capacity of the network.
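Putting the pieces together, $T_c$ can be computed numerically: for a trial $\lambda$, solve the fixed point for $P_B$ via (\[eq:mu\_S\]), (\[eq:lambda\_r\]) and (\[eq:pi\_k\]), then bisect on the stability condition $\lambda<\mu_{S}(\lambda)$. The sketch below is one possible implementation of this procedure (the bisection/fixed-point scheme itself is not prescribed by the text):

```python
from math import comb

def throughput_capacity(N, C, B, tol=1e-7):
    """T_c = max{lambda : lambda < mu_S(lambda)} for the M2HR network."""
    p = 1 - (1 - 1/C)**N - (N/C) * (1 - 1/C)**(N - 1)
    q = 1 - (1 - 1/C**2)**(N / 2)
    p_sd = (C / N) * q
    p_sr = p_rd = C * (p - q) / (2 * N)
    # Service rates mu_R(1..B) from the occupancy distribution, Eq. (mu_r_k_1)
    mu = [p_rd * sum(comb(N-2, i) * comb(k-1, k-i) / comb(N-3+k, k) * i / (N-2)
                     for i in range(1, k + 1)) for k in range(1, B + 1)]

    def blocking(lam_t):                  # P_B = pi(B) from Eq. (pi_k)
        w, cum = [1.0], 1.0
        for m in mu:
            cum *= lam_t / m
            w.append(cum)
        return w[-1] / sum(w)

    def mu_S(lam):                        # fixed point on P_B
        pb = 0.0
        for _ in range(500):
            pb = blocking(lam * p_sr / (p_sd + p_sr * (1 - pb)))
        return p_sd + p_sr * (1 - pb)

    lo, hi = 0.0, p_sd + p_sr             # mu_S can never exceed p_sd + p_sr
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid < mu_S(mid) else (lo, mid)
    return lo
```

For $N=72$, $C=36$ this reproduces $T_c\approx 0.0232$ ($B=5$) and $T_c\approx 0.0315$ ($B=10$), matching the values reported in Sec. \[section:simulation\].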
Simulation {#section:simulation}
==========
In this section, we first provide simulation results to validate our theoretical framework, and then proceed to explore how the throughput capacity is influenced by the network parameters.
Simulation Setting
------------------
For model validation, a simulator in C++ was developed to simulate the packet delivery process in the concerned MANET. We focus on a specific node and count its received packets over a period of $2\times10^{8}$ time slots to calculate the time-averaged throughput. Besides the i.i.d mobility model, another, more realistic mobility model, the random walk model, was also implemented in the simulator. With the random walk mobility model, at the beginning of each time slot, each node selects a cell among its current cell and its $8$ adjacent cells with equal probability, then stays in it during this time slot.
Model Validation
----------------
![Service rate $\mu_{S}$ of the local queue vs. packet generation rate $\lambda$.[]{data-label="fig:lambda_vs_mu"}](lambda_vs_mu){width="3in"}
![Per node’s throughput performance.[]{data-label="fig:throughput"}](n72b5a0_5th){width="3in"}
We fix the node number $N=72$ and the cell number $C=36$, and consider three cases: $B=5$, $B=8$ and $B=10$. We increase the packet generation rate $\lambda$ step by step and calculate the service rate $\mu_{S}$ based on the analytical framework provided. The corresponding results are summarized in Fig. \[fig:lambda\_vs\_mu\]. We can see that in all three cases, as $\lambda$ increases, the service rate $\mu_{S}$ monotonically decreases (note that in the infinite buffer scenario, the service rate of the local queue does not depend on $\lambda$ [@key-2; @key-15]), and the curve $\mu_{S}=\mu_{S}(\lambda)$ intersects the line $\mu_{S}=\lambda$. The intersection points are the exact values of the throughput capacity for $B=5$, $B=8$ and $B=10$, respectively: $T_{c}|_{B=5}=0.0232$, $T_{c}|_{B=8}=0.0283$ and $T_{c}|_{B=10}=0.0315$. This indicates that a larger relay buffer size leads to a higher throughput capacity.
To validate our theoretical results, we summarize the simulated throughput performance of the network scenario $(N=72,C=36,B=5)$ in Fig. \[fig:throughput\], where the throughput capacity $T_{c}$ is obtained from Fig. \[fig:lambda\_vs\_mu\] and the system load $\rho$ is defined as the ratio between $\lambda$ and $T_{c}$. We can see that the simulated throughput increases linearly with $\rho$ until $\rho=1$; when $\rho>1$, the throughput no longer grows and stays constant, which is consistent with our theoretical analysis. It is interesting to observe from Fig. \[fig:throughput\] that the performance behavior under the random walk mobility model is very similar to that under the i.i.d. mobility model. As shown in [@key-2], for a cell-partitioned MANET, the throughput capacity under the i.i.d. model is identical to those under non-i.i.d. models, provided these models share the same steady-state distribution.
Performance Analysis
--------------------
![Throughput capacity $T_{c}$ vs. relay buffer size $B$.[]{data-label="fig:th_vs_B"}](th_vs_B){width="3in"}
![Throughput capacity $T_{c}$ vs. number of nodes $N$.[]{data-label="fig:th_vs_N"}](th_vs_N){width="3in"}
Fig. \[fig:th\_vs\_B\] shows the relationship between the throughput capacity $T_{c}$ and the relay buffer size $B$ under different settings of the node number, where the ratio between $N$ and $C$ is fixed at 2. From Fig. \[fig:th\_vs\_B\], we can see that in all cases $T_{c}$ monotonically increases with $B$, which indicates that a MANET indeed requires a sufficient relay buffer size to guarantee its throughput performance. A further careful observation is that when the buffer size is large, the growth of the throughput capacity flattens: once the buffer is large enough, increasing it further yields little performance gain. This provides a practical guideline: an appropriately chosen buffer size ensures the network performance while saving cost.
We proceed to explore the relationship between $T_{c}$ and $N$, where $N/C$ is fixed at 2. The corresponding results are presented in Fig. \[fig:th\_vs\_N\]. We can see that in both cases, $T_{c}$ monotonically decreases as $N$ increases, and vanishes as $N$ tends to infinity. Notably, these results are quite different from the throughput capacity in the infinite buffer scenario, where $T_{c}$ remains constant (about 0.14 packets/slot) as the network size increases [@key-1; @key-2]. This indicates that the throughput capacity cannot be sustained by exploiting node mobility alone; the relay buffer of each node must also grow with the network size.
Conclusions {#section:conclusion}
===========
In this paper, we focused on the throughput capacity of MANETs under the finite buffer scenario. A modified two-hop relay routing algorithm has been proposed. We have provided an ergodic Markov chain-based framework to fully characterize the queuing process of relay nodes, and further derived the exact throughput capacity. Extensive simulations have been conducted to verify the efficiency of the new theoretical framework and to show how $T_{c}$ depends on $B$ and $N$. The results indicate that the throughput capacity cannot stay constant under the finite relay buffer scenario, and that increasing the relay buffer size improves the throughput capacity.
[10]{}
M. Grossglauser and D. N. C. Tse, “Mobility increases the capacity of ad hoc wireless networks,” *IEEE/ACM Trans. Networking*, vol. 10, no. 4, pp. 477-486, Aug. 2002.
M. J. Neely and E. Modiano, “Capacity and delay tradeoffs for ad hoc mobile networks,” *IEEE Trans. Inf. Theory*, vol. 51, no. 6, pp. 1917-1937, Jun. 2005.
A. A. Hanbali, M. Ibrahim, V. Simon, E. Varga, and I. Carreras, “A survey of message diffusion protocols in mobile ad hoc networks,” in *Proc. ValueTools*, 2008, Article no. 82.
G. Sharma, R. R. Mazumdar, and N. B. Shroff, “Delay and capacity trade-offs in mobile ad hoc networks: A global perspective,” in *Proc. IEEE INFOCOM*, Barcelona, Spain, Apr. 2006.
J. Liu, X. Jiang, H. Nishiyama, and N. Kato, “Delay and capacity in ad hoc mobile networks with *f*-cast relay algorithms,” *IEEE Trans. Wireless Commun.*, vol. 10, no. 8, pp. 2738-2751, Aug. 2011.
J. Liu, X. Jiang, H. Nishiyama, and N. Kato, “Exact throughput capacity under power control in mobile ad hoc networks,” in *INFOCOM*, 2012, pp. 1-9.
P. Gupta and P. R. Kumar, “The capacity of wireless networks,” *IEEE Trans. Inf. Theory*, vol. 46, pp. 388-404, Mar. 2000.
J. Herdtner and E. Chong, “Throughput-storage tradeoff in ad hoc networks,” in *Proc. of IEEE INFOCOM*, Miami, FL, Mar. 2005.
L. B. Le, E. Modiano, and N. B. Shroff, “Optimal control of wireless networks with finite buffers”, in *Proc. IEEE INFOCOM*, April 2010, pp. 1-9.
R. Urgaonkar and M. J. Neely, “Network capacity region and minimum energy function for a delay-tolerant mobile ad hoc network,” *IEEE/ACM Transactions on Networking*, vol. 19, no. 4, pp. 1137-1150, August 2011.
H. Daduna, *Queueing Networks with Discrete Time Scale: Explicit Expressions for the Steady State Behavior of Discrete Time Stochastic Networks*.$\quad$New York: Springer-Verlag, 2001.
O. Haggstrom, *Finite Markov Chains and Algorithmic Applications*.$\quad$Cambridge, U.K.: Cambridge University Press, 2002.
S. M. Ross, *Stochastic Processes*.$\quad$New York: Wiley, 1996.
H. Stark and J. W. Woods, *Probability and Random Processes with Applications to Signal Processing*, third ed. Prentice Hall, 2001.
J. Tao, J. Liu, X. Jiang, O. Takahashi and N. Shiratori, “Throughput capacity of MANETs with group-based scheduling and general transmission range”, *IEICE Trans. Commun.*, pp. 1791-1802, 2013.
---
abstract: 'Motivated by the recent rapid development of the field of quantum gases in optical lattices, we present a comprehensive study of the spectrum of ultracold atoms in a one-dimensional optical lattice subjected to a periodic lattice modulation. Using the time-dependent density matrix renormalization group method, we study the dynamical response due to lattice modulations in different quantum phases of the system with varying density. For the Mott insulating state, we identify several excitation processes, which provide important information about the density profile of the gases. For the superfluid, the dynamical response can be well described in a local density approximation. This simplification can be valuable in understanding the strong-correlated superfluid in a slow-varying harmonic potential. All these spectroscopic features of an inhomogeneous system can be used as a test for the validity of the Bose-Hubbard model in a parabolic trapping potential.'
author:
- 'Jia-Wei Huo'
- 'Fu-Chun Zhang'
- Weiqiang Chen
- 'M. Troyer'
- 'U. Schollwöck'
bibliography:
- 'mybib.bib'
title: Trapped Ultracold Bosons in Periodically Modulated Lattices
---
Introduction
============
Following the rapid development of experimental techniques for the manipulation and detection of dilute ultracold atom gases, a wide range of fundamental quantum many-body phenomena have been observed. Specifically, due to techniques including Feshbach resonances[@Inouye1998] and optical lattices[@Orzel2001], bosonic systems loaded into a periodic lattice described by the Bose-Hubbard model[@Fisher1989; @Jaksch1998; @Giamarchi1987; @Giamarchi1988] have been experimentally accessible both in the weakly and strongly interacting regimes with highly controllable parameters, allowing for example the observation of the superfluid to Mott-insulator phase transition driven by quantum fluctuations[@Greiner2002]. This achievement has provided a new platform to study quantum many-body physics by virtue of the high degree of control and tuning available[@Bloch2008].
Particularly rich quantum physics is to be expected in the context of quantum many-body physics far from equilibrium, but this remains largely unexplored at the moment. One main experimental difficulty lies in the limited number of measurement techniques available for strongly correlated systems. In this paper, we will focus on one particular technique, the periodic lattice modulation approach by Stöferle [*et al.*]{}[@Stoferle2004], and point the way to the extraction of additional theoretical information from the raw data. They developed this technique to study the excitation spectrum of a bosonic system in an optical lattice. It acts as a probe with a specific frequency on the ultracold bosons and can be used to reveal the excitation spectrum of the system. More recently, this technique has been widely used in the dynamical control[@Chen2011] and the realization of quantum phase transitions[@Eckardt2005; @Zenesini2009; @Struck2011] in optical lattices.
Previous theoretical and numerical studies on this experiment have shown how to extract important information about the system. Theoretically, the technique has been studied via perturbative methods in two limits, by a linear response analysis[@Reischl2005; @Iucci2006] in the Mott insulating phase and by solving the Gross-Pitaevskii equation in the superfluid regime[@Kramer2005]. A drawback of these perturbative methods is that they cannot be used to deal with the whole interaction regime. Numerically, the time-dependent density-matrix renormalization group technique (t-DMRG) has been applied to simulate the experimental setup in a quasi-exact fashion[@Kollath2006; @Clark2006; @Poletti2011]. Basic features seen in the experiment could be reproduced successfully.
Although these studies have opened a window on the understanding of the experimental observation[@Stoferle2004], further theoretical and numerical questions about the excitation spectroscopy arise, mainly for two reasons. First, ultracold-atom experiments are carried out in a harmonic trapping potential. Although it has been pointed out that this induces spectral broadening and a shifting of peaks, a detailed and quantitative study of this issue is still lacking; such a study is important to assess whether the presence of the trap qualitatively or quantitatively changes the behavior of the homogeneous system. Second, a direct comparison to the experimental observation, which averages over many 1D systems with different particle numbers and densities per 1D tube, is not satisfactory unless the dependence on the density is taken into account. Therefore, building upon previous studies, this paper focuses on extracting new information from the spectrum due to the harmonic confinement and different densities.
The Bose-Hubbard model {#sec:model}
======================
Ultracold bosons in an optical lattice can be described by the Bose-Hubbard model[@Fisher1989; @Jaksch1998] $$\hat{H}=-J\sum_{j}(\hat{b}^\dag_{j}\hat{b}^{}_{j+1}+\text{H.c.})+\frac{U}{2}\sum_{j}\hat{n}^{}_{j}(\hat{n}^{}_{j}-1)+\sum_{j}V_j\hat{n}_{j},$$ where $\hat{b}_j$ and $\hat{n}_j$ are the annihilation and number operators on site $j$, respectively. The first term describes the hopping process between nearest neighbours, while the second one depicts the on-site interaction. The last term models the harmonic trapping potential $V_j = V_t(j-j_0)^2$ with $V_t$ the curvature and $j_0$ the center of the system. In this paper, we are only interested in the absorption spectrum of ultracold bosons in a 1D optical lattice. We assume that all 1D optical lattices are directed along the $x$-direction, and denote by $V_x$ and $V_{\bot}$ the laser strength along the $x$-direction and $yz$-directions, respectively. For deep lattices, the hopping matrix element $J$ and on-site interaction $U$ can be approximated as[@Zwerger2003; @Kollath2006] $$\begin{aligned}
\frac{J}{E_r}&=&\frac{4}{\sqrt{\pi}}\left(\frac{V_x}{E_r}\right)^{\frac{3}{4}}\exp\left(-2\sqrt{\frac{V_x}{E_r}}\right)\end{aligned}$$ and $$\begin{aligned}
\frac{U}{E_r}&=&4\sqrt{2\pi}\frac{a_s}{\lambda}\sqrt[4]{\frac{V_xV^2_\bot}{E^3_r}},\end{aligned}$$ where $E_r$ is the recoil energy, $a_s$ is the s-wave scattering length, and $\lambda$ is the wavelength of the laser forming the optical lattice. The parameters used in the calculations are $a_s\!=\!5.45$ nm, $\lambda\!=\!825$ nm, $V_{\bot}\!=\!30E_r$. In order to investigate the absorption spectrum, one applies a sinusoidal modulation of the $x$-direction laser strength $V_x$ starting at $t = 0$ with frequency $\omega$ and amplitude $\delta V$, [*i.e.*]{}, $V_x(t\!=\!0)=V_0$ and $V_x(t\!>\!0)=V_0+\delta V\sin\omega t$[^1], and measures the energy absorbed by the system. The absorbed energy is strongly frequency-dependent and gives information about the excitation spectrum.
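The deep-lattice expressions above fix $J$ and $U$ once the lattice depths are chosen. The snippet below evaluates them (in units of $E_r$) for the two depths used later, $V_0=15E_r$ in the Mott calculations and $V_0=4E_r$ in the superfluid ones; the printed ratios are only as accurate as the approximate formulas themselves.

```python
import math

# J/E_r and U/E_r from the deep-lattice approximations, with the
# parameters quoted in the text: a_s = 5.45 nm, lambda = 825 nm,
# V_perp = 30 E_r. Lattice depths vx are in units of E_r.

A_S, LAM, V_PERP = 5.45, 825.0, 30.0   # a_s, lambda in nm; V_perp in E_r

def hopping_J(vx):
    return (4.0 / math.sqrt(math.pi)) * vx**0.75 * math.exp(-2.0 * math.sqrt(vx))

def onsite_U(vx):
    return 4.0 * math.sqrt(2.0 * math.pi) * (A_S / LAM) * (vx * V_PERP**2) ** 0.25

for vx in (15.0, 4.0):   # depths used in the Mott and superfluid sections
    J, U = hopping_J(vx), onsite_U(vx)
    print(f"V_x = {vx:>4} E_r:  J = {J:.4f} E_r,  U = {U:.4f} E_r,  U/J = {U/J:.1f}")
```

At $V_x=15E_r$ one finds $U/J\sim 10^2$ (deep in the Mott regime), while $V_x=4E_r$ gives $U/J\approx 4$-$5$, consistent with the superfluid parameters quoted below.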
Methods {#sec:method}
=======
Time-dependent perturbation (t-perturbation) {#subsec:perturbation}
--------------------------------------------
If the modulation amplitude $\delta V$ is small and the system is in a deep Mott-insulating state ($J\!\ll\!U$), the system can be understood within the framework of time-dependent perturbation theory where the Hamiltonian reads $$\begin{aligned}
\hat{H}[J(t),U(t)]\approx\hat{H}_0+\hat{H}'(t),\end{aligned}$$ where $\hat{H}_0\!\!\equiv\!\!\hat{H}(t\!\!=\!\!0)$. (For brevity $U_0\!\!\equiv\!\!U(t\!\!=\!\!0)$ and $J_0\!\!\equiv\!\!J(t\!\!=\!\!0)$ are used hereafter.) To keep the only term contributing to the excitations, we make a transformation[@Reischl2005] $$\begin{aligned}
\hat{H}'(t)\rightarrow\widetilde{\hat{H}'}(t)=\hat{H}'(t)-\frac{\delta U}{U_0}H_0.\end{aligned}$$ By further neglecting the time-independent term, we have $$\widetilde{\hat{H}'}(t)\!=\!-F_J\sin\omega t\sum_{j=1}^{L-1}(\hat{b}^\dag_{j}\hat{b}^{}_{j+1}+\text{H.c.}),$$ with the coupling constant $$F_J=\left.\left(\frac{\partial\ln J}{\partial V_x}-\frac{\partial\ln U}{\partial V_x}\right)\right|_{V_x=V_0}J_0\delta V.$$ This coupling constant has been shown to be valid in the large $U$-limit[@Iucci2006].
In standard time-dependent perturbation theory, the transition probability is given by $$\begin{gathered}
W_{mn}(t)=\frac{|H_{mn}'|^2}{4\hbar^2}\left|\frac{1-e^{i(\omega_{mn}+\omega)t}}{\omega_{mn}+\omega}\right.\\
\left.-\frac{1-e^{i(\omega_{mn}-\omega)t}}{\omega_{mn}-\omega}\right|^2.\end{gathered}$$ Here $H_{mn}'$ is the matrix element of $\hat{H}'$ between two eigenstates $m$ and $n$ of the unperturbed Hamiltonian $\hat{H}_0$. So the energy absorbed is $$\Delta E(t)=\sum_{m}\hbar\omega_{m0}W_{m0}(t).$$
t-DMRG {#subsec:tdmrg}
------
Numerically, we use the t-DMRG method to study the time evolution of the system[@Daley2004; @White2004; @Schollwock2011]. This method is a quasi-exact algorithm which allows for simulating real time evolutions of 1D quantum many-body systems, which operates on a class of matrix product states[@Vidal2003; @Vidal2004; @Verstraete2004]. To begin with, a conventional finite-system DMRG algorithm is used to determine the ground state, $|\psi(t=0)\rangle$, of the Hamiltonian at time $t=0$, $\hat{H}(t=0)$ for a system with $L$ sites and $N$ bosons. Then a full time evolution of the quantum state, $|\psi(t)\rangle$, is calculated with the t-DMRG algorithm based on a Trotter decomposition of time steps. We keep up to 200 states in the reduced Hilbert space in the algorithm. In order to reduce the error from the Trotter decomposition, we use a linear fit to extrapolate the results to Trotter time steps $\delta t\rightarrow 0$. Convergence in the number of states kept has also been checked, such that on the time scales simulated the results are quasi-exact.
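The $\delta t\rightarrow 0$ extrapolation mentioned above amounts to a linear fit of the observable against the Trotter step. A minimal illustration with synthetic data, whose Trotter error is taken to be exactly linear in $\delta t$ (the leading behavior assumed by the fit):

```python
import numpy as np

# Linear extrapolation of a t-DMRG observable to Trotter step dt -> 0.
# Synthetic data: true value 1.0 plus an error linear in dt, standing in
# for results computed at several step sizes.
dts = np.array([0.1, 0.05, 0.025, 0.0125])
vals = 1.0 + 0.3 * dts           # observable measured at each dt

slope, intercept = np.polyfit(dts, vals, 1)
extrapolated = intercept          # fitted value at dt = 0
```

In practice the fit also gives a rough error bar: the spread of the data around the fitted line indicates how well the linear-in-$\delta t$ assumption holds.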
The full time dependence of the total energy reads $$\begin{aligned}
E(t)&=&\langle\psi(t)|\hat{H}(t)|\psi(t)\rangle \nonumber \\
&\approx&\langle\psi(t)|\hat{H}_0|\psi(t)\rangle+\langle\psi(t)|\hat{H}'(t)|\psi(t)\rangle.\end{aligned}$$ The main contribution to the energy transfer to the system comes from the first term as the time average of $H'(t)\propto\sin\omega t$ vanishes. Thus, to get the absorption spectrum, we calculate the energy absorbed up to a given time $t_m$, $$\Delta E=\langle\psi(t=t_m)|\hat{H}(t=0)|\psi(t=t_m)\rangle-E_0,$$ where $E_0$ is the ground state energy[@Clark2006]. This method is essentially equivalent to fitting the full time dependence of the energy curve[@Kollath2006].
Results {#sec:results}
=======
Absorption spectroscopy with Mott domains {#subsec:mott}
-----------------------------------------
The Mott phase in the Bose-Hubbard model is characterized by short-ranged exponentially decaying correlations and commensurate filling. In a trapped system, there is no homogeneous Mott phase because of the harmonic trap. But for large enough $U$ and suitable particle filling, the system may still have one or several Mott domains separated by superfluid domains in space[@Kollath2006; @Clark2006]. The number of Mott domains and the filling in those domains depend on the average density of the system. In the following, we will study the absorption spectroscopy in the presence of Mott domains with varying densities.
For the low density case, we consider a system with 12 bosons in a deep optical lattice with $V_0=15E_r$ and 36 sites. The curvature of the trapping potential is $V_t=0.0123 E_r\approx 0.017 U_0$. The density profile of the system is shown in the inset of Fig. , where there is only one Mott domain with unit filling in the center of the system. Then we consider the absorption spectrum which is measured at time $t_m = 100\hbar/E_r$ and depicted in Fig. . In a homogeneous Mott phase, the absorption spectrum is highlighted by a sharp peak at energy $U$[@Kollath2006]. This resonance, corresponding to a particle-hole excitation, is sharp since the excited states are almost degenerate. However, the situation is different after a trapping potential is applied. The trap introduces a difference in potential energy between two neighboring sites $$\label{eqn:diff}
\Delta V(j,j\pm 1)=V_{t}[1 \pm 2(j-j_0)].$$ As a rough estimate, the particle-hole excitation energy from site $j$ to $j+1$, as shown in Fig. , will deviate from $U$ by $\Delta V(j,j+1)$. The width of the peak is expected to be determined by the potential difference at the edges of the domain, where the potential energy is maximized. According to the density profile shown in the inset of Fig. , the edges are at site 13 and site 24, where the potential difference is $0.17U_0$. So the estimated width of the $1U$ peak is $0.34U_0$, which coincides very well with our numerical results in Fig. .
Another new feature in the spectrum is the small peak at $\hbar\omega \approx 0.21U_0$ which corresponds to the particle-hole excitation where a particle hops to an empty site as shown in Fig. . This stems mainly from hopping from site 13 to site 12 and site 24 to site 25, where the potential difference is $0.204 U_0$, in excellent agreement with numerics.
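The two numbers quoted above follow directly from Eq. (\[eqn:diff\]). The check below evaluates the trap-energy difference for the relevant hops, with $V_t=0.017U_0$ and, as an inference from the 36-site system, trap center $j_0=18.5$ (sites numbered $1,\dots,36$):

```python
# Potential-energy differences from Eq. (eqn:diff) for the 36-site trap,
# V_t = 0.017 U_0, trap center j0 = 18.5 (inferred from L = 36 sites).
VT, J0 = 0.017, 18.5   # V_t in units of U_0

def dV(j, jp):
    """Potential difference V(jp) - V(j) for a hop from site j to site jp."""
    return VT * ((jp - J0) ** 2 - (j - J0) ** 2)

edge_in = dV(23, 24)    # particle-hole hop at the Mott-domain edge: 0.17 U_0
edge_out = dV(13, 12)   # hop onto the empty site outside the domain: 0.204 U_0
```

`edge_in` reproduces the $0.17U_0$ edge deviation (hence the $0.34U_0$ width of the $1U$ peak), and `edge_out` the $0.204U_0$ position of the small peak; `dV(24, 25)` gives the same $0.204U_0$ for the hop at the other edge.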
As the particle number $N$ increases, there will be several Mott domains with different fillings, as shown in the inset of Fig. , where the particle number is 36 and the other parameters are the same as in the previous case. Besides the broadened $1U$ peak, there are two more peaks: one centered at $0.25U_0$ and another centered at $1.75U_0$. The former corresponds to the hopping of bosons between two Mott domains with different fillings at the domain boundaries, as depicted in Fig. , and its frequency is determined by the potential energy difference $\Delta V(12,11)=\Delta V(25,26)=0.25U_0$.
At first glance, the peak at $1.75 U_0$ originates from the hopping of bosons from the unit-filled Mott domain into the doubly occupied region. This is qualitatively but not quantitatively accurate. If there were no external potential, the excitation energy would be exactly $2U$, indicating the non-unit filling of the system[@Kollath2006]. However, due to the parabolic potential, one has to take into account the difference $\Delta V$ of the potential energy of the two sites involved in the hopping process, as shown in Fig. . Thus, the position of the excitation peak should be $\hbar\omega=2U-\Delta V$. The excitations in Fig. and Fig. involve exactly the same two sites in the system, so we have $\Delta V = 0.25 U_0$, which is the peak energy analyzed above. The correct position should therefore be at $1.75 U_0$, which is exactly the result of our numerical calculations. This shift has also been observed by Stöferle *et al.*, who reported a peak at about $1.9U$ in the Mott phase[@Stoferle2004].
An interesting consequence of these observations is that by combining the broadening effect of the $1U$ peak and the shift of the $2U$ peak, one can directly determine $U$. In the case considered here, we find $$\label{eqn:appx}
U\approx\frac{1}{2}(\frac{W_1}{2}+U_2),$$ with $W_1$ the width of the $1U$ peak and $U_2$ the position of the $2U$ peak. This formula works because the broadening and shifting effect are caused by almost the same chemical potential difference $\Delta V$. This is useful in the calibration of the on-site interaction parameter $U$.
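Plugging the spectral features read off above into Eq. (\[eqn:appx\]) shows how the calibration works in practice. With $W_1=0.34U_0$ (the $1U$ peak width) and $U_2=1.75U_0$ (the shifted $2U$ peak position), the estimate recovers $U$ to within a few percent; the residual error comes from the slight difference between the broadening $\Delta V$ ($0.17U_0$) and the shifting $\Delta V$ ($0.25U_0$):

```python
# Calibration of U from Eq. (eqn:appx), using the spectral features
# quoted in the text (all energies in units of U_0):
W1 = 0.34   # width of the 1U peak
U2 = 1.75   # position of the shifted 2U peak

U_est = 0.5 * (W1 / 2.0 + U2)   # -> 0.96 U_0, close to the true U_0
```

No other fit parameters enter: both inputs are read directly off the absorption spectrum.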
In order to further corroborate the connection between the energy shift and the density profile, we performed an additional calculation in a Mott system involving triple occupancies shown in Fig. . In this high-density system, there exist particle-hole excitations at the various Mott domain walls. Since the chemical potential difference is spatially varying, the shifts away from $2U$ are different. Therefore, we can see the $\Delta V$ peak and $2U$ peak both split into two peaks due to this difference.
To compare our results with experiments where the modulation amplitude is up to $20\%$ of the lattice depth, we also carried out a simulation for a large modulation (Fig. ). Here the breakdown of perturbation theory indicates the saturation effect in a real system. Due to this effect and the relatively large trap curvature the splitting of the $1U$ peak is pronounced. We can still identify the positions of the excitations from the shifts. For example, the peak at $1.17U_0$ is related to the hopping process from site 14 to 13, or from site 23 to 24. Another important finding is that the positions of the $\Delta V$ peak and $2U$ peak are robust. Also, the saturation effect for these excitations is less significant than that of the $1U$ resonance.
Based on the findings above, the position of the $2U$ peak reveals important information of the density profile of the Mott system. For one thing, the number of the peaks indicates the number of Mott layers in the “wedding cake” structure. For another, the shift away from $2U$ provides important information about the positions of the Mott domain walls.
Absorption spectrum in a superfluid {#subsec:superfluid}
-----------------------------------
In this section, we turn to the superfluid state, where the interpretation of the excitation spectroscopy is less straightforward than in the Mott regime. Without loss of generality, we choose $V_0=4E_r$ and $\delta V=0.2V_0$ in all the calculations in this subsection, leading to $U/J\approx 5$. In contrast with the Mott insulating state where the hopping is substantially suppressed, both the parameters $J$ and $U$ play an important role in determining the main properties of the superfluid. What makes the situation more complicated is the external harmonic trap, which introduces inhomogeneity into the system. A simplification occurs nevertheless as we will show that one can map the absorption spectrum in a trapped system to the homogeneous one by using the local density approximation (LDA). Mathematically, this means $$\label{eqn:lda}
\Delta E^{\text{trapped}}(\omega)=\int\rho(\vec{r})\Delta\epsilon^{\text{homo}}(\omega,\rho(\vec{r})) {\mathop{}\!\mathrm{d}}\vec{r},$$ where $\rho(\vec{r})$ is the spatially dependent density, $\Delta E^{\text{trapped}}(\omega)$ is the energy transferred as a function of frequency $\omega$ in a trapped system, and $\Delta\epsilon^{\text{homo}}(\omega,\rho(\vec{r}))$ is the energy absorbed density in a homogeneous system with particle density $\rho(\vec{r})$.
To show this approximation really holds, we compare the exact spectrum of a trapped system with a result from the LDA (Fig. )[@footnote1]. To simplify the calculation of the spectra, we first make an approximation on the density profile (Fig. ). Although this approximation seems rough, the resulting spectrum is in good agreement with the exact one. The approximate results are then calculated as $\Delta E^{\text{LDA}}\!=\!\sum_i\rho(i)\Delta\epsilon^{\text{homo}}(\omega,\rho(i))$, where $\Delta\epsilon^{\text{homo}}(\omega,\rho(i))$ is calculated in a homogeneous system with density $\rho(i)$.
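The discretized LDA sum is straightforward to assemble once the homogeneous spectra are available. In the sketch below the homogeneous absorbed-energy density is a placeholder (a hypothetical Lorentzian whose center shifts with density); in an actual calculation it would come from t-DMRG runs on homogeneous systems at each density appearing in the step-approximated profile.

```python
import numpy as np

# Discretized LDA, Eq. (eqn:lda):
#   dE_LDA(w) = sum_i rho(i) * de_homo(w, rho(i)).
# de_homo is a stand-in for the homogeneous absorbed-energy density.

def de_homo(w, rho):
    center = 2.0 + 1.8 * rho          # hypothetical density-dependent peak
    return 1.0 / ((w - center) ** 2 + 0.1)

def lda_spectrum(ws, density_profile):
    return np.array([sum(r * de_homo(w, r) for r in density_profile)
                     for w in ws])

ws = np.linspace(1.0, 5.0, 81)
# step-approximated trap profile: low-density wings, unit-filled center
profile = [0.5] * 10 + [1.0] * 16 + [0.5] * 10
spectrum = lda_spectrum(ws, profile)
```

Because the high-density center carries both more sites and more weight per site, the assembled spectrum is dominated by the $\rho=1$ response, which is the mechanism discussed at the end of this section.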
From the comparison, it is clear that the LDA works extremely well despite the approximate density profile. From a physical point of view, the validity of the LDA stems from the slow-varying density profile in the harmonically confined superfluid; moreover, the chosen interaction is away from the location of the phase transition. Thus, the main effect of the parabolic trap here is no more than introducing a slow-varying inhomogeneity. On the other hand, the LDA must fail in the Mott insulating phase, since there exist non-trivial excitations at the boundaries between Mott domains, as we saw in the previous section.
The significance of the LDA is that one can understand the properties of a trapped superfluid with the help of a homogeneous one. To further test the LDA and to study how the density influences the spectroscopy, we performed calculations for different particle numbers both in homogeneous and confined systems (see Fig. ).
For the unconfined atoms, there is a sharp excitation between $3U$ and $4U$ for intermediate densities. This resonance can be interpreted as two particle-hole excitations with both particles on the same site[@Clark2006]. Here, we find that this peak is very sensitive to the density of the quantum gas. As shown in Fig. , when the density of the system is below 1/2, this excitation is negligible because it is rare for three particles to be nearest neighbours. As the density increases to unity, [*i.e.*]{}, $N=36$, the spectrum shows a sharp peak around $3.8U$. Strictly speaking, its excitation energy involves two parts. The first part is the excitation energy to the $3U$-Hubbard band, while the second one comes from the change of the kinetic energy for these two atoms from the delocalized ground state to the localized excited state. In a system with unit filling, the energy difference per atom between delocalization and localization is about $2J$. Therefore, the total energy gained for this type of excitations should be $3U+2\times2J$. In our case where $J\approx 0.2U$, the estimate $3.8U$ coincides with the t-DMRG result.
When we consider the system in a harmonic trap (see Fig. ), we find a similar density dependence for the spectrum: in the low-density regime, it exhibits a two-peak structure; as the particle number increases, the two peaks merge and become a broad one. This subtle change is also an indication for the density of the quantum gases.
To demonstrate the resemblance of the two cases (absence or presence of a trap), we also plot the centers of the two peaks in the spectra as a function of the number of particles in Fig. . It is clear that the two peak centers shift towards each other as the particle number increases in both cases. The first peak moves towards higher energy because hoppings between sites with differing fillings become important with more particles. The excitation energy of the second peak involves $3U$ for the Hubbard band and $4J$ for the kinetic energy; it shifts towards lower energy because the ground state is not completely delocalized at high density, which compromises the energy gained from localization in the excited state. From this argument, the peak is expected to be located around $3U$, which is consistent with our numerical result. With further increased density, the $3U$ resonance and the excitations around $2U$ merge and eventually form a broad continuum.
Thus, the density dependence of the absorption spectrum in a homogeneous and a trapped system is basically the same. To understand this in the framework of the LDA, a trapped system can be divided into several homogeneous regions with slow-varying density. The energy transferred $\Delta E^{\text{trapped}}$ is mainly determined by the high-density region, since it carries the largest weight in the integral of Eq. (\[eqn:lda\]). It is then reasonable to map the trapped system to a homogeneous one.
Discussion and Conclusion {#sec:discussion}
=========================
How can these observations on the absorption spectroscopy in a trapped system be turned into tools for experimental analysis? It turns out that they lead to schemes to test the validity of the Bose-Hubbard Hamiltonian and to calibrate the interaction parameters.
First of all, our findings in the Mott regime are useful in determining the density profile of the cold gases in an optical lattice. As mentioned in Sec. \[subsec:mott\], the broadening of the $1U$ excitation and the shift of the $2U$ resonance are both directly related to the density distribution of the system. For example, the number of $2U$ peaks indicates the number of Mott domains, while their shifts away from $2U$ help to locate the boundaries of the domains. It is also straightforward to test the Bose-Hubbard model by comparing the results from the spectroscopy with those from the time-of-flight imaging technique. Another important application in the Mott phase is calibrating the parameter $U$, which has proven to be difficult in a deep lattice[@Buchler2010]. The simplest way to do this with the spectroscopy is to fit the first strong $1U$ peak. To improve the accuracy, one can take the $2U$ resonance into account to recalibrate: $U$ is directly related to the width of the first peak and the position of the second peak according to Eq. (\[eqn:appx\]). The advantage of this method is that no other fit parameters are needed.
For the superfluid case, the main concern is how to make use of the LDA. Since one can map the absorption spectrum of a confined system to a homogeneous one, the basic features of the superfluid, including the ground state and the excited states, can be well described by the LDA. This is an important characteristic to distinguish different phases driven by quantum fluctuations. Although we only tested this approximation in 1D, it is quite reasonable to conclude that it also works in 2D and 3D, where fluctuations are less important. This generalization would greatly simplify many theoretical studies on the actual superfluid, since it builds a bridge connecting a confined system and a homogeneous one. It can also be used as a criterion to test the validity of the Bose-Hubbard model in the superfluid regime. In addition, the $3U$ resonance in the spectrum is a useful signature to characterize the density of the system. Therefore, absorption spectroscopy is another experimental technique to study the density profile of the system besides time-of-flight imaging.
In conclusion, we have analysed in detail the zero-temperature dynamical response of trapped ultracold bosons in an optical lattice subjected to lattice modulations. For the Mott-insulating system we identified several excitation processes. For the superfluid state, the presence of the harmonic trap induces slowly varying inhomogeneities, which can be understood within the LDA. All these unique properties can be used to examine whether the Bose-Hubbard model is a good description of the ultracold atom system in a parabolic trapping potential. Conversely, if one accepts that the model captures all the physics of this system, absorption spectroscopy becomes a powerful technique for revealing many of its basic features, including its quantum state and density distribution.
We acknowledge partial financial support from HKSAR RGC Grant No. 701009. U.S. thanks the DFG for support through FOR 801. M.T. was supported by the Swiss National Science Foundation and by a grant from the Army Research Office through the DARPA OLE program.
[^1]: The change of the phase boundary due to the modulation discussed in Ref. [@Eckardt2005] is negligible for this form of modulation as long as the modulation strength is not too large.
---
abstract: |
It is not known whether the Flint Hills series $\sum_{n=1}^{\infty} \frac{1}{n^3\cdot\sin(n)^2}$ converges. We show that this question is closely related to the irrationality measure of $\pi$, denoted $\mu(\pi)$. In particular, convergence of the Flint Hills series would imply $\mu(\pi) \leq 2.5$ which is much stronger than the best currently known upper bound $\mu(\pi)\leq 7.6063\ldots$.
This result easily generalizes to series of the form $\sum_{n=1}^{\infty} \frac{1}{n^u\cdot |\sin(n)|^v}$ where $u,v>0$. We use the currently known bound for $\mu(\pi)$ to derive conditions on $u$ and $v$ that guarantee convergence of such series.
author:
- 'Max A. Alekseyev[^1]'
bibliography:
- 'flint.bib'
title: On convergence of the Flint Hills series
---
Introduction
============
Pickover [@Pickover2002] defined the *Flint Hills series* as $\sum_{n=1}^{\infty} \frac{1}{n^3\cdot\sin(n)^2}$ (named after Flint Hills, Kansas) and questioned whether it converges. It has been noticed that the behavior of the partial sums of this series is closely connected to rational approximations of $\pi$. In this paper we give a formal description of this connection, proving that convergence of the Flint Hills series would imply the upper bound $2.5$ for the irrationality measure of $\pi$, which is much stronger than the best currently known bound $7.6063\ldots$ obtained by Salikhov [@Salikhov2008]. The rather slow progress in evaluating the irrationality measure of $\pi$ over the past decades [@Mahler1953; @Mignotte1974; @Chudnovsky1982; @Hata1990; @Hata1993a; @Hata1993b; @Salikhov2008] indicates the hardness of this problem and suggests that the question of the Flint Hills series’ convergence is unlikely to be resolved in the near future. The *irrationality measure* $\mu(x)$ of a positive real number $x$ is defined as the infimum of all $m$ such that the inequality $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^m}$$ holds only for a finite number of co-prime positive integers $p$ and $q$. If no such $m$ exists, then $\mu(x) = +\infty$ (in which case $x$ is called a *Liouville number*).
Informally speaking, the larger $\mu(x)$ is, the better $x$ is approximated by rational numbers. It is known that $\mu(x)=1$ if $x$ is a rational number; $\mu(x)=2$ if $x$ is an irrational algebraic number (Roth’s theorem [@Roth1955], for which Roth was awarded the Fields Medal); and $\mu(x)\geq 2$ if $x$ is a transcendental number. Proving that $\mu(x) > 1$ is a traditional way to establish irrationality of $x$, the most remarkable example being the irrationality of $\zeta(3)$ (where $\zeta(s)=\sum_{n=1}^{\infty} n^{-s}$ is the Riemann zeta function) proved by Apery [@Apery1979; @Poorten1979].
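To make the definition concrete, the continued-fraction convergents of $\pi$ give its best rational approximations, and one can compute the effective exponent $m$ for which $|\pi - \nicefrac{p}{q}| = q^{-m}$. A small Python sketch (the leading continued-fraction terms $[3; 7, 15, 1, 292, 1, 1]$ of $\pi$ are well known; the helper name `convergents` is ours, introduced only for illustration):

```python
from fractions import Fraction
import math

def convergents(cf):
    """Yield the convergents p/q of a simple continued fraction [a0; a1, ...]."""
    p_prev, p = 1, cf[0]
    q_prev, q = 0, 1
    yield Fraction(p, q)
    for a in cf[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)

# Leading continued-fraction terms of pi.
for frac in convergents([3, 7, 15, 1, 292, 1, 1]):
    p, q = frac.numerator, frac.denominator
    if q > 1:
        # Effective exponent m with |pi - p/q| = q^(-m).
        m = -math.log(abs(math.pi - p / q)) / math.log(q)
        print(f"{p}/{q}: m = {m:.2f}")
```

The exceptionally good convergent $\nicefrac{355}{113}$ yields $m \approx 3.2$, noticeably above the value just over $2$ that a generic convergent provides; $\mu(\pi)$ is governed by how often such large exponents occur.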
Convergence of the Flint Hills series
=====================================
\[Lsinx\] For a real number $x$, we have $$|\sin(x)| \leq |x|.$$ Furthermore, if $|x|\leq\nicefrac{\pi}{2}$ then $$|\sin(x)| \geq \frac{2}{\pi}\cdot |x|.$$
The former bound follows from the integral estimate $$|\sin(x)| = \left| \int_0^x \cos y\cdot\mathrm{d}y \right| \leq \int_0^{|x|} | \cos y |\cdot\mathrm{d}y \leq \int_0^{|x|} 1\cdot\mathrm{d}y = |x|.$$
To prove the latter bound, we notice that $|\sin(x)|=\sin(|x|)$ and without loss of generality assume that $0\leq x\leq \nicefrac{\pi}{2}$. Let $x_0 = \arccos(\nicefrac{2}{\pi})$ so that for $x\leq x_0$ we have $\cos(x)\geq \nicefrac{2}{\pi}$ and thus $$\sin(x) = \int_0^x \cos(y)\cdot\mathrm{d}y \geq \int_0^x \frac{2}{\pi}\cdot\mathrm{d}y = \frac{2}{\pi}\cdot x,$$ while for $x\geq x_0$ we have $\cos(x)\leq \nicefrac{2}{\pi}$ and thus $$\sin(x) = 1 - \int_x^{\nicefrac{\pi}2} \cos(y)\cdot\mathrm{d}y \geq 1 - \int_x^{\nicefrac{\pi}2} \frac{2}{\pi}\cdot\mathrm{d}y
= 1 - \frac{2}{\pi}\cdot\left(\frac{\pi}{2} - x\right) = \frac{2}{\pi}\cdot x.$$
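Both bounds of the lemma are easy to confirm numerically on a grid of $[0, \nicefrac{\pi}{2}]$; a minimal Python check:

```python
import math

# Check 2x/pi <= sin(x) <= x pointwise on a grid of [0, pi/2].
for i in range(1001):
    x = (math.pi / 2) * i / 1000
    assert math.sin(x) <= x
    assert math.sin(x) >= (2 / math.pi) * x
print("both bounds of the lemma hold on the grid")
```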
\[Tmain\] For positive real numbers $u$ and $v$, $\frac{1}{n^u\cdot |\sin(n)|^v} = O\left(\frac{1}{n^{u - (\mu(\pi)-1)\cdot v - \epsilon}}\right)$ for any $\epsilon>0$. Furthermore,
1. If $\mu(\pi)< 1+\nicefrac{u}{v}$, the sequence $\frac{1}{n^u\cdot |\sin(n)|^v}$ converges (to zero);
2. If $\mu(\pi) > 1+\nicefrac{u}{v}$, the sequence $\frac{1}{n^u\cdot |\sin(n)|^v}$ diverges.
Let $\epsilon>0$ and $k=\mu(\pi)+\nicefrac{\epsilon}{v}$. Then the inequality $$\label{Epiapprox}
\left|\pi - \frac{p}{q}\right| < \frac{1}{q^k}$$ holds only for a finite number of co-prime positive integers $p$ and $q$.
For a positive integer $n$, let $m$ be the integer nearest to $\nicefrac{n}{\pi}$, so that $\left|\nicefrac{n}{\pi}-m\right|\leq \nicefrac{1}{2}$ and thus $\left|n-m\cdot\pi\right|\leq \nicefrac{\pi}{2}$. Then by Lemma \[Lsinx\], $$|\sin(n)| = |\sin(n-m\cdot \pi)| \geq \frac{2}{\pi}\cdot |n-m\cdot \pi| = \frac{2}{\pi}\cdot m\cdot\left|\frac{n}{m}-\pi\right|.$$
On the other hand, for large enough $n$ and $m$, we have $|\nicefrac{n}{m}-\pi|\geq \nicefrac{1}{m^k}$, implying that $$|\sin(n)| \geq \frac{2}{\pi}\cdot m\cdot\left|\frac{n}{m}-\pi\right| \geq \frac{2}{\pi}\cdot \frac{1}{m^{k-1}} \geq c\cdot \frac{1}{n^{k-1}}$$ for some constant $c>0$ depending only on $k$ but not $n$ (since $\nicefrac{n}{m}$ tends to $\pi$ as $n$ grows).
Therefore, for all large enough $n$, we have $$\frac{1}{n^u\cdot |\sin(n)|^v} \leq \frac{1}{c^v\cdot n^{u-(k-1)\cdot v}} = O\left(\frac{1}{n^{u - (\mu(\pi)-1)\cdot v - \epsilon}}\right).$$
The statement 1 now follows easily. If $\mu(\pi)<1+\nicefrac{u}{v}$, we take $\epsilon = \nicefrac{v}{2}\cdot (1+\nicefrac{u}{v} - \mu(\pi))$ to obtain $$\frac{1}{n^u\cdot |\sin(n)|^v} = O\left(\frac{1}{n^{u - v\cdot (\mu(\pi)-1) - \epsilon}}\right) = O\left(\frac{1}{n^{\epsilon}}\right).$$
Now let us prove statement 2. If $\mu(\pi) > 1+\nicefrac{u}{v}$, then for $k=1+\nicefrac{u}{v}$ the inequality (\[Epiapprox\]) holds for infinitely many co-prime positive integers $p$ and $q$. That is, there exists a sequence of rationals $\nicefrac{p_i}{q_i}$ such that $\left|p_i - \pi\cdot q_i\right| < \frac{1}{q_i^{k-1}}$. Then $$|\sin(p_i)| = |\sin(p_i-q_i\cdot \pi)| \leq |p_i-q_i\cdot \pi| < \frac{1}{q_i^{k-1}} < C\cdot \frac{1}{p_i^{k-1}}$$ for some constant $C>0$ depending only on $k$.
Therefore, for $n=p_i$ we have $$\frac{1}{n^u\cdot |\sin(n)|^v} > C^{-v}\cdot n^{v\cdot(k-1)-u} = C^{-v}.$$ On the other hand, we have $$|\sin(1+p_i)| = |\sin(1+p_i-q_i\cdot \pi)|\quad \mathop{\longrightarrow}\limits_{i\to\infty}\quad \sin(1)$$ and thus $$\frac{1}{(1+p_i)^u\cdot |\sin(1+p_i)|^v}\quad \mathop{\longrightarrow}\limits_{i\to\infty}\quad 0.$$ We conclude that the sequence $\frac{1}{n^u\cdot |\sin(n)|^v}$ diverges, since it contains two subsequences, one of which is bounded from below by a positive constant while the other tends to zero.
\[Cor1\] For positive real numbers $u$ and $v$,
1. If the sequence $\frac{1}{n^u\cdot |\sin(n)|^v}$ converges, then $\mu(\pi)\leq 1+\nicefrac{u}{v}$;
2. If the sequence $\frac{1}{n^u\cdot |\sin(n)|^v}$ diverges, then $\mu(\pi) \geq 1+\nicefrac{u}{v}$.
If the Flint Hills series $\sum_{n=1}^{\infty} \frac{1}{n^3\cdot \sin(n)^2}$ converges, then $\mu(\pi) \leq \nicefrac{5}{2}$.
Convergence of $\sum_{n=1}^{\infty} \frac{1}{n^3\cdot \sin(n)^2}$ implies that $\lim\limits_{n\to\infty} \frac{1}{n^3\cdot \sin(n)^2} = 0$ and thus by Corollary \[Cor1\], $\mu(\pi)\leq\nicefrac{5}{2}$.
\[Tsum\] For positive real numbers $u$ and $v$, if $\mu(\pi)< 1+\nicefrac{(u-1)}{v}$, then $\sum_{n=1}^{\infty} \frac{1}{n^u\cdot |\sin(n)|^v}$ converges.
The inequality $\mu(\pi)< 1+\nicefrac{(u-1)}{v}$ implies that $u-v\cdot(\mu(\pi)-1)>1$. Then there exists $\epsilon>0$ such that $w=u-v\cdot(\mu(\pi)-1)-\epsilon>1$. By Theorem \[Tmain\], $\frac{1}{n^u\cdot |\sin(n)|^v} = O\left(\frac{1}{n^w}\right)$ further implying that $$\sum_{n=1}^{\infty} \frac{1}{n^u\cdot |\sin(n)|^v} = O\left(\zeta(w)\right) = O(1).$$
\[Csercon\] For positive real numbers $u$ and $v$, if $\sum_{n=1}^{\infty} \frac{1}{n^u\cdot |\sin(n)|^v}$ diverges, then $\mu(\pi) \geq 1+\nicefrac{(u-1)}{v}$.
Unfortunately, the divergence of the Flint Hills series would not imply any non-trivial result via Corollary \[Csercon\]: for $u=3$ and $v=2$ it would only give $\mu(\pi) \geq 1+\nicefrac{(3-1)}{2} = 2$, which already follows from the transcendence of $\pi$.
Known bounds for $\mu(\pi)$ and their implications
==================================================
Since $\pi$ is a transcendental number, $\mu(\pi)\geq 2$. To the best of our knowledge, no better lower bound for $\mu(\pi)$ is currently known.
The upper bound for $\mu(\pi)$ has been improved over the past decades. Starting with the bound $\mu(\pi)\leq 30$ established by Mahler in 1953 [@Mahler1953], it was improved to $\mu(\pi)\leq 20$ by Mignotte in 1974 [@Mignotte1974], and then to $\mu(\pi)\leq 19.8899944\ldots$ by Chudnovsky in 1982 [@Chudnovsky1982]. In 1990-1993 Hata in a series of papers [@Hata1990; @Hata1993a; @Hata1993b] decreased the upper bound down to $\mu(\pi)\leq 8.016045\ldots$. The best currently known upper bound $\mu(\pi)\leq 7.6063\ldots$ was obtained in 2008 by Salikhov [@Salikhov2008].
By Theorem \[Tmain\], the Salikhov’s bound implies that the sequence $\frac{1}{n^u\cdot |\sin(n)|^v}$ converges to zero as soon as $1+\nicefrac{u}{v} > 7.6063$, including in particular the pairs $(u,v) = (7,1)$, $(14,2)$, $(20,3)$ etc. Correspondingly, Theorem \[Tsum\] further implies that the series $\sum_{n=1}^{\infty} \frac{1}{n^u\cdot |\sin(n)|^v}$ converges for $(u,v) = (8,1)$, $(15,2)$, $(21,3)$ etc.
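The role of the convergents of $\pi$ is directly visible in the terms of such series: when $n$ is the numerator of an especially good rational approximation $\nicefrac{p}{q}$ to $\pi$ (such as $\nicefrac{355}{113}$), $\sin(n)$ is close to zero and the term spikes. A small Python illustration for the Flint Hills exponents $(u,v)=(3,2)$:

```python
import math

def term(n, u=3, v=2):
    """n-th term of the series sum 1/(n^u |sin n|^v)."""
    return 1.0 / (n ** u * abs(math.sin(n)) ** v)

# Numerators of convergents of pi: 3, 22, 333, 355, ...
for n in [3, 22, 333, 355, 356]:
    print(n, term(n))
# The term at n = 355 is large (355/113 approximates pi extremely well),
# while at the neighbouring n = 356 it is tiny.
```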
[^1]: Department of Computer Science and Engineering, University of South Carolina, Columbia, SC, U.S.A.
---
abstract: |
A classical problem in general relativity is the Cauchy problem for the linearised Einstein equation (the initial value problem for gravitational waves) on a globally hyperbolic vacuum spacetime. A well-known result is that it is uniquely solvable up to gauge solutions, given initial data on a spacelike Cauchy hypersurface. The solution map is an isomorphism between initial data (modulo gauge producing initial data) and solutions (modulo gauge solutions).
In the first part of this work, we show that the solution map is actually an isomorphism of locally convex topological vector spaces. This implies that the equivalence class of solutions depends continuously on the equivalence class of initial data. We may therefore conclude well-posedness of the Cauchy problem.
In the second part, we show that the linearised constraint equations can always be solved on a closed manifold with vanishing scalar curvature. This generalises the classical notion of TT-tensors on flat space used to produce models of gravitational waves.
All our results are proven for smooth and distributional initial data of arbitrary real Sobolev regularity.
address: 'University of Hamburg, Department of Mathematics, Bundesstraße 55, 20146 Hamburg, Germany'
author:
- Oliver Lindblad Petersen
title: On the Cauchy problem for the linearised Einstein equation
---
Introduction
============
Gravitational waves are usually modelled as solutions to the linearised Einstein equation. The purpose of this work is to extend well-known results on the Cauchy problem for the linearised Einstein equation.
The classical existence theorem for the Cauchy problem for the (non-linear) Einstein equation, proven by Choquet-Bruhat in [@F-B1952], can be formulated as follows. Given a Riemannian manifold $({\Sigma}, {{\tilde g}})$ with a smooth $(0,2)$-tensor ${{\tilde k}}$ satisfying the vacuum constraint equations $$\Phi({{\tilde g}}, {{\tilde k}}) := \left(
\begin{array}{ll}
{\mathrm{Scal}}({{\tilde g}}) - {{\tilde g}}({{\tilde k}}, {{\tilde k}}) + ({\mathrm{tr}}_{{\tilde g}}{{\tilde k}})^2 \\
{\mathrm{div}}({{\tilde k}}- {\mathrm{tr}}_{{\tilde g}}({{\tilde k}}){{\tilde g}})
\end{array} \right) = 0,$$ there is a globally hyperbolic spacetime $(M, g)$ satisfying the Einstein vacuum equation $${\mathrm{Ric}}(g) = 0,$$ and an embedding $\iota: {\Sigma}\hookrightarrow M$ such that $({{\tilde g}}, {{\tilde k}})$ are the induced first and second fundamental forms. It was shown in [@C-BG1969] that each such globally hyperbolic development can be embedded into a “maximal globally hyperbolic development”, determined up to isometry. Assume now that $(M, g)$ is a smooth vacuum spacetime and let ${\Sigma}\subset M$ denote a Cauchy hypersurface. Using methods analogous to [@F-B1952], it can be shown (see [@FewsterHunt2013]\*[Thm. 3.1, Thm. 3.3]{} and [@FisherMarsden1979]\*[Thm. 4.5]{}) that the Cauchy problem for the linearised Einstein equation can be solved. More precisely, given smooth $(0,2)$-tensors $({{\tilde h}}, {{\tilde m}})$ on ${\Sigma}$ such that the linearised constraint equation is satisfied, i.e. $$D\Phi_{{{\tilde g}}, {{\tilde k}}}({{\tilde h}}, {{\tilde m}}) = 0,$$ there is a smooth $(0,2)$-tensor $h$ on $M$ such that the linearised Einstein equation $$D{\mathrm{Ric}}_g(h) = 0$$ is satisfied. Analogously to the (non-linear) Einstein equation, the solution is only determined up to addition of a gauge solution. However, the *equivalence class* of solutions is uniquely determined by the corresponding *equivalence class of initial data*. In other words, the solution map $$\begin{gathered}
{{\raisebox{.2em}{$\text{Initial data on }{\Sigma}$}\left/\raisebox{-.2em}{$\text{Gauge producing initial data}$}\right.}} \\
\downarrow \\
{{\raisebox{.2em}{$\text{Global solutions on }M$}\left/\raisebox{-.2em}{$\text{Gauge solutions}$}\right.}}\end{gathered}$$ is an isomorphism. Our first main result is Theorem \[thm: Wellposedness\], which says that this map is an isomorphism of *locally convex topological vector spaces*. This concludes *well-posedness* of the Cauchy problem for the linearised Einstein equation, meaning global existence, uniqueness and continuous dependence on initial data. We prove this for initial data of arbitrary real Sobolev regularity. This enables us to model gravitational waves that are very singular at a certain initial time. See Example \[ex: arbitrarily irregular\] for an example of arbitrarily irregular initial data that does not produce gauge solutions.
In order to apply Theorem \[thm: Wellposedness\] in practice, it is necessary to understand the space $$\label{eq: quotient}
{{\raisebox{.2em}{$\text{Initial data on }{\Sigma}$}\left/\raisebox{-.2em}{$\text{Gauge producing initial data}$}\right.}}.$$ We show that this space can be well understood if we assume that ${\Sigma}$ is compact and ${{\tilde k}}= 0$, in which case the constraint equation $\Phi({{\tilde g}}, {{\tilde k}}) = 0$ just means ${\mathrm{Scal}}({{\tilde g}}) = 0$. Using Moncrief’s splitting theorem, it is easy to calculate that solutions $({{\tilde h}}, {{\tilde m}})$ of $$\begin{aligned}
\Delta {\mathrm{tr}}_{{\tilde g}}{{\tilde h}}&= {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {{\tilde h}}), \label{eq: Eq1firstFF} \\
{\mathrm{div}}{{\tilde h}}&= 0, \label{eq: Eq2firstFF} \\
\Delta {\mathrm{tr}}_{{\tilde g}}{{\tilde m}}&= - {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), \tilde m), \label{eq: Eq1secondFF} \\
{\mathrm{div}}({{\tilde m}}- ({\mathrm{tr}}_{{\tilde g}}{{\tilde m}}){{\tilde g}}) &= 0, \label{eq: Eq2secondFF}\end{aligned}$$ are in one-to-one correspondence with elements of (\[eq: quotient\]) in case ${{\tilde k}}= 0$. In other words, one can show that $$\begin{aligned}
\text{Solutions to (\ref{eq: Eq1firstFF} - \ref{eq: Eq2secondFF})} &\to {{\raisebox{.2em}{$\text{Initial data on }{\Sigma}$}\left/\raisebox{-.2em}{$\text{Gauge producing i.d.}$}\right.}}\\
({{\tilde h}}, {{\tilde m}}) &\mapsto [({{\tilde h}}, {{\tilde m}})] \end{aligned}$$ is an isomorphism of topological vector spaces, see Proposition \[prop: Moncrief\_splitting\]. Our second main result, Theorem \[thm: initial\_data\_split\], concerns solving equations - . We show that given $(0,2)$-tensors $({\alpha}, {\beta})$, there is a unique decomposition $$\begin{aligned}
{\alpha}&= {{\tilde h}}+ L {\omega}+ C {\mathrm{Ric}}({{\tilde g}}) + \phi {{\tilde g}}, \label{eq: alpha}\\
{\beta}&= {{\tilde m}}+ L \eta + C' {\mathrm{Ric}}({{\tilde g}}) + \psi {{\tilde g}}, \label{eq: beta}\end{aligned}$$ where $({{\tilde h}}, {{\tilde m}})$ solves (\[eq: Eq1firstFF\] - \[eq: Eq2secondFF\]), $L$ is the conformal Killing operator, ${\omega}, \eta$ are one-forms, $\phi, \psi$ are functions such that $\int_{\Sigma}\phi d\mu_{{\tilde g}}= \int_{\Sigma}\psi d\mu_{{\tilde g}}= 0$. If ${\mathrm{Ric}}({{\tilde g}}) = 0$, then the solution space of (\[eq: Eq1firstFF\] - \[eq: Eq2secondFF\]) is spanned by the TT-tensors and $C {{\tilde g}}$ for any $C \in {\mathbb{R}}$, and (\[eq: alpha\]) and (\[eq: beta\]) are nothing but the usual $L^2$-split. Note however that TT-tensors are only guaranteed to solve (\[eq: Eq1firstFF\] - \[eq: Eq2secondFF\]) in case ${\mathrm{Ric}}({{\tilde g}}) = 0$. Our result therefore extends the classical use of TT-tensors to produce models of gravitational waves.
We start by introducing spaces of sections of various regularity in Section \[sec: Notation\]. In Section \[ch: non-linear\_Cauchy\] we formulate the Cauchy problem for the linearised Einstein equation. The goal of Section \[ch: Cauchy\_lin\_Ein\] is then to prove our first main result, Theorem \[thm: Wellposedness\], concerning the linearised Einstein equation. We conclude in Section \[ch: lin\_constraint\] with our second main result, Theorem \[thm: initial\_data\_split\], concerning the linearised constraint equations.
We expect that our results can be generalised to various models with matter, using the methods presented here, but we will for simplicity restrict to the vacuum case.
### Acknowledgements {#acknowledgements .unnumbered}
It is a pleasure to thank my PhD supervisor Christian Bär for suggesting this topic and for many helpful comments. I especially want to thank Andreas Hermann for many discussions and for reading early versions of the manuscript. Furthermore, I would like to thank the Berlin Mathematical School, Sonderforschungsbereich 647 and Schwerpunktprogramm 2026, funded by Deutsche Forschungsgemeinschaft, for financial support.
The function spaces {#sec: Notation}
===================
Let us start by introducing our notation. All manifolds, vector bundles and metrics will be smooth, but the sections will have various regularity. Assume that $M$ is a smooth manifold and let $E \to M$ be a real vector bundle over $M$. We denote the *space of smooth sections* in $E$ by $$C^\infty(M, E),$$ equipped with the canonical Fr[é]{}chet space structure. Let us denote the space of *sections of Sobolev regularity* $k \in {\mathbb{R}}$ by $$H^k_{loc}(M, E).$$ By the Sobolev embedding theorem, we may write $$H^\infty_{loc}(M, E) := \bigcap_{k \in {\mathbb{R}}} H_{loc}^k(M, E) = C^\infty(M, E).$$ We will write $C^\infty(M)$ and $H^k_{loc}(M)$ instead of $C^\infty(M, E)$ and $H^k_{loc}(M, E)$ whenever it is clear from the context what vector bundle is meant. For a compact subset $K \subset M$ and a $k \in {\mathbb{R}}\cup \{\infty\}$, let $$H^k_K(M, E)$$ denote the sections of Sobolev regularity $k$ with support contained in $K$. As above, we have $H_K^\infty(M, E) = C^\infty_K(M, E)$. Define the space of *sections of compact support* of Sobolev regularity $k$ in $E$ by $$H_c^k(M, E) := \bigcup_{\stackrel{K \subset M}{\text{compact}}} H^k_K(M, E).$$ In order to define the topology, choose an exhaustion of $M$ by compact sets $K_1 \subset K_2 \subset \hdots \subset \bigcup_{n \in {\mathbb{N}}} K_n = M$. Since $H^k_{K_n}(M) \subset H^k_{K_{n+1}}(M)$ is closed for all $n \in {\mathbb{N}}$, the strict inductive limit topology is defined on $H_c^k(M)$ (see for example [@Treves1967]). A linear map $L: H_c^k(M) \to V$ into a locally convex topological vector space $V$ is continuous if and only if $L|_{H_K^k(M)}:H_K^k(M) \to V$ is continuous for any compact set $K \subset M$. The strict inductive limit topology turns $H_c^k(M)$ into a locally convex topological vector space (in fact an LF-space) and is independent of the choice of exhaustion. The following lemma gives the notion of convergence of a sequence (or net) of sections.
\[le: convergence\_H\_c\] Let $k \in {\mathbb{R}}\cup \{\infty\}$. Assume that $V \subset H_c^k(M)$ is bounded. Then there is a compact subset $K \subset M$ such that $V \subset H_K^k(M)$. In particular, if $u_n \to u$ is a converging sequence (or net), then there is a compact subset $K \subset M$ such that ${\mathrm{supp}}(u_n), {\mathrm{supp}}(u) \subset K$ and $u_n \to u$ in $H^k_K(M, E)$.
Assume, to reach a contradiction, that the statement is not true. Let $K_1 \subset K_2 \subset \hdots$ be an exhaustion of $M$ by compact subsets. By assumption, for each $i \in {\mathbb{N}}$ there is an $f_i \in V$ such that ${\mathrm{supp}}(f_i) \not\subset K_i$. Hence there are test sections $\varphi_i \in C_c^\infty(M, E^*)$ such that ${\mathrm{supp}}(\varphi_i) \subset K_i^\mathsf{c}$ and $f_i[\varphi_i] \neq 0$. Consider the convex subset containing zero, given by $$W := \left\{f \in H_c^k(M, E) \mid {\left\lvertf[\varphi_i]\right\rvert} < \frac{{\left\lvertf_i[\varphi_i]\right\rvert}}{i}, \ \forall i\right\} \subset H_c^k(M, E).$$ We claim that $W$ is open. We have $$W \cap H^k_{K_j}(M, E) = \bigcap_{i = 1}^{j-1} \left\{f \in H_{K_j}^k(M, E) \mid {\left\lvertf[\varphi_i]\right\rvert} < \frac{{\left\lvertf_i[\varphi_i]\right\rvert}}{i} \right\}.$$ Since $f \mapsto {\left\lvertf[\varphi_i]\right\rvert}$ is a continuous function on $H_{K_j}^k(M, E)$, this is a finite intersection of open sets and hence open. Hence $W$ is open. Note that for each $T > 0$, we have $f_i \notin T\cdot W$ if $i > T$. It follows that $V$ is not bounded.
Let $E^* \to M$ be the dual vector bundle to $E$. We denote the space of all continuous functionals on $C_c^\infty(M, E^*)$ by ${\mathcal{D}}'(M, E)$ and we equip it with the weak\*-topology. Elements of ${\mathcal{D}}'(M, E)$ are called *distributional sections in $E$*. For $k < 0$, the elements of $H^k_{loc}(M)$ cannot be realised as measurable functions, only as distributions. The natural inclusion $L^1_{loc}(M, E) \hookrightarrow {\mathcal{D}}'(M, E)$ is given by $$f \mapsto \left(\varphi \mapsto \int_{M} \varphi(f) d \mu_g \right),$$ for some fixed (semi-)Riemannian metric $g$ on $M$. The image of the embedding $$C^\infty(M, E) \hookrightarrow \mathcal D'(M, E)$$ is dense. We have the continuous inclusions $$H_K^k(M) \subset H_c^k(M) \subset H_{loc}^k(M) \subset {\mathcal{D}}'(M, E)$$ for each compact set $K \subset M$ and $k \in {\mathbb{R}}\cup \{\infty\}$. Moreover, it is a standard result that each compactly supported distribution is of some Sobolev regularity, i.e. $${\mathcal{D}}'_c(M, E) = \bigcup_{k \in {\mathbb{R}}} H_c^k(M, E).$$
Let us explain how linear differential operators act on distributional sections. Since any Sobolev section is a distribution, this shows how differential operators act on Sobolev spaces as well. Assume that $E, F \to M$ are equipped with positive definite metrics $\langle \cdot, \cdot \rangle_E$ and $\langle\cdot, \cdot \rangle_F$. Denote the space of linear differential operators of order $m \in {\mathbb{N}}$ mapping sections in $E$ to sections in $F$ by $\mathrm{Diff}_m(E, F)$. Given a $P \in \mathrm{Diff}_m(E, F)$, define the formal adjoint operator $P^* \in \mathrm{Diff}_m(F, E)$ to be the unique differential operator such that $$\label{eq: int_local_int}
\int_{M} \langle P\varphi, \psi \rangle_F \, d\mu_g = \int_{M} \langle \varphi, P^* \psi \rangle_E \, d\mu_g,$$ for all $\psi \in C_c^\infty(M, F)$ and $\varphi \in C^\infty_c(M, E)$. Using this, $P$ can be extended to act on distributions by the formula $$PT[\langle \cdot, \psi\rangle_F] = T[\langle \cdot, P^* \psi\rangle_E].$$ This coincides with equation (\[eq: int\_local\_int\]) when $T$ can be identified with a compactly supported smooth section. $P$ extends to continuous maps $$\begin{aligned}
{\mathcal{D}}'(M, E) &\to \mathcal D'(M, F), \\
H^k_{loc}(M, E) &\to H^{k-m}_{loc}(M, F), \\
H^k_K(M, E) &\to H^{k-m}_K(M, F), \\
H^k_c(M, E) &\to H^{k-m}_c(M, F),\end{aligned}$$ for all $k \in {\mathbb{R}}\cup \{\infty\}$ and all compact subsets $K \subset M$. The following lemma will be of importance.
\[le: two topologies\] Let $k \in {\mathbb{R}}\cup \{\infty\}$ and let $P \in \mathrm{Diff}_m(E, F)$. Then the induced subspace topology on $$H^k_c(M, E) \cap \ker(P)$$ is the same as the strict inductive limit topology induced by the embeddings $$H^k_K(M, E) \cap \ker(P) \hookrightarrow H^k_c(M, E) \cap \ker(P).$$
Let $u_n \to u$ be a net converging in $H^k_c(M, E) \cap \ker(P)$ with respect to the subspace topology. Then $u_n \to u$ in $H^k_c(M, E)$, which by Lemma \[le: convergence\_H\_c\] means that there is a compact subset $K \subset M$ such that $u_n \to u$ in $H^k_K(M, E)$. It follows that $u_n \to u$ in $H_K^k(M, E) \cap \ker(P)$ and hence in $H_c^k(M, E) \cap \ker(P)$ with respect to the strict inductive limit topology, since the embedding is continuous. The other direction is clear.
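As a sanity check of the defining relation for the formal adjoint introduced above, consider the one-dimensional model $P = \nicefrac{\mathrm{d}}{\mathrm{d}x}$ on $(-1,1)$, whose formal adjoint is $P^* = -\nicefrac{\mathrm{d}}{\mathrm{d}x}$, with polynomials vanishing at the endpoints serving as surrogates for compactly supported test sections (a sympy sketch, not part of the geometric setting of this paper):

```python
import sympy as sp

x = sp.symbols('x')
# Surrogates for compactly supported sections on (-1, 1):
phi = (1 - x**2)**2        # vanishes at x = -1, 1
psi = x * (1 - x**2)**2    # also vanishes at the endpoints
P = lambda f: sp.diff(f, x)       # P = d/dx
Pstar = lambda f: -sp.diff(f, x)  # formal adjoint P* = -d/dx

# Integration by parts with no boundary terms:
lhs = sp.integrate(P(phi) * psi, (x, -1, 1))
rhs = sp.integrate(phi * Pstar(psi), (x, -1, 1))
assert sp.simplify(lhs - rhs) == 0
print(lhs)
```

The equality holds because the boundary term $[\varphi\psi]_{-1}^{1}$ vanishes, which is exactly the role played by compact supports in the general definition.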
Assume now that $(M,g)$ is a smooth globally hyperbolic spacetime. By [@BernalSanches08]\*[Thm. 1.1]{} there is a Cauchy temporal function $t:M \to {\mathbb{R}}$, i.e. for all $\tau \in t(M)$, ${\Sigma}_{\tau} := t^{-1}(\tau)$ is a smooth spacelike Cauchy hypersurface and ${\mathrm{grad}}(t)$ is timelike and past directed. The metric can then be written as $$g = -{\alpha}^2 dt^2 + {{\tilde g}}_t,$$ where ${\alpha}: M \to {\mathbb{R}}$ is a positive function and ${{\tilde g}}_\tau$ denotes a Riemannian metric on ${\Sigma}_\tau$, depending smoothly on $\tau \in t(M)$. It follows that the future pointing unit normal $\nu$ is given by $\nu = -\frac1 {\alpha}{\mathrm{grad}}(t)|_{{\Sigma}_\tau}$. Let us use the notation $${\nabla}_{t} := {\nabla}_{{\mathrm{grad}}(t)}.$$ For each $k \in {\mathbb{R}}$, we get a Fr[é]{}chet vector bundle $$(H_{loc}^k({\Sigma}_\tau, E|_{{\Sigma}_\tau}))_{\tau \in t(M)}.$$ We denote the $C^m$-sections in this vector bundle by $$C^m\left(t(M), H_{loc}^k({\Sigma}_\cdot, E|_{{\Sigma}_\cdot}) \right).$$ This is a Fr[é]{}chet space. When solving wave equations, the solutions typically lie in the following spaces of sections of *finite energy of infinite order*: $$CH^k_{loc}(M, E, t) := \bigcap_{j = 0}^\infty C^j\left(t(M), H_{loc}^{k-j}({\Sigma}_\cdot, E|_{{\Sigma}_\cdot})\right).$$ The spaces $CH_{loc}^k(M, E, t)$ carry a natural induced Fr[é]{}chet topology. For $k = \infty$, write $$CH^\infty_{loc}(M, E, t) := \bigcap_{k \in {\mathbb{R}}} CH^k_{loc}(M, E, t) = C^\infty(M, E).$$ Note that we have the continuous embedding $$CH^k_{loc}(M, E, t) \hookrightarrow H^{\lfloor k \rfloor}_{loc}(M, E), \label{eq: CH_H-embedding}$$ where $\lfloor k \rfloor$ is the largest integer smaller than or equal to $k$. 
The finite energy sections can be considered as distributions defined by $$u[\varphi] := \int_{t(M)}u(\tau) \left[({\alpha}\varphi)|_{{\Sigma}_\tau} \right] d\tau.$$ For any subset $A \subset M$, let $J^{-/+}(A)$ denote the causal past/future of $A$ and denote their union $J(A)$. A subset $A \subset M$ is called *spatially compact* if $A \subset J(K)$ for some compact subset $K \subset M$. For each spatially compact subset $A \subset M$, the space $$CH_{A}^k(M, E, t) := \{f \in CH_{loc}^k(M, E, t) \mid {\mathrm{supp}}(f) \subset A \} \subset CH_{loc}^k(M, E, t)$$ is closed and therefore also a Fr[é]{}chet space. We define the *finite energy sections of spatially compact support* by $$CH_{sc}^k(M, E, t) := \bigcup_{\stackrel{A}{\text{spatially compact}}} CH_{A}^k(M, E, t)$$ with the strict inductive limit topology. The strict inductive limit topology is defined, since if $K_i$ is an exhaustion of a Cauchy hypersurface ${\Sigma}$, then $J(K_i)$ is an exhaustion of $M$ by spatially compact sets. Similar to before, the notion of convergence is given by the following lemma. The proof is analogous to the proof of Lemma \[le: convergence\_H\_c\].
\[le: convergence\_CH\_sc\] Assume that $V \subset CH_{sc}^k(M, E, t)$ is bounded. Then there is a compact subset $K \subset {\Sigma}$ such that $V \subset CH_{J(K)}^k(M, E, t)$. In particular, if $u_n \to u$ is a converging sequence (or net), then there is a compact subset $K \subset {\Sigma}$ such that ${\mathrm{supp}}(u_n), {\mathrm{supp}}(u) \subset J(K)$ and $u_n \to u$ in $CH^k_{J(K)}(M, E, t)$.
Any $P \in \mathrm{Diff}_m(E, F)$ extends to continuous maps $$\begin{aligned}
CH^k_{loc}(M, E, t) &\rightarrow CH^{k-m}_{loc}(M, E, t), \\
CH^k_{A}(M, E, t) &\rightarrow CH^{k-m}_{A}(M, E, t), \\
CH^k_{sc}(M, E, t) &\rightarrow CH^{k-m}_{sc}(M, E, t), \end{aligned}$$ for any $k \in {\mathbb{R}}\cup \{\infty\}$ and any spatially compact set $A \subset M$. The following lemma is proven analogously to Lemma \[le: two topologies\], using Lemma \[le: convergence\_CH\_sc\] instead of Lemma \[le: convergence\_H\_c\].
\[le: two topologies2\] Let $k \in {\mathbb{R}}\cup \{\infty\}$ and let $P \in \mathrm{Diff}_m(E, F)$. Then the induced subspace topology on $$CH^k_{sc}(M, E) \cap \ker(P)$$ is the same as the strict inductive limit topology induced by the embeddings $$CH^k_{J(K)}(M, E) \cap \ker(P) \hookrightarrow CH^k_{sc}(M, E) \cap \ker(P).$$
Since we will commonly work with distributional tensors, let us conclude this section by showing how some standard tensor operations are made on distributional tensors. Let $g$ be a smooth semi-Riemannian metric on a manifold $M$, extended to tensor fields.
- If $X \in \mathcal D'(M, TM)$ and $Y \in C^\infty(M, TM)$, then the distribution $g(X, Y)$ is given by $$g(X, Y)[\varphi] = X[\varphi g(\cdot, Y)].$$ This is well-defined since $\varphi g(\cdot, Y) \in C_c^\infty(M, T^*M)$. Using this, we can project $X$ to vector subbundles for example.
- Similarly, if $a \in \mathcal D'(M, T^*M \otimes T^*M)$ and $b \in C^\infty(M, T^*M \otimes T^*M)$, then the distribution $g(a, b)$ is defined by $$g(a, b)[\varphi] := a[\varphi g(\cdot, b)].$$ In particular, the trace of $a$ with respect to $g$ is defined and equals $${\mathrm{tr}}_g(a) := g(g, a).$$
Linearising the Einstein equation {#ch: non-linear_Cauchy}
=================================
We will study the linearisation of the vacuum Einstein equation $${\mathrm{Ric}}(g) = 0$$ on globally hyperbolic spacetimes of dimension at least $3$. Recall that if $(M, g)$ is a vacuum spacetime (i.e. ${\mathrm{Ric}}(g) = 0$) and ${\Sigma}\subset M$ is a spacelike hypersurface, then the induced first and second fundamental forms $({{\tilde g}}, {{\tilde k}})$ on ${\Sigma}$ satisfy $$\begin{aligned}
{\mathrm{Scal}}({{\tilde g}}) + ({\mathrm{tr}}_{{\tilde g}}{{\tilde k}})^2 - {{\tilde g}}({{\tilde k}}, {{\tilde k}}) =&0, \label{eq: ham_constraint} \\
{\mathrm{div}}({{\tilde k}}- ({\mathrm{tr}}_{{{\tilde g}}} {{\tilde k}}) {{\tilde g}}) =& 0. \label{eq: momentum_constraint}\end{aligned}$$ A famous result by Choquet-Bruhat and Geroch gives a converse statement to this.
\[thm: Choquet-Bruhat\] Given a Riemannian manifold $({\Sigma}, {{\tilde g}})$ and a smooth $(0,2)$-tensor ${{\tilde k}}$ on ${\Sigma}$ satisfying and , there is a maximal globally hyperbolic development $(M, g)$ of $({\Sigma}, {{\tilde g}}, {{\tilde k}})$ that is unique up to isometry.
A globally hyperbolic development means that there exists an embedding $\iota:{\Sigma}\hookrightarrow M$ such that $\iota({\Sigma}) \subset M$ is a Cauchy hypersurface and $({{\tilde g}}, {{\tilde k}})$ are the induced first and second fundamental forms. In particular, $(M, g)$ is a globally hyperbolic spacetime. That $(M, g)$ is maximal means that any other globally hyperbolic development embeds isometrically into $(M, g)$ in a way that respects the embedding of the Cauchy hypersurface.
Note that a maximal globally hyperbolic development can of course only be unique up to isometry. We will see that this “gauge invariance” shows up in the linearised case as an important feature of the linearised Einstein equation.
The linearised Einstein equation {#sec: Linearisation}
--------------------------------
Assume in the rest of the paper, unless otherwise stated, that $(M,g)$ is a globally hyperbolic spacetime of dimension at least $3$ satisfying the *Einstein equation*, i.e. $${\mathrm{Ric}}(g) = 0.$$ We *do not* require $(M,g)$ to be maximal in the sense of Theorem \[thm: Choquet-Bruhat\]. Let us now linearise the Einstein equation around $g$. For this, we first define the Lichnerowicz operator $$\Box_L h := {\nabla}^*{\nabla}h - 2 \mathring{R}h,$$ where $$\begin{aligned}
{\nabla}^*{\nabla}&:= - {\mathrm{tr}}_g({\nabla}^2), \qquad \qquad \text{(connection-Laplace operator)}\\
\mathring{R}h(X,Y) &:= {\mathrm{tr}}_g(h(R(\cdot, X)Y, \cdot)),\end{aligned}$$ for any $(0,2)$-tensor $h$ and $X, Y \in TM$. It will be natural to write the linearised Einstein equation as the Lichnerowicz operator plus a certain Lie derivative of the metric $g$. We use the following notation $$\begin{aligned}
{\nabla}\cdot h(X) &:= {\mathrm{tr}}_g({\nabla}_{\cdot}h(\cdot, X)), \qquad\text{(divergence)} \\
\overline h &:= h - \frac12 {\mathrm{tr}}_g(h)g. \end{aligned}$$ for any $h \in C^\infty(M, S^2M)$ and $X \in TM$.
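Two elementary properties of the trace reversal $h \mapsto \overline h$ are worth recording: ${\mathrm{tr}}_g(\overline h) = (1 - \tfrac n2)\,{\mathrm{tr}}_g(h)$, so in dimension $n = 4$ trace reversal flips the sign of the trace and is an involution, $\overline{\overline h} = h$. The following numpy sketch is illustrative only (the perturbation is a random symmetric matrix) and checks both claims on the flat Minkowski metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat Minkowski metric on R^4 and a random symmetric 2-tensor h.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
ginv = np.linalg.inv(g)
A = rng.standard_normal((4, 4))
h = A + A.T

def trace(g_inv, h):
    # tr_g(h) = g^{ij} h_{ij}
    return np.einsum('ij,ij->', g_inv, h)

def bar(h):
    # hbar = h - (1/2) tr_g(h) g
    return h - 0.5*trace(ginv, h)*g

# tr_g(hbar) = (1 - n/2) tr_g(h); in dimension n = 4 this is -tr_g(h),
# so trace reversal is an involution: bar(bar(h)) = h.
assert np.isclose(trace(ginv, bar(h)), -trace(ginv, h))
assert np.allclose(bar(bar(h)), h)
```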
\[le: LinearisedEinstein\] Any curve of smooth Lorentzian metrics $g_s$ with $g_0 = g$ satisfies $${\frac{d}{d s}\Big|_{s = 0}} {\mathrm{Ric}}(g_s) = \frac12 \left( \Box_L h + {\mathcal{L}}_{({\nabla}\cdot \overline h)^\sharp} g\right),$$ where $$h := {\frac{d}{d s}\Big|_{s = 0}} g_s,$$ ${\mathcal{L}}$ denotes the Lie derivative and $\sharp$ the musical isomorphism (“raising an index”).
This straightforward computation can for example be found in [@Besse1987]\*[Thm. 1.174]{}.
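Lemma \[le: LinearisedEinstein\] can also be checked symbolically: on the flat (in particular Ricci-flat) background $\delta$ on $\mathbb R^3$, the right-hand side reduces to $\tfrac12(\partial_k\partial_i h_{kj} + \partial_k\partial_j h_{ki} - \Delta h_{ij} - \partial_i\partial_j \operatorname{tr} h)$. The following sympy sketch (purely illustrative and not part of the argument; the perturbation $h$ is an arbitrary choice, and the identity is signature-independent, so the Euclidean signature is only for convenience) compares this against a direct differentiation of ${\mathrm{Ric}}(\delta + s h)$ at $s = 0$.

```python
import sympy as sp

x = sp.symbols('x0 x1 x2')
s = sp.Symbol('s')
n = 3

# A symmetric perturbation h of the flat metric delta on R^3 (arbitrary choice).
h = sp.Matrix([[x[0]*x[1], x[2],      0],
               [x[2],      x[1]**2,   x[0]],
               [0,         x[0],      x[1]*x[2]]])

g = sp.eye(n) + s*h
# First-order inverse: exact up to O(s^2), which suffices since we only
# differentiate once at s = 0.
ginv = sp.eye(n) - s*h

# Christoffel symbols Gam[k][i][j] = Gamma^k_{ij} of g_s.
Gam = [[[sum(ginv[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                         - sp.diff(g[i, j], x[l])) for l in range(n))/2
         for j in range(n)] for i in range(n)] for k in range(n)]

def ric(i, j):  # Ric(g_s)_{ij} in the standard convention
    return sum(sp.diff(Gam[k][i][j], x[k]) - sp.diff(Gam[k][k][i], x[j])
               + sum(Gam[k][k][l]*Gam[l][i][j] - Gam[k][j][l]*Gam[l][i][k]
                     for l in range(n))
               for k in range(n))

# d/ds|_{s=0} Ric(g_s): direct linearisation.
DRic_direct = sp.Matrix(n, n, lambda i, j: sp.diff(ric(i, j), s).subs(s, 0))

# Flat-space evaluation of (1/2)(Box_L h + L_{(div hbar)^sharp} g):
tr_h = sum(h[k, k] for k in range(n))
DRic_formula = sp.Matrix(n, n, lambda i, j: sp.Rational(1, 2)*(
    sum(sp.diff(h[k, j], x[k], x[i]) + sp.diff(h[k, i], x[k], x[j])
        - sp.diff(h[i, j], x[k], x[k]) for k in range(n))
    - sp.diff(tr_h, x[i], x[j])))

assert (DRic_direct - DRic_formula).expand() == sp.zeros(n, n)
```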
Let us denote the vector bundle of symmetric $2$-tensors on $M$ by $$S^2M := \otimes_{sym}^2T^*M.$$ Lemma \[le: LinearisedEinstein\] motivates the following definition.
We define the linearised Ricci curvature $$D {\mathrm{Ric}}(h) := \frac12 \left( \Box_L h + {\mathcal{L}}_{({\nabla}\cdot \overline h)^\sharp}g \right)$$ for any $h \in \mathcal D'(M, S^{2}M)$. We say that $h$ satisfies the *linearised Einstein equation* if $$D {\mathrm{Ric}}(h) = 0.$$
Here we use the definition that extends to distributions, $${\mathcal{L}}_Vg(X,Y) = g({\nabla}_XV, Y) + g({\nabla}_YV, X).$$
Note that $\square_L$ is a wave operator, but $D{\mathrm{Ric}}$ is not (cf. Section \[sec: Waves\]).
There are certain solutions of the linearised Einstein equation, called “gauge solutions”, which are due to “infinitesimal isometries”.
For any vector field $V \in \mathcal D'(M, TM)$, we have $$D{\mathrm{Ric}}({\mathcal{L}}_V g) = 0.$$
Let us first restrict to smooth objects. Let $\varphi_s: M \to M$ be a curve of diffeomorphisms such that $\varphi_0 = {\mathrm{id}}$ and $\frac d {ds} \big |_{s = 0} \varphi_s = V$. Differentiating the equation $$0 = \varphi_s^*{\mathrm{Ric}}(g) = {\mathrm{Ric}}(\varphi_s^* g)$$ gives $$D{\mathrm{Ric}}({\mathcal{L}}_Vg) = 0.$$ Since smooth sections are dense in the space of distributional sections and both sides depend continuously on $V$, the result extends to the general case.
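On a flat background the gauge invariance can also be checked by hand: with $g = \delta$ on $\mathbb R^3$, the operator $D{\mathrm{Ric}}$ reduces to $h_{ij} \mapsto \tfrac12(\partial_k\partial_i h_{kj} + \partial_k\partial_j h_{ki} - \Delta h_{ij} - \partial_i\partial_j \operatorname{tr} h)$, and all terms cancel for $h = {\mathcal{L}}_V\delta$. The following sympy sketch (illustrative only; the vector field $V$ is an arbitrary choice) verifies this cancellation.

```python
import sympy as sp

x = sp.symbols('x0 x1 x2')
n = 3

# A vector field V on flat R^3; h = L_V delta is the corresponding gauge solution.
V = [x[1]**2*x[2], sp.exp(x[0])*x[1], x[0]*x[2]**2]
h = sp.Matrix(n, n, lambda i, j: sp.diff(V[j], x[i]) + sp.diff(V[i], x[j]))
tr_h = sum(h[k, k] for k in range(n))

# Flat-space linearised Ricci:
# DRic(h)_{ij} = (1/2)(d_k d_i h_{kj} + d_k d_j h_{ki} - Lap h_{ij} - d_i d_j tr h)
DRic = sp.Matrix(n, n, lambda i, j: sp.Rational(1, 2)*(
    sum(sp.diff(h[k, j], x[k], x[i]) + sp.diff(h[k, i], x[k], x[j])
        - sp.diff(h[i, j], x[k], x[k]) for k in range(n))
    - sp.diff(tr_h, x[i], x[j])))

# Gauge solutions are annihilated by the linearised Ricci operator.
assert DRic.applyfunc(sp.simplify) == sp.zeros(n, n)
```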
The linearised constraint equation
----------------------------------
Assume throughout the rest of the paper, unless otherwise stated, that ${\Sigma}\subset M$ is a smooth spacelike Cauchy hypersurface with future pointing unit normal vector field $\nu$. Let $({{\tilde g}}, {{\tilde k}})$ denote the first and second fundamental forms. As mentioned in the beginning of this section, $({{\tilde g}}, {{\tilde k}})$ will satisfy the *constraint equation* $$\Phi({{\tilde g}}, {{\tilde k}}) := \left(
\begin{array}{ll}
\Phi_1({{\tilde g}}, {{\tilde k}}) \\
\Phi_2({{\tilde g}}, {{\tilde k}})
\end{array} \right) = 0,$$ where $$\begin{aligned}
\Phi_1({{\tilde g}}, {{\tilde k}}) &= {\mathrm{Scal}}({{\tilde g}}) - {{\tilde g}}({{\tilde k}}, {{\tilde k}}) + ({\mathrm{tr}}_{{\tilde g}}{{\tilde k}})^2, \\
\Phi_2({{\tilde g}}, {{\tilde k}}) &= \tilde {\nabla}\cdot {{\tilde k}}- d({\mathrm{tr}}_{{\tilde g}}{{\tilde k}}),\end{aligned}$$ and $\tilde {\nabla}$ is the Levi-Civita connection on ${\Sigma}$ with respect to ${{\tilde g}}$. We linearise the constraint equation around $({{\tilde g}}, {{\tilde k}})$, analogously to Lemma \[le: LinearisedEinstein\].
\[def: lin\_constr\_eq\] A pair of tensors $({{\tilde h}}, {{\tilde m}}) \in \mathcal D'({\Sigma}, S^{2}{\Sigma}) \times \mathcal D'({\Sigma}, S^2{\Sigma})$ is said to satisfy the *linearised constraint equation*, linearised around $({{\tilde g}}, {{\tilde k}})$, if $$D\Phi({{\tilde h}}, {{\tilde m}}) := \left( \begin{array}{ll}
D\Phi_1({{\tilde h}}, {{\tilde m}}) \\
D\Phi_2({{\tilde h}}, {{\tilde m}})
\end{array} \right) = 0,$$ in $\mathcal D'({\Sigma}, {\mathbb{R}}) \times \mathcal D'({\Sigma}, T^*{\Sigma})$, where $$\begin{aligned}
D\Phi_1({{\tilde h}}, {{\tilde m}})
:= & \tilde {\nabla}\cdot(\tilde {\nabla}\cdot {{\tilde h}}- d {\mathrm{tr}}_{{\tilde g}}{{\tilde h}}) - {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {{\tilde h}}) \nonumber \\
& + 2 {{\tilde g}}({{\tilde k}}\circ {{\tilde k}}- ({\mathrm{tr}}_{{\tilde g}}{{\tilde k}}) {{\tilde k}}, {{\tilde h}}) - 2 {{\tilde g}}({{\tilde k}}, {{\tilde m}}- ({\mathrm{tr}}_{{\tilde g}}{{\tilde m}}){{\tilde g}}), \label{eq: LinConstraints1}\\
D\Phi_2({{\tilde h}}, {{\tilde m}})(X) \nonumber
:=& - {{\tilde g}}({{\tilde h}}, \tilde {\nabla}_{(\cdot)} {{\tilde k}}(\cdot, X)) - {{\tilde g}}\left({{\tilde k}}(\cdot, X), \tilde {\nabla}\cdot ({{\tilde h}}- \frac12 ({\mathrm{tr}}_{{{\tilde g}}}{{\tilde h}}) {{\tilde g}})\right) \nonumber\\
& - \frac 12 {{\tilde g}}({{\tilde k}}, \tilde {\nabla}_X {{\tilde h}}) + d({{\tilde g}}({{\tilde k}}, {{\tilde h}}))(X) + \tilde {\nabla}\cdot ({{\tilde m}}- ({\mathrm{tr}}_{{\tilde g}}{{\tilde m}}){{\tilde g}})(X), \label{eq: LinConstraints2}\end{aligned}$$ for any $X \in T{\Sigma}$, where ${{\tilde k}}\circ {{\tilde k}}(X, Y) := {{\tilde g}}({{\tilde k}}(X, \cdot), {{\tilde k}}(Y, \cdot))$ for any $X, Y \in T{\Sigma}$.
Similarly to the non-linear case, we will be given initial data satisfying the linearised constraint equations and require a solution to induce these initial data as linearised first and second fundamental forms. Therefore, we need to linearise the following expressions $$\begin{aligned}
{{\tilde g}}(X,Y) :=& g(X,Y), \\
{{\tilde k}}(X,Y) :=& g({\nabla}_X \nu, Y),\end{aligned}$$ analogously to Lemma \[le: LinearisedEinstein\]. In order to make sense of the restriction of distributional tensors to ${\Sigma}$, we assume the following regularity.
\[def: LinearisedFundamentalForms\] Given $h \in CH_{loc}^k(M, S^2M, t)$, we define $({{\tilde h}}, {{\tilde m}}) \in H_{loc}^{k}({\Sigma}, S^2{\Sigma}) \times H_{loc}^{k-1}({\Sigma}, S^2{\Sigma})$ as $$\begin{aligned}
{{\tilde h}}(X,Y) &= h(X,Y), \\
{{\tilde m}}(X,Y) &= -\frac12 h(\nu, \nu){{\tilde k}}(X,Y) - \frac12 {\nabla}_X h(\nu, Y) - \frac12 {\nabla}_Y h(\nu, X) + \frac12 {\nabla}_\nu h(X,Y),\end{aligned}$$ for any $X, Y \in T{\Sigma}$. We call ${{\tilde h}}$ and ${{\tilde m}}$ the *linearised first and second fundamental forms* induced by $h$.
Analogously to the non-linear case, one shows that if ${{\tilde h}}$ and ${{\tilde m}}$ are the linearised first and second fundamental forms induced by $h$, then using that ${\mathrm{Ric}}(g) = 0$ we get $$\begin{aligned}
{\mathrm{tr}}_g(D{\mathrm{Ric}}(h)) + 2 D{\mathrm{Ric}}(h)(\nu, \nu) = D\Phi_1({{\tilde h}}, {{\tilde m}}), \label{eq: ricciNormalNormal}\\
D{\mathrm{Ric}}(h)(\nu, \cdot) = D\Phi_2({{\tilde h}}, {{\tilde m}}) \label{eq: ricciNormalDot}.\end{aligned}$$ In particular, if $D{\mathrm{Ric}}(h) = 0$, the induced initial data $({{\tilde h}}, {{\tilde m}})$ must satisfy $D\Phi({{\tilde h}}, {{\tilde m}}) = 0$. Let us now formulate the Cauchy problem of the linearised Einstein equation.
Let $({{\tilde h}}, {{\tilde m}}) \in H_{loc}^{k}({\Sigma}, S^2{\Sigma}) \times H_{loc}^{k-1}({\Sigma}, S^2{\Sigma})$ satisfy $D\Phi({{\tilde h}}, {{\tilde m}}) = 0$. If $h \in CH_{loc}^k(M, S^2M, t)$ satisfies $$D{\mathrm{Ric}}(h) = 0$$ and induces $({{\tilde h}}, {{\tilde m}})$ as linearised first and second fundamental forms, then we call $h$ a *solution to the Cauchy problem of the linearised Einstein equation* with initial data $({{\tilde h}}, {{\tilde m}})$.
Well-posedness of the Cauchy problem {#ch: Cauchy_lin_Ein}
====================================
The goal of this section is to prove our first main result, Theorem \[thm: Wellposedness\]. Recall the setting. We assume that $(M, g)$ is a globally hyperbolic spacetime of dimension at least $3$ solving the Einstein equation $${\mathrm{Ric}}(g) = 0.$$ We also assume that ${\Sigma}\subset M$ is a spacelike Cauchy hypersurface. It follows that $$\Phi({{\tilde g}}, {{\tilde k}}) = 0,$$ where $({{\tilde g}}, {{\tilde k}})$ are the induced first and second fundamental forms.
Existence of solution {#sec: Existence}
---------------------
We start by proving that given initial data satisfying the linearised constraint equation, there is a solution to the linearised Einstein equation. The basic method is well-known and is analogous to the proof of the classical existence result for the non-linear Einstein equation [@F-B1952]. The crucial point in the proof is to translate the initial data to initial data for a wave equation. We show that the existence result extends to initial data of arbitrary real Sobolev degree. Recall the notation $$\bar h := h-\frac12 {\mathrm{tr}}_g(h) g.$$
\[le: GaugeChoice\] For $k \in {\mathbb{R}}$, let $({{\tilde h}}, {{\tilde m}}) \in H_{loc}^{k}({\Sigma}, S^2{\Sigma}) \times H_{loc}^{k-1}({\Sigma}, S^2{\Sigma})$ be given. Assume that ${h \in CH_{loc}^k(M,S^{2}M, t)}$ satisfies $$\begin{aligned}
h(X,Y) &= {{\tilde h}}(X,Y), & {\nabla}_\nu h(X,Y) &= 2 {{\tilde m}}(X,Y) - ({{\tilde h}}\circ {{\tilde k}}+ {{\tilde k}}\circ {{\tilde h}})(X,Y), \\
h(\nu, X) &= 0, & {\nabla}_\nu h(\nu, X) &= \tilde {\nabla}\cdot \left( {{\tilde h}}- \frac12 ({\mathrm{tr}}_{{\tilde g}}{{\tilde h}}){{\tilde g}}\right)(X), \\
h(\nu, \nu) &= 0, & {\nabla}_\nu h(\nu, \nu) &= -2 {\mathrm{tr}}_{{\tilde g}}{{\tilde m}},
\end{aligned}$$ for all $X,Y \in T{\Sigma}$, where ${{\tilde h}}\circ {{\tilde k}}(X, Y) := {{\tilde g}}({{\tilde h}}(X, \cdot), {{\tilde k}}(Y, \cdot))$ for all $X, Y \in T{\Sigma}$. Then ${{\tilde h}}, {{\tilde m}}$ are the first and second linearised fundamental forms induced by $h$ and $${\nabla}\cdot \overline h|_{\Sigma}= 0.$$
The proof is a simple computation. Let us now state the existence theorem.
\[thm: Existence\] Let $k \in {\mathbb{R}}\cup \{\infty\}$ and assume that $({{\tilde h}}, {{\tilde m}}) \in H_{loc}^{k}({\Sigma}, S^2{\Sigma}) \times H_{loc}^{k-1}({\Sigma}, S^2{\Sigma})$ satisfies $$D\Phi({{\tilde h}}, {{\tilde m}}) = 0.$$ Then there exists a unique $$h \in CH_{loc}^k(M,S^2M, t),$$ inducing linearised first and second fundamental forms $({{\tilde h}}, {{\tilde m}})$, such that $h|_{\Sigma}$ and ${\nabla}_\nu h|_{\Sigma}$ are as in Lemma \[le: GaugeChoice\] and $$\begin{aligned}
\square_L h =& 0, \\
{\nabla}\cdot \overline h =& 0.\end{aligned}$$ In particular $$D{\mathrm{Ric}}(h) = 0.$$ Moreover $$\label{eq: FiniteSpeed}
{\mathrm{supp}}(h) \subset J \left({\mathrm{supp}}({{\tilde h}}) \cup {\mathrm{supp}}({{\tilde m}}) \right).$$
From the wave equation $\square_L h = 0$, we conclude that in fact $h \in H_{loc}^{\lfloor{k}\rfloor}(M)$.
\[rmk: add\_gauge\] The property \eqref{eq: FiniteSpeed} is called *finite speed of propagation*. If the initial data are compactly supported, the solution will have spatially compact support. Note however that \eqref{eq: FiniteSpeed} will not hold for all solutions with initial data $({{\tilde h}}, {{\tilde m}})$. If for example $V \in C^\infty(M, TM)$ has support not intersecting ${\Sigma}$, then $h + {\mathcal{L}}_Vg$ is a solution with the same initial data, and the support of ${\mathcal{L}}_Vg$ need not be contained in $ J \left({\mathrm{supp}}({{\tilde h}}) \cup {\mathrm{supp}}({{\tilde m}}) \right)$.
Using Theorem \[thm: Existence\], we get the following stability result.
\[cor: Stability\] For $k \in {\mathbb{R}}\cup \{\infty\}$, assume that $({{\tilde h}}_i, {{\tilde m}}_i)_{i \in {\mathbb{N}}} \in H_{loc}^{k}({\Sigma}) \times H_{loc}^{k-1}({\Sigma})$ satisfy $D\Phi({{\tilde h}}_i, {{\tilde m}}_i) = 0$ and $$({{\tilde h}}_i, {{\tilde m}}_i) \to ({{\tilde h}}, {{\tilde m}}) \in H^{k}_{loc}({\Sigma}) \times H_{loc}^{k-1}({\Sigma})$$ in $H_{loc}^{k}({\Sigma}) \times H_{loc}^{k-1}({\Sigma})$. Then there exists a solution $h \in CH_{loc}^k(M,t)$ inducing initial data $({{\tilde h}},{{\tilde m}})$ and a sequence of solutions $h_i \in CH_{loc}^k(M,t)$, inducing $({{\tilde h}}_i, {{\tilde m}}_i)$ as initial data, such that $$h_i \to h$$ in $CH_{loc}^k(M,t)$ and ${\nabla}\cdot \overline h_i = 0$.
Since $$({{\tilde h}}_i, {{\tilde m}}_i) \to ({{\tilde h}}, {{\tilde m}}),$$ the equations in Lemma \[le: GaugeChoice\] imply that $(h_i|_{\Sigma}, {\nabla}_\nu h_i|_{\Sigma}) \to (h|_{\Sigma}, {\nabla}_\nu h|_{\Sigma})$. Since $$\Box_L h = \Box_L h_i = 0,$$ we conclude by continuous dependence on initial data for linear wave equations (see Corollary \[cor: cont\_dep\_id\]) that $h_i \to h$.
It is important to note that given converging initial data, the previous corollary gives *one* sequence of converging solutions inducing the correct initial data. Not every sequence of solutions that induces the correct initial data will converge; one could just add a gauge solution as in Remark \[rmk: add\_gauge\]. This is the reason why, a priori, the question of continuous dependence on initial data does not make sense. This will be resolved in Section \[sec: Main\] by considering *equivalence classes* of solutions. Let us now turn to the proof of the theorem.
\[le: DivergenceFree\] If $h \in \mathcal D'(M, S^{2}M)$, then $${\nabla}\cdot \left( D{\mathrm{Ric}}(h) - \frac12 {\mathrm{tr}}_g(D{\mathrm{Ric}}(h)) g \right) = 0.$$
For any Lorentzian metric $\hat g$, $$\hat {\nabla}\cdot \left( {\mathrm{Ric}}(\hat g) - \frac12 {\mathrm{tr}}_{\hat g} ({\mathrm{Ric}}(\hat g)) \hat g\right) = 0,$$ where $\hat {\nabla}$ is the Levi-Civita connection with respect to $\hat g$. Linearising this equation around $g$, using ${\mathrm{Ric}}(g) = 0$, gives the asserted equation for smooth $h$. Since smooth sections are dense in the space of distributional sections, this proves the lemma.
A calculation that will be very useful in many places is the following.
Assume that $(N, \hat g)$ is a semi-Riemannian manifold with Levi-Civita connection $\hat {\nabla}$. Then $$\hat {\nabla}\cdot \left( {\mathcal{L}}_V \hat g - \frac{1}{2}{\mathrm{tr}}_{\hat g}({\mathcal{L}}_V {\hat g})\hat g\right) = - \hat {\nabla}^* \hat {\nabla}V^\flat + {\mathrm{Ric}}(\hat g)(V, \cdot). \label{eq: Killing_wave}$$
Let $(e_1, \hdots, e_n)$ be a local orthonormal frame with respect to $\hat g$ and define $\epsilon_i := \hat g(e_i, e_i) \in \{-1, 1\}$. We have $$\begin{aligned}
\hat {\nabla}\cdot &\left( {\mathcal{L}}_V \hat g - \frac{1}{2}{\mathrm{tr}}_{\hat g}({\mathcal{L}}_V {\hat g})\hat g\right)(X) \\
&= \sum_{i = 1}^n \epsilon_i \left( \hat {\nabla}_{e_i}{\mathcal{L}}_V \hat g(e_i, X) - {\partial}_X\hat g(\hat {\nabla}_{e_i}V, e_i) \right) \\
&= \sum_{i = 1}^n \epsilon_i \left( \hat g(\hat {\nabla}^2_{e_i, e_i} V, X) + \hat g(\hat {\nabla}^2_{e_i, X} V, e_i) - \hat g(\hat {\nabla}^2_{X, e_i}V, e_i) \right) \\
&= - \hat {\nabla}^* \hat {\nabla}V^\flat(X) + {\mathrm{Ric}}(\hat g)(V, X).\end{aligned}$$
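In the flat Euclidean case the identity \eqref{eq: Killing_wave} reads $\partial^i\big(\partial_i V_j + \partial_j V_i - (\partial^k V_k)\,\delta_{ij}\big) = \Delta V_j$, since there $-\hat\nabla^*\hat\nabla$ is the componentwise Laplacian and ${\mathrm{Ric}}= 0$. The following sympy sketch (illustrative only; the vector field is an arbitrary choice) confirms this componentwise on $\mathbb R^3$.

```python
import sympy as sp

x = sp.symbols('x0 x1 x2')
n = 3

# An arbitrary vector field V on flat R^3 (Ric = 0, -nabla*nabla = Laplacian).
V = [x[0]**2*x[1], sp.sin(x[0])*x[2], x[1]*x[2]**3]

# (L_V delta)_{ij} = d_i V_j + d_j V_i and its trace reversal.
Lg = sp.Matrix(n, n, lambda i, j: sp.diff(V[j], x[i]) + sp.diff(V[i], x[j]))
trLg = sum(Lg[k, k] for k in range(n))
barLg = Lg - sp.Rational(1, 2)*trLg*sp.eye(n)

# Divergence of barLg versus the flat connection Laplacian of V^flat.
div_bar = [sum(sp.diff(barLg[i, j], x[i]) for i in range(n)) for j in range(n)]
lap_V = [sum(sp.diff(V[j], x[i], x[i]) for i in range(n)) for j in range(n)]

assert all(sp.simplify(div_bar[j] - lap_V[j]) == 0 for j in range(n))
```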
Consider the Cauchy problem $$\label{eq: L_d'Alembert}
\Box_L h = 0$$ with $h|_{\Sigma}$ and ${\nabla}_\nu h|_{\Sigma}$ defined as in Lemma \[le: GaugeChoice\], using $({{\tilde h}}, {{\tilde m}})$. One checks that $(h|_{\Sigma}, {\nabla}_\nu h|_{\Sigma}) \in H^k_{loc}({\Sigma}, S^2M|_{\Sigma}) \times H^{k-1}_{loc}({\Sigma}, S^2M|_{\Sigma})$. By Theorem \[thm: WellposednessLinearWaves\] there is a unique solution $h \in CH_{loc}^k(M, S^2M, t)$ to this Cauchy problem. Moreover, it follows that ${\mathrm{supp}}(h) \subset J({\mathrm{supp}}({{\tilde h}}) \cup {\mathrm{supp}}({{\tilde m}}))$. We claim that ${\nabla}\cdot \overline h = 0$. Since $h \in CH_{loc}^k(M, t)$, it follows by Section \[sec: Notation\] that ${\nabla}\cdot \overline h \in CH^{k-1}_{loc}(M, t)$. Lemma \[le: DivergenceFree\] implies that $$\begin{aligned}
0 =& {\nabla}\cdot \left(D{\mathrm{Ric}}(h) - \frac12 {\mathrm{tr}}_g(D{\mathrm{Ric}}(h)) g \right) \\
=& \frac12 {\nabla}\cdot \left({\mathcal{L}}_{({\nabla}\cdot \overline h)^\sharp}g - \frac12 {\mathrm{tr}}_g\left({\mathcal{L}}_{({\nabla}\cdot \overline h)^\sharp}g \right) g \right) \\
\stackrel{\eqref{eq: Killing_wave}}{=}& - \frac12 {\nabla}^*{\nabla}({\nabla}\cdot \overline h),\end{aligned}$$ since ${\mathrm{Ric}}(g) = 0$. From Lemma \[le: GaugeChoice\], we know that ${\nabla}\cdot \overline h|_{\Sigma}= 0$. We now use the assumption that $D\Phi({{\tilde h}}, {{\tilde m}}) = 0$ to show that ${\nabla}_\nu ({\nabla}\cdot \overline h)|_{\Sigma}= 0$. Since we know that $\Box_L h = 0$ and ${\nabla}\cdot \overline{h}|_{\Sigma}= 0$, equations \eqref{eq: ricciNormalNormal} and \eqref{eq: ricciNormalDot} imply that $$\begin{aligned}
0 &= D\Phi_1({{\tilde h}}, {{\tilde m}}) \\
&= {\mathrm{tr}}_g(D{\mathrm{Ric}}(h)) + 2 D{\mathrm{Ric}}(h)(\nu, \nu) \\
&= \frac12 \left( {\mathrm{tr}}_g({\mathcal{L}}_{({\nabla}\cdot \overline h)^\sharp}g) + 2{\mathcal{L}}_{({\nabla}\cdot \overline h)^\sharp}g(\nu, \nu) \right) \\
&= {\nabla}_\nu ({\nabla}\cdot \overline{h})(\nu), \\
0 &= D\Phi_2({{\tilde h}}, {{\tilde m}})(X) \\
&= D{\mathrm{Ric}}(h)(\nu, X) \\
&= \frac12 {\nabla}_\nu({\nabla}\cdot \overline{h})(X),\end{aligned}$$ for each $X \in T{\Sigma}$. Altogether we have shown that ${\nabla}\cdot \overline h \in CH_{loc}^{k-1}(M,t)$ satisfies $$\begin{aligned}
{\nabla}^*{\nabla}({\nabla}\cdot \overline h) &= 0, \\
{\nabla}\cdot \overline h|_{\Sigma}&= 0, \\
{\nabla}_\nu ({\nabla}\cdot \overline h)|_{\Sigma}&= 0.\end{aligned}$$ Theorem \[thm: WellposednessLinearWaves\] now implies that ${\nabla}\cdot \overline h = 0$. This finishes the proof.
Uniqueness up to gauge {#sec: Uniqueness}
----------------------
We continue by showing that the solution is unique up to addition of a gauge solution.
\[thm: Uniqueness\] Let $k \in {\mathbb{R}}\cup \{\infty\}$. Assume that $h \in CH_{loc}^k(M, S^{2}M, t)$ satisfies $$D{\mathrm{Ric}}(h) = 0$$ and that the induced first and second linearised fundamental forms vanish. Then there exists a vector field $V \in CH_{loc}^{k+1}(M, TM, t)$ such that $$h = {\mathcal{L}}_V g.$$ If ${\mathrm{supp}}(h) \subset J(K)$ for some compact $K \subset {\Sigma}$, we can choose $V$ such that ${\mathrm{supp}}(V) \subset J(K)$.
We start by proving a technical lemma which is reminiscent of elliptic regularity theory. The difference is that we work with finite energy spaces and not Sobolev spaces.
\[le: regularity\_lie\_derivative\] Let $V \in CH_{loc}^k(M, TM, t)$ with ${\mathcal{L}}_V g \in CH^k_{loc}(M, S^2M, t)$. Then $V \in CH_{loc}^{k+1}(M, TM, t)$.
By assumption, $${\nabla}^{j}_{t, \hdots, t} V \in C^0(t(M), H^{k - j}_{loc}({\Sigma}_\cdot))$$ for all integers $j \geq 0$. We would be done if we could show that ${\nabla}^{j}_{t, \hdots, t} V \in C^0(t(M), H^{k - j+1}_{loc}({\Sigma}_\cdot))$ for all integers $j \geq 0$. By commuting derivatives, note that $$\begin{aligned}
{\mathcal{L}}_{{\nabla}^{j}_{t, \hdots, t} V} g(X, Y) &= g({\nabla}_X{\nabla}^{j}_{t, \hdots, t} V, Y) + g({\nabla}_Y {\nabla}^{j}_{t, \hdots, t}V, X) \\
&= ({\nabla}_t)^j{\mathcal{L}}_Vg(X, Y) + P_j(V)(X, Y),\end{aligned}$$ where $P_j$ is some differential operator of order $j$. Using the assumptions, this shows that $${\mathcal{L}}_{{\nabla}^{j}_{t, \hdots, t} V} g \in CH_{loc}^{k-j}(M, t).$$ For each $\tau \in t(M)$, let $({{\tilde g}}_\tau, {{\tilde k}}_\tau)$ be the induced first and second fundamental forms on the Cauchy hypersurface ${\Sigma}_\tau$. Let ${\nabla}^{j}_{t, \hdots, t} V|_{{\Sigma}_\tau} =: ({\nabla}^{j}_{t, \hdots, t} V)^\perp|_{{\Sigma}_\tau} \nu_\tau + ({\nabla}^{j}_{t, \hdots, t} V)^\parallel|_{{\Sigma}_\tau}$ be the projection onto parallel and normal components with respect to ${\Sigma}_\tau$, where $\nu_\tau$ is the future pointing normal vector field along ${\Sigma}_\tau$. Using this, we get a split $TM|_{{\Sigma}_\tau} \cong {\mathbb{R}}\oplus T{\Sigma}_\tau$. Note that $${\mathcal{L}}_{({\nabla}^{j}_{t, \hdots, t} V)^\parallel|_{{\Sigma}_\tau}} {{\tilde g}}_\tau = {\mathcal{L}}_{({\nabla}^{j}_{t, \hdots, t} V)^\parallel}g|_{{\Sigma}_\tau} - 2 ({\nabla}^{j}_{t, \hdots, t} V)^\perp|_{{\Sigma}_\tau} {{\tilde k}}_\tau.$$ It follows that $$\tau \mapsto {\mathcal{L}}_{({\nabla}^{j}_{t, \hdots, t} V)^\parallel|_{{\Sigma}_\tau}} {{\tilde g}}_\tau \in CH_{loc}^{k-j}(M, t) \subset C^0(t(M), H_{loc}^{k-j}({\Sigma}_\cdot)).$$ Since $$X \mapsto {\mathcal{L}}_X {{\tilde g}}_{\tau} \in \mathrm{Diff}_1(T{\Sigma}_\tau, S^2{\Sigma}_\tau)$$ is a differential operator of injective principal symbol, elliptic regularity theory implies that $({\nabla}^{j}_{t, \hdots, t} V)^\parallel \in C^0(t(M), H_{loc}^{k+1-j}({\Sigma}_\cdot))$ for all integers $j \geq 0$. Using this, we conclude that $$\begin{aligned}
{\partial}_X (({\nabla}^{j}_{t, \hdots, t} V)^\perp) &= - {\alpha}g({\nabla}_X {\nabla}^{j}_{t, \hdots, t} V, {\mathrm{grad}}(t)) + g(({\nabla}^{j}_{t, \hdots, t} V)^\parallel, {\nabla}_X({\alpha}{\mathrm{grad}}(t))) \\
&= -{\alpha}{\mathcal{L}}_{{\nabla}^{j}_{t, \hdots, t}V}g ({\mathrm{grad}}(t), X) + {\alpha}g({\nabla}_t {\nabla}^{j}_{t, \hdots, t} V, X) \\
& \quad + g(({\nabla}^{j}_{t, \hdots, t} V)^\parallel, {\nabla}_X({\alpha}{\mathrm{grad}}(t))) \\
&= - {\alpha}{\nabla}^{j}_{t, \hdots, t} {\mathcal{L}}_Vg ({\mathrm{grad}}(t), X) + {\alpha}g(({\nabla}^{j+1}_{t, \hdots, t} V)^\parallel, X) \\
& \quad + Q_j(V)(X) \in C^{0}(t(M), H_{loc}^{k-j}({\Sigma}_\cdot))\end{aligned}$$ for all $X \in T{\Sigma}_\cdot$, since $Q_j$ is some differential operator of order $j$. We conclude that $$d(({\nabla}^{j}_{t, \hdots, t} V)^\perp) \in C^{0}(t(M), H_{loc}^{k-j}({\Sigma}_\cdot, T^*{\Sigma}_\cdot)).$$ Since $d$ is a first order linear differential operator mapping functions to one-forms on ${\Sigma}_\tau$ and its principal symbol is injective, we conclude that $({\nabla}^{j}_{t, \hdots, t} V)^\perp \in C^0(t(M), H_{loc}^{k+1-j}({\Sigma}_\cdot))$ for all integers $j \geq 0$. We conclude that $${\nabla}^{j}_{t, \hdots, t} V \in C^0(t(M), H_{loc}^{k+1-j}({\Sigma}_\cdot))$$ for all integers $j \geq 0$, which is the same as $V \in CH_{loc}^{k+1}(M, TM, t)$.
The proof of Theorem \[thm: Uniqueness\] is a generalisation of the proof of [@FewsterHunt2013]\*[Thm. 3.3]{} to solutions of low regularity.
By Section \[sec: Notation\], we know that ${\nabla}\cdot \overline h \in CH_{loc}^{k-1}(M, T^*M, t)$. By Theorem \[thm: WellposednessLinearWaves\], we can define $$V \in CH_{loc}^k(M, TM, t)$$ as the unique solution to $$\begin{aligned}
{\nabla}^*{\nabla}V =& - {\nabla}\cdot \overline h^\sharp, \label{eq: DefiningV} \\
V|_{\Sigma}=& 0, \nonumber \\
{\nabla}_\nu V|_{\Sigma}=& \frac12 h(\nu, \nu) \nu + h(\nu, \cdot)^\sharp, \nonumber\end{aligned}$$ where $\sharp: T^*M \to TM$ is the musical isomorphism with inverse $\flat:TM \to T^*M$. If ${\mathrm{supp}}(h) \subset J(K)$ for some subset $K \subset {\Sigma}$, then [@BaerWafo2014]\*[Rmk. 16]{} implies that ${\mathrm{supp}}(V) \subset J(K)$. By equations \eqref{eq: Killing_wave} and \eqref{eq: DefiningV}, using ${\mathrm{Ric}}(g) = 0$, we have $${\nabla}\cdot \overline{{\mathcal{L}}_V g} = - {\nabla}^* {\nabla}V^\flat = {\nabla}\cdot \overline h,$$ where $\overline{{\mathcal{L}}_V g} :={\mathcal{L}}_V g - \frac12 {\mathrm{tr}}_g\left({\mathcal{L}}_V g\right) g$. Hence $$\begin{aligned}
0 =& 2 D{\mathrm{Ric}}(h - {\mathcal{L}}_Vg) \\
=& \square_L (h - {\mathcal{L}}_Vg) + {\mathcal{L}}_{{\nabla}\cdot(\overline h- \overline{{\mathcal{L}}_Vg} )^\sharp } g \\
=& \square_L (h - {\mathcal{L}}_Vg).\end{aligned}$$ Since $V \in CH^k_{loc}(M,TM, t)$, we know that ${\mathcal{L}}_V g \in CH^{k-1}_{loc}(M, S^2M, t)$, which implies that $h - {\mathcal{L}}_V g \in CH_{loc}^{k-1}(M, S^2M, t)$. Hence, if we knew that $$\begin{aligned}
(h - {\mathcal{L}}_Vg)|_{\Sigma}&= 0, \label{eq: ValueEq} \\
{\nabla}_\nu(h - {\mathcal{L}}_Vg)|_{\Sigma}&=0, \label{eq: NormalEq}\end{aligned}$$ then Theorem \[thm: WellposednessLinearWaves\] would imply that $h - {\mathcal{L}}_V g = 0$ as asserted. We start by showing \eqref{eq: ValueEq}. Since $V|_{\Sigma}= 0$ and ${\nabla}_\nu V|_{\Sigma}= \frac12 h(\nu, \nu) \nu + h(\nu, \cdot)^{\sharp}$ and ${{\tilde h}}= 0$, we get for all $X, Y \in T{\Sigma}$, $$\begin{aligned}
h(X, Y) &= {{\tilde h}}(X, Y) \\
&= 0 \\
&= g({\nabla}_X V, Y) + g({\nabla}_Y V, X) \\
&= {\mathcal{L}}_Vg(X, Y), \\
h(X, \nu) &= g({\nabla}_\nu V, X) \\
&= g({\nabla}_\nu V, X) + g({\nabla}_X V, \nu) \\
&= {\mathcal{L}}_Vg(\nu, X), \\
h(\nu, \nu) &= 2g({\nabla}_\nu V, \nu) \\
&= {\mathcal{L}}_Vg(\nu, \nu).\end{aligned}$$ We continue by showing \eqref{eq: NormalEq}. Since ${{\tilde m}}= 0$, we get for $X, Y \in T{\Sigma}$ (recall Definition \[def: LinearisedFundamentalForms\]) $${\nabla}_\nu h(X,Y) = h(\nu, \nu) {{\tilde k}}(X,Y) + {\nabla}_X h(\nu, Y) + {\nabla}_Y h(\nu, X).$$ Using ${{\tilde h}}= 0$ and $V|_{\Sigma}= 0$, we get $$\begin{aligned}
{\nabla}_\nu {\mathcal{L}}_Vg(X,Y) &= g({\nabla}^2_{\nu, X} V, Y) + g({\nabla}^2_{\nu, Y} V, X) \\
&= g({\nabla}^2_{X, \nu} V, Y) + g({\nabla}^2_{Y, \nu} V, X) + R(\nu, X, V, Y) + R(\nu, Y, V, X) \\
&= {\partial}_X g({\nabla}_\nu V, Y) - g({\nabla}_\nu V, {\nabla}_X Y) + {\partial}_Y g({\nabla}_\nu V, X) - g({\nabla}_\nu V, {\nabla}_Y X) \\
&= {\partial}_X h(\nu, Y) - h(\nu, {\nabla}_XY) - \frac12 h(\nu, \nu) g(\nu, {\nabla}_X Y) \\
& \qquad + {\partial}_Y h(\nu, X) - h(\nu, {\nabla}_YX) - \frac12 h(\nu, \nu) g(\nu, {\nabla}_Y X) \\
&= {\nabla}_Xh(\nu, Y) + {\nabla}_Y h(\nu, X) + h(\nu, \nu){{\tilde k}}(X, Y) \\
&= {\nabla}_\nu h(X, Y),\end{aligned}$$ since ${\nabla}_X\nu \in T{\Sigma}$ and therefore ${\nabla}_{{\nabla}_X \nu}V = 0$. What remains to show is that ${\nabla}_\nu (h - {\mathcal{L}}_V g)|_{\Sigma}(\nu, \cdot) = 0$. Recall that $${\nabla}\cdot \overline{{\mathcal{L}}_V g} = {\nabla}\cdot \overline h,$$ which is equivalent to $$\label{eq: TraceReversalUniqueness}
{\nabla}\cdot {{\mathcal{L}}_V g}(W) - \frac12 \partial_W {\mathrm{tr}}_g({\mathcal{L}}_V g) = {\nabla}\cdot h(W) - \frac12 \partial_W {\mathrm{tr}}_g(h),$$ for all $W \in TM$. Note that from what is shown above, we know that ${\mathrm{tr}}_g({\mathcal{L}}_V g)|_{\Sigma}= {\mathrm{tr}}_g(h)|_{\Sigma}$. Therefore, for $X \in T{\Sigma}$, we have $\partial_X {\mathrm{tr}}_g({\mathcal{L}}_V g) = \partial_X {\mathrm{tr}}_g(h)$, so $${\nabla}\cdot {{\mathcal{L}}_V g}(X) = {\nabla}\cdot h(X),$$ which simplifies to $${\nabla}_\nu {{\mathcal{L}}_V g}(X, \nu) = {\nabla}_\nu h(X, \nu).$$ Inserting instead $\nu$ into equation \eqref{eq: TraceReversalUniqueness} gives $$\begin{aligned}
0 =& {\nabla}\cdot \overline{{\mathcal{L}}_V g}(\nu) - {\nabla}\cdot \overline h(\nu) \\
=& {\nabla}\cdot ( {\mathcal{L}}_V g - h )(\nu) - \frac12 \partial_\nu \left( {\mathrm{tr}}_g({\mathcal{L}}_V g) - {\mathrm{tr}}_g (h) \right) \\
=& {\nabla}\cdot ( {\mathcal{L}}_V g - h )(\nu) - \frac12 {\mathrm{tr}}_g ({\nabla}_\nu \left( {\mathcal{L}}_V g - h \right)) \\
=& - {\nabla}_\nu \left( {\mathcal{L}}_V g - h \right)(\nu, \nu) + \frac12 {\nabla}_\nu \left( {\mathcal{L}}_V g - h \right)(\nu, \nu) \\
=& - \frac12 {\nabla}_\nu \left( {\mathcal{L}}_V g - h \right)(\nu, \nu).\end{aligned}$$ We conclude that $${\nabla}_\nu(h - {\mathcal{L}}_V g)(\nu, \nu) = 0.$$ This shows that $h = {\mathcal{L}}_V g$. Lemma \[le: regularity\_lie\_derivative\] implies the regularity of $V$.
Gauge producing initial data and gauge solutions {#sec: DegenerateInitialData}
------------------------------------------------
In this section, we study the structure of the space of gauge solutions and gauge producing initial data. We consider from now on *compactly supported initial data* and *spatially compactly supported solutions*. The goal is to show that the spaces $${{\raisebox{.2em}{$\text{Initial data on }{\Sigma}$}\left/\raisebox{-.2em}{$\text{Gauge producing initial data}$}\right.}}$$ and $${{\raisebox{.2em}{$\text{Global solutions on }M$}\left/\raisebox{-.2em}{$\text{Gauge solutions}$}\right.}}$$ equipped with the quotient topology are locally convex topological vector spaces.
\[def: Solutions\] Define the *solutions* of finite energy regularity $k \in {\mathbb{R}}\cup \{\infty\}$ as $${\mathpzc{Sol}}_{sc}^k(M, t) := CH^k_{sc}(M, S^2M, t) \cap \ker(D{\mathrm{Ric}}),$$ with the induced topology.
Since $D{\mathrm{Ric}}$ is a linear differential operator, it is continuous as an operator on distributions. Therefore, the solution space is a closed subspace and hence a locally convex topological vector space. Let us now define the subspace of gauge solutions.
\[def: GaugeSolutions\] Define the *gauge solutions* of finite energy and regularity $k \in {\mathbb{R}}\cup \{\infty\}$ as $$\begin{aligned}
{\mathpzc G}_{sc}^k(M,t) &:= \{ {\mathcal{L}}_V g \mid V \in CH^{k+1}_{sc}(M, TM, t) \} \subset {\mathpzc{Sol}}_{sc}^k(M, t),\end{aligned}$$ with the induced topology.
We show later that the space of gauge solutions is a closed subspace of the solution space, which implies that the quotient space is a locally convex topological vector space. Let us define the space of solutions to the linearised constraint equation.
\[def: InitialData\] Define the *initial data* of Sobolev regularity $k \in {\mathbb{R}}\cup \{\infty\}$ as $${\mathpzc{ID}}_c^{k,k-1}({\Sigma}) := \left( H_c^{k}({\Sigma}, S^2{\Sigma}) \times H_c^{k-1}({\Sigma}, S^2{\Sigma}) \right)\cap \ker(D\Phi),$$ with the induced topology.
Let $$\begin{aligned}
\pi_{\Sigma}: {\mathpzc{Sol}}_{sc}^k(M, t) &\to {\mathpzc{ID}}_c^{k,k-1}({\Sigma})\end{aligned}$$ be the map that assigns to a solution the induced initial data, i.e. the linearised first and second fundamental forms. This map is given by Definition \[def: LinearisedFundamentalForms\] and it is clear that $\pi_{\Sigma}$ is continuous.
\[def: GaugeProdInitialData\] Define the *gauge producing initial data* of Sobolev regularity $k \in {\mathbb{R}}\cup \{\infty\}$ as $$\begin{aligned}
{\mathpzc{GP}}^{k,k-1}_c({\Sigma}) &:= \pi_{\Sigma}({\mathpzc G}^k_{sc}(M,t)) \subset {\mathpzc{ID}}_c^{k,k-1}({\Sigma}).\end{aligned}$$
It will sometimes be necessary to consider only sections supported in a fixed compact set $K \subset {\Sigma}$ or $J(K) \subset M$, for example ${\mathpzc{ID}}_K^{k,k-1}({\Sigma})$ or ${\mathpzc{Sol}}_{J(K)}^k(M, t)$. The definitions in this case are analogous to Definitions \[def: Solutions\], \[def: GaugeSolutions\], \[def: InitialData\] and \[def: GaugeProdInitialData\].
Let us study the space of gauge producing initial data ${\mathpzc{GP}}^{k,k-1}_c({\Sigma})$ in more detail. For $V \in CH^{k+1}_{sc}(M, TM, t)$, define $(N, {\beta}) \in H_c^{k+1}({\Sigma}, {\mathbb{R}}\oplus T{\Sigma})$ by projecting $V|_{\Sigma}$ to normal and tangential components, i.e. $V|_{\Sigma}=: N\nu + {\beta}$. Now define $$\begin{aligned}
{{\tilde h}}_{N, {\beta}} :=&
{\mathcal{L}}_{{\beta}} {{\tilde g}}+ 2 {{\tilde k}}N, \label{eq: GaugeProducing1} \\
{{\tilde m}}_{N, {\beta}} :=& {\mathcal{L}}_{\beta}{{\tilde k}}+ {\mathrm{Hess}}(N) + \left(2 {{\tilde k}}\circ {{\tilde k}}- {\mathrm{Ric}}(\tilde g) - ({\mathrm{tr}}_{{\tilde g}}{{\tilde k}}){{\tilde k}}\right) N. \label{eq: GaugeProducing2} \end{aligned}$$ We claim that $({{\tilde h}}_{N, {\beta}}, {{\tilde m}}_{N, {\beta}}) = \pi_{\Sigma}({\mathcal{L}}_V g)$. Indeed, for each $X, Y \in T{\Sigma}$, we have $$\begin{aligned}
{{\tilde h}}_{N, {\beta}}(X,Y) &= {\mathcal{L}}_V g(X,Y) \\
&= g({\nabla}_X ({\beta}+ N\nu), Y) + g({\nabla}_Y ({\beta}+ N\nu), X) \\
&= {\mathcal{L}}_{\beta}{{\tilde g}}(X,Y) + 2N{{\tilde k}}(X, Y), \\
{{\tilde m}}_{N, {\beta}}(X,Y) &= - \frac12 {\mathcal{L}}_Vg(\nu, \nu) {{\tilde k}}(X,Y) - \frac12 {\nabla}_X {\mathcal{L}}_Vg(\nu, Y) \\
& \quad - \frac12 {\nabla}_Y {\mathcal{L}}_Vg(\nu, X) + \frac12 {\nabla}_\nu {\mathcal{L}}_V g(X,Y) \\
&= - g({\nabla}_\nu V, \nu) {{\tilde k}}(X, Y) - \frac12 g({\nabla}^2_{X,Y}V + {\nabla}^2_{Y,X}V, \nu) \\
&\quad + \frac12 R(\nu, X, V, Y) + \frac12 R(\nu, Y, V, X) \\
&= {\mathcal{L}}_{\beta}{{\tilde k}}(X, Y) + {\mathrm{Hess}}(N)(X, Y) - \tilde {\nabla}_{{\beta}} {{\tilde k}}(X, Y) \\
&\quad + \frac12 \left( \tilde {\nabla}_X {{\tilde k}}(Y, {\beta}) + \tilde {\nabla}_Y {{\tilde k}}(X, {\beta}) \right) \\
&\quad + \frac12 R(\nu, X, {\beta}, Y) + \frac12 R(\nu, Y, {\beta}, X) \\
&\quad + N R(\nu, Y, \nu, X).
\end{aligned}$$ The classical Gauss and Codazzi equations now imply, using ${\mathrm{Ric}}(g) = 0$, that this coincides with \[eq: GaugeProducing2\]. In particular, ${\mathpzc{GP}}^{k,k-1}_c({\Sigma})$ can be defined intrinsically on ${\Sigma}$ by equations \[eq: GaugeProducing1\] and \[eq: GaugeProducing2\] and is therefore independent of the chosen temporal function $t$ on $M$, as the notation suggests. We have shown the following lemma.
For any $k \in {\mathbb{R}}\cup \{\infty\}$, the space of gauge producing initial data is given by $${\mathpzc{GP}}^{k,k-1}_c({\Sigma}) = \{({{\tilde h}}_{N, {\beta}}, {{\tilde m}}_{N, {\beta}}) \text{ as in } \eqref{eq: GaugeProducing1} \text{ and } \eqref{eq: GaugeProducing2} \ | \ (N, {\beta}) \in H_c^{k+1}({\Sigma}, {\mathbb{R}}\oplus T{\Sigma}) \}.$$
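For later orientation, it may help to record the time-symmetric special case explicitly; this is a direct specialisation of equations \[eq: GaugeProducing1\] and \[eq: GaugeProducing2\], not an additional claim. If ${{\tilde k}}= 0$, the gauge producing initial data reduce to

```latex
{{\tilde h}}_{N, {\beta}} = {\mathcal{L}}_{{\beta}} {{\tilde g}}, \qquad
{{\tilde m}}_{N, {\beta}} = {\mathrm{Hess}}(N) - {\mathrm{Ric}}({{\tilde g}}) N.
```

This is precisely the operator appearing in the proof of Proposition \[prop: Moncrief\_splitting\] below.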
We are now ready to prove that the space of gauge producing initial data is a closed subspace of the space of initial data.
\[le: IDclosedness\] Let $k \in {\mathbb{R}}\cup \{\infty\}$. The space $${\mathpzc{GP}}^{k,k-1}_c({\Sigma}) \subset {\mathpzc{ID}}_{c}^{k, k-1}({\Sigma}),$$ is a closed subspace. The statement still holds if we substitute $c$ with $K$, for a fixed compact subset $K \subset {\Sigma}$.
Consider the linear differential operator given by $$\begin{aligned}
Q: H^{k+1}_c({\Sigma}, {\mathbb{R}}\oplus T{\Sigma}) &\to H^{k}_c({\Sigma}, S^2{\Sigma}) \times H^{k-1}_c({\Sigma}, S^2{\Sigma}), \\
(N, {\beta}) &\mapsto ({{\tilde h}}_{N, {\beta}}, {{\tilde m}}_{N, {\beta}}).\end{aligned}$$ Since ${\mathrm{im}}(Q) = {\mathpzc{GP}}^{k,k-1}_c({\Sigma})$, the lemma is proven if we can show that $Q$ has closed image. We need to show that for each compact subset $K \subset {\Sigma}$, ${\mathrm{im}}(Q) \cap H^k_K({\Sigma}) \times H^{k-1}_K({\Sigma}) \subset H^k_K({\Sigma}) \times H^{k-1}_K({\Sigma})$ is closed. For a fixed compact subset $K \subset {\Sigma}$, let us construct a set $L \subset {\Sigma}$, containing $K$, such that if ${\mathrm{supp}}(Q(N, {\beta})) \subset L$ and ${\mathrm{supp}}(N, {\beta})$ is compact, then ${\mathrm{supp}}(N, {\beta}) \subset L$. We construct $L$ as follows. Since $K$ is compact, $\partial K$ is compact, which implies that ${\Sigma}\backslash \mathring K$ has finitely many connected components. Define $L$ to be the union of $K$ with all *compact* connected components of ${\Sigma}\backslash \mathring K$. It follows that $L$ is compact, $K \subset L$ and that all components of ${\Sigma}\backslash \mathring L$ are non-compact. Let us show that $L$ has the desired properties. One calculates that the differential operator $P$, defined by $$\begin{aligned}
P(N,{\beta}) &:= \begin{pmatrix}
-\tilde {\nabla}\cdot (\cdot) + \frac12 d {\mathrm{tr}}(\cdot)& 0 \\
0 & -{\mathrm{tr}}(\cdot)
\end{pmatrix}Q(N, {\beta}) \\
& =
\begin{pmatrix}
\tilde {\nabla}^* \tilde {\nabla}{\beta}^\flat + l.o.t. \\
\tilde {\nabla}^* \tilde {\nabla}N + l.o.t.
\end{pmatrix} \in H^{k-1}_K({\Sigma}, T^*{\Sigma}\oplus {\mathbb{R}})\end{aligned}$$ is a Laplace-type operator. If ${\mathrm{supp}}(Q(N, {\beta})) \subset L$, and ${\mathrm{supp}}(N, {\beta})$ is compact, it follows that ${\mathrm{supp}}(P(N, {\beta})) \subset L$ and that $({\Sigma}\backslash \mathring L) \cap {\mathrm{supp}}(N, {\beta}) = {\mathrm{supp}}(N, {\beta}) \backslash ({\mathrm{supp}}(N, {\beta}) \cap \mathring L)$ is compact. Since each component of ${\Sigma}\backslash \mathring L$ was non-compact, Theorem \[thm: aronszajn\] implies that $(N, {\beta}) = 0$ on ${\Sigma}\backslash \mathring L$ and hence ${\mathrm{supp}}(N, {\beta}) \subset L$ as claimed. Now if $Q(N_n, {\beta}_n) \to ({{\tilde h}}, {{\tilde m}})$ in $H^k_K({\Sigma}) \times H_K^{k-1}({\Sigma})$, then ${\mathrm{supp}}(N_n, {\beta}_n) \subset L$ and $$P(N_n, {\beta}_n) \to \begin{pmatrix}
-{\nabla}\cdot ({{\tilde h}}) + \frac12 d {\mathrm{tr}}({{\tilde h}}) \\
-{\mathrm{tr}}({{\tilde m}})
\end{pmatrix}$$ in $H^{k-1}_K({\Sigma}) \times H^{k-1}_K({\Sigma})$. By Corollary \[cor: Laplace-type closed image\], we conclude that $$P: H^{k+1}_L({\Sigma}) \to H_L^{k-1}({\Sigma})$$ is an isomorphism onto its image and therefore there is a $(N, {\beta}) \in H^{k+1}_L({\Sigma})$ such that $(N_n, {\beta}_n) \to (N, {\beta})$. We conclude that $$({{\tilde h}}, {{\tilde m}}) = \lim_{n \to \infty}Q(N_n, {\beta}_n) = Q(N, {\beta}),$$ which finishes the proof.
For later use, we need the following technical observation.
\[le: SOLclosedness\] For $k \in {\mathbb{R}}\cup \{\infty\}$, $${\mathpzc G}_{sc}^k(M,t) = {\pi_{\Sigma}}^{-1}({\mathpzc{GP}}^{k,k-1}_c({\Sigma})).$$ In particular, $${\mathpzc G}_{sc}^k(M, t) \subset {\mathpzc{Sol}}_{sc}^k(M, t)$$ is a closed subspace. The statement still holds if we substitute $c$ with $K$ and $sc$ with $J(K)$, for a fixed compact subset $K \subset {\Sigma}$.
Assume that $h \in {\mathpzc{Sol}}_{sc}^k(M, t)$ and that $$\pi_{\Sigma}(h) = \pi_{\Sigma}({\mathcal{L}}_Vg)$$ for some ${\mathcal{L}}_V g \in {\mathpzc G}_{sc}^k(M,t)$. Then $D{\mathrm{Ric}}(h - {\mathcal{L}}_Vg) = 0$ and $\pi_{\Sigma}(h - {\mathcal{L}}_Vg) = 0$. By Theorem \[thm: Uniqueness\], there is a $W \in CH^{k+1}_{sc}(M,TM, t)$, such that $$h = {\mathcal{L}}_V g + {\mathcal{L}}_W g = {\mathcal{L}}_{V+W}g$$ which proves the statement. The smooth case is analogous. Since $\pi_{\Sigma}$ is continuous, Lemma \[le: IDclosedness\] implies the second statement.
The next lemmas give a natural way to understand the topology of the quotient spaces.
\[le: GaugeInvariantInitialData\] Let $k \in {\mathbb{R}}\cup \{\infty\}$. The l.c. topological vector space $${{\raisebox{.2em}{${\mathpzc{ID}}_c^{k,k-1}({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c^{k, k-1}({\Sigma})$}\right.}},$$ is the strict inductive limit of the l.c. topological vector spaces $${{\raisebox{.2em}{${\mathpzc{ID}}_K^{k,k-1}({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}^{k,k-1}_{K}({\Sigma})$}\right.}},$$ for compact subsets $K \subset {\Sigma}$, with respect to the natural inclusions. In particular, it is an LF-space.
Let us simplify notation by writing ${\mathpzc{ID}}_K := {\mathpzc{ID}}_K^{k,k-1}({\Sigma})$, ${\mathpzc{ID}}_c := {\mathpzc{ID}}_c^{k,k-1}({\Sigma})$, ${\mathpzc{GP}}_K := {\mathpzc{GP}}^{k,k-1}_{K}({\Sigma})$ and ${\mathpzc{GP}}_{c} := {\mathpzc{GP}}^{k,k-1}_{c}({\Sigma})$. Let $K_1 \subset K_2 \subset \hdots$ with $\bigcup_{n \in {\mathbb{N}}} K_n = {\Sigma}$ be an exhaustion by compact subsets. Since ${\mathpzc{ID}}_{K_n} \subset {\mathpzc{ID}}_{K_{n+1}}$ is closed for all $n \in {\mathbb{N}}$, the strict inductive limit exists. By (a slight modification of) Lemma \[le: two topologies\] we conclude that ${\mathpzc{ID}}_c$ is the strict inductive limit space of the spaces ${\mathpzc{ID}}_K$. Similarly, ${{\raisebox{.2em}{${\mathpzc{ID}}_{K_n}$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_{K_n}$}\right.}} \subset {{\raisebox{.2em}{${\mathpzc{ID}}_{K_{n+1}}$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_{K_{n+1}}$}\right.}}$ are closed and hence the strict inductive limit topology on ${{\raisebox{.2em}{${\mathpzc{ID}}_c$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c({\Sigma})$}\right.}}$ exists. Call the quotient topology $\tau_{quot}$ and the strict inductive limit topology $\tau_{ind}$. The map $$\left({{\raisebox{.2em}{${\mathpzc{ID}}_c$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c$}\right.}}, \tau_{ind}\right) \to \left( {{\raisebox{.2em}{${\mathpzc{ID}}_c$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c$}\right.}}, \tau_{quot}\right)$$ is continuous if and only if the restriction $${{\raisebox{.2em}{${\mathpzc{ID}}_{K_n}$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_{K_n}$}\right.}} \to \left({{\raisebox{.2em}{${\mathpzc{ID}}_c$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c$}\right.}}, \tau_{quot} \right)$$ is continuous for all $n \in {\mathbb{N}}$. But this is clear, since ${\mathpzc{ID}}_{K_n} \to {\mathpzc{ID}}_c$ is continuous. 
Conversely, the map $$\left( {{\raisebox{.2em}{${\mathpzc{ID}}_c$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c$}\right.}}, \tau_{quot}\right) \to \left({{\raisebox{.2em}{${\mathpzc{ID}}_c$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c$}\right.}}, \tau_{ind}\right)$$ is continuous if and only if $${\mathpzc{ID}}_c \to \left({{\raisebox{.2em}{${\mathpzc{ID}}_c$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c$}\right.}}, \tau_{ind}\right)$$ is continuous. This is however equivalent to $${\mathpzc{ID}}_{K_n} \to \left({{\raisebox{.2em}{${\mathpzc{ID}}_c$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c$}\right.}}, \tau_{ind}\right)$$ being continuous for all $n \in {\mathbb{N}}$. This is true if and only if $${{\raisebox{.2em}{${\mathpzc{ID}}_{K_n}$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_{K_n}$}\right.}} \to \left({{\raisebox{.2em}{${\mathpzc{ID}}_c$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_c$}\right.}}, \tau_{ind}\right)$$ is continuous for all $n \in {\mathbb{N}}$. But this is clear, by construction of the strict inductive limit topology. This establishes the claimed homeomorphism. Since the quotient spaces ${{\raisebox{.2em}{${\mathpzc{ID}}_{K_n}$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_{K_n}$}\right.}}$ are Fréchet spaces, their strict inductive limit will be an LF-space by definition.
\[le: GaugeInvariantSolutions\] Let $k \in {\mathbb{R}}\cup \{\infty\}$. The l.c. topological vector space $${{\raisebox{.2em}{${\mathpzc{Sol}}_{sc}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}_{sc}^k(M,t)$}\right.}},$$ is the strict inductive limit of the l.c. topological vector spaces $${{\raisebox{.2em}{${\mathpzc{Sol}}_{J(K)}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}_{J(K)}^k(M,t)$}\right.}},$$ for compact subsets $K \subset {\Sigma}$, with respect to the natural inclusions. In particular, it is an LF-space.
The proof is analogous to the proof of Lemma \[le: GaugeInvariantInitialData\], using Lemma \[le: two topologies2\] instead of Lemma \[le: two topologies\].
Continuous dependence on initial data {#sec: Main}
-------------------------------------
Let us now state and prove the main result of this section, the well-posedness of the Cauchy problem of the linearised Einstein equation. Recall Section \[sec: DegenerateInitialData\] for the definitions of the function spaces below.
\[thm: Wellposedness\] Let $k \in {\mathbb{R}}\cup \{\infty\}$. The linear solution map $$\mathrm{Solve}^k: {{\raisebox{.2em}{${\mathpzc{ID}}_c^{k,k-1}({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}^{k,k-1}_{c}({\Sigma})$}\right.}} \to {{\raisebox{.2em}{${\mathpzc{Sol}}_{sc}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}_{sc}^k(M,t)$}\right.}}$$ is an isomorphism of locally convex topological vector spaces. In fact, both spaces are LF-spaces.
The theorem implies that the equivalence class of solutions depends continuously on the equivalence class of initial data. Since projection maps are continuous and surjective, we immediately get the following corollary.
\[cor: ContDependence\] Let $k \in {\mathbb{R}}\cup \{\infty\}$. The linear solution map $$\mathrm{\widetilde{Solve}^k}: {\mathpzc{ID}}_c^{k,k-1}({\Sigma}) \to {{\raisebox{.2em}{${\mathpzc{Sol}}_{sc}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}_{sc}^k(M,t)$}\right.}}$$ is continuous and surjective.
Before proving the theorem, let us discuss some more remarks and corollaries.
Since any compactly supported distribution is of some real Sobolev regularity, any compactly supported distributional section lies in some $H^k_c({\Sigma})$. Therefore Theorem \[thm: Wellposedness\] covers the case of any compactly supported distributional initial data.
A priori, the solution spaces depend on the time function. After quotienting out the gauge solutions, this is not the case anymore.
\[cor: IndependenceOfTime\] Let $t$ and $\tau$ be Cauchy temporal functions on $M$. Then for every $k \in {\mathbb{R}}\cup \{\infty\}$ there is an isomorphism $${{\raisebox{.2em}{${\mathpzc{Sol}}_{sc}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}_{sc}^k(M,t)$}\right.}} \to {{\raisebox{.2em}{${\mathpzc{Sol}}_{sc}^k(M, \tau )$}\left/\raisebox{-.2em}{${\mathpzc G}^k_{sc}(M,\tau)$}\right.}}$$ which is the identity map on smooth solutions.
The proof is analogous to the proof of [@BaerWafo2014, Cor. 18], using Theorem \[thm: Wellposedness\].
As a final observation, let us note that if ${\Sigma}$ is compact, we obtain a natural Hilbert space structure on the solution space.
Let $k \in {\mathbb{R}}$. In case ${\Sigma}$ is compact, Theorem \[thm: Wellposedness\] implies that $${{\raisebox{.2em}{${\mathpzc{Sol}}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}^k(M,t)$}\right.}}$$ carries a Hilbert space structure, induced by $\mathrm{Solve}^k$. In case $k = \infty$, it is a Fréchet space.
Since ${\Sigma}$ is compact, ${{\raisebox{.2em}{${\mathpzc{ID}}^{k,k-1}({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}^{k,k-1}({\Sigma})$}\right.}}$ carries a Hilbert space structure induced from the Sobolev spaces in case $k < \infty$. In case $k = \infty$, it carries a natural Fréchet space structure induced by the Fréchet space structure on the smooth sections. Theorem \[thm: Wellposedness\] and Corollary \[cor: IndependenceOfTime\] imply that we get an induced Hilbert space structure on ${{\raisebox{.2em}{${\mathpzc{Sol}}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}^k(M,t)$}\right.}}$ independent of the choice of Cauchy hypersurface ${\Sigma}$.
Let us turn to the proof of Theorem \[thm: Wellposedness\].
\[le: FixedCompact\] Let $k \in {\mathbb{R}}\cup \{\infty\}$ and fix a compact subset $K \subset {\Sigma}$. The linear map $$\mathrm{Solve}_K^k: {{\raisebox{.2em}{${\mathpzc{ID}}_K^{k,k-1}({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_K^{k,k-1}({\Sigma})$}\right.}} \to {{\raisebox{.2em}{${\mathpzc{Sol}}_{J(K)}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}_{J(K)}^k(M,t)$}\right.}}$$ is an isomorphism of topological vector spaces.
Lemma \[le: IDclosedness\] and Lemma \[le: SOLclosedness\] imply that the quotient spaces are well defined Fréchet spaces. By Theorem \[thm: Existence\], Theorem \[thm: Uniqueness\] and Lemma \[le: SOLclosedness\], the map $\mathrm{Solve}^k_K$ is a well defined linear bijection. We prove that it indeed is an isomorphism of topological vector spaces. Recall that the map that assigns to each solution its initial data $$\pi_{\Sigma}: {\mathpzc{Sol}}^k_{J(K)}(M, t) \to {\mathpzc{ID}}^{k,k-1}_K({\Sigma})$$ is continuous. By definition of the quotient space topology, $\pi_{\Sigma}$ induces a continuous map $$\hat \pi_{\Sigma}: {{\raisebox{.2em}{${\mathpzc{Sol}}^k_{J(K)}(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}^k_{J(K)}(M,t)$}\right.}} \to {{\raisebox{.2em}{${\mathpzc{ID}}^{k,k-1}_K({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}^{k,k-1}_{K}({\Sigma})$}\right.}}$$ between Fréchet spaces. Since $\hat \pi_{\Sigma}$ is the inverse of $\mathrm{Solve}^k_K$, the open mapping theorem for Fréchet spaces implies the statement.
Again, Lemma \[le: IDclosedness\] and Lemma \[le: SOLclosedness\] imply that the quotient spaces are well defined topological vector spaces. By Theorem \[thm: Existence\], Theorem \[thm: Uniqueness\] and Lemma \[le: SOLclosedness\], the map $\mathrm{Solve}^k$ is a well defined linear bijection. Therefore it remains to prove that it is an isomorphism of topological vector spaces. By Lemma \[le: FixedCompact\], the map $$\begin{aligned}
{{\raisebox{.2em}{${\mathpzc{ID}}_K^{k,k-1}({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}_{K}^{k,k-1}({\Sigma})$}\right.}} & \stackrel{\text{Solve}_K^k}{\longrightarrow} {{\raisebox{.2em}{${\mathpzc{Sol}}_{J(K)}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}_{J(K)}^k(M,t)$}\right.}} \\
& \hookrightarrow {{\raisebox{.2em}{${\mathpzc{Sol}}_{sc}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}_{sc}^k(M,t)$}\right.}}\end{aligned}$$ is continuous for every compact subset $K \subset {\Sigma}$. By Lemma \[le: GaugeInvariantInitialData\], this implies that $\text{Solve}^k$ is continuous. Similarly, by Lemma \[le: FixedCompact\], the composed map $$\begin{aligned}
{{\raisebox{.2em}{${\mathpzc{Sol}}^k_{J(K)}(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}^k_{J(K)}(M,t)$}\right.}}&\stackrel{\hat \pi_{\Sigma}}{\longrightarrow} {{\raisebox{.2em}{${\mathpzc{ID}}^{k,k-1}_K({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}^{k,k-1}_{K}({\Sigma})$}\right.}} \\
& \hookrightarrow {{\raisebox{.2em}{${\mathpzc{ID}}^{k,k-1}_c({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}^{k,k-1}_{c}({\Sigma})$}\right.}}\end{aligned}$$ is continuous for every compact subset $K \subset {\Sigma}$. By Lemma \[le: GaugeInvariantSolutions\], this implies that $\left(\text{Solve}^k\right)^{-1}$ is continuous.
The linearised constraint equations {#ch: lin_constraint}
===================================
In order to apply Theorem \[thm: Wellposedness\] in practice, it is necessary to understand the space $${{\raisebox{.2em}{${\mathpzc{ID}}_c^{k,k-1}({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}^{k,k-1}_{c}({\Sigma})$}\right.}}.$$ In this section, we show that this space can be quite well understood if ${\Sigma}$ is compact, ${\mathrm{Scal}}({{\tilde g}}) = 0$ and ${{\tilde k}}= 0$. The idea is inspired by the following classical result: If ${\mathrm{Ric}}({{\tilde g}}) = 0$ and ${\Sigma}$ is compact, then equivalence classes of initial data are essentially in one-to-one correspondence with the divergence- and trace-free tensors on ${\Sigma}$ (“transverse traceless tensors” or “TT-tensors”). The advantage of this observation comes from the following well-known fact. For any $(0,2)$-tensor ${\alpha}$ on ${\Sigma}$, there is a unique decomposition $$\label{eq: decomposition}
{\alpha}= {{\tilde h}}+ L{\omega}+ \phi {{\tilde g}},$$ where ${{\tilde h}}$ is a $TT$-tensor, ${\omega}$ is a one-form, $L$ is the conformal Killing operator and $\phi$ is a function. Now, the problem is that if ${\mathrm{Ric}}({{\tilde g}}) \neq 0$, then TT-tensors *do not* solve the linearised constraint equation in general. The goal of this section is to generalise the decomposition to the case when ${\mathrm{Scal}}({{\tilde g}}) = 0$. Let us therefore assume in this section that ${\Sigma}$ is compact, ${\mathrm{Scal}}({{\tilde g}}) = 0$ and ${{\tilde k}}= 0$, which is obviously a solution of the non-linear constraint equations (\[eq: ham\_constraint\] - \[eq: momentum\_constraint\]).
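For the reader's convenience, here is a sketch of how the components in \[eq: decomposition\] are determined; this is the standard argument, using that ${{\tilde h}}$ and $L{\omega}$ are trace-free, that ${{\tilde h}}$ is divergence-free and that $L^* = -2\tilde {\nabla}\cdot$ on trace-free tensors:

```latex
{\mathrm{tr}}_{{\tilde g}} {\alpha} = n \phi
  \quad \Longrightarrow \quad \phi = \tfrac1n {\mathrm{tr}}_{{\tilde g}} {\alpha}, \\
\tilde {\nabla}\cdot {\alpha} = -\tfrac12 L^*L {\omega} + d\phi
  \quad \Longrightarrow \quad L^*L {\omega} = 2 \left( d\phi - \tilde {\nabla}\cdot {\alpha} \right).
```

The last equation is elliptic in ${\omega}$ and, on a closed manifold, solvable up to the conformal Killing one-forms $\ker(L)$, since its right hand side is $L^2$-orthogonal to $\ker(L)$; one then sets ${{\tilde h}}:= {\alpha}- L{\omega}- \phi {{\tilde g}}$.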
As mentioned in the introduction, it turns out that equations (\[eq: Eq1firstFF\] - \[eq: Eq2secondFF\]) will be relevant for this problem. For any $k \in {\mathbb{R}}\cup \{\infty \}$, let $\Gamma^k({\Sigma}) \subset H^k({\Sigma}, S^2{\Sigma})$ denote the $H^k$-solutions to the equations for the linearised first fundamental form and let $\Gamma^{k-1}({\Sigma}) \subset H^{k-1}({\Sigma}, S^2{\Sigma})$ denote the $H^{k-1}$-solutions to those for the linearised second fundamental form. The following proposition is a special case of Moncrief’s classical splitting theorem [@Moncrief1975], generalised to any Sobolev degree. It can be seen as a “gauge choice” for the initial data.
\[prop: Moncrief\_splitting\] Assume that $({\Sigma}, {{\tilde g}})$ is a closed manifold with vanishing scalar curvature and that ${{\tilde k}}= 0$. For any $k \in {\mathbb{R}}\cup \{\infty\}$, the map $$\begin{aligned}
\Gamma^{k}({\Sigma}) \times \Gamma^{k-1}({\Sigma}) &\to {{\raisebox{.2em}{${\mathpzc{ID}}^{k,k-1}({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}^{k,k-1}({\Sigma})$}\right.}}, \\
({{\tilde h}}, {{\tilde m}}) &\mapsto [({{\tilde h}}, {{\tilde m}})],\end{aligned}$$ is an isomorphism of Banach spaces.
Since the proof for arbitrary Sobolev degree is hard to find, we give a simple proof of this proposition later. By Theorem \[thm: Wellposedness\], we conclude that the composed map $$\begin{aligned}
\Gamma^{k}({\Sigma}) \times \Gamma^{k-1}({\Sigma}) \to {{\raisebox{.2em}{${\mathpzc{ID}}^{k,k-1}({\Sigma})$}\left/\raisebox{-.2em}{${\mathpzc{GP}}^{k,k-1}({\Sigma})$}\right.}} \stackrel{Solve^k}{\to} {{\raisebox{.2em}{${\mathpzc{Sol}}^k(M, t)$}\left/\raisebox{-.2em}{${\mathpzc G}^k(M,t)$}\right.}},\end{aligned}$$ is an isomorphism of Banach spaces.
Let us now state the main result of this section. Let $$L{\omega}:= {\mathcal{L}}_{{\omega}^\sharp} {{\tilde g}}- \frac{2}{\dim({\Sigma})} (\tilde {\nabla}\cdot {\omega}) {{\tilde g}}$$ denote the conformal Killing operator on one-forms.
\[thm: initial\_data\_split\] Assume that $({\Sigma}, {{\tilde g}})$ is a closed Riemannian manifold of dimension $n \geq 2$ with ${\mathrm{Scal}}({{\tilde g}}) = 0$ and ${{\tilde k}}= 0$. Let $k \in {\mathbb{R}}\cup \{\infty\}$. Then for each $({\alpha}, {\beta}) \in H^{k}({\Sigma}, S^2{\Sigma}) \times H^{k-1}({\Sigma}, S^2{\Sigma})$, there is a unique decomposition $$\begin{aligned}
{\alpha}&= {{\tilde h}}+ L {\omega}+ C {\mathrm{Ric}}({{\tilde g}}) + \phi {{\tilde g}}, \\
{\beta}&= {{\tilde m}}+ L \eta + C' {\mathrm{Ric}}({{\tilde g}}) + \psi {{\tilde g}},\end{aligned}$$ where $({{\tilde h}}, {{\tilde m}}) \in \Gamma^{k}({\Sigma}) \times \Gamma^{k-1}({\Sigma})$, $({\omega}, \eta) \in H^{k+1}({\Sigma}, T^*{\Sigma}) \times H^{k}({\Sigma}, T^*{\Sigma})$, $(C, C') \in {\mathbb{R}}^2$ and $(\phi, \psi) \in H^{k}({\Sigma}, {\mathbb{R}}) \times H^{k-1}({\Sigma}, {\mathbb{R}})$ such that $\phi[1] = 0 = \psi[1]$.
Here $\phi[1]$ means the distribution $\phi$ evaluated on the constant test function $1$. If $\phi \in L^1_{loc}$ then $\phi[1] = \int_{\Sigma}\phi d\mu_{{{\tilde g}}}$.
Let us give two examples of closed Riemannian manifolds with vanishing scalar curvature.
- For each $n \in {\mathbb{N}}$, the torus $T^n := {\mathbb{R}}^n / {\mathbb{Z}}^n$ with its standard metric is flat, so in particular ${\mathrm{Scal}}({{\tilde g}}) = 0$.
- For each $m \in {\mathbb{N}}$, there is a Berger metric on $S^{4m-1}$ with vanishing scalar curvature. In case $m = 1$, the scalar flat Berger metric is given by $\frac52 {\sigma_1}^2 + {\sigma_2}^2 + {\sigma_3}^2$, where $\sigma_1, \sigma_2, \sigma_3$ are orthonormal left invariant one-forms on $S^3$. Note that this metric does not have vanishing Ricci curvature.
On these manifolds, Theorem \[thm: initial\_data\_split\] applies.
Note that Theorem \[thm: initial\_data\_split\] is equivalent to showing that $$\begin{aligned}
H^{k}({\Sigma}, S^2{\Sigma}) &= \Gamma_1^{k}({\Sigma}) \oplus {\mathrm{im}}(L) \oplus {\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) \oplus \widehat H^k({\Sigma}, {\mathbb{R}}) {{\tilde g}}, \\
H^{k}({\Sigma}, S^2{\Sigma}) &= \Gamma_2^{k}({\Sigma}) \oplus {\mathrm{im}}(L) \oplus {\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) \oplus \widehat H^k({\Sigma}, {\mathbb{R}}) {{\tilde g}}, \end{aligned}$$ where $L: H^{k+1}({\Sigma}, T^*{\Sigma}) \to H^{k}({\Sigma}, S^2{\Sigma})$ and $\widehat H^k({\Sigma}, {\mathbb{R}}) := \{ \phi \in H^k({\Sigma}, {\mathbb{R}}) \mid \phi[1] = 0 \}$.
Note that if ${\mathrm{Ric}}({{\tilde g}}) = 0$, equations (\[eq: Eq1firstFF\] - \[eq: Eq2secondFF\]) imply that ${{\tilde h}}$ and ${{\tilde m}}$ are TT-tensors or a constant multiple of the metric. Moreover, if ${\mathrm{Ric}}({{\tilde g}}) = 0$, then Theorem \[thm: initial\_data\_split\] simplifies essentially to the classical split mentioned in the beginning of this section.
Before proving Proposition \[prop: Moncrief\_splitting\] and our main result Theorem \[thm: initial\_data\_split\], let us use Proposition \[prop: Moncrief\_splitting\] to show that there are arbitrarily irregular non-gauge gravitational waves.
\[ex: arbitrarily irregular\] Consider the flat torus $((S^1)^3, {{\tilde g}})$ with coordinates $(x^1, x^2, x^3)$. Let $\delta^{(n)}$ denote the $n$-th derivative of the Dirac distribution on $S^1$ with support at some fixed point in $S^1$. The tensor defined by $${{\tilde h}}(x^1, x^2, x^3) := \delta^{(n)}(x^3)dx^1 \otimes dx^2$$ is a TT-tensor. Moreover, ${{\tilde h}}\in H^{-n-1}({\Sigma}) \backslash H^{-n}({\Sigma})$. Combining Proposition \[prop: Moncrief\_splitting\] with Theorem \[thm: Wellposedness\], this shows that there are arbitrarily irregular non-gauge gravitational waves on the spatially compact Minkowski spacetime $M = {\mathbb{R}}\times (S^1)^3$.
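As an illustrative sanity check (not part of the argument), one can verify symbolically that a tensor of this shape is trace- and divergence-free on the flat torus. The following Python/`sympy` sketch uses a smooth profile $f(x^3)$ in place of the distributional $\delta^{(n)}(x^3)$ (the computation only uses that the component depends on $x^3$ alone, so it applies verbatim to the distributional case) and the symmetrised tensor $dx^1 \odot dx^2$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Function('f')(x3)  # smooth stand-in for delta^{(n)}(x^3)

# symmetric 2-tensor with only the (1,2)-component, as in the example
h = sp.zeros(3, 3)
h[0, 1] = f
h[1, 0] = f

g_inv = sp.eye(3)    # inverse of the flat metric on the torus
coords = (x1, x2, x3)

# trace: g^{ij} h_{ij}
trace = sum(g_inv[i, j] * h[i, j] for i in range(3) for j in range(3))

# divergence: (div h)_j = sum_i d/dx^i h_{ij}  (flat metric, no Christoffel symbols)
div = [sum(sp.diff(h[i, j], coords[i]) for i in range(3)) for j in range(3)]

print(sp.simplify(trace))             # 0
print([sp.simplify(d) for d in div])  # [0, 0, 0]
```

Both the trace and all three divergence components vanish identically, confirming the TT-property claimed in the example.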
We start by giving a simple proof of Proposition \[prop: Moncrief\_splitting\]. The proof is more elementary than the original one by Moncrief, since we only consider the case when ${{\tilde k}}= 0$.
Note first that $({{\tilde h}}, {{\tilde m}}) \in {\mathpzc{ID}}^{k,k-1}({\Sigma})$ if and only if $$\begin{aligned}
\tilde {\nabla}\cdot (\tilde {\nabla}\cdot {{\tilde h}}- d {\mathrm{tr}}_{{\tilde g}}{{\tilde h}}) - {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {{\tilde h}}) &= 0, \\
\tilde {\nabla}\cdot ({{\tilde m}}- ({\mathrm{tr}}_{{\tilde g}}{{\tilde m}}){{\tilde g}}) &= 0.\end{aligned}$$ The gauge producing initial data ${\mathpzc{GP}}^{k,k-1}({\Sigma})$ is in this case given by the image of $$\begin{aligned}
P: H^{k+1}({\Sigma}, T{\Sigma}\oplus {\mathbb{R}}) &\to H^k({\Sigma}, S^2{\Sigma}) \times H^{k-1}({\Sigma}, S^2{\Sigma}), \\
({\beta}, N) &\mapsto ({\mathcal{L}}_{\beta}{{\tilde g}}, {\mathrm{Hess}}(N) - {\mathrm{Ric}}({{\tilde g}}) N).\end{aligned}$$ The formal adjoint of $P$ is given by $$P^*({{\tilde h}}, {{\tilde m}}) = (- 2 \tilde {\nabla}\cdot {{\tilde h}}, \tilde {\nabla}\cdot \tilde {\nabla}\cdot {{\tilde m}}- {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {{\tilde m}})).$$ Recall by Lemma \[le: IDclosedness\], that we know that ${\mathrm{im}}(P) = {\mathpzc{GP}}^{k,k-1}({\Sigma}) \subset {\mathpzc{ID}}^{k,k-1}({\Sigma})$ is closed. We claim that $$\label{eq: part_fredholm}
H^k({\Sigma}, S^2{\Sigma}) \oplus H^{k-1}({\Sigma}, S^2{\Sigma}) = {\mathrm{im}}(P) \oplus \ker(P^*).$$ We first prove this when $k \leq 0$. Define $$\begin{aligned}
P_0: H^{1} ({\Sigma}, T{\Sigma}) \times H^{2}({\Sigma}, {\mathbb{R}}) &\to L^2({\Sigma}, S^2{\Sigma}\oplus S^2{\Sigma}), \\
({\beta}, N) &\mapsto P({\beta}, N).\end{aligned}$$ It follows that $$\begin{aligned}
L^2({\Sigma}, S^2{\Sigma}\oplus S^2{\Sigma}) &= \overline{{\mathrm{im}}(P_0)} \oplus \ker(P_0^*) \\
&\subset {\mathrm{im}}(P) \oplus \ker(P^*) \\
&\subset H^{k}({\Sigma}, S^2{\Sigma}) \oplus H^{k-1}({\Sigma}, S^2{\Sigma}).\end{aligned}$$ Since ${\mathrm{im}}(P) \oplus \ker(P^*) \subset H^{k}({\Sigma}, S^2{\Sigma}) \oplus H^{k-1}({\Sigma}, S^2{\Sigma})$ is closed and $L^2({\Sigma}, S^2{\Sigma}\oplus S^2{\Sigma}) \subset H^{k}({\Sigma}, S^2{\Sigma}) \oplus H^{k-1}({\Sigma}, S^2{\Sigma})$ is dense, we have proven \[eq: part\_fredholm\] when $k \leq 0$. Assume now that $k > 0$ and that $({{\tilde h}}, {{\tilde m}}) \in H^{k}({\Sigma}) \times H^{k-1}({\Sigma})$. Since we know equation \[eq: part\_fredholm\] when $k = 0$, we conclude that there is $(N, {\beta}) \in H^{1}({\Sigma})$ and $({{\tilde h}}_0, {{\tilde m}}_0) \in L^2({\Sigma}) \times H^{-1}({\Sigma})$ such that $P^*({{\tilde h}}_0, {{\tilde m}}_0) = 0$ and $$({{\tilde h}}, {{\tilde m}}) = P(N, {\beta}) + ({{\tilde h}}_0, {{\tilde m}}_0).$$ It follows that $P^*P(N, {\beta}) = P^*({{\tilde h}}, {{\tilde m}}) \in H^{k-1}({\Sigma}) \times H^{k-3}({\Sigma})$. Note that $$\begin{pmatrix}
\tilde {\nabla}^* \tilde {\nabla}& 0 \\
0 & 1
\end{pmatrix} \circ P^*P: H^{k+1}({\Sigma}) \to H^{k-3}({\Sigma})$$ is an elliptic differential operator. It follows that $(N, {\beta}) \in H^{k+1}({\Sigma})$ and hence $({{\tilde h}}_0, {{\tilde m}}_0) = ({{\tilde h}}, {{\tilde m}}) - P(N, {\beta}) \in H^{k}({\Sigma}) \times H^{k-1}({\Sigma})$. This proves the claim for $k > 0$.
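To make the ellipticity claim above transparent (a sketch of the order count, obtained by composing the formulas for $P$ and $P^*$ given earlier in this proof): since the two slots of $P({\beta}, N)$ depend only on ${\beta}$ and only on $N$ respectively, one finds

```latex
P^*P({\beta}, N) =
\begin{pmatrix}
-2 \tilde {\nabla}\cdot {\mathcal{L}}_{{\beta}} {{\tilde g}} \\
\tilde {\nabla}\cdot \tilde {\nabla}\cdot \big( {\mathrm{Hess}}(N) - {\mathrm{Ric}}({{\tilde g}}) N \big)
  - {{\tilde g}}\big( {\mathrm{Ric}}({{\tilde g}}), {\mathrm{Hess}}(N) - {\mathrm{Ric}}({{\tilde g}}) N \big)
\end{pmatrix},
```

which has order $2$ in ${\beta}$ and order $4$ in $N$, with no cross terms. Composing the first row with $\tilde {\nabla}^* \tilde {\nabla}$ therefore yields an operator of uniform order $4$ whose principal symbol is block diagonal and invertible, i.e. elliptic.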
Since ${\mathrm{im}}(P) = {\mathpzc{GP}}^{k,k-1}({\Sigma}) \subset {\mathpzc{ID}}^{k,k-1}({\Sigma})$, it follows now that $${\mathpzc{ID}}^{k,k-1}({\Sigma}) = {\mathpzc{GP}}^{k,k-1}({\Sigma}) \oplus \left( {\mathpzc{ID}}^{k,k-1}({\Sigma}) \cap \ker(P^*) \right).$$ One checks that $\Gamma^{k}({\Sigma}) \times \Gamma^{k-1}({\Sigma}) = {\mathpzc{ID}}^{k,k-1}({\Sigma}) \cap \ker(P^*)$. This concludes the proof.
Let us turn to the proof of Theorem \[thm: initial\_data\_split\]. Note that ${{\tilde h}}= {\alpha}- L {\omega}- C {\mathrm{Ric}}({{\tilde g}}) - \phi {{\tilde g}}\in \Gamma_1^k({\Sigma})$ if and only if $$\begin{aligned}
\Delta \phi - \frac1n {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), L{\omega}) &= -\frac1n {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\alpha}) + \frac1n \Delta {\mathrm{tr}}_{{\tilde g}}{\alpha}+ \frac C n {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\mathrm{Ric}}({{\tilde g}})), \\
L^*L{\omega}- 2 d\phi &= - 2 \tilde {\nabla}\cdot {\alpha}\end{aligned}$$ and ${{\tilde m}}= {\beta}- L \eta - C' {\mathrm{Ric}}({{\tilde g}}) - \psi {{\tilde g}}\in \Gamma_2^{k-1}({\Sigma})$ if and only if $$\begin{aligned}
\Delta \psi + \frac1n {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), L\eta) &= \frac1n {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\beta}) + \frac1n \Delta {\mathrm{tr}}_{{\tilde g}}{\beta}- \frac {C'} n {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\mathrm{Ric}}({{\tilde g}})), \\
L^*L\eta + 2(n-1)d\psi &= - 2\tilde {\nabla}\cdot ({\beta}- ({\mathrm{tr}}_{{\tilde g}}{\beta}){{\tilde g}}),\end{aligned}$$ using that $\tilde {\nabla}\cdot {\mathrm{Ric}}({{\tilde g}}) = \frac12 d {\mathrm{Scal}}({{\tilde g}}) = 0$ and ${\mathrm{tr}}_{{\tilde g}}{\mathrm{Ric}}({{\tilde g}}) = {\mathrm{Scal}}({{\tilde g}}) = 0$. The idea is to consider the right hand side as given and find $(\phi, {\omega})$ and $(\psi, \eta)$ solving the equations. To do so, we consider the left hand side as an elliptic operator and calculate its kernel and cokernel.
Let ${\mathcal{L}}{\omega}:= {\mathcal{L}}_{{\omega}^\sharp}{{\tilde g}}$ denote the Killing operator on one-forms ${\omega}$.
\[le: almost bijectivity of P\] Assume that $({\Sigma}, {{\tilde g}})$ is a closed Riemannian manifold of dimension $n \geq 2$ such that ${\mathrm{Scal}}({{\tilde g}}) = 0$. Let $a, b \in {\mathbb{R}}$ such that $0 < ab < 2$. For any $k \in {\mathbb{R}}\cup \{\infty\}$, consider the elliptic differential operator $$\begin{aligned}
&P: H^{k+2}({\Sigma}, {\mathbb{R}}\oplus T^*{\Sigma}) \to H^k({\Sigma}, {\mathbb{R}}\oplus T^*{\Sigma}), \\
&P(\phi, {\omega}) := \begin{pmatrix}
\Delta \phi + a {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), L{\omega}) \\
L^*L {\omega}+ b d\phi
\end{pmatrix}.\end{aligned}$$ Then $$\begin{aligned}
\ker(P) = \ker(P^*) = \ker(d) \oplus \ker({\mathcal{L}}),\end{aligned}$$ i.e. both kernels consist only of the constant functions and Killing one-forms.
In our case, we have firstly that $(a, b) = (-\frac 1n, -2)$, which gives $ab = \frac{2}{n}$, and secondly that $(a, b) = (\frac1n, 2(n-1))$, which gives $ab = \frac{2(n-1)}{n}$. In both cases $0 < ab < 2$ for all $n \geq 2$, so the lemma applies.
We will use the following differential operators acting on one-forms ${\omega}$ and functions $\phi$ on ${\Sigma}$: $$\begin{aligned}
\delta {\omega}:=& - {\nabla}\cdot {\omega}, \\
\Delta {\omega}:=& (d\delta + \delta d) \omega, \\
\Delta \phi :=& (d\delta + \delta d)\phi = \delta d \phi.\end{aligned}$$
Let us first note that $$\label{eq: L*L}
L^*L {\omega}= 2 \Delta {\omega}- 4 {\mathrm{Ric}}({{\tilde g}})({\omega}^\sharp) + \left( 2 - \frac4n \right)d\delta {\omega},$$ for any one form ${\omega}$. We start by showing that $\ker(P) = \ker(d) \oplus \ker({\mathcal{L}})$. For this, assume that $$P(\phi, {\omega}) = 0.$$ It follows that $\delta(L^*L {\omega}) = - b \Delta \phi$. On the other hand, using $\tilde {\nabla}\cdot {\mathrm{Ric}}_{{{\tilde g}}} = \frac12 d{\mathrm{Scal}}({{\tilde g}}) = 0$ it follows that $$\begin{aligned}
\delta(L^*L {\omega}) &= \left(4-\frac4n \right) \Delta \delta {\omega}+ 2 {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), L{\omega}) \\
&= \left(4-\frac4n \right) \Delta \delta {\omega}- \frac{2}{a} \Delta \phi,\end{aligned}$$ where in the last line we have used that $P(\phi, {\omega}) = 0$. Combining these two results gives $$\Delta \left(\left(4- \frac4n \right)\delta {\omega}+\left(b - \frac2a \right)\phi\right) = 0.$$ Since ${\Sigma}$ is closed, all harmonic functions are constant and hence $$\phi = \frac{4 - \frac4n}{\frac2a - b}\delta {\omega}+ C,$$ where $C$ is constant, which implies that $$L^*L {\omega}= - b d \phi = \frac{4 - \frac4n}{1 - \frac2{ab}} d\delta {\omega}.$$ Since $0 < ab < 2$, it follows that $${\left\lVertL{\omega}\right\rVert}_{L^2}^2 = \frac{4 - \frac4n}{1 - \frac2{ab}} {\left\lVert\delta {\omega}\right\rVert}_{L^2}^2 \leq 0.$$ We conclude that $L{\omega}= 0$ and $\delta {\omega}= 0$. Hence ${\omega}$ is a Killing one-form. It follows that $d \phi = 0$ as claimed.
We continue by calculating $\ker (P^*)$. From equation \eqref{eq: L*L}, we get $$- 2a{\mathrm{Ric}}({{\tilde g}})({\mathrm{grad}}(\phi), \cdot) = -2a \left( 1 - \frac1 n \right) d \Delta \phi + \frac a 2 L^*L d\phi.$$ Assuming that $P^*(\phi, {\omega}) = 0$, it follows that $$L^*L\left({\omega}+ \frac a2 d\phi \right) = 2a\left( 1 - \frac 1n \right)d\Delta \phi.$$ Again using $P^*(\phi, {\omega}) = 0$ we conclude that $$\begin{aligned}
{\left\lVertL\left({\omega}+ \frac a2 d\phi \right)\right\rVert}^2 &= 2a\left( 1 - \frac 1n \right) \langle d \Delta \phi, {\omega}+ \frac a 2 d\phi \rangle \\
&= a^2 \left( 1 - \frac 2 {ab} \right)\left( 1 - \frac 1n \right) {\left\lVert\Delta \phi\right\rVert}^2 \\
&\leq 0,\end{aligned}$$ since $0 < ab < 2$, which implies that $$L\left({\omega}+ \frac a2 d\phi \right) = 0$$ and hence $\Delta \phi = 0$. Since ${\Sigma}$ is closed, it follows that $\phi$ is constant and hence $L{\omega}= 0$. Since $b \neq 0$, it follows that $\delta {\omega}= 0$ and hence ${\omega}$ is a Killing one-form as claimed.
We first show that $\widehat H^k({\Sigma}, {\mathbb{R}}) {{\tilde g}}\oplus {\mathrm{im}}(L) \oplus {\mathbb{R}}{\mathrm{Ric}}({{\tilde g}})$ really is a direct sum. Since ${\mathrm{Scal}}({{\tilde g}}) = 0$, we have for all $f \in \widehat H^k({\Sigma}, {\mathbb{R}})$ that $$f{{\tilde g}}[C {\mathrm{Ric}}({{\tilde g}})] = f[C{{\tilde g}}({{\tilde g}}, {\mathrm{Ric}}({{\tilde g}}))] = f[0] = 0$$ and hence $\widehat H^k({\Sigma}, {\mathbb{R}}) {{\tilde g}}\cap {\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) = \{0\}$. Since ${\omega}\mapsto L{\omega}$ has injective principal symbol, Lemma \[le: Fredholm alternative closed mfld\] implies that $$H^{k}({\Sigma}, S^2{\Sigma}) = {\mathrm{im}}(L) \oplus \ker(L^*).$$ Since ${\mathrm{Scal}}({{\tilde g}}) = 0$, $L^*({\mathrm{Ric}}({{\tilde g}})) = -2\tilde {\nabla}\cdot {\mathrm{Ric}}({{\tilde g}}) = - d{\mathrm{Scal}}({{\tilde g}}) = 0$ and hence $${\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) \subset \ker(L^*),$$ which implies that ${\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) \cap {\mathrm{im}}(L) = \{0\}$. That $\widehat H^k({\Sigma}, {\mathbb{R}}){{\tilde g}}\cap {\mathrm{im}}(L) = \{0\}$ is clear, since ${\mathrm{tr}}_g(L{\omega}) = 0$. This proves the first claim. Let us now prove that $\left( \widehat H^k({\Sigma}, {\mathbb{R}}) {{\tilde g}}\oplus {\mathrm{im}}(L) \oplus {\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) \right) \cap \Gamma_1^k({\Sigma}) = \{0\}$. For this, assume that $$0 = {{\tilde h}}+ \phi {{\tilde g}}+ L{\omega}+ C {\mathrm{Ric}}({{\tilde g}}) \in \Gamma_1^k({\Sigma}),$$ with $\phi \in \widehat H^k({\Sigma}, {\mathbb{R}})$ and ${\omega}\in H^{k+1}({\Sigma}, T^*{\Sigma})$. We know that ${{\tilde h}}\in \Gamma_1^k({\Sigma})$ if and only if $$P(\phi, {\omega}) = \begin{pmatrix}-\frac{C}{n} {{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\mathrm{Ric}}({{\tilde g}})) \\ 0 \end{pmatrix},$$ with $(a, b) = (-\frac{1}{n}, -2)$. 
By Lemma \[le: Fredholm alternative closed mfld\] and Lemma \[le: almost bijectivity of P\], it follows that $C{{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\mathrm{Ric}}({{\tilde g}}))$ must be orthogonal to the constant functions, i.e. $$\int_{{\Sigma}}C{{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\mathrm{Ric}}({{\tilde g}})) d\mu_{{\tilde g}}= 0.$$ Since ${{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\mathrm{Ric}}({{\tilde g}})) \geq 0$, we conclude that either ${\mathrm{Ric}}({{\tilde g}}) = 0$ or $C = 0$ which in both cases implies $C {\mathrm{Ric}}({{\tilde g}}) = 0$. Hence $(\phi, {\omega}) \in \ker(P)$, which by Lemma \[le: almost bijectivity of P\] implies that $\phi$ is constant and ${\omega}$ is a Killing one-form. Hence $L {\omega}= 0$ and since $0 = \phi[1] = \int_{\Sigma}\phi d \mu_{{\tilde g}}$, it follows that $\phi = 0$. This proves $\left( \widehat H^k({\Sigma}, {\mathbb{R}}) {{\tilde g}}\oplus {\mathrm{im}}(L) \oplus {\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) \right) \cap \Gamma_1^k({\Sigma}) = \{0\}$. Similarly, one proves $\left( \widehat H^k({\Sigma}, {\mathbb{R}}) {{\tilde g}}\oplus {\mathrm{im}}(L) \oplus {\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) \right) \cap \Gamma_2^k({\Sigma}) = \{0\}$.
It remains to show that $$H^k({\Sigma}, S^2{\Sigma}) \subseteq \widehat H^k({\Sigma}, {\mathbb{R}}) {{\tilde g}}\oplus {\mathrm{im}}(L) \oplus {\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) \oplus \Gamma_1^k({\Sigma}).$$ Given ${\alpha}\in H^k({\Sigma}, S^2{\Sigma})$ we want to find $\phi \in \widehat{H}^k({\Sigma}, {\mathbb{R}})$ and ${\omega}\in H^{k+1}({\Sigma}, T^*{\Sigma})$ such that ${{\tilde h}}:= {\alpha}- \phi {{\tilde g}}- L{\omega}- C {\mathrm{Ric}}({{\tilde g}}) \in \Gamma_1^k({\Sigma})$. Note that ${{\tilde h}}\in \Gamma_1^k({\Sigma})$ if and only if $$\label{eq: defining_phi_omega}
P(\phi, {\omega}) = \begin{pmatrix} - \frac1n {{\tilde g}}({\alpha}, {\mathrm{Ric}}({{\tilde g}})) + \frac1n \Delta {\mathrm{tr}}_{{\tilde g}}{\alpha}+ \frac{C}{n}{{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\mathrm{Ric}}({{\tilde g}})) \\ - 2\tilde {\nabla}\cdot {\alpha}\end{pmatrix}.$$ By Lemma \[le: Fredholm alternative closed mfld\] and Lemma \[le: almost bijectivity of P\] we find $(\phi, {\omega}) \in H^{k}({\Sigma}, {\mathbb{R}}\oplus T^*{\Sigma})$ if and only if we choose $$C:= \frac{{{\tilde g}}({\alpha}, {\mathrm{Ric}}({{\tilde g}}))[1]}{\int_{\Sigma}{{\tilde g}}({\mathrm{Ric}}({{\tilde g}}), {\mathrm{Ric}}({{\tilde g}})) d \mu_{{\tilde g}}},$$ when ${\mathrm{Ric}}({{\tilde g}}) \neq 0$. If ${\mathrm{Ric}}({{\tilde g}}) = 0$, it does not matter how we choose $C$, since $C{\mathrm{Ric}}({{\tilde g}}) = 0$ anyway. What remains is to show that $L{\omega}\in H^{k}({\Sigma}, S^2{\Sigma})$; up to now we only know that $L{\omega}\in H^{k-1}({\Sigma}, S^2{\Sigma})$. But from equation \eqref{eq: defining_phi_omega}, we know that $L^*L {\omega}= 2 d\phi - 2 \tilde {\nabla}\cdot {\alpha}\in H^{k-1}({\Sigma}, T^*{\Sigma})$. Elliptic regularity theory implies that in fact ${\omega}\in H^{k+1}({\Sigma}, T^*{\Sigma})$ which implies that $L{\omega}\in H^{k}({\Sigma}, S^2{\Sigma})$. The inclusion $H^k({\Sigma}, S^2{\Sigma}) \subseteq \widehat H^k({\Sigma}, {\mathbb{R}}) {{\tilde g}}\oplus {\mathrm{im}}(L) \oplus {\mathbb{R}}{\mathrm{Ric}}({{\tilde g}}) \oplus \Gamma_2^k({\Sigma})$ is proven analogously.
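A short sketch of where the formula for $C$ comes from: by Lemma \[le: Fredholm alternative closed mfld\] and Lemma \[le: almost bijectivity of P\], the right hand side of \eqref{eq: defining_phi_omega} must be $L^2$-orthogonal to $\ker(P^*)$, i.e. to the constant functions (first component) and the Killing one-forms (second component, for which orthogonality holds automatically since $L$ annihilates Killing one-forms). Pairing the first component with the constant function $1$ and using that $\int_{\Sigma}\Delta(\cdot)\, d\mu_{\tilde g} = 0$ on a closed manifold:

```latex
0 = \int_{\Sigma} \Big( -\tfrac1n\, \tilde g(\alpha, \mathrm{Ric}(\tilde g))
      + \tfrac1n\, \Delta\, \mathrm{tr}_{\tilde g}\alpha
      + \tfrac{C}{n}\, \tilde g(\mathrm{Ric}(\tilde g), \mathrm{Ric}(\tilde g)) \Big)\, d\mu_{\tilde g}
% the Laplacian term integrates to zero, leaving
\;\Longrightarrow\;
C \int_{\Sigma} \tilde g(\mathrm{Ric}(\tilde g), \mathrm{Ric}(\tilde g))\, d\mu_{\tilde g}
  = \int_{\Sigma} \tilde g(\alpha, \mathrm{Ric}(\tilde g))\, d\mu_{\tilde g},
```

which is precisely the stated value of $C$ whenever $\mathrm{Ric}(\tilde g) \neq 0$.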
Some linear differential operators
==================================
The results presented here are to be considered well-known. However, some are only to be found in the literature in a different setting than we need.
Linear elliptic operators
-------------------------
Let $E, F \to M$ be vector bundles equipped with positive definite metrics. We begin with the classical “Fredholm alternative” for elliptic operators on closed manifolds.
\[le: Fredholm alternative closed mfld\] Assume that $M$ is a closed manifold, $k \in {\mathbb{R}}$ and $$P: H^{k+m}(M, E) \to H^k(M, F)$$ is a differential operator of order $m$ with injective principal symbol. Then $$\label{eq: fredholm_alt}
H^k(M, F) = {\mathrm{im}}(P) \oplus \ker(P^*),$$ where $P^*$ is the formal adjoint as an operator $$P^*: H^k(M, F) \to H^{k-m}(M, E).$$ Extend or restrict $P$ and $P^*$ to act on the spaces $$\begin{aligned}
\tilde P&: H^{-k+m}(M, E) \to H^{-k}(M, F), \\
\tilde P^*&: H^{-k}(M, F) \to H^{-k-m}(M, E).\end{aligned}$$ Then ${\mathrm{im}}(\tilde P)$ is the annihilator of $\ker(P^*)$ and $\ker(\tilde P^*)$ is the annihilator of ${\mathrm{im}}(P)$ under the isomorphism $H^{-k}(M, F) \cong H^{k}(M, F)'$.
In particular, if $k\geq 0$, the sum in \eqref{eq: fredholm_alt} is $L^2$-orthogonal. In case $k = \infty$, equation \eqref{eq: fredholm_alt} holds true.
See for example [@Besse1987]\*[Appendix I]{} for equation \eqref{eq: fredholm_alt} when $k \geq 0$. Generalising this to any $k \in {\mathbb{R}}$ is straightforward, using that $$\begin{aligned}
H^{-k}(M, E) &\to H^k(M, E)' \\
f &\mapsto (\varphi \mapsto \langle D^{-k}f, D^k \varphi \rangle_{L^2(M ,E)})\end{aligned}$$ is an isomorphism.
One part of the previous lemma generalises to non-compact manifolds.
Let $M$ be a possibly non-compact manifold and let $K \subset M$ be a compact subset and let $k \in {\mathbb{R}}\cup \{\infty\}$. Assume that $$P: H^{k+m}_K(M, E) \to H^k_K(M, F)$$ is a differential operator of order $m$ with injective principal symbol. Assume furthermore that $P$ is injective. Then $${\mathrm{im}}(P) \subset H^k_K(M, F)$$ is closed and $P$ is an isomorphism of Hilbert spaces onto its image.
By [@BaerWafo2014]\*[Sec. 1.6.2.]{}, we can embed an open neighbourhood $U$ of $K$ isometrically into a closed Riemannian manifold $(K', {{\tilde g}}')$. Denote the embedding by $\iota: U \hookrightarrow K'$. Moreover, we can extend the vector bundles in a smooth way. Let us for simplicity still denote them by $E$ and $F$. For any section $f:M \to E$, define $\iota_*f :K' \to E$ such that $f|_K = (\iota_*f) \circ\iota|_K$, just by multiplying by a bump function which equals $1$ on $K$ and vanishes outside $U$. It follows that there is a differential operator with injective principal symbol $$Q: H^{k+m}(K', E) \to H^{k}(K', F)$$ such that the following diagram commutes: $$\xymatrix{
H^{k+m}_K(M,E) \ar[r]^{P} \ar[d]_{\iota_*} & H^k_K(M, F) \ar[d]^{\iota_*} \\
H^{k+m}(K', E) \ar[r]^{Q} & H^{k}(K', F) }.$$ Choose a function $\lambda: K' \to {\mathbb{R}}$ such that $\lambda(x) > 0$ for all $x \in K' \backslash \iota(K)$ and $\lambda|_{\iota(K)} = 0$. We claim that $$Q^*Q + \lambda: H^{k+m}(K', E) \to H^{k-m}(K', E)$$ is an isomorphism of Hilbert spaces (in the smooth case, $k=\infty$, we claim that this is an isomorphism of Fr[é]{}chet spaces). By Lemma \[le: Fredholm alternative closed mfld\], it suffices to show that $\ker(Q^*Q + \lambda) = \{0\}$, since $Q^*Q + \lambda$ is formally self-adjoint. For any $a \in \ker(Q^*Q + \lambda)$ it follows that $a$ is smooth and $$\int_{K'} {\left\lvertQa\right\rvert}^2 + \lambda {\left\lverta\right\rvert}^2 dVol = 0.$$ Hence ${\mathrm{supp}}(a) \subset \iota(K)$ and $Qa = 0$. This implies that $b := \iota^*a$, extended to whole $M$ by zero, solves $P(b) = 0$. Since ${\mathrm{supp}}(b) \subset K$ and $P$ is injective, this implies that $b = 0$ and hence $a = 0$. We conclude the claim.
Assume now that $P(u_n) \to f$ in $H^k_K(M, F)$, with $u_n \in H^{k+m}_K(M, E)$. It follows that $Q(\iota_* u_n) \to \iota_* f$ in $H^k_{\iota(K)}(K', F)$ and $\iota_* u_n \in H^{k+m}_{\iota(K)}(K', E)$. Hence $$(Q^*Q + \lambda)(\iota_* u_n) = Q^*Q(\iota_* u_n) \to Q^*(\iota_*(f))$$ in $H^{k-m}_{\iota(K)}(K', E)$. Therefore, there is a $v \in H^{k+m}(K', E)$ such that $\iota_* u_n \to v$ in $H^{k+m}(K', E)$. Since ${\mathrm{supp}}(\iota_* u_n) \subset \iota(K)$ and $\iota_* u_n \to v$ as distributions, the support of $v$ cannot be larger than $\iota(K)$. Hence $v \in H^{k+m}_{\iota(K)}(K', E)$. Now define $$u := \iota^* v \in H^{k+m}_K(U, E)$$ and extend it by zero to an element in $H^{k+m}_K(M, E)$. Note that $u_n \to u$ in $H^{k+m}_K(M, E)$. It follows that $$P(u) = \lim_{n \to \infty} P(u_n) = f,$$ as claimed (in the case $k = \infty$, the last line is to be thought of as a limit of a net).
Let $g$ be a Riemannian metric on $M$. A differential operator $P \in \mathrm{Diff}_2(E, E)$ is called a Laplace type operator if its principal symbol is given by the metric. Equivalently, in local coordinates, $P$ takes the form $$P = - \sum_{i,j} g^{ij}\frac{\partial^2}{\partial x^i \partial x^j} + l.o.t.$$
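As a standard example (a sketch; sign convention matching the display above), take $E$ to be the trivial line bundle: the Laplace–Beltrami operator acting on functions is of Laplace type,

```latex
\Delta_g u
= -\frac{1}{\sqrt{\det g}}\, \sum_{i,j} \partial_i\!\left( \sqrt{\det g}\; g^{ij}\, \partial_j u \right)
= -\sum_{i,j} g^{ij}\, \partial_i \partial_j u
  \;-\; \sum_{i,j} \frac{1}{\sqrt{\det g}}\, \partial_i\!\big(\sqrt{\det g}\, g^{ij}\big)\, \partial_j u,
```

where the second sum is of first order, i.e. the $l.o.t.$ in the definition. This sign convention agrees with $\Delta \phi = \delta d \phi$ used earlier, so $\Delta_g$ is nonnegative on a closed manifold.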
We will need the following theorem, known as the *Strong unique continuation property*. We quote the statement from [@Baer1997]. For a proof, see [@Aronszajn1957]\*[Thm. on p. 235 and Rmk. 3 on p. 248]{}.
\[thm: aronszajn\] Let $(M,g)$ be a connected Riemannian manifold and let $P$ be a Laplace type operator acting on sections of a vector bundle $E \to M$. Assume that $Pu = 0$ and that $u$ vanishes at some point of infinite order, i.e. that all derivatives vanish at that point. Then $u = 0$.
\[cor: Laplace-type closed image\] Let $k \in {\mathbb{R}}\cup \{\infty\}$. Assume that $M$ is connected. Let $K \subset M$ be a compact subset such that $K \neq M$. Assume that $$P: H^{k+2}_K(M, E) \to H^k_K(M, E)$$ is a Laplace-type operator. Then $${\mathrm{im}}(P) \subset H^k_K(M, E)$$ is closed and $P$ is an isomorphism of Hilbert spaces (Fr[é]{}chet spaces if $k = \infty$) onto its image.
We only need to show that $P$ is injective. Assume that $Pu= 0$. Since $u|_{M \backslash K} = 0$, Theorem \[thm: aronszajn\] implies that $u = 0$.
Linear wave equations {#sec: Waves}
---------------------
In the literature, there are many variants of stating the well-posedness of the Cauchy problem for linear wave equations with initial data of Sobolev regularity. The statement that is relevant for our purposes is not in the form we need it in the literature, but can be derived by standard techniques.
Let $g$ be a Lorentzian metric on $M$. A differential operator $P \in \mathrm{Diff}_2(E, E)$ is called a wave operator if its principal symbol is given by the metric. Equivalently, in local coordinates, $P$ takes the form $$P = - \sum_{i,j} g^{ij}\frac{\partial^2}{\partial x^i \partial x^j} + l.o.t.$$
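The model example (a sketch on Minkowski space ${\mathbb{R}}^{1,n}$ with $g = -dt^2 + \sum_{i}(dx^i)^2$, so $g^{00} = -1$ and $g^{ii} = 1$) is the d'Alembert operator acting on functions,

```latex
\Box u = -\sum_{\mu,\nu} g^{\mu\nu}\, \partial_\mu \partial_\nu u
       = \partial_t^2 u - \sum_{i=1}^{n} \partial_{x^i}^2 u,
```

whose principal symbol is given by the metric, so $\Box$ is a wave operator in the above sense.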
Wave operators are sometimes also called *normally hyperbolic operators*. We assume here that $(M, g)$ is a globally hyperbolic spacetime and let ${\Sigma}\subset M$ be a Cauchy hypersurface and $t:M \to {\mathbb{R}}$ a Cauchy temporal function such that ${\Sigma}= t^{-1}(t_0)$ for some $t_0 \in t(M)$. Let $E \to M$ be a real vector bundle and let $P$ be a wave operator acting on sections in $E$. Denote by $\nu$ the future pointing unit normal vector field on ${\Sigma}$.
\[thm: WellposednessLinearWaves\] Let $k \in {\mathbb{R}}\cup \{\infty\}$ be given. For each $(u_0, u_1, f) \in H_{loc}^k({\Sigma}, E|_{\Sigma}) \oplus H_{loc}^{k-1}({\Sigma}, E|_{\Sigma}) \oplus CH^{k-1}_{loc}(M, E, t)$, there is a unique $u \in CH_{loc}^k(M, E, t)$ such that $$\begin{aligned}
Pu &= f, \\
u|_{\Sigma}&= u_0, \\
{\nabla}_\nu u|_{\Sigma}&= u_1.\end{aligned}$$ Moreover, we have finite speed of propagation, i.e. $${\mathrm{supp}}(u) \subset J \left({\mathrm{supp}}(u_0) \cup {\mathrm{supp}}(u_1) \cup K \right),$$ for any subset $K \subset M$ such that ${\mathrm{supp}}(f) \subset J(K)$.
The theorem is proven by a standard method, translating the results in ([@BaerWafo2014]\*[Thm. 13]{} and [@BaerGinouxPfaeffle2007]\*[Thm. 3.2.11]{}), where spatially compact support was assumed, to the general case. This can be done due to finite speed of propagation for wave equations. It was done in the smooth case $k = \infty$ in [@BaerFredenhagen2009]\*[Cor. 5 in ch. 3]{}. Let us remark that from [@BaerWafo2014]\*[Thm. 13]{} we only conclude that $u \in C^0(I, H^k_{loc}) \cap C^1(I, H^{k-1}_{loc})$, but since we assume more regularity on the right hand side $f$, we can use the equation $Pu = f$ to conclude the stated regularity of $u$. A simple corollary is the following.
\[cor: cont\_dep\_id\] Let $k \in {\mathbb{R}}\cup \{\infty\}$ be given. Then the map $$\begin{aligned}
CH^k_{loc}(M, E, t) \cap \ker(P) &\to H_{loc}^k({\Sigma}, E|_{\Sigma}) \oplus H_{loc}^{k-1}({\Sigma}, E|_{\Sigma}) \\
u &\mapsto (u|_{\Sigma}, {\nabla}_\nu u|_{\Sigma})\end{aligned}$$ is an isomorphism between topological vector spaces. In particular, the inverse map is continuous.
By the preceding theorem, this map is continuous and bijective between Fr[é]{}chet spaces. The open mapping theorem for Fr[é]{}chet spaces implies the statement.
---
author:
- 'Jakub Pachocki [^1] Liam Roditty [^2] Aaron Sidford [^3]'
- 'Roei Tov [^4] Virginia Vassilevska Williams [^5]'
bibliography:
- 'girth2.bib'
title: |
Approximating Cycles in Directed Graphs:\
Fast Algorithms for Girth and Roundtrip Spanners
---
[^1]: OpenAI, `[email protected]`
[^2]: Bar Ilan University, `[email protected]`
[^3]: Stanford University, `[email protected]`
[^4]: Bar Ilan University, `[email protected]`
[^5]: MIT CSAIL, `[email protected]`
---
abstract: 'As shown in former papers, the nonadiabatic Heisenberg model presents a novel mechanism of Cooper pair formation generated by the strongly correlated atomic-like motion of the electrons in narrow, roughly half-filled “superconducting bands”. These are energy bands represented by optimally localized spin-dependent Wannier functions adapted to the symmetry of the material under consideration. The formation of Cooper pairs is not the result of an attractive electron-electron interaction but can be described in terms of quantum mechanical constraining forces constraining the electrons to form Cooper pairs. There is theoretical and experimental evidence that only this nonadiabatic mechanism operating in superconducting bands may produce [*eigenstates*]{} in which the electrons form Cooper pairs. These constraining forces stabilize the Cooper pairs in any superconductor, whether conventional or unconventional. Here we report evidence that also the experimentally found superconducting state in bismuth at ambient as well as at high pressure is connected with a narrow, roughly half-filled superconducting band in the respective band structure. This observation corroborates once more the significance of constraining forces in the theory of superconductivity.'
author:
- Ekkehard Krüger
title: Constraining Forces Stabilizing Superconductivity in Bismuth
---
Introduction {#sec:introduction}
============
Bismuth shows a sequence of structural transitions as a function of the applied pressure, as summarized in illustrative form by O. Degtyareva [*et al.*]{} [@degty]:
$$\label{eq:1}
\text{Bi--I}\ \xrightarrow{\text{2.55 GPa}}\ \text{Bi--II}\
\xrightarrow{\text{2.7 GPa}}\ \text{Bi--III}\ \xrightarrow{\text{7.7 GPa}}\ \text{Bi--V}\ < \text{122 GPa}$$
At ambient pressure, Bi crystallizes in the structure Bi–I, an As-type structure with a trigonal (rhombohedral) space group and two atoms in the unit cell [@donohue]. This structure is stable up to a pressure of 2.55 GPa. Then, with increasing pressure, Bi passes through the monoclinic structure Bi–II and the host-guest structure Bi–III. A further structure called Bi–IV exists above a temperature of 450 K and is not relevant for this paper. Between a pressure of 7.7 and (at least) 122 GPa, the cubic Bi–V phase is stable [@degty].
It is interesting that all these Bi phases become superconducting at low temperatures. The Bi–I phase is superconducting with the extremely low transition temperature $T_c = 0.53$mK [@prakash]. In the Bi–II and Bi–III structures, the transition temperature increases with increasing pressure from about 4 K to 7 K. Finally, in the Bi–V phase, $T_c$ has the maximum value of about 8 K [@li]. The different values of $T_c$ are evidently connected with the different crystal structures since $T_c$ changes discontinuously at the transitions from one structure to another [@li].
This striking symmetry-dependence of the superconducting transition temperature suggests that also in bismuth superconductivity is connected with narrow, roughly half-filled “superconducting bands”. A closed energy band (Definition 2 of Ref. [@theoriewf]) with optimally localized, symmetry-adapted, and spin-dependent Wannier functions is called a superconducting band (Definition 22 of Ref. [@theoriewf]) because those metals (and only those metals) that possess such a narrow, roughly half-filled superconducting band in their band structure experimentally prove to be superconductors, see the Introduction of Ref. [@theoriewf]. This observation can be interpreted within the group-theoretical nonadiabatic Heisenberg model (NHM) [@enhm], a new model of strongly correlated atomic-like electrons. Within this model, the formation of Cooper pairs is still mediated by boson excitations (responsible, as usual, for the isotope effect). However, these boson excitations produce constraining forces as familiar from classical mechanics: below $T_c$, they reduce the degrees of freedom of the electron system by forcing the electrons to form Cooper pairs. A short description of the NHM and this novel mechanism of Cooper pair formation is given in Secs. 2 and 3, respectively, of Ref. [@josybacuo7]. In Sec. \[sec:discussion\] we shall summarize this new concept of superconductivity in the form of single statements.
There is theoretical evidence that the constraining forces operating in narrow, roughly half-filled superconducting bands are required for the Hamiltonian of the system to possess [ *eigenstates*]{} in which the electrons form Cooper pairs [@josn]. The aim of the present paper is to corroborate this important assertion by showing that also the experimentally established superconductivity in bismuth [@prakash; @li] is evidently connected with superconducting bands.
In this context, we consider (in the following Sec. \[sec:supbands\]) only the two structures Bi–I and Bi–V at the beginning and the end of the sequence [(\[eq:1\])]{}. Bi–I and Bi–V possess the lowest and highest superconducting transition temperatures, respectively. Bi–II is not very informative within the NHM since it has only a low monoclinic symmetry. At this stage, it would be complicated to apply the NHM to the incommensurate host-guest structure of Bi–III. Both Bi–I and Bi–V, on the other hand, have clear symmetries with the trigonal space group $R\overline{3}m$ (166) and the cubic space group $Im3m$ (229), respectively [@donohue; @degty]. Bi–V even has the highest possible symmetry in a solid, which allows the NHM to make clear predictions.
Superconducting bands in the band structure of bismuth {#sec:supbands}
======================================================
Band structure of Bi–I
----------------------
The band structure of Bi–I is depicted in Fig. \[fig:bs\_166\]. The Bloch functions of the band highlighted in red are labeled by the single-valued representations $$\label{eq:4}
\begin{array}{llllllllll}
\Gamma^-_2,& \Gamma^+_3; \ &
Z^+_3,& Z^-_3;\ &L^+_1,& L^-_2;\ & F^+_1,& F^-_2.\\
\end{array}$$
It is clear that this band (or any other band in the band structure) does not contain a closed band (Definition 2 of Ref. [@theoriewf]) with the symmetry of band 1 or band 2 in Table \[tab:wf166\], meaning that we cannot unitarily transform the Bloch functions into best localized and symmetry-adapted Wannier functions situated at the Bi atoms. The situation changes when we consider the double-valued representations of the Bloch functions:
According to Table \[tab:falten166\], we may unitarily transform the Bloch functions [(\[eq:4\])]{} into Bloch functions labeled by the double-valued representations, $$\label{eq:3}
\begin{array}{lclp{.3cm}lcl}
\Gamma^-_2 & \rightarrow & \underline{\Gamma^-_4}, &&
\Gamma^+_3 & \rightarrow & \underline{\Gamma^+_4} + \Gamma^+_5 + \Gamma^+_6 ;\\[.2cm]
Z^+_3 & \rightarrow & \underline{Z^+_4} + Z^+_5 + Z^+_6 ,&&
Z^-_3 & \rightarrow & \underline{Z^-_4} + Z^-_5 + Z^-_6;\\[.2cm]
L^+_1 & \rightarrow & \underline{L^+_3} + \underline{L^+_4},&&
L^-_2 & \rightarrow & \underline{L^-_3} + \underline{L^-_4};\\[.2cm]
F^+_1 & \rightarrow & \underline{F^+_3} + \underline{F^+_4},&&
F^-_2 & \rightarrow & \underline{F^-_3} + \underline{F^-_4}.\\[.2cm]
\end{array}$$ The underlined representations belong to the band listed in Table \[tab:wf166Z\]. Thus, we can unitarily transform the Bloch functions of this band into [*spin-dependent*]{} Wannier functions being best localized, centered at the Bi atoms, and symmetry-adapted to the group $R\overline{3}m$. Consequently, according to Definition 22 of Ref. [@theoriewf], the band highlighted in red is a superconducting band.
Band structure of Bi–V
----------------------
The band structure of Bi–V is depicted in Fig. \[fig:bs\_229\]. The Bloch functions of the band highlighted in red now are labeled by the single-valued representations $$\label{eq:5}
\begin{array}{llll}
\Gamma^-_4;\ & H^-_4;\ & P_5;\ & N^-_3.\\
\end{array}$$
Again, this band (or any other band in the band structure) does not contain a closed band (Definition 2 of Ref. [@theoriewf]) with the symmetry of the bands listed in Table \[tab:wf229\]. Hence, we cannot unitarily transform the Bloch functions into best localized and symmetry-adapted Wannier functions situated at the Bi atoms. According to Table \[tab:falten229\], we may unitarily transform the Bloch functions [(\[eq:5\])]{} into Bloch functions labeled by the double-valued representations, $$\label{eq:6}
\begin{array}{lclp{.3cm}lcl}
\Gamma^-_4 & \rightarrow & \underline{\Gamma^-_6} + \Gamma^-_8,\\
H^-_4 & \rightarrow & \underline{H^-_6} + H^-_8,\\
P_5 & \rightarrow & \underline{P_7} + P_8,\\
N^-_3 & \rightarrow & \underline{N^-_5}.\\
\end{array}$$ The underlined representations belong to band 4 listed in Table \[tab:wf229Z\]. Thus, we can unitarily transform the Bloch functions of this band into [*spin-dependent*]{} Wannier functions being best localized, centered at the Bi atoms, and symmetry-adapted to the group $Im3m$. Consequently, according to Definition 22 of Ref. [@theoriewf], the band highlighted in red is a superconducting band.
Interpretation
--------------
Both structures Bi–I and Bi–V possess a superconducting band in their band structure that
- is one of the narrowest bands in the band structure;
- is nearly half filled;
- and comprises a great part of the electrons at the Fermi level.
Consequently, the NHM predicts that both phases become superconducting below a transition temperature $T_c$.
The superconducting band of Bi–I (Fig. \[fig:bs\_166\]) even comprises all the electrons at the Fermi level. However, the small Fermi surface and the small density of states at the Fermi level result in the extremely low superconducting transition temperature of $T_c = 0.53$mK [@prakash].

The superconducting band of Bi–V (Fig. \[fig:bs\_229\]) closely resembles the superconducting band of niobium as depicted, e.g., in Fig. 1 of Ref. [@josn]: both nearly half-filled bands have comparable widths and comprise a comparable part of the Fermi level. Consequently, we may expect that both the Bi–V phase of bismuth and the elemental metal niobium have similar transition temperatures. Indeed, we have $T_c \approx 8$K and $T_c = 9.2$K for Bi–V and niobium, respectively. Narrow and half-filled superconducting bands rarely arise in crystals with the high bcc symmetry. So the elemental bcc metals Ta, W, and Mo possess superconducting bands which are far from being half-filled and, consequently, have lower transition temperatures. In the band structures of most elemental metals (such as Li, Na, K, Rb, Cs, Ca, Cu, Ag, and Au), narrow, roughly half-filled superconducting bands cannot be found and, hence, these metals do not become superconducting [@es2]. Consequently, there is strong evidence that the superconducting state in Bi–V is connected with the narrow and almost perfectly half-filled superconducting band in the band structure of this phase.
Results {#sec:results}
=======
In terms of superconducting bands, the NHM confirms the experimental observations that
- the Bi–I phase (i.e., bismuth at ambient pressure) becomes superconducting below an extremely low transition temperature and
- the Bi–V phase (i.e., bismuth at high pressure) becomes superconducting below a transition temperature comparable with the transition temperature of niobium.
Discussion {#sec:discussion}
==========
This group-theoretical result demonstrates again [@theoriewf] the significance of the theory of superconductivity defined within the NHM. We summarize the main features of this novel concept of superconductivity (a more detailed description is given in Ref. [@josybacuo7]):
- The NHM is based on three postulates [@enhm] concerning the [*atomic-like*]{} motion of the electrons in narrow, half-filled energy bands as it was already considered by Mott [@mott] and Hubbard [@hubbard].
- The postulates of the NHM are physically evident and require the introduction of [*nonadiabatic*]{} localized states of well-defined symmetry emphasizing the [*correlated*]{} nature of any atomic-like motion.
- The atomic-like motion is determined by the conservation of the total crystal-spin angular momentum which must be satisfied in the nonadiabatic system. In a narrow, roughly half-filled superconducting band this conservation law plays a crucial role because the localized (Wannier) states are spin-dependent.
- The strongly correlated atomic-like motion in a narrow, roughly half-filled superconducting band produces an interaction between the electron spins and “crystal-spin-1 bosons”: at any electronic scattering process two crystal-spin-1 bosons are excited or absorbed in order that the total crystal-spin angular momentum stays conserved.
- Crystal-spin-1 bosons are the [*energetically lowest*]{} localized boson excitations of the crystal that possess the crystal-spin angular momentum $1\cdot\hbar$ and are sufficiently stable to transport it (as Bloch waves) through the crystal.
- The spin-boson interaction in a narrow, roughly half-filled superconducting band leads to the formation of Cooper pairs below a transition temperature $T_c$.
- The Cooper pairs arise inevitably since any electron state in which the electrons possess their full degrees of freedom violates the conservation of crystal-spin angular momentum.
- This influence of the crystal-spin angular momentum may be described in terms of constraining forces that constrain the electrons to form Cooper pairs. This feature distinguishes the present concept from the standard theory of superconductivity.
- As already mentioned in Sec. \[sec:introduction\], there is evidence that [*only*]{} these constraining forces may produce superconducting [*eigenstates*]{}.
- Hence, the constraining forces are responsible for all types of superconductivity, i.e., conventional, high-$T_c$ and other superconductivity.
- Crystal-spin-1 bosons are coupled phonon-plasmon modes that determine the type of the superconductor.
- In the isotropic lattices of the transition elements, crystal-spin-1 bosons have dominant phonon character and confirm the electron-phonon mechanism that enters the BCS theory [@bcs] in these materials.
- Phonon-like excitations are not able to transport crystal-spin angular-momenta within the anisotropic materials of the high-$T_c$ superconductors [@ehtc], often containing two-dimensional layers. Within these anisotropic materials, the crystal-spin-1 bosons are energetically higher lying excitations of dominant plasmon character leading to higher superconducting transition temperatures [@bcs].
- The theory of superconductivity as developed so far is valid without any restrictions in narrow, roughly half-filled superconducting bands because constraining forces do not alter the energy of the electron system.
- However, the standard theory may furnish inaccurate information if no narrow, roughly half-filled superconducting band exists in the band structure of the material under consideration.
It is clear that this concept of superconductivity as developed in the last 40 years should be further refined in the future.
Group-theoretical tables for the trigonal space group $R\overline{3}m$ (166) of Bi–I
=====================================================================================
It is sometimes useful to represent trigonal (rhombohedral) systems in a hexagonal coordinate system. In this case, the unit cell contains two additional inner points which, however, are connected to each other and to the points at the corners by the translation symmetry of the system. In the framework of the group theory of Wannier functions as presented in Ref. [@theoriewf], the inner points of a unit cell must not be connected by the translation symmetry. Thus, the group theory of Wannier functions is not applicable to trigonal systems represented by hexagonal axes. Therefore, in the present paper, we use exclusively the trigonal coordinate system as given in Table 3.1 of Ref. [@bc].
[ccccccc]{}\
& $E$ & $I$ & $S^{\pm}_6$ & $C^{\pm}_3$ & $C'_{2i}$ & $\sigma_{di}$\
$\Gamma^+_1$, $Z^+_1$ & 1 & 1 & 1 & 1 & 1 & 1\
$\Gamma^+_2$, $Z^+_2$ & 1 & 1 & 1 & 1 & -1 & -1\
$\Gamma^-_1$, $Z^-_1$ & 1 & -1 & -1 & 1 & 1 & -1\
$\Gamma^-_2$, $Z^-_2$ & 1 & -1 & -1 & 1 & -1 & 1\
$\Gamma^+_3$, $Z^+_3$ & 2 & 2 & -1 & -1 & 0 & 0\
$\Gamma^-_3$, $Z^-_3$ & 2 & -2 & 1 & -1 & 0 & 0\
\
[ccccc]{}\
& $E$ & $C'_{22}$ & $I$ & $\sigma_{d2}$\
$L^+_1$ & 1 & 1 & 1 & 1\
$L^-_1$ & 1 & 1 & -1 & -1\
$L^+_2$ & 1 & -1 & 1 & -1\
$L^-_2$ & 1 & -1 & -1 & 1\
\
[ccccc]{}\
& $E$ & $C'_{23}$ & $I$ & $\sigma_{d3}$\
$F^+_1$ & 1 & 1 & 1 & 1\
$F^-_1$ & 1 & 1 & -1 & -1\
$F^+_2$ & 1 & -1 & 1 & -1\
$F^-_2$ & 1 & -1 & -1 & 1\
\
\
Notes to Table \[tab:rep166\]
1. $i = 1,2,3.$
2. The symmetry elements are labeled in the Schönflies notation as illustrated, e.g., in Table 1.2 of Ref. [@bc].
3. The character tables are determined from Table 5.7 of Ref. [@bc].
4. The notations of the points of symmetry follow Fig. 3.11 (b) of Ref. [@bc].
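The character table above can be cross-checked against the row orthogonality relation $\sum_C N_C\,\chi_i(C)\,\chi_j(C) = |G|\,\delta_{ij}$, where $N_C$ is the number of elements in class $C$ and $|G| = 12$ for the point group $D_{3d}$ of $\Gamma$ and $Z$. The following minimal Python sketch (class sizes are our reading of the table; labels are shorthand for the representations) verifies this numerically:

```python
# Row orthogonality of the Gamma/Z character table of D_3d (order 12).
# Classes: E, I, S6(+/-), C3(+/-), C2'_i, sigma_di with sizes 1, 1, 2, 2, 3, 3.
class_sizes = [1, 1, 2, 2, 3, 3]
group_order = sum(class_sizes)  # 12

chars = {
    "G1+": [1, 1, 1, 1, 1, 1],
    "G2+": [1, 1, 1, 1, -1, -1],
    "G1-": [1, -1, -1, 1, 1, -1],
    "G2-": [1, -1, -1, 1, -1, 1],
    "G3+": [2, 2, -1, -1, 0, 0],
    "G3-": [2, -2, 1, -1, 0, 0],
}

def inner(chi_i, chi_j):
    """Character inner product: sum over classes of N_C * chi_i(C) * chi_j(C)."""
    return sum(n * a * b for n, a, b in zip(class_sizes, chi_i, chi_j))

for name_i, chi_i in chars.items():
    for name_j, chi_j in chars.items():
        expected = group_order if name_i == name_j else 0
        assert inner(chi_i, chi_j) == expected
print("all", len(chars), "irreducible representations are mutually orthogonal")
```

The check passing confirms that the six rows are characters of distinct irreducible representations.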
$E$ $C^{\pm}_3$ $\sigma_{di}$
-------------- ----- ------------- ---------------
$\bm{d}_{1}$ 1 1 1
$\bm{d}_{2}$ 1 1 -1
$\bm{d}_{3}$ 2 -1 0
: Character tables of the single-valued irreducible representations of the point group $C_{3v}$ of the positions of the Bi atoms (Definitions 11 and 12 of Ref. [@theoriewf]) in Bi–I. \[tab:repBi\]
$i = 1,2,3.$
[cccccc]{}\
$R^+_1$ & $R^+_2$ & $R^-_1$ & $R^-_2$ & $R^+_3$ & $R^-_3$\
$R^+_4$ & $R^+_4$ & $R^-_4$ & $R^-_4$ & $R^+_5$ + $R^+_6$ + $R^+_4$ & $R^-_5$ + $R^-_6$ + $R^-_4$\
\
[cccc]{}\
$R^+_1$ & $R^-_1$ & $R^+_2$ & $R^-_2$\
$R^+_3$ + $R^+_4$ & $R^-_3$ + $R^-_4$ & $R^+_3$ + $R^+_4$ & $R^-_3$ + $R^-_4$\
\
\
Notes to Table \[tab:falten166\]
1. The letter $R$ stands for the letter denoting the relevant point of symmetry. For example, at point $F$ the representations $R^+_1, R^+_2, \ldots$ stand for $F^+_1, F^+_2, \ldots$ .
2. Each column lists the double-valued representation $R_i\times
{\bm d}_{1/2}$ below the single-valued representation $R_i$, where ${\bm d}_{1/2}$ denotes the two-dimensional double-valued representation of the three-dimensional rotation group $O(3)$ given, e.g., in Table 6.1 of Ref. [@bc].
3. The single-valued representations are defined in Table \[tab:rep166\].
4. The notations of double-valued representations follow strictly Table 6.13 (and Table 6.14) of Ref. [@bc]. In this paper the double-valued representations are not explicitly given but are sufficiently defined by this table.
Bi($zzz$) Bi($\overline{z}\overline{z}\overline{z}$) $K$ $\Gamma$ $Z$ $L$ $F$
-------- -------------- -------------------------------------------- ----- ----------------------------- ------------------- ------------------- -------------------
Band 1 $\bm{d}_{1}$ $\bm{d}_{1}$ OK $\Gamma^+_1$ + $\Gamma^-_2$ $Z^+_1$ + $Z^-_2$ $L^+_1$ + $L^-_2$ $F^+_1$ + $F^-_2$
Band 2 $\bm{d}_{2}$ $\bm{d}_{2}$ OK $\Gamma^+_2$ + $\Gamma^-_1$ $Z^+_2$ + $Z^-_1$ $L^-_1$ + $L^+_2$ $F^-_1$ + $F^+_2$
: Single-valued representations of all the energy bands in the space group $R\overline{3}m$ of Bi–I with symmetry-adapted and optimally localized usual (i.e., spin-independent) Wannier functions centered at the Bi atoms. \[tab:wf166\]
\
Notes to Table \[tab:wf166\]
1. $z = 0.23\ldots$ [@degty]; the exact value of $z$ is meaningless in this table. In the hexagonal unit cell, the Bi atoms lie at the Wyckoff positions $6c (00\pm z)$ [@degty]. In the trigonal system, their positions in the unit cell are $\bm{\rho} =
\pm (z\bm{T}_1 + z\bm{T}_2 + z\bm{T}_3)$, where the vectors $\bm{T}_1, \bm{T}_2,$ and $\bm{T}_3$ denote the basic vectors of the trigonal lattice as given, e.g., in Table 3.1 of Ref. [@bc].
2. The notations of the representations are defined in Table \[tab:rep166\].
3. Assume a closed band of the symmetry in one of the two rows of this table to exist in the band structure of Bi–I. Then the Bloch functions of this band can be unitarily transformed into Wannier functions that are
- localized as well as possible;
- centered at the Bi atoms; and
- symmetry-adapted to the space group $R\overline{3}m$ (166) [@theoriewf].
The entry “OK” below the time-inversion operator $K$ indicates that the Wannier functions may even be chosen symmetry-adapted to the magnetic group $$M = R\overline{3}m + K\cdot R\overline{3}m,$$ see Theorem 7 of Ref. [@theoriewf].\
However, a closed band (Definition 2 of Ref. [@theoriewf]) with the symmetry of band 1 or band 2 does not exist in the band structure of Bi–I (see Fig. \[fig:bs\_166\]).
4. The bands are determined following Theorem 5 of Ref. [@theoriewf].
5. The Wannier functions at the Bi atoms listed in the upper row belong to the representation $\bm{d}_i$ of $C_{3v}$ included below the atom. These representations are defined in Table \[tab:repBi\].
6. Each row defines one band consisting of two branches, because there are two Bi atoms in the unit cell.
Bi($zzz$) Bi($\overline{z}\overline{z}\overline{z}$) $K$ $\Gamma$ $Z$ $L$ $F$
-------- ----------- -------------------------------------------- ----- ----------------------------- ------------------- --------------------------------------- ---------------------------------------
Band 1 $\bm{d}$ $\bm{d}$ OK $\Gamma^+_4$ + $\Gamma^-_4$ $Z^+_4$ + $Z^-_4$ $L^+_3$ + $L^+_4$ + $L^-_3$ + $L^-_4$ $F^+_3$ + $F^+_4$ + $F^-_3$ + $F^-_4$
: Double-valued representations of the superconducting band in the space group $R\overline{3}m$ of Bi–I. \[tab:wf166Z\]
\
Notes to Table \[tab:wf166Z\]
1. $z = 0.23\ldots$ [@degty]; the exact value of $z$ is meaningless in this table. In the hexagonal unit cell, the Bi atoms lie at the Wyckoff positions $6c (00\pm z)$ [@degty]. In the trigonal system, their positions in the unit cell are $\bm{\rho} =
\pm (z\bm{T}_1 + z\bm{T}_2 + z\bm{T}_3)$, where the vectors $\bm{T}_1, \bm{T}_2,$ and $\bm{T}_3$ denote the basic vectors of the trigonal lattice as given, e.g., in Table 3.1 of Ref. [@bc].
2. Assume an isolated band of the symmetry listed in this table to exist in the band structure of Bi–I. Then the Bloch functions of this band can be unitarily transformed into spin-dependent Wannier functions that are
- localized as well as possible;
- centered at the Bi atoms; and
- symmetry-adapted to the space group $R\overline{3}m$ (166) [@theoriewf].
The entry “OK” below the time-inversion operator $K$ indicates that the spin-dependent Wannier functions may even be chosen symmetry-adapted to the magnetic group $$M = R\overline{3}m + K\cdot R\overline{3}m,$$ see Theorem 10 of Ref. [@theoriewf]. Hence, the listed band forms a superconducting band, see Definition 22 of Ref. [@theoriewf].
3. The listed band is the only superconducting band of Bi–I.
4. The notations of the double-valued representations are (indirectly) defined by Table \[tab:falten166\].
5. Following Theorem 9 of Ref. [@theoriewf], the superconducting band is simply determined from one of the two single-valued bands listed in Table \[tab:wf166\] by means of Equation (97) of Ref. [@theoriewf]. (According to Definition 20 of Ref. [@theoriewf], both single-valued bands in Table \[tab:wf166\] are affiliated bands of the superconducting band.)
6. The superconducting band consists of two branches, because there are two Bi atoms in the unit cell.
7. The point group of the positions of the Bi atoms (Definitions 11 and 12 of Ref. [@theoriewf]) is the group $C_{3v}$. The Wannier functions at the Bi atoms belong to the double-valued representation $$\label{eq:2}
\bm{d} = \bm{d}_1 \otimes \bm{d}_{1/2} = \bm{d}_2 \otimes \bm{d}_{1/2}$$ of $C_{3v}$ where $\bm{d}_1$ and $\bm{d}_2$ are defined in Table \[tab:repBi\] and $\bm{d}_{1/2}$ denotes the two-dimensional double-valued representation of $O(3)$ as given, e.g., in Table 6.1 of Ref. [@bc]. Note that the two representations $\bm{d}_1
\otimes \bm{d}_{1/2}$ and $\bm{d}_2 \otimes \bm{d}_{1/2}$ are equivalent.
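The equivalence stated in Eq. (2) can be checked on the level of characters: since the spin-1/2 representation has $\chi(\sigma_d) = 0$, the products $\bm{d}_1 \otimes \bm{d}_{1/2}$ and $\bm{d}_2 \otimes \bm{d}_{1/2}$ have identical characters on every class of $C_{3v}$. A minimal sketch (the $\bm{d}_{1/2}$ characters are the standard spin-1/2 values $\chi(\varphi) = 2\cos(\varphi/2)$):

```python
# Characters on the classes (E, 2 C3, 3 sigma_d) of C_3v.
chi_d1  = [1, 1, 1]    # d1 from Table tab:repBi
chi_d2  = [1, 1, -1]   # d2 from Table tab:repBi
# Spin-1/2 characters chi(phi) = 2*cos(phi/2):
# E: 2*cos(0) = 2;  C3: 2*cos(pi/3) = 1;  sigma_d (acts like C2 on spinors): 0.
chi_d12 = [2, 1, 0]

prod_1 = [a * b for a, b in zip(chi_d1, chi_d12)]   # characters of d1 (x) d1/2
prod_2 = [a * b for a, b in zip(chi_d2, chi_d12)]   # characters of d2 (x) d1/2

# Identical characters on every class => the two product representations
# are equivalent, as stated in Eq. (2).
assert prod_1 == prod_2 == [2, 1, 0]
print("character of d =", prod_1)
```

The vanishing reflection character of $\bm{d}_{1/2}$ is what erases the sign by which $\bm{d}_1$ and $\bm{d}_2$ differ.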
Group-theoretical tables for the cubic space group $Im3m$ (229) of Bi–V
=======================================================================
[ccccccccccc]{}\
& $E$ & $I$ & $\sigma_m$ & $C_{2m}$ & $C^{\pm}_{3j}$ & $S^{\pm}_{6j}$ & $C^{\pm}_{4m}$ & $S^{\pm}_{4m}$ & $C_{2p}$ & $\sigma_{dp}$\
$\Gamma^+_1$, $H^+_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\
$\Gamma^+_2$, $H^+_2$ & 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\
$\Gamma^-_2$, $H^-_2$ & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & -1 & 1\
$\Gamma^-_1$, $H^-_1$ & 1 & -1 & -1 & 1 & 1 & -1 & 1 & -1 & 1 & -1\
$\Gamma^+_3$, $H^+_3$ & 2 & 2 & 2 & 2 & -1 & -1 & 0 & 0 & 0 & 0\
$\Gamma^-_3$, $H^-_3$ & 2 & -2 & -2 & 2 & -1 & 1 & 0 & 0 & 0 & 0\
$\Gamma^+_4$, $H^+_4$ & 3 & 3 & -1 & -1 & 0 & 0 & 1 & 1 & -1 & -1\
$\Gamma^+_5$, $H^+_5$ & 3 & 3 & -1 & -1 & 0 & 0 & -1 & -1 & 1 & 1\
$\Gamma^-_4$, $H^-_4$ & 3 & -3 & 1 & -1 & 0 & 0 & 1 & -1 & -1 & 1\
$\Gamma^-_5$, $H^-_5$ & 3 & -3 & 1 & -1 & 0 & 0 & -1 & 1 & 1 & -1\
\
[cccccc]{}\
& $E$ & $C_{2m}$ & $S^{\pm}_{4m}$ & $\sigma_{dp}$ & $C^{\pm}_{3j}$\
$P_1$ & 1 & 1 & 1 & 1 & 1\
$P_2$ & 1 & 1 & -1 & -1 & 1\
$P_3$ & 2 & 2 & 0 & 0 & -1\
$P_4$ & 3 & -1 & 1 & -1 & 0\
$P_5$ & 3 & -1 & -1 & 1 & 0\
\
[ccccccccc]{}\
& $E$ & $C_{2z}$ & $C_{2b}$ & $C_{2a}$ & $I$ & $\sigma_z$ & $\sigma_{db}$ & $\sigma_{da}$\
$N^+_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\
$N^+_2$ & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1\
$N^+_3$ & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1\
$N^+_4$ & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1\
$N^-_1$ & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\
$N^-_2$ & 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1\
$N^-_3$ & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1\
$N^-_4$ & 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1\
\
\
Notes to Table \[tab:rep229\]
1. $m = x, y, z;\quad p = a, b, c, d, e, f;\quad j = 1, 2, 3, 4.$
2. The symmetry elements are labeled in the Schönflies notation as illustrated, e.g., in Table 1.2 of Ref. [@bc].
3. The character tables are determined from Table 5.7 of Ref. [@bc].
4. The notations of the points of symmetry follow Fig. 3.15 of Ref. [@bc].
[cccccccccc]{}\
$R^+_1$ & $R^+_2$ & $R^-_2$ & $R^-_1$ & $R^+_3$ & $R^-_3$ & $R^+_4$ & $R^+_5$ & $R^-_4$ & $R^-_5$\
$R^+_6$ & $R^+_7$ & $R^-_7$ & $R^-_6$ & $R^+_8$ & $R^-_8$ & $R^+_6$ + $R^+_8$ & $R^+_7$ + $R^+_8$ & $R^-_6$ + $R^-_8$ & $R^-_7$ + $R^-_8$\
\
[ccccc]{}\
$P_1$ & $P_2$ & $P_3$ & $P_4$ & $P_5$\
$P_6$ & $P_7$ & $P_8$ & $P_6$ + $P_8$ & $P_7$ + $P_8$\
\
[cccccccc]{}\
$N^+_1$ & $N^+_2$ & $N^+_3$ & $N^+_4$ & $N^-_1$ & $N^-_2$ & $N^-_3$ & $N^-_4$\
$N^+_5$ & $N^+_5$ & $N^+_5$ & $N^+_5$ & $N^-_5$ & $N^-_5$ & $N^-_5$ & $N^-_5$\
\
\
Notes to Table \[tab:falten229\]
1. In the table for $\Gamma$ and $H$, the letter $R$ stands for the letter denoting the point of symmetry. For example, at point $H$ the representations $R^+_1, R^+_2, \ldots$ stand for $H^+_1, H^+_2, \ldots$ .
2. Each column lists the double-valued representation $R_i\times
{\bm d}_{1/2}$ below the single-valued representation $R_i$, where ${\bm d}_{1/2}$ denotes the two-dimensional double-valued representation of the three-dimensional rotation group $O(3)$ given, e.g., in Table 6.1 of Ref. [@bc].
3. The single-valued representations are defined in Table \[tab:rep229\].
4. The notations of double-valued representations follow strictly Table 6.13 (and Table 6.14) of Ref. [@bc]. In this paper the double-valued representations are not explicitly given but are sufficiently defined by this table.
Bi($000$) $K$ $\Gamma$ $H$ $P$ $N$
-------- -------------- ----- -------------- --------- ------- ---------
Band 1 $\Gamma^+_1$ OK $\Gamma^+_1$ $H^+_1$ $P_1$ $N^+_1$
Band 2 $\Gamma^+_2$ OK $\Gamma^+_2$ $H^+_2$ $P_2$ $N^+_3$
Band 3 $\Gamma^-_2$ OK $\Gamma^-_2$ $H^-_2$ $P_1$ $N^-_3$
Band 4 $\Gamma^-_1$ OK $\Gamma^-_1$ $H^-_1$ $P_2$ $N^-_1$
: Single-valued representations of the space group $Im3m$ of all the energy bands of Bi–V with symmetry-adapted and optimally localized usual (i.e., spin-independent) Wannier functions centered at the Bi atoms. \[tab:wf229\]
\
Notes to Table \[tab:wf229\]
1. The notations of the representations are defined in Table \[tab:rep229\].
2. Assume a closed band of the symmetry in any row of this table to exist in the band structure of Bi–V. Then the Bloch functions of this band can be unitarily transformed into Wannier functions that are
- localized as well as possible;
- centered at the Bi atoms; and
- symmetry-adapted to the space group $Im3m$ (229) [@theoriewf].
The entry “OK” below the time-inversion operator $K$ indicates that the Wannier functions may even be chosen symmetry-adapted to the magnetic group $$M = Im3m + K\cdot Im3m,$$ see Theorem 7 of Ref. [@theoriewf].\
However, a closed band (Definition 2 of Ref. [@theoriewf]) with the symmetry of the bands in this table does not exist in the band structure of Bi–V (see Fig. \[fig:bs\_229\]).
3. The bands are determined following Theorem 5 of Ref. [@theoriewf].
4. The point group of the positions of the Bi atoms (Definitions 11 and 12 of Ref. [@theoriewf]) is the full cubic point group $O_h$. The Wannier functions at the Bi atoms belong to the representations of $O_h$ listed in the second column. These representations are defined in Table \[tab:rep229\].
Bi($000$) $K$ $\Gamma$ $H$ $P$ $N$
-------- ------------------------------------------------- ----- -------------- --------- ------- ---------
Band 1 $\Gamma^+_1 \otimes {\bm d}_{1/2} = \Gamma^+_6$ OK $\Gamma^+_6$ $H^+_6$ $P_6$ $N^+_5$
Band 2 $\Gamma^+_2 \otimes {\bm d}_{1/2} = \Gamma^+_7$ OK $\Gamma^+_7$ $H^+_7$ $P_7$ $N^+_5$
Band 3 $\Gamma^-_2 \otimes {\bm d}_{1/2} = \Gamma^-_7$ OK $\Gamma^-_7$ $H^-_7$ $P_6$ $N^-_5$
Band 4 $\Gamma^-_1 \otimes {\bm d}_{1/2} = \Gamma^-_6$ OK $\Gamma^-_6$ $H^-_6$ $P_7$ $N^-_5$
: Double-valued representations of the space group $Im3m$ of all the energy bands of Bi–V with symmetry-adapted and optimally localized spin-dependent Wannier functions centered at the Bi atoms. \[tab:wf229Z\]
\
Notes to Table \[tab:wf229Z\]
1. Assume an isolated band of the symmetry listed in any row of this table to exist in the band structure of Bi–V. Then the Bloch functions of this band can be unitarily transformed into spin-dependent Wannier functions that are
- localized as well as possible;
- centered at the Bi atoms; and
- symmetry-adapted to the space group $Im3m$ (229) [@theoriewf].
The entry “OK” below the time-inversion operator $K$ indicates that the spin-dependent Wannier functions may even be chosen symmetry-adapted to the magnetic group $$M = Im3m + K\cdot Im3m,$$ see Theorem 10 of Ref. [@theoriewf]. Hence, all the listed bands form superconducting bands, see Definition 22 of Ref. [@theoriewf].
2. The notations of the double-valued representations are (indirectly) defined in Table \[tab:falten229\].
3. Following Theorem 9 of Ref. [@theoriewf], the superconducting bands are simply determined from the single-valued bands listed in Table \[tab:wf229\] by means of Equation (97) of Ref. [@theoriewf]. (According to Definition 20 of Ref. [@theoriewf], each single-valued band in Table \[tab:wf229\] is an affiliated band of one of the superconducting bands.)
4. The superconducting bands consist of one branch each, because there is one Bi atom in the unit cell.
5. The point group of the positions of the Bi atoms (Definitions 11 and 12 of Ref. [@theoriewf]) is the full cubic point group $O_h$. The Wannier functions at the Bi atoms belong to the double-valued representations of $O_h$ listed in the second column, where the single-valued representations $\Gamma^{\pm}_1$ and $\Gamma^{\pm}_2$ are defined by Table \[tab:rep229\], and $\bm{d}_{1/2}$ denotes the two-dimensional double-valued representation of $O(3)$ as given, e.g., in Table 6.1 of Ref. [@bc].
Degtyareva, O.; McMahon, M.; Nelmes, R. High-pressure structural studies of group-15 elements. , [*24*]{}, 319–356.
Donohue, J. ; Robert E. Krieger Publishing Company, Florida, 1982.
Prakash, O.; Kumar, A.; Thamizhavel, A.; Ramakrishnan, S. Evidence for bulk superconductivity in pure bismuth single crystals at ambient pressure. , [*355*]{}, 52–55.
Li, Y.; Wang, E.; Zhu, X.; Wen, H.H. Pressure-induced superconductivity in Bi single crystals. , [*95*]{}, 024510.
Kr[ü]{}ger, E.; Strunk, H.P. Group Theory of Wannier Functions Providing the Basis for a Deeper Understanding of Magnetism and Superconductivity. , [*7*]{}, 561–598.
Kr[ü]{}ger, E. Nonadiabatic extension of the Heisenberg model. , [*63*]{}, 144403–1–13.
Kr[ü]{}ger, E. Superconducting Bands Stabilizing Superconductivity in YBa2Cu3O7 and MgB2. , [*23*]{}, 213–223.
Kr[ü]{}ger, E. Modified BCS Mechanism of Cooper Pair Formation in Narrow Energy Bands of Special Symmetry I. Band Structure of Niobium. , [*14*]{}, 469–489. Please note that in this paper the term “superconducting band” was abbreviated by “$\sigma$ band”.
Blum, V.; Gehrke, R.; Hanke, F.; Havu, P.; Havu, V.; Ren, X.; Reuter, K.; Scheffler, M. Ab initio molecular simulations with numeric atom-centered orbitals. , [*180*]{}, 2175 – 2196.
Havu, V.; Blum, V.; Havu, P.; Scheffler, M. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions. , [*228*]{}, 8367 – 8379.
Bradley, C.; Cracknell, A.P. ; Clarendon, Oxford, 1972.
Kr[ü]{}ger, E. Superconductivity Originating from Quasi-Orbital Electrons II. The Superconducting Ground State of Quasi-Orbital Conduction Electrons. , [*85*]{}, 493–503.
Mott, N.F. On the transition to metallic conduction in semiconductors. , [*34*]{}, 1356 – 1368.
Hubbard, J. Electron correlations in narrow energy bands. , [*276*]{}, 238–257.
Bardeen, J.; Cooper, L.N.; Schrieffer, J.R. Theory of superconductivity. , [*108*]{}, 1175.
Kr[ü]{}ger, E. One- and Two-Dimensional Sublattices as Preconditions for High–Tc Superconductivity. , [*156*]{}, 345–354.
---
abstract: 'This contribution summarizes the main experimental results presented at the 2009 Quark Matter conference concerning single and dilepton production in proton and heavy ion collisions at high energy. The dilepton invariant mass spectrum has been measured over a range that extends from the $\pi^0$ mass to the $\Upsilon$ mass, and for various collision energies at SPS, Fermilab, Hera and RHIC. This paper focuses on the various contributions (photons, low mass vector mesons, open and hidden heavy flavors) to this spectrum and discuss their implications on our understanding of the matter formed in heavy ion collisions.'
address: 'IRFU/SPhN, CEA Saclay, F-91191, Gif-sur-Yvette, France'
author:
- Hugo Pereira Da Costa
title: 'Early times and thermalization in heavy ion collisions: a summary of experimental results for photons, light vector mesons, open and hidden heavy flavors'
---
Introduction
============
Single and dilepton probes in heavy ion collisions are of particular interest since such probes, once produced, are largely unaffected by the surrounding QCD medium. They carry valuable information on the particle from which they originate and allow one to assess the properties of the medium formed in the early instants of the collision. The following contributions to the dilepton invariant mass spectrum are discussed here, together with what one might learn from their measurement about the properties of the medium formed in the collision:
- Low mass dileptons originating from vector meson leptonic decay ($\rho$, $\phi$ and $\omega$) provide insight on the properties of these mesons in the high temperature expanding fireball produced immediately after the collision, where chiral symmetry may be (at least partially) restored [@Pisarki; @Brown; @Rapp];
- A significant fraction of the virtual and direct photons produced at low ${{p_{\rm T}}}$ (${{p_{\rm T}}}<1$ GeV/c) in heavy ion collisions originates from the thermal black-body radiation of the created fireball [@Stankus; @Turbide]. Measuring these photons therefore allows one to quantify the temperature of the fireball;
- Open heavy flavors, because of their high mass, allow one to study in-medium energy loss mechanisms in addition to what can be learned from light quarks [@Baier; @Gyulassy];
- Heavy quarkonia are of interest because of additional mechanisms that are predicted to occur in the presence of a QGP and that would affect the production of these bound states [@Matsui; @Andronic; @Thews].
Low mass vector mesons
======================
Fig. \[low\_mass\_vector\_mesons\] (left) shows the correlated dimuon invariant mass distribution at the $\rho$ vacuum mass, measured by the NA60 experiment in semi-central In+In collisions [@NA60_rho]. The $\rho$ mass peak differs significantly from the expected vacuum $\rho$ and can be reasonably well described on the low mass side by the model presented in [@Rapp; @NA60_rapp]. This model includes a detailed description of the baryonic matter created in the collision below the formation temperature of a QGP, $T_c$. Interactions with this baryonic matter are responsible for a broadening of the $\rho$ (but no modification of its mass) when approaching chiral symmetry restoration near $T_{c}$.
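The in-medium scenario described above — broadening without a mass shift — can be illustrated schematically with non-relativistic Breit-Wigner line shapes; the in-medium width used below is purely illustrative, not a value extracted from the NA60 data:

```python
import numpy as np

# Schematic comparison of the vacuum rho with an in-medium rho that is
# broadened but not mass-shifted, as in the Rapp-Wambach-type scenario.
m = np.linspace(0.4, 1.1, 701)           # dimuon invariant mass grid, GeV/c^2
m_rho = 0.775                            # vacuum rho mass, GeV/c^2

def breit_wigner(m, m0, gamma):
    """Unnormalized (non-relativistic) Breit-Wigner of full width gamma at m0."""
    return (gamma / 2) ** 2 / ((m - m0) ** 2 + (gamma / 2) ** 2)

vacuum = breit_wigner(m, m_rho, 0.150)   # vacuum width ~150 MeV
medium = breit_wigner(m, m_rho, 0.400)   # illustrative in-medium broadening

# Same peak position; a lower, wider peak with lifted tails in the medium.
assert m[np.argmax(vacuum)] == m[np.argmax(medium)]
low_mass_bin = np.abs(m - 0.5).argmin()
assert medium[low_mass_bin] > vacuum[low_mass_bin]
```

The lifted low-mass tail of the broadened shape is exactly where the NA60 excess over the vacuum $\rho$ appears.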
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![\[low\_mass\_vector\_mesons\] Left: dimuon invariant mass distribution in the $\rho$ mass region in In+In semi-central collisions measured by NA60 at SPS. Right: low mass dielectron invariant mass distribution in Au+Au collisions measured by PHENIX at RHIC.](Fig1a.png "fig:"){height="6cm"} ![\[low\_mass\_vector\_mesons\] Left: dimuon invariant mass distribution in the $\rho$ mass region in In+In semi-central collisions measured by NA60 at SPS. Right: low mass dielectron invariant mass distribution in Au+Au collisions measured by PHENIX at RHIC.](Fig1b.png "fig:"){height="6cm"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
A measurement of dilepton invariant mass distributions in the same mass region has been carried out by the PHENIX collaboration at RHIC in Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV [@low_mass_aa_rhic]. An excess over expected background sources is observed between $0.1$ and $0.6$ GeV/c$^2$ which cannot be described by models similar to the one above [@NA60_rapp], although such models work reasonably well for larger masses (Fig. \[low\_mass\_vector\_mesons\], right). This low mass excess is larger for low ${{p_{\rm T}}}$ dileptons. A possible contribution to this excess, which has not been accounted for in the calculations above, might come from quark-gluon scattering into a quark and a virtual photon ($qg\rightarrow q\gamma^*$). A similar calculation valid for the direct photon production yields at RHIC has been carried out in [@Direct_photon_rapp], which accounts for $q+g$ scattering using a complete leading-order QGP emission rate [@Direct_photon_arnold]. The predicted integrated magnitude of this contribution is about one third of the hadron gas thermal radiation contribution. Applying this to the virtual photon case might explain part of the excess observed at RHIC, but a detailed calculation is still to be carried out.
Direct photons
==============
Direct photon production yields (as a function of ${{p_{\rm T}}}$) can be derived from the dilepton invariant mass spectrum using the following steps [@Akiba]: 1) consider the excess of dileptons over expected hadronic sources in the kinematic range $m\in[0.1,0.3]$ GeV/c$^2$ and ${{p_{\rm T}}}>1$ GeV/c, where the contribution of low mass vector mesons should be negligible (Fig. \[direct\_photons\], left and center panels), 2) interpret this excess as a direct virtual photon signal (with photons decaying into dielectrons) and 3) extrapolate this signal to an invariant mass $m=0$ to get the corresponding real photon production yield.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Left and center: dilepton invariant mass distribution as a function of mass for different ${{p_{\rm T}}}$ bins, in ${{p+p}}$ collisions (left) and Au+Au minimum bias collisions (center). Data are compared to expected background sources to derive a possible virtual photon excess. Right: calculated direct photon yield as a function of ${{p_{\rm T}}}$ in different centrality bins, compared to binary scaled ${{p+p}}$ yields.[]{data-label="direct_photons"}](Fig2a.png "fig:"){height="5.2cm"} ![Left and center: dilepton invariant mass distribution as a function of mass for different ${{p_{\rm T}}}$ bins, in ${{p+p}}$ collisions (left) and Au+Au minimum bias collisions (center). Data are compared to expected background sources to derive a possible virtual photon excess. Right: calculated direct photon yield as a function of ${{p_{\rm T}}}$ in different centrality bins, compared to binary scaled ${{p+p}}$ yields.[]{data-label="direct_photons"}](Fig2b.png "fig:"){height="5.2cm"}
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The resulting yields (as a function of ${{p_{\rm T}}}$) are compared to yields obtained in ${{p+p}}$ collisions scaled by ${{N_{\rm col}}}$, the number of nucleon-nucleon collisions equivalent to one ${{\rm{A}+\rm{A}}}$ collision in a given centrality bin, and the difference is fitted to extract a time averaged (over the medium expansion history) [*black body*]{} radiation temperature (Fig. \[direct\_photons\], right). For central Au+Au collisions at RHIC energy, a temperature of 221$\pm$23 MeV is obtained [@Direct_photon_phenix]. These thermal photon yields can also be compared to various theoretical models in order to derive a medium [*initial*]{} temperature, by making assumptions on how this medium expands and cools down over time [@Direct_photon_enterria]. Depending on how long it takes for the system to thermalize, an initial temperature between $300$ and $600$ MeV is obtained. As one might expect, later thermalization times lead to smaller initial temperatures. Similar fits applied to the ${{\rm Pb}+ {\rm Pb}}$ WA98 direct photon measurements [@Direct_photon_wa98] give an initial temperature of about $200$ MeV [@Direct_photon_wa98_theo].
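The time-averaged temperature quoted above is the inverse-slope parameter of an exponential fit, $dN/d{p_{\rm T}} \propto \exp(-{p_{\rm T}}/T)$, to the thermal excess (A+A yield minus the $N_{\rm col}$-scaled $p+p$ yield). The sketch below illustrates the extraction on synthetic data; the spectrum, normalization, and scatter are invented for illustration and are not the PHENIX measurement:

```python
import numpy as np

# Synthetic "thermal excess" spectrum with an inverse slope of 0.221 GeV,
# standing in for the A+A minus N_col-scaled p+p direct-photon yield.
rng = np.random.default_rng(0)
pt = np.linspace(1.0, 3.0, 9)                        # GeV/c
true_T = 0.221                                       # GeV (221 MeV, as in the text)
yields = 50.0 * np.exp(-pt / true_T)
yields *= rng.normal(1.0, 0.05, pt.size)             # 5% point-to-point scatter

# log(dN/dpT) = log(norm) - pT / T, so a straight-line fit in log space
# gives slope = -1/T.
slope, intercept = np.polyfit(pt, np.log(yields), 1)
T_fit = -1.0 / slope
print(f"fitted inverse-slope temperature: {1000 * T_fit:.0f} MeV")
assert abs(T_fit - true_T) < 0.02
```

In the actual analysis the fit is performed with full statistical and systematic errors, but the slope-to-temperature logic is the same.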
Open heavy flavor
=================
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Left: total heavy-flavor production cross-section as a function of ${{N_{\rm col}}}$ measured by PHENIX and STAR at RHIC in ${{p+p}}$, ${{d+{\rm{A}}}}$ and ${{\rm{A}+\rm{A}}}$ collisions. Right: ratio between the heavy-flavor differential production cross-section as a function of decay electron ${{p_{\rm T}}}$ measured by PHENIX and STAR in ${{p+p}}$ collisions and a FONLL calculation.[]{data-label="heavy_flavor"}](Fig3a.png "fig:"){height="4.5cm"} ![Left: total heavy-flavor production cross-section as a function of ${{N_{\rm col}}}$ measured by PHENIX and STAR at RHIC in ${{p+p}}$, ${{d+{\rm{A}}}}$ and ${{\rm{A}+\rm{A}}}$ collisions. Right: ratio between the heavy-flavor differential production cross-section as a function of decay electron ${{p_{\rm T}}}$ measured by PHENIX and STAR in ${{p+p}}$ collisions and a FONLL calculation.[]{data-label="heavy_flavor"}](Fig3b.png "fig:"){height="4.5cm"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
There is still a disagreement of about a factor two between the STAR and PHENIX heavy flavor (charm and beauty) total cross-section measurements in ${{p+p}}$, ${{d+{\rm{A}}}}$ and ${{\rm{A}+\rm{A}}}$ collisions at $\sqrt{s}=200$ GeV [@heavy_flavor_star; @heavy_flavor_raa_phenix], as well as between the open charm differential cross-sections as a function of ${{p_{\rm T}}}$ [@heavy_flavor_star] (Fig. \[heavy\_flavor\]). The main differences between the two experiments are 1) the amount of material in the detector acceptance and 2) the rapidity and ${{p_{\rm T}}}$ range of the measured electrons used for heavy flavor identification. Efforts are underway in both collaborations to better understand existing measurements and provide new independent measurements in order to address this discrepancy:
- The PHENIX collaboration is working on refining its understanding of the electron cocktail which is subtracted from the raw single electron spectrum to derive the heavy-flavor signal, and now accounts for the contribution of electrons coming from ${{\rm{J}/\psi}}$, $\Upsilon$ and Drell-Yan [@Dion]. PHENIX also measured the total D+B production cross-section in a largely independent way by estimating all the contributions to the dielectron invariant mass spectrum (as opposed to the single electron spectrum) using data-driven simulations [@low_mass_pp_rhic]. Finally PHENIX reported on a first study of electron-muon correlations to measure $D\overline{D}$ production in a way that is largely free of background [@Tatia];
- The STAR collaboration has removed its central silicon detector in order to reduce the amount of material in the spectrometer and the corresponding photo-conversion background contribution to the raw single electron spectrum. It also measured the production of low ${{p_{\rm T}}}$ $D$ mesons using their decay into a $K,\pi$ pair, and using single muons [@D_star; @D_star_mu].
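The cocktail-subtraction procedure mentioned in the first item can be sketched numerically; the spectra and component yields below are illustrative placeholders, not actual PHENIX data:

```python
import numpy as np

def heavy_flavor_electrons(raw, cocktail):
    """Heavy-flavor electron spectrum obtained by subtracting all known
    background electron sources (the "cocktail": photon conversions, Dalitz
    decays, and, in the refined PHENIX cocktail, J/psi, Upsilon and
    Drell-Yan) from the raw inclusive electron spectrum."""
    return raw - np.sum(cocktail, axis=0)

# Illustrative yields in three pT bins (arbitrary units):
raw = np.array([100.0, 40.0, 15.0])
cocktail = np.array([[60.0, 20.0, 5.0],   # photonic background
                     [10.0,  5.0, 2.0]])  # quarkonia + Drell-Yan
signal = heavy_flavor_electrons(raw, cocktail)  # -> [30., 15., 8.]
```

The heavy-flavor signal is thus only as reliable as the cocktail model, which is why refining the cocktail and finding cocktail-free observables (dielectrons, electron-muon correlations) are complementary efforts.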
In ${{\rm{A}+\rm{A}}}$ collisions, the measurement of the heavy flavor nuclear modification factor ${{R_{\rm AA}}}$ agrees between the two collaborations [@heavy_flavor_star; @heavy_flavor_raa_phenix]. The heavy flavor production at high ${{p_{\rm T}}}$ (${{p_{\rm T}}}>3$ GeV/c) exhibits a large suppression with respect to binary scaled cross-sections in ${{p+p}}$ (Fig. \[heavy\_flavor\_aa\], top-left). This indicates that high ${{p_{\rm T}}}$ heavy quarks lose a significant fraction of their energy when traversing the medium created during the collision, and poses a challenge to theoretical models, since heavy quarks, due to their high mass, are expected to lose less energy (via gluon radiation) than light quarks [@Kharzeev]. Additionally, a large elliptic flow $v_2$ is observed for intermediate ${{p_{\rm T}}}$ heavy quarks ($1<{{p_{\rm T}}}<3$ GeV/c) in ${{\rm Au}+{\rm Au}}$ minimum bias collisions (Fig. \[heavy\_flavor\_aa\], bottom-left), indicating that intermediate ${{p_{\rm T}}}$ heavy quarks are rapidly thermalized. These two observations are interpreted as evidence for a strong coupling between the heavy quarks and the medium produced during the collision. No consensus amongst theorists has been achieved to date concerning the underlying mechanism responsible for this strong coupling (see e.g. [@Armesto; @VanHees; @Moore]).
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Left: Heavy-flavor electron ${{R_{\rm AA}}}$ and elliptic flow measured by PHENIX in Au+Au collisions at RHIC; right: B/D+B production ratio as a function of ${{p_{\rm T}}}$ in ${{p+p}}$ collisions measured by STAR, compared to FONLL calculations.[]{data-label="heavy_flavor_aa"}](Fig4a.png "fig:"){height="5.0cm"} ![Left: Heavy-flavor electron ${{R_{\rm AA}}}$ and elliptic flow measured by PHENIX in Au+Au collisions at RHIC; right: B/D+B production ratio as a function of ${{p_{\rm T}}}$ in ${{p+p}}$ collisions measured by STAR, compared to FONLL calculations.[]{data-label="heavy_flavor_aa"}](Fig4b.png "fig:"){height="4.2cm"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Current single lepton measurements do not allow for a separation of charm and beauty in a model independent way. However, separate measurements have been performed to determine the relative contribution of charm and beauty to total heavy flavor yields. These are either direct measurements (using the hadronic decay of D mesons), or indirect measurements (e.g. by studying the correlation of opposite sign electron-hadron pairs in the final state to separate the contributions of D and B semi-leptonic decays). In ${{p+p}}$ collisions, the resulting B/(B+D) ratios agree well between STAR and PHENIX [@Dunlop]. They are consistent with a Fixed Order Next to Leading Log (FONLL) calculation [@Cacciary] (Fig. \[heavy\_flavor\_aa\], right).
Measuring the total heavy flavor ${{R_{\rm AA}}}$ and the B/(B+D) ratio in ${{p+p}}$ collisions allows one to uniquely relate the ${{R_{\rm AA}}}$ of B and D mesons: smaller values of ${{R_{\rm AA}^{\rm D}}}$ bring ${{R_{\rm AA}^{\rm B}}}$ closer to unity. The (negative) slope of the relation between the two is driven by the D/B ratio measured in ${{p+p}}$ collisions whereas its magnitude is controlled by the total heavy flavor ${{R_{\rm AA}}}$. The main conclusion of such an analysis [@Dunlop] is that even in the unlikely case where high ${{p_{\rm T}}}$ charm quarks are entirely suppressed in ${{\rm{A}+\rm{A}}}$ collisions, a significant suppression of high ${{p_{\rm T}}}$ $b$ quarks is still needed to explain the total heavy flavor ${{R_{\rm AA}}}$ measured at RHIC. This poses an even greater challenge to theoretical models than the charm ${{R_{\rm AA}}}$, since $b$ quarks are significantly heavier than $c$ quarks.
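The relation described above follows from writing the total heavy-flavor ${{R_{\rm AA}}}$ as a B/(B+D)-weighted average of the charm and beauty suppression factors. A minimal sketch, with illustrative values of $f_b = B/(B+D)$ and of the total ${{R_{\rm AA}}}$ (not the measured ones):

```python
def raa_b(raa_hf, raa_d, f_b):
    """R_AA of beauty-decay electrons implied by the total heavy-flavor
    R_AA and the b-electron fraction f_b = B/(B+D) measured in p+p,
    from R_AA^HF = f_b * R_AA^B + (1 - f_b) * R_AA^D.
    The dependence on R_AA^D is linear, with negative slope -(1-f_b)/f_b."""
    return (raa_hf - (1.0 - f_b) * raa_d) / f_b

# Extreme case quoted in the text: charm fully suppressed (R_AA^D = 0).
# With illustrative values R_AA^HF = 0.3 and f_b = 0.5:
print(raa_b(0.3, 0.0, 0.5))  # b quarks must still be suppressed (R_AA^B < 1)
```

Smaller $R_{\rm AA}^{\rm D}$ indeed pushes $R_{\rm AA}^{\rm B}$ upward (toward unity), and the magnitude of the whole line is set by the total heavy-flavor $R_{\rm AA}$, as stated in the text.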
More information will be gained on this matter by measuring charm and beauty separately in ${{\rm{A}+\rm{A}}}$ collisions. Both STAR and PHENIX are undergoing silicon vertex detector upgrades for the central tracking that should allow direct measurement of D and B mesons.
Heavy Quarkonia
===============
Heavy quarkonia have been studied extensively at the SPS and at RHIC since they are predicted to melt, via QCD Debye screening, in the presence of a Quark-Gluon Plasma [@Matsui]. Recently, focus has been given to understanding both the heavy quarkonia production mechanism in ${{p+p}}$ collisions and the cold nuclear matter effects which affect the production of heavy quarkonia when colliding two nuclei without the formation of a QGP.
Heavy quarkonia production yields in ${{p+p}}$ collisions serve as a reference to study medium effects in ${{p+{\rm{A}}}}$, ${{d+{\rm{A}}}}$ and ${{\rm{A}+\rm{A}}}$ collisions and help in understanding how these bound states are produced. Fig. \[jpsi\_pp\_rhic\] (left) shows the ${{\rm{J}/\psi}}$ production invariant yields as a function of rapidity measured in ${{p+p}}$ collisions at RHIC by PHENIX using the 2006 high statistics ${{p+p}}$ data sample [@Cesar]. These yields can be compared to calculations that assume different underlying production mechanisms; however, both statistical and systematic uncertainties are still too large to uniquely identify the correct mechanism at play. Another way to address the production mechanism is to measure the ${{\rm{J}/\psi}}$ polarization, since models have very different predictions for this observable. Fig. \[jpsi\_pp\_rhic\] (right) shows the ${{\rm{J}/\psi}}$ polarization measured in the helicity frame by PHENIX in ${{p+p}}$ collisions at mid and forward rapidity [@Cesar]. The model shown on the figure (a refined version of the Color Singlet Model [@lansberg]) reproduces the data at mid-rapidity reasonably well but misses the measurement at forward rapidity. Similarly, all available measurements on ${{\rm{J}/\psi}}$ polarization have been collected, [*rotated*]{} so that they are all evaluated in the same reference frame (here the Collins-Soper frame [@collins]) and represented as a function of the ${{\rm{J}/\psi}}$ momentum. A global trend is observed that is largely independent of the collision energy but lacks a theoretical explanation [@jpsi_polarization].
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Left: ${{\rm{J}/\psi}}$ production yield as a function of ${{\rm{J}/\psi}}$ rapidity measured in ${{p+p}}$ collisions at RHIC. Right: ${{\rm{J}/\psi}}$ polarization measured in the helicity frame by PHENIX in ${{p+p}}$ collisions.[]{data-label="jpsi_pp_rhic"}](Fig5a.png "fig:"){height="4.8cm"} ![Left: ${{\rm{J}/\psi}}$ production yield as a function of ${{\rm{J}/\psi}}$ rapidity measured in ${{p+p}}$ collisions at RHIC. Right: ${{\rm{J}/\psi}}$ polarization measured in the helicity frame by PHENIX in ${{p+p}}$ collisions.[]{data-label="jpsi_pp_rhic"}](Fig5b.png "fig:"){height="4.5cm"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Cold nuclear matter effects must be carefully evaluated and properly accounted for when considering yield modifications observed in ${{\rm{A}+\rm{A}}}$ collisions before quantifying the effects of a QGP. They include: modification of the parton distribution functions (pdf) in the nucleus (notably shadowing or gluon saturation at low $x_{\rm Bj}$, anti-shadowing at large $x_{\rm Bj}$); nuclear absorption/dissociation; initial state energy loss and the Cronin effect. The general approach used up to now to quantify the cold nuclear matter effects [@dau_rhic] is to choose a set of modified pdfs, add some effective absorption (or break-up) cross-section to account for the other possible effects, derive the resulting expected heavy quarkonia production yield, and fit this expected yield to the ${{p+{\rm{A}}}}$ or ${{d+{\rm{A}}}}$ available measurements, leaving the effective break-up cross-section as a free parameter. These effects are then extrapolated to ${{\rm{A}+\rm{A}}}$ collisions and compared to the data.
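A minimal numerical sketch of this fitting procedure, assuming a simple Glauber-like factorization of shadowing and break-up; the path lengths, suppression values and the scan range below are illustrative, not the RHIC measurements (only the nuclear density $\rho_0 = 0.17$ fm$^{-3}$ is a standard value):

```python
import numpy as np

def expected_raa(L, sigma_br_mb, shadowing=1.0, rho0=0.17):
    """Cold-nuclear-matter suppression in the 'modified pdf x effective
    break-up' factorization described in the text: a shadowing factor
    times the survival probability exp(-rho0 * sigma_br * L).
    L in fm, sigma_br in mb (1 mb = 0.1 fm^2), rho0 in fm^-3."""
    return shadowing * np.exp(-rho0 * sigma_br_mb * 0.1 * L)

# Fit: scan the effective break-up cross-section and keep the value
# minimizing chi^2 against (illustrative) d+Au suppression pseudo-data.
L_data = np.array([2.0, 4.0, 6.0])          # mean path lengths (fm)
raa_data = np.array([0.92, 0.85, 0.78])     # pseudo-measurements
sigmas = np.linspace(0.0, 10.0, 1001)       # scanned sigma_br values (mb)
chi2 = [np.sum((expected_raa(L_data, s) - raa_data) ** 2) for s in sigmas]
sigma_best = sigmas[int(np.argmin(chi2))]   # fitted break-up cross-section
```

The fitted $\sigma_{\rm br}$ is "effective" precisely because it absorbs every cold effect not carried by the modified pdfs (absorption, initial-state energy loss, Cronin), which is why its value depends on the pdf set chosen in the first step.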
At the SPS, an updated break-up cross-section has been estimated that properly accounts for the fact that the gluon $x$ domain covered by the experiments corresponds to the anti-shadowing region of modified pdfs, for which the gluon content is enhanced with respect to the bare nucleon case (see e.g. [@EPS09]). Consequently, the new cross-section derived from ${{p+{\rm{A}}}}$ data is significantly larger than the previously published value. When extrapolated to In+In, the ${{\rm{J}/\psi}}$ suppression factor estimated from cold nuclear matter effects matches the data rather well and leaves little room for any additional anomalous suppression [@Scompari] (Fig. \[jpsi\_raa\_sps\], left).
At RHIC, updated break-up cross-sections have been derived from the new 2009 ${{d+{\rm{Au}}}}$ data sample, which is about 30 times as large as the one used for previously published results [@Cesar]. These cross-sections must still be extrapolated to Au+Au collisions in order to quantify any additional anomalous suppression due to the possible formation of a QGP.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Left: ${{\rm{J}/\psi}}$ ${{R_{\rm AA}}}$ at SPS after removal of CNM effects measured by NA60. Right: ${{\rm{J}/\psi}}$ effective break-up cross-section as a function of collision energy in ${{d+{\rm{A}}}}$ or ${{p+{\rm{A}}}}$ collisions.[]{data-label="jpsi_raa_sps"}](Fig6a.png "fig:"){height="5cm"} ![Left: ${{\rm{J}/\psi}}$ ${{R_{\rm AA}}}$ at SPS after removal of CNM effects measured by NA60. Right: ${{\rm{J}/\psi}}$ effective break-up cross-section as a function of collision energy in ${{d+{\rm{A}}}}$ or ${{p+{\rm{A}}}}$ collisions.[]{data-label="jpsi_raa_sps"}](Fig6b.png "fig:"){height="5cm"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
A systematic survey of the effective charmonia break-up cross-section has been performed that collects results from the SPS, HERA, Fermilab and RHIC [@jpsi_breakup]. When plotted as a function of the collision energy, a common (exponentially decreasing) trend is observed, although this trend has no theoretical interpretation yet (Fig. \[jpsi\_raa\_sps\], right). When represented as a function of rapidity, and disregarding the collision energy, the effective break-up cross-section also exhibits a somewhat universal trend, which cannot be easily explained in terms of the effects listed above. Note that similar surveys have been performed in the past that led to different conclusions, namely that the current data are consistent with no energy dependence [@Tram].
The first $\Upsilon$ measurements have become available at RHIC in ${{p+p}}$, ${{d+{\rm{Au}}}}$ and ${{\rm Au}+{\rm Au}}$ collisions (Fig. \[upsilon\]). Due to the limited statistics, it is difficult to disentangle the $\Upsilon$ signal and the underlying correlated background sources (from Drell-Yan and open beauty). One can either ignore these contributions and derive e.g. nuclear modification factors for inclusive high-mass dileptons, or estimate them from simulations and use the corresponding uncertainty as a systematic error. In ${{p+p}}$ collisions a total $\Upsilon$ production cross-section $BR d\sigma/dy (|y|<0.35) = 114^{+46}_{-45}$ pb is measured [@Cesar]; in ${{d+{\rm{Au}}}}$ collisions a nuclear modification factor consistent with unity is observed [@Liu], while in ${{\rm Au}+{\rm Au}}$ collisions this nuclear modification factor is smaller than 0.64 at 90% confidence level [@Levy], meaning that inclusive high mass dileptons are significantly suppressed by the medium formed in ${{\rm Au}+{\rm Au}}$ collisions at $\sqrt{s_{\rm NN}}=200$ GeV.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![dielectron invariant mass distributions at high mass in ${{p+p}}$ (left), ${{d+{\rm{Au}}}}$ (center) and Au+Au collisions, measured by PHENIX and STAR at RHIC.[]{data-label="upsilon"}](Fig7a.png "fig:"){height="3.7cm"} ![dielectron invariant mass distributions at high mass in ${{p+p}}$ (left), ${{d+{\rm{Au}}}}$ (center) and Au+Au collisions, measured by PHENIX and STAR at RHIC.[]{data-label="upsilon"}](Fig7b.png "fig:"){height="3.7cm"} ![dielectron invariant mass distributions at high mass in ${{p+p}}$ (left), ${{d+{\rm{Au}}}}$ (center) and Au+Au collisions, measured by PHENIX and STAR at RHIC.[]{data-label="upsilon"}](Fig7c.png "fig:"){height="3.7cm"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Conclusion
==========
In short:
- Low mass vector mesons exhibit strong shape modifications with respect to their vacuum properties, which can be well described at the SPS but not at RHIC, possibly because some contributions to the dilepton spectrum have not been properly accounted for;
- Virtual photons can be used in addition to direct photon measurements to assess the medium temperature averaged over its expansion time and derive its initial temperature;
- A significant suppression of $b$ quarks is necessary to describe the observed heavy flavor ${{R_{\rm AA}}}$ in a way that is consistent with the B/(B+D) ratio measured in ${{p+p}}$ collisions;
- ${{\rm{J}/\psi}}$ production in heavy-ion collisions is a puzzle. The situation is more complex than the original picture, due to our poor knowledge of its production mechanism in ${{p+p}}$ collisions and to the existence of many cold nuclear matter effects which significantly modify this production even in the absence of a QGP. Efforts are being made to better understand the above so that one can quantify the [*hot*]{}, anomalous effects at both the SPS and RHIC. Notably, it appears that the suppression measured at the SPS in In+In collisions can be entirely described in terms of such cold nuclear matter effects.
[00]{}
Pisarski R D 1982, [*Phys. Lett.*]{} [**110**]{}B 155
Brown G E and Rho M 2002, [*Phys. Rep.*]{} [**363**]{} 85
Rapp R and Wambach J 2000, [*Adv. Nucl. Phys.*]{} [**25**]{} 1
Stankus P 2005, [*Ann. Rev. Nucl. Part. Sci.*]{} [**55**]{} 517
Turbide S [*et al.*]{} 2004, [*Phys. Rev.*]{} C [**69**]{} 014903
Baier R, Schiff D and Zakharov B G 2000, [*Annu. Rev. Nucl. Part. Sci.*]{} [**50**]{} 37
Gyulassy M [*et al.*]{}, nucl-th/0302077
Matsui T and Satz H 1986, [*Phys. Lett.*]{} B [**178**]{} 416
Andronic A [*et al.*]{} 2003, [*Phys. Lett.*]{} B [**571**]{} 36
Thews R L 2007, [*Nucl. Phys.*]{} A [**783**]{} 301
Arnaldi R [*et al.*]{} (NA60 collaboration) 2006, [*Phys. Rev. Lett.*]{} [**100**]{} 022302
Rapp R 2002, arXiv:nucl-th/0204003
Adare A [*et al.*]{} (PHENIX collaboration) 2007, arXiv:0706.3034v1 \[nucl-ex\]
Turbide S, Rapp R and Gale C 2004, [*Phys. Rev.*]{} C [**69**]{} 014903
Arnold P, Moore G D and Yaffe L G 2001, [*JHEP*]{} 0112, 9
Akiba Y (PHENIX collaboration), this proceedings
Adare A [*et al.*]{} (PHENIX collaboration), arXiv:0804.4168v1 \[nucl-ex\]
d’Enterria D and Peressounko D 2006, [*Eur. Phys. J.*]{} C [**46**]{} 451
Aggarwal M M [*et al.*]{} (WA98 collaboration) 2000, [*Phys. Rev. Lett.*]{} [**85**]{} 3595
Turbide S, Rapp R and Gale C 2004, [*Phys. Rev.*]{} C [**69**]{} 014903
Abelev B I [*et al.*]{} (STAR collaboration) 2007, [*Phys. Rev. Lett.*]{} [**98**]{} 192301
Adare A [*et al.*]{} (PHENIX collaboration) 2007, [*Phys. Rev. Lett.*]{} [**98**]{} 172301
Dion A (PHENIX collaboration), this proceedings
Adare A [*et al.*]{} (PHENIX collaboration) 2009, [*Phys. Lett.*]{} B [**670**]{} 313
Engelmore T (PHENIX collaboration), this proceedings
Abelev B I [*et al.*]{} (STAR collaboration) 2005, [*Phys. Rev. Lett.*]{} [**94**]{} 062301
Abelev B I [*et al.*]{} (STAR collaboration), arXiv:0805.0364
Dokshitzer Y L and Kharzeev D E 2001, [*Phys. Lett.*]{} B [**519**]{} 199
Armesto N [*et al.*]{} 2006, [*Phys. Lett.*]{} B [**637**]{} 262
van Hees H, Greco V and Rapp R 2006, [*Phys. Rev.*]{} C [**73**]{} 034913
Moore G D and Teaney D 2005, [*Phys. Rev.*]{} C [**71**]{} 064904
Dunlop J C, this proceedings
Cacciari M [*et al.*]{} 2005, [*Phys. Rev. Lett.*]{} [**95**]{} 122001; private communication
da Silva C L (PHENIX collaboration), this proceedings
Haberzetti H and Lansberg J P 2008, [*Phys. Rev. Lett.*]{} [**100**]{} 032006
Collins J C and Soper D E 1977, [*Phys. Rev.*]{} D [**16**]{} 2219
Faccioli P, Lourenco C, Seixas J and Woehri H K 2009, arXiv:0902.4462v1 \[hep-ph\]
Adare A [*et al.*]{} (PHENIX collaboration) 2008, [*Phys. Rev.*]{} C [**77**]{} 024912
Eskola K J, Paukkunen H and Salgado C A 2009, arXiv:0902.4154v1 \[hep-ph\]
Scomparin E (NA60 collaboration), this proceedings
Lourenco C, Vogt R and Woehri H K 2009, arXiv:0901.3054 \[hep-ph\]
Arleo F and Tram V N 2008, [*Eur. Phys. J.*]{} C [**55**]{} 449
Liu H (STAR collaboration), this proceedings
Linden Levy L A, this proceedings
---
abstract: 'We detect thermally excited surfaces waves on a submicron SiO$_2$ layer, including Zenneck and guided modes in addition to Surface Phonon Polaritons. The measurements show the existence of these hybrid thermal-electromagnetic waves from near- (2.7 $\mu$m) to far- (11.2 $\mu$m) infrared. Their propagation distances reach values on the order of the millimeter, several orders of magnitude larger than on semi-infinite systems. These two features; spectral broadness and long range propagation, make these waves good candidates for near-field applications both in optics and thermics due to their dual nature.'
author:
- Sergei Gluchko
- Bruno Palpant
- Sebastian Volz
- Rémy Braive
- Thomas Antoni
bibliography:
- 'manuscript.bib'
title: 'Thermal Excitation of Broadband and Long-range Surface Waves on SiO$_2$ Submicron Films'
---
Thermal radiation through surface wave diffraction is usually only considered as the result of Surface Phonon Polaritons (SPhPs). SPhPs are hybrid evanescent electromagnetic surface waves generated by phonon-photon coupling at the interface of polar and dielectric materials (such as SiO$_2$ and air) [@Joulain; @1film; @3film; @NatureKim]. The influence of SPhPs on the thermal performance of nanostructured materials has been studied intensively over the last decade, providing an alternative channel of heat conduction when the objects are scaled down [@JoseThinFilm; @ChenThinThermalPRB]. Due to this behavior, they are essential for the improvement of the thermal stability in micro and nanoelectronics[@nanoelectr2; @nanoelectr1; @mikyung], microscopy[@microscopyNature], near-field thermophotovoltaics [@Gelais] and for thermal radiation [@RousseauNature; @nature; @marquierPRB]. In addition, SPhPs provide coherent thermal radiation in the mid-infrared [@nature; @marquierPRB]. This feature is now widely used to control thermal radiation, but only within a frequency range limited to the mid-infrared, because it relies on coupling to transverse optical phonons [@Joulain; @NatureBN]. This narrow spectrum (typically $8.6-9.3$ $\mu$m at a SiO$_2$-air interface), combined with propagation lengths on the order of the wavelength, limits the use of SPhPs in many applications such as nanoscale thermal transport, infrared nanophotonics and coherent thermal emission.
In this letter we demonstrate through experiment that coherent thermal emission resulting from surface waves can be extended spectrally. We also prove experimentally that these surface waves propagate over long distances on isolated submicron layers. Indeed, if the film is thinner than the penetration depth of the wave inside the material, the electromagnetic mode can couple at both interfaces, allowing for the long-range propagation of two other types of electromagnetic surface waves: Zenneck and subwavelength Transverse Magnetic (TM) guided modes [@yang]. The propagation length is increased as a consequence of the dramatic decrease in the overlap of the mode with the material, and hence of its absorption. For example, it is almost two orders of magnitude larger than the wavelength for a $1$ $\mu$m thick suspended SiO$_2$ membrane [@JoseThinFilm]. To verify these predictions, we fabricated a submicron glass layer and characterized its thermal emission by means of Fourier Transform InfraRed (FTIR) spectroscopy. Our experimental results are then compared with both Finite-Difference Time-Domain (FDTD) simulations and theoretical predictions.
The fabrication of a suspended submicron-thick glass membrane is, however, a challenging task due to the poor selectivity of Si/SiO$_2$ etching. Nevertheless, as far as electromagnetism is concerned, a sample consisting of an SiO$_2$ layer half as thick deposited over a metallic film - the role of which is to optically isolate the dielectric thin layer from the substrate - is a strictly equivalent structure. To prove this equivalence, Fig. \[Fig.1\](a) reports the calculated dispersion curves of surface waves on a 1 $\mu$m thick suspended SiO$_2$ membrane and on a $0.5$ $\mu$m thick SiO$_2$ thin film deposited on an aluminum layer, obtained by means of FDTD simulations (MEEP code[@meep]). This is done using the real and imaginary parts of the dielectric function of SiO$_2$ shown in Fig. \[Fig.1\](b), as extracted from experimental data [@palik]. The dispersion curves are superimposed, confirming the electromagnetic equivalence of the two systems. In addition, it can be seen that the curves lie beneath the light line over nearly the full spectrum, indicating the evanescent behavior of the modes.
![\[Fig.1\] (a) Comparison of the FDTD-calculated dispersion relations of a 1 $\mu$m thick suspended SiO$_2$ membrane (red line) and of a 0.5 $\mu$m thick SiO$_2$ film deposited on aluminum (dashed blue line). (b) Real (magenta line) and imaginary (green line) parts of the relative permittivity of SiO$_2$. Grey region indicates the frequency range of SPhPs.](fig1ab-eps-converted-to.pdf){width="230pt"}
The sample is fabricated by sputtering deposition of an amorphous SiO$_2$ thin layer on an aluminum layer, itself grown on a polished Si wafer in order to optically mimic the suspended layer of SiO$_2$. The thicknesses of SiO$_2$ and aluminum layers are chosen to be $0.75$ $\mu$m and $0.25$ $\mu$m, respectively. A diffraction grating of period $\Lambda = 9.26$ $\mu$m with a filling factor $0.5$ is etched by $0.5$ $\mu$m in the SiO$_2$ film using negative UV lithography and anisotropic reactive ion etching. In order to avoid any artificial broadening of the emission spectrum due to finite size effects, the grating has to be much larger than the propagation length of these surface modes. We then choose the lateral size of the grating to be 1 cm in both length and width (as shown on Fig. \[Fig.3\](a)), that is one order of magnitude larger than the maximum value of the estimated propagation length.
We operate an FTIR spectrometer with a spectral resolution of $1$ $\text{cm}^{-1}$, which provides the required sensitivity in the working frequency range from $700$ $\text{cm}^{-1}$ to $4000$ $\text{cm}^{-1}$ ($14.3$ - $2.5$ $\mu$m). The sample is heated up to $673$ K with a heating stage and the emitted signal is collected for various tilt angles of the sample. A KRS-5 holographic wire-grid polarizer is used to examine the sample emission in TE and TM polarizations, providing excellent transmission over the working frequency range. Each experimental spectrum $S^{exp}_{gr}$ is obtained by subtraction of the background radiation $S_{bg}$ and normalization by the emission obtained from a flat, grating-free region of the sample, $S_{fl}$, under the same conditions, as follows: $$S^{exp}_{gr}(\nu, T_s,\alpha) = \frac{S_{gr}(\nu, T_s,\alpha)-S_{bg}(\nu)}{S_{fl}(\nu, T_s,\alpha)-S_{bg}(\nu)} \textnormal{,}
\label{Eq.1}$$ where $T_s$ is the sample surface temperature and $\alpha$ is the tilt angle. Note that this emission signal $S^{exp}_{gr}(\nu, T_s,\alpha)$ is not the same as emissivity as it also features the influence of the topography of the sample on its emission. The normalized spectrum value reduces to unity for any frequency where the grating has no impact on the sample emission. Fig. \[Fig.3\](b) shows this normalized emission spectrum obtained from our sample for $T_s=673$ K. The sharp peaks, marked with black arrows, are the diffraction orders of the grating which is a clear signature of coherent thermal emission. The peaks without the arrows do not originate from the diffraction by the grating and they are observed due to the different SiO$_2$ effective thickness in the grating and flat regions of the sample.
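Equation (1) amounts to the following operation on the three recorded spectra; the array values below are placeholders:

```python
import numpy as np

def normalized_emission(s_grating, s_flat, s_background):
    """Normalized grating emission of Eq. (1): background-subtracted
    grating spectrum divided by the background-subtracted spectrum of
    the flat (grating-free) region, taken under the same conditions."""
    return (s_grating - s_background) / (s_flat - s_background)

# Where the grating has no impact, the grating and flat spectra coincide
# and the normalized signal reduces to unity at every wavenumber:
nu = np.linspace(700.0, 4000.0, 5)        # working range, cm^-1
s_bg = 0.1 * np.ones_like(nu)             # background radiation
s_fl = 1.0 + 0.2 * nu / 4000.0            # flat-region spectrum
assert np.allclose(normalized_emission(s_fl, s_fl, s_bg), 1.0)
```

Dividing by the flat-region spectrum, rather than by a blackbody reference, is what makes the diffraction peaks stand out against a unit baseline while removing the instrument response.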
[![\[Fig.3\] Sample design (a) and its emission signal (b) at $T_s=673$ K and tilt angle $\alpha = 2.6^{\circ}$. Black arrows indicate the emission peaks due to the diffraction grating.](fig2a-eps-converted-to.pdf "fig:"){width="210pt"}]{}
[![\[Fig.3\] Sample design (a) and its emission signal (b) at $T_s=673$ K and tilt angle $\alpha = 2.6^{\circ}$. Black arrows indicate the emission peaks due to the diffraction grating.](fig2b-eps-converted-to.pdf "fig:"){width="210pt"}]{}
We then examine the polarization of the grating emission, since the diffraction peaks can only exist in transverse magnetic polarization if they originate from surface waves [@yang; @book:SurfaceWavesBook]. Fig. \[Fig.4\](a, c, e) show the grating emission peaks observed by collecting the emission signal from the grating on the SiO$_2$ thin film deposited on the aluminum substrate heated up to $T=673$ K in three different frequency regions. In all these regions the emission peaks disappear for the TE polarized signal, as expected.
These frequency regions indicate different surface modes according to the values of the relative permittivity of glass shown in Fig. \[Fig.1\](b). These are Zenneck surface modes ($\epsilon_r > 0$, $\epsilon_i > 0$), Surface Phonon Polaritons ($\epsilon_r < -\epsilon_{\text{air}}$), and subwavelength TM guided modes ($\epsilon_r > 0$, $\epsilon_i \approx 0$) according to the classification of surface electromagnetic modes in thin dielectric films [@yang; @book:SurfaceWavesBook]. Detection of Zenneck surface modes in a single interface system for these frequencies is usually not possible due to their short propagation length [@yang] while a thin film supports long-range Zenneck modes allowing for their diffraction by the grating. Fig. \[Fig.4\](e) shows the existence of thermally excited subwavelength TM guided modes over a broad frequency range around $\nu=2000$ $\text{cm}^{-1}$ where the absorption is almost equal to zero.
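The classification quoted above depends only on the sign of the real part of the film permittivity and on the size of its imaginary part; a minimal sketch, where the near-zero-absorption threshold `tol` is an arbitrary choice of ours, not a value taken from the cited references:

```python
def classify_mode(eps_real, eps_imag, eps_air=1.0, tol=0.05):
    """Surface-mode regime on a thin polar film from the relative
    permittivity, following the classification cited in the text.
    `tol` defines 'near-zero' absorption and is purely illustrative."""
    if eps_real < -eps_air:
        return "surface phonon polariton"
    if eps_real > 0.0 and eps_imag < tol:
        return "subwavelength TM guided mode"
    if eps_real > 0.0 and eps_imag > 0.0:
        return "Zenneck mode"
    return "no bound TM surface mode"

# SiO2 examples: reststrahlen band vs low-absorption transparent region
print(classify_mode(-2.0, 0.5))   # SPhP regime
print(classify_mode(2.0, 0.01))   # guided-mode regime
```

Scanning the measured $\epsilon(\nu)$ of Fig. 1(b) through such a classifier reproduces the three frequency windows in which the polarized diffraction peaks of Fig. 4 are observed.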
[![\[Fig.4\] Emission signal of the SiO$_2$ grating deposited on aluminum in TE and TM polarizations (left) and for different tilt angles (right). Figures (a) and (b): Zenneck region ($\epsilon_r > 0$, $\epsilon_i > 0$). Figures (c) and (d): Surface Phonon Polariton region ($\epsilon_r < -\epsilon_{\text{air}}$). Figures (e) and (f): subwavelength TM guided mode region ($\epsilon_r > 0$, $\epsilon_i \approx 0$).](fig3a-eps-converted-to.pdf "fig:"){width="110pt"}]{} [![](fig3b-eps-converted-to.pdf "fig:"){width="110pt"}]{} [![](fig3c-eps-converted-to.pdf "fig:"){width="110pt"}]{} [![](fig3d-eps-converted-to.pdf "fig:"){width="110pt"}]{} [![](fig3e-eps-converted-to.pdf "fig:"){width="110pt"}]{} [![](fig3f-eps-converted-to.pdf "fig:"){width="110pt"}]{}
Fig. \[Fig.4\](b, d, f) demonstrate the frequency shift of the emission peaks when the tilt angle is varied. This feature can be used to reconstruct the dispersion relation of these surface modes by applying the grating equation: $$\frac{\omega}{c}\sin{\alpha}=k_{\parallel}+m\frac{2\pi}{\Lambda}{,} \quad {m\in\mathbb{Z}}{,}
\label{Eq.2}$$ where $k_{\parallel}$ is the in-plane wavevector and $m$ is the diffraction order. Fig. \[Fig.5\](a) reports the dispersion relation obtained by following this procedure. The experimental data lie beneath the light line, indicating the evanescent nature of the surface waves, and are in reasonably good agreement with the numerical predictions. The residual difference between the experimental results and the FDTD simulation can be understood by assuming that the effective thickness of the SiO$_2$ film with the diffraction grating deviates from the nominal 0.5 $\mu$m because of fabrication discrepancies, and that the permittivity of the actual SiO$_2$ sample differs from the one used in the FDTD computation. Note that we did not observe any diffraction peaks from $\nu=1300$ $\text{cm}^{-1}$ to $\nu=1900$ $\text{cm}^{-1}$ due to the presence of water vapor.
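Each emission peak, recorded at wavenumber $\nu$ and tilt angle $\alpha$, is mapped to one point $(k_{\parallel}, \nu)$ of the dispersion relation by inverting Eq. (2). A minimal Python sketch of this step (the function name is ours, and the grating period must be supplied by the caller since its value is not restated here):

```python
import math

def in_plane_wavevector(nu_cm, alpha_deg, period_cm, m=1):
    """Invert the grating equation
        (omega/c) * sin(alpha) = k_par + m * 2*pi/period
    for k_par, using omega/c = 2*pi*nu with nu in cm^-1 and the
    grating period in cm.  Returns k_par in rad/cm."""
    return (2.0 * math.pi * nu_cm * math.sin(math.radians(alpha_deg))
            - m * 2.0 * math.pi / period_cm)
```

Sweeping this over the measured peak positions and tilt angles traces out the experimental points (red circles) of Fig. 5(a).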
![\[Fig.5\] Dispersion relation (a) and coherence length (b) of the surface waves for the thin SiO$_2$ film deposited on aluminum obtained from FDTD direct computations (blue line) and experimental measurements (red circles).](fig4ab-eps-converted-to.pdf){width="220pt"}
Lifetime estimation of these surface modes can be obtained by analyzing the width of the grating emission peaks. Considering the experimental dispersion to be photon-like, the coherence length can be deduced as follows: $$L=\frac{1}{2\Delta\nu}{,}
\label{Eq.3}$$ where $\Delta\nu$ is the full width at half maximum of the emission peak in wave numbers ($\text{cm}^{-1}$). Fig. \[Fig.5\](b) shows that the typical coherence length is on the order of 100 $\mu$m, that is, ten times larger than the typical coherence length of these surface waves at a semi-infinite SiO$_2$-air interface [@JoseThinFilm; @Joulain]. These values are in agreement with theoretical predictions for a 1 $\mu$m thick suspended SiO$_2$ film reported by Ordonez-Miranda *et al.*[@JoseThinFilm]. The coherence length $L$ reaches 700 $\mu$m for Zenneck modes, which is almost two orders of magnitude larger than their wavelength. Such a large coherence length is achieved because most of the electromagnetic energy propagates in the air close to the dielectric interface rather than in the material, decreasing the absorbed power and enhancing the coherence length. Note that we underestimate the coherence length of the modes since the grating introduces radiative losses, so the values are expected to be even larger for smooth thin films.
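The conversion from peak width to coherence length in Eq. (3) is a one-line computation; this sketch (ours) makes the unit bookkeeping explicit:

```python
def coherence_length_um(fwhm_cm):
    """Coherence length L = 1/(2 * delta_nu) for a peak of full width at
    half maximum delta_nu given in cm^-1.  The result, naturally in cm,
    is converted to micrometres (1 cm = 1e4 um)."""
    return 1.0 / (2.0 * fwhm_cm) * 1e4
```

For instance, a 50 cm$^{-1}$ wide peak corresponds to $L = 100$ $\mu$m, the typical value of Fig. 5(b), while a width near 7 cm$^{-1}$ reproduces the $\sim$700 $\mu$m found for Zenneck modes.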
In this work we have experimentally observed thermally excited surface waves at the surface of a thin SiO$_2$ film deposited on aluminum from $882$ $\text{cm}^{-1}$ to $3725$ $\text{cm}^{-1}$, whereas an interface between two semi-infinite materials only supports surface waves from $1072$ $\text{cm}^{-1}$ to $1156$ $\text{cm}^{-1}$. This spectral broadening results from the excitation of Zenneck and subwavelength TM guided surface waves from $882$ $\text{cm}^{-1}$ to $1072$ $\text{cm}^{-1}$ and from $1979$ $\text{cm}^{-1}$ to $3725$ $\text{cm}^{-1}$, in addition to Surface Phonon-Polaritons. From their emission spectra, we were able to reconstruct their dispersion relation and to measure their coherence length. For Zenneck surface waves, it reaches almost 700 $\mu$m, which is two orders of magnitude larger than their wavelength. We believe that, because of their large coherence length as well as their existence over a very broad spectral range, these surface waves can be considered for a wide range of infrared applications, both for the thermal management of submicron structures and for photonics engineering, owing to their dual nature. Here we focused our experimental study on SiO$_2$ since it is a very common material in microelectronics, but the same phenomena will exist for any dielectric material supporting resonant surface waves.
We wish to thank Jose Ordonez-Miranda, Stéphane Collin, Laurent Tranchant and Mikyung Lim for their fruitful discussions. We also want to acknowledge Thuy-Anh Nguyen, Paul Debue and Romaric de Lépinau for their contribution to the early development of the FDTD code. This work was supported by Renatech project “Phonons Polaritons de Surface large bande”.
arXiv: 0805.1102 (hep-th)\
CAS-PHYS-BHU Preprint
[A NOTE ON THE (ANTI-)BRST INVARIANT LAGRANGIAN DENSITIES FOR THE FREE ABELIAN 2-FORM GAUGE THEORY]{}\
[SAURABH GUPTA$^{(a)}$, R. P. MALIK$^{(a, b)}$]{}\
[*$^{(a)}$Physics Department, Centre of Advanced Studies,*]{}\
[*Banaras Hindu University, Varanasi - 221 005, (U.P.), India*]{}\
[**and**]{}\
[*$^{(b)}$DST Centre for Interdisciplinary Mathematical Sciences,*]{}\
[*Faculty of Science, Banaras Hindu University, Varanasi - 221 005, India*]{}\
[**Abstract:**]{} We show that the previously known off-shell nilpotent ($s_{(a)b}^2 = 0$) and absolutely anticommuting ($s_b s_{ab} + s_{ab} s_b = 0$) Becchi-Rouet-Stora-Tyutin (BRST) transformations ($s_b$) and anti-BRST transformations ($s_{ab}$) are the symmetry transformations of the appropriate Lagrangian densities of a four (3 + 1)-dimensional (4D) free Abelian 2-form gauge theory which do [*not*]{} explicitly incorporate a very specific constrained field condition through a Lagrange multiplier 4D vector field. The above condition, which is the analogue of the Curci-Ferrari restriction of the non-Abelian 1-form gauge theory, emerges from the Euler-Lagrange equations of motion of our present theory and ensures the [*absolute*]{} anticommutativity of the transformations $s_{(a)b}$. Thus, the coupled Lagrangian densities, proposed in our present investigation, are aesthetically more appealing and more economical.\
PACS numbers:$~$ 11.15.-q; 03.70.+k\
[*Keywords:*]{} Free Abelian 2-form gauge theory in 4D, Lagrangian densities without any constrained field condition, anticommuting (anti-)BRST symmetries, analogue of the Curci-Ferrari restriction
Introduction
============
The principle of local gauge invariance provides a precise theoretical basis for the description of three (out of the four) fundamental interactions of nature. The theories with local gauge symmetries are always (i) described by singular Lagrangian densities, and (ii) endowed with first-class constraints in the language of Dirac’s classification scheme \[1,2\]. It is well-established that the latter (i.e. the first-class constraints) generate the above local gauge symmetry transformations for the singular Lagrangian densities of the relevant gauge theories.
One of the most attractive approaches to covariantly quantize such theories is the BRST formalism where (i) unitarity and “quantum” gauge (i.e. BRST) invariance are respected together, (ii) the true physical states are defined in terms of the BRST charge in a manner consistent with Dirac’s prescription for the quantization of systems with constraints, and (iii) there exists a deep relationship between the physics of gauge theories (in the framework of the BRST formalism) and the mathematics of differential geometry (e.g. cohomology) and supersymmetry (e.g. the superfield formalism).
Some of the key mathematical properties associated with the (anti-)BRST symmetry transformations are as follows. First, there exist two symmetry transformations (christened the (anti-)BRST[^1] symmetry transformations $s_{(a)b}$) for a given local gauge symmetry transformation. Second, both symmetries are nilpotent of order two (i.e. $s_{(a)b}^2 = 0$). Finally, they anticommute (i.e. $s_b s_{ab} + s_{ab} s_b = 0$) with each other when they act [*together*]{} on any specific field of the theory. These properties are sacrosanct for any gauge (or reparametrization) invariant theory described within the framework of the BRST formalism.
Recently, the 2-form ($B^{(2)} = (1/2!) (dx^\mu \wedge dx^\nu) B_{\mu\nu}$) Abelian gauge field $B_{\mu\nu}$ \[6,7\] and the corresponding gauge theory have attracted a great deal of interest because of their relevance in the context of (super)string theories. This Abelian 2-form gauge theory has also been shown to provide (i) an explicit field theoretical example of the Hodge theory \[8\], and (ii) a model for the quasi-topological field theory \[9\]. The (anti-)BRST invariant Lagrangian densities of the 2-form theory have been written out and the BRST quantization has been performed \[8-11\]. One of the key observations (see, e.g. \[8,9\]) is that the above (anti-)BRST transformations, even though precisely off-shell nilpotent, are found to anticommute only up to a vector gauge transformation. Thus, the absolute anticommutativity property is lost.
As pointed out earlier, the anticommutativity property of the (anti-)BRST symmetry transformations is a cardinal requirement in the domain of application of the BRST formalism to gauge theories. This key property actually encodes the linear independence of the above two transformations corresponding to a given local gauge symmetry transformation (of a specific gauge theory). In the realm of superfield approach to BRST formalism (see, e.g., \[12,5\]), the absolute anticommutativity of these transformations becomes crystal clear because these are identified with the translational generators along the Grassmannian directions of a (D, 2)-dimensional supermanifold on which any arbitrary D-dimensional gauge theory is considered \[12,5\].
It is worthwhile to mention that the superfield approach, proposed in \[12\] for the 4D non-Abelian 1-form gauge theory, has been applied, for the first time, to the description of the free 4D Abelian 2-form gauge theory in \[5\]. One of the upshots of the discussions in \[5\] is that an analogue of the Curci-Ferrari (CF) type of restriction \[13\] emerges in the context of the 4D [*Abelian*]{} 2-form gauge theory, whereas the former happens to be the hallmark of a 4D [*non-Abelian*]{} 1-form gauge theory \[12,13\]. This CF type condition ensures (i) the absolute anticommutativity of the (anti-)BRST symmetry transformations of the Abelian 2-form gauge theory, and (ii) the identification of the (anti-)BRST symmetry transformations with the translational generators along the Grassmannian directions of the (4, 2)-dimensional supermanifold \[5\].
Keeping the above properties in mind, the (anti-)BRST symmetry transformations have been obtained in our earlier works \[3,4\] where the above CF type field condition is invoked for the proof of the absolute anticommutativity of the off-shell nilpotent (anti-)BRST symmetry transformations \[3,4\]. In fact, the above field condition is explicitly incorporated in the Lagrangian densities through a Lagrange multiplier vector field (which is not a basic dynamical field of the theory). Furthermore, due to the above restriction, the kinetic term for the massless scalar field of the theory turns out to possess a negative sign. These are the prices one pays to obtain the absolute anticommutativity of the nilpotent (anti-)BRST symmetry transformations.
The purpose of our present investigation is to show that the (anti-)BRST transformations of our earlier works \[3,4\] are the [*symmetry*]{} transformations of a pair of coupled Lagrangian densities which do not incorporate the analogue of the CF type restriction explicitly through the Lagrange multiplier 4D vector field[^2]. This condition, however, appears in the theory as a consequence of the Euler-Lagrange equations of motion that are derived from the coupled Lagrangian densities. Furthermore, all the terms of these Lagrangian densities carry standard meaning and there are no peculiar signs associated with any of them. One of the key features of the CF type restriction, for our present Abelian theory, is that it does not involve any kind of (anti-)ghost fields. On the contrary, the original CF restriction of the non-Abelian 1-form gauge theory \[13\] does involve the (anti-)ghost fields.
The key factors that have propelled us to pursue our present investigation are as follows. First and foremost, it is very important to obtain the correct nilpotent and anticommuting (anti-)BRST symmetry transformations which are respected by the appropriate Lagrangian densities. The latter should, for aesthetic reasons, be economical and beautiful (i.e. possessing no peculiar looking terms). Second, the theory itself should produce all the cardinal requirements and nothing should be imposed from outside through a Lagrange multiplier field. Third, the (anti-)BRST symmetry transformations in the Lagrangian formulation \[3,4\] must be consistent with the derivation of the same from the superfield approach \[5\]. Finally, our present study is the first modest step towards our main goal of applying the BRST formalism to higher p-form ($p > 2$) gauge theories that are relevant in (super)string theories.
The contents of our present investigation are organized as follows. In Sec. 2, we briefly recapitulate the bare essentials of the off-shell nilpotent and anticommuting (anti-)BRST symmetry transformations for a couple of Lagrangian densities of the Abelian 2-form gauge theory. The above Lagrangian densities incorporate a constrained field relationship through a Lagrange multiplier 4D vector field. Our Sec. 3 deals with a pair of coupled and equivalent Lagrangian densities that (i) respect the BRST and anti-BRST symmetry transformations, and (ii) do not incorporate any constrained field relationship explicitly. In Sec. 4, we derive an explicit BRST algebra by exploiting the infinitesimal continuous symmetry transformations. We make some concluding remarks in our Sec. 5.
Preliminaries: Lagrangian Densities Incorporating the Constrained Field Condition
=================================================================================
We begin with the following nilpotent (anti-)BRST symmetry invariant Lagrangian density for the 4D free Abelian 2-form gauge theory [^3] \[3-5\] $$\begin{aligned}
{\cal L}^{(1)} &=& \frac{1}{6} H^{\mu\nu\kappa} H_{\mu\nu\kappa} + B^\mu (\partial^\nu B_{\nu\mu}) + \frac{1}{2} (B \cdot B + \bar B \cdot \bar B)
- \frac{1}{2} \partial_\mu \phi\partial^\mu \phi \nonumber\\
&+& \partial_\mu \bar\beta \partial^\mu \beta
+ (\partial_\mu \bar C_\nu - \partial_\nu \bar C_\mu) (\partial^\mu C^\nu) +
(\partial \cdot C - \lambda) \rho \nonumber\\
&+& (\partial \cdot \bar C + \rho) \lambda + L^\mu (B_\mu - \bar B_\mu - \partial_\mu \phi),\end{aligned}$$ where the kinetic term is constructed with the totally antisymmetric curvature tensor $H_{\mu\nu\kappa}$ which is derived from the 3-form $H^{(3)} = \frac{1}{3!} (dx^\mu \wedge dx^\nu \wedge dx^\kappa) H_{\mu\nu\kappa}$. The exterior derivative $d = dx^\mu \partial_\mu$ (with $d^2 = 0$) and the 2-form $B^{(2)} = \frac{1}{2!} (dx^\mu \wedge dx^\nu) B_{\mu\nu}$ generate the above 3-form (i.e. $H^{(3)} = d B^{(2)}$).
We have the Lorentz vector fermionic (anti-)ghost fields $(\bar C_\mu)C_\mu$ and the bosonic (anti-)ghost fields $(\bar \beta)\beta$ in the theory. The above Lagrangian density also requires fermionic auxiliary ghost fields $\rho = - \frac{1}{2}
(\partial \cdot \bar C)$ and $\lambda = \frac{1}{2} (\partial \cdot C)$. The auxiliary vector fields $B_\mu$ and $\bar B_\mu$ are constrained to satisfy the field equation $B_\mu - \bar B_\mu - \partial_\mu \phi = 0$ where the massless (i.e. $\Box \phi = 0$) field $\phi$ is required for the stage-one reducibility in the theory. The above constrained field equation emerges due to the presence of the Lagrange multiplier field $L^\mu$.
The following off-shell nilpotent (i.e. $s_{b}^2 = 0$) BRST symmetry transformations $s_{b}$ for the 4D local fields of the theory, namely; $$\begin{aligned}
&& s_b B_{\mu\nu} = - (\partial_\mu C_\nu - \partial_\nu C_\mu), \qquad s_b C_\mu = - \partial_\mu \beta,
\qquad s_b \bar C_\mu = - B_\mu, \nonumber\\
&& s_b L_\mu = - \partial_\mu \lambda,\; \qquad
s_b \phi = \lambda,\; \qquad s_b \bar \beta = - \rho, \nonumber\\
&& s_b \bar B_\mu = - \partial_\mu \lambda,\; \;\qquad\;
s_b \bigl [\rho, \lambda, \beta, B_\mu, H_{\mu\nu\kappa} \bigr ] = 0,\end{aligned}$$ leave the above Lagrangian density quasi-invariant because it transforms to a total spacetime derivative: $s_b {\cal L}^{(1)} = - \partial_\mu [(\partial^\mu C^\nu - \partial^\nu C^\mu) B_\nu + \lambda B^\mu + \rho \partial^\mu \beta ]$.
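As a quick consistency check of (2), the off-shell nilpotency can be verified field by field; for instance,

```latex
\begin{aligned}
s_b^2 B_{\mu\nu} &= s_b \bigl[ - (\partial_\mu C_\nu - \partial_\nu C_\mu) \bigr]
  = \partial_\mu \partial_\nu \beta - \partial_\nu \partial_\mu \beta = 0, \\
s_b^2 C_\mu &= s_b \bigl[ - \partial_\mu \beta \bigr] = 0, \qquad
s_b^2 \bar C_\mu = s_b \bigl[ - B_\mu \bigr] = 0,
\end{aligned}
```

because $s_b \beta = 0$, $s_b B_\mu = 0$ and the partial derivatives commute.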
In exactly similar fashion, the following off-shell nilpotent ($s_{ab}^2 = 0$) anti-BRST symmetry transformations $s_{ab}$ $$\begin{aligned}
&& s_{ab} B_{\mu\nu} = - (\partial_\mu \bar C_\nu - \partial_\nu
\bar C_\mu),\; \quad s_{ab} \bar C_\mu = - \partial_\mu \bar \beta,
\quad s_{ab} C_\mu = + \bar B_\mu, \nonumber\\
&& s_{ab} L_\mu = - \partial_\mu \rho,
\qquad
s_{ab} \phi = \rho,\; \qquad s_{ab} \beta = - \lambda, \nonumber\\
&& s_{ab} B_\mu = +
\partial_\mu \rho, \qquad s_{ab} \bigl [\rho, \lambda, \bar\beta,
\bar B_\mu, H_{\mu\nu\kappa} \bigr ] = 0,\end{aligned}$$ leave the following Lagrangian density $$\begin{aligned}
{\cal L}^{(2)} &=& \frac{1}{6} H^{\mu\nu\kappa} H_{\mu\nu\kappa} + \bar B^\mu
(\partial^\nu B_{\nu\mu}) + \frac{1}{2} (B \cdot B + \bar B \cdot \bar B)
- \frac{1}{2} \partial_\mu \phi\partial^\mu \phi \nonumber\\
&+& \partial_\mu \bar\beta \partial^\mu \beta
+ (\partial_\mu \bar C_\nu - \partial_\nu \bar C_\mu) (\partial^\mu C^\nu) +
(\partial \cdot C - \lambda) \rho \nonumber\\
&+& (\partial \cdot \bar C + \rho) \lambda + L^\mu (B_\mu - \bar B_\mu - \partial_\mu \phi),\end{aligned}$$ quasi-invariant because it transforms to a total spacetime derivative as is evident from $s_{ab} {\cal L}^{(2)} = - \partial_\mu [(\partial^\mu \bar C^\nu
- \partial^\nu \bar C^\mu) \bar B_\nu
- \rho \bar B^\mu + \lambda \partial^\mu \bar \beta ]$. It is interesting to point out that both the Lagrangian densities (1) and (4) respect the off-shell nilpotent (anti-)BRST symmetry transformations (cf. (2) and (3)) on a constrained surface defined by a field equation (see, e.g. equation (5) below).
Both the above nilpotent transformations $s_{(a)b}$ (cf. (2) and (3)) are absolutely [*anticommuting*]{} (i.e. $s_b s_{ab} + s_{ab} s_b \equiv \{s_b, s_{ab} \} = 0$) in nature if the whole 4D free Abelian 2-form gauge theory is defined on a constrained surface parametrized by the following field equation [^4] $$B_\mu - \bar B_\mu -\partial_\mu \phi = 0.$$ This is due to the fact that $\{ s_b, s_{ab} \} B_{\mu\nu} = 0$ is true only if the above equation is satisfied. This condition has been incorporated in the above Lagrangian densities through the Lagrange multiplier Lorentz 4D vector field $L^\mu$.
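To make the role of (5) fully explicit, acting with the two orderings of the transformations (2) and (3) on the gauge field gives

```latex
\begin{aligned}
s_b s_{ab} B_{\mu\nu} &= s_b \bigl[ - (\partial_\mu \bar C_\nu - \partial_\nu \bar C_\mu) \bigr]
  = + (\partial_\mu B_\nu - \partial_\nu B_\mu), \\
s_{ab} s_b B_{\mu\nu} &= s_{ab} \bigl[ - (\partial_\mu C_\nu - \partial_\nu C_\mu) \bigr]
  = - (\partial_\mu \bar B_\nu - \partial_\nu \bar B_\mu), \\
\{s_b, s_{ab}\} B_{\mu\nu} &= \partial_\mu (B_\nu - \bar B_\nu) - \partial_\nu (B_\mu - \bar B_\mu)
  = \partial_\mu \partial_\nu \phi - \partial_\nu \partial_\mu \phi = 0,
\end{aligned}
```

where the last equality holds only on the constrained surface defined by (5).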
The Lagrangian densities (1) and (4) are coupled Lagrangian densities on the constrained field surface defined by (5). It would be very desirable to obtain Lagrangian densities that respect the nilpotent and anticommuting (anti-)BRST symmetry transformations (2) and (3) and are free of any Lagrange multiplier field. The latter fields are required when we wish to impose some restriction on the theory from outside; a beautiful theory should produce this restriction on its own. Thus, it is desirable that the Lagrangian density of a theory be devoid of Lagrange multipliers. Furthermore, it would be better if we could avoid the negative kinetic term for the massless scalar field $\phi$ that is present in the Lagrangian densities (1) and (4) of our present theory. We address these issues in the next section.
Lagrangian Densities Without Any Constrained Field Condition: Symmetries
========================================================================
It is interesting to note that the following coupled and equivalent (cf. (5)) Lagrangian densities for the 4D free Abelian 2-form gauge theory, namely; $$\begin{aligned}
{\cal L}_B &=& \frac{1}{6} H^{\mu\nu\kappa} H_{\mu\nu\kappa} + B^\mu (\partial^\nu B_{\nu\mu} - \partial_\mu \phi) + B \cdot B + \partial_\mu \bar\beta \partial^\mu \beta\nonumber\\
&+& (\partial_\mu \bar C_\nu - \partial_\nu \bar C_\mu) (\partial^\mu C^\nu) +
(\partial \cdot C - \lambda) \rho + (\partial \cdot \bar C + \rho)\; \lambda,\end{aligned}$$ $$\begin{aligned}
{\cal L}_{\bar B} &=& \frac{1}{6} H^{\mu\nu\kappa} H_{\mu\nu\kappa} + \bar B^\mu (\partial^\nu B_{\nu\mu} + \partial_\mu \phi) + \bar B \cdot \bar B + \partial_\mu \bar\beta \partial^\mu \beta\nonumber\\
&+& (\partial_\mu \bar C_\nu - \partial_\nu \bar C_\mu) (\partial^\mu C^\nu) + (\partial \cdot C - \lambda) \rho + (\partial \cdot \bar C + \rho)\; \lambda,\end{aligned}$$ remain quasi-invariant under the nilpotent and anticommuting (anti-)BRST symmetry transformations (2) and (3), respectively. However, these Lagrangian densities do not explicitly incorporate the constrained field condition (5). Neither do they possess a negative kinetic term for the massless scalar field $\phi$. Thus, the above Lagrangian densities are the appropriate ones.
The above Lagrangian densities (6) and (7) are equivalent on the constrained surface (defined by the field equation (5)) because they respect both the BRST and anti-BRST symmetry transformations separately and independently. To clarify this statement explicitly, it can be checked that the Lagrangian density (6) transforms under the off-shell nilpotent (anti-)BRST symmetry transformations as given below $$\begin{aligned}
s_b {\cal L}_B &=& s_b {\cal L}^{(1)}, \nonumber\\
s_{ab} {\cal L}_B &=& - \partial_\mu [ (\partial^\mu \bar C^\nu - \partial^\nu \bar C^\mu) B_\nu + \lambda \partial^\mu \bar \beta \nonumber\\
&-& \rho (\partial_\nu B^{\nu\mu} + \bar B^\mu)] + (B^\mu - \bar B^\mu - \partial^\mu \phi) \partial_\mu \rho
\nonumber\\
&+& \partial^\mu (B^\nu - \bar B^\nu - \partial^\nu \phi) (\partial_\mu \bar C_\nu - \partial_\nu \bar C_\mu).\end{aligned}$$ In an exactly similar fashion, the Lagrangian density (7) changes under the (anti-)BRST symmetry transformations as $$\begin{aligned}
s_{ab} {\cal L}_{\bar B} &=& s_{ab} {\cal L}^{(2)}, \nonumber\\
s_{b} {\cal L}_{\bar B} &=& - \partial_\mu [ (\partial^\mu C^\nu - \partial^\nu C^\mu) \bar B_\nu + \rho \partial^\mu \beta \nonumber\\
&+& \lambda (\partial_\nu B^{\nu\mu} + B^\mu)] + (B^\mu - \bar B^\mu - \partial^\mu \phi) \partial_\mu \lambda
\nonumber\\
&-& \partial^\mu (B^\nu - \bar B^\nu - \partial^\nu \phi) (\partial_\mu C_\nu - \partial_\nu C_\mu).\end{aligned}$$ Thus, on the constrained surface (defined by (5)), the Lagrangian densities (6) and (7) are equivalent and both of them respect the (anti-)BRST symmetry invariances. The condition (5), however, has to be imposed from outside.
The following Euler-Lagrange equations of motion $$\begin{aligned}
B_\mu = - \frac{1}{2} (\partial^\nu B_{\nu\mu} - \partial_\mu \phi), \qquad
\bar B_\mu = - \frac{1}{2} (\partial^\nu B_{\nu\mu} + \partial_\mu \phi),\end{aligned}$$ from the above Lagrangian densities (6) and (7) imply that $$\begin{aligned}
&& \partial \cdot B = 0,\; \qquad \partial \cdot \bar B = 0,\; \qquad \Box \phi = 0, \nonumber\\
&& B_\mu - \bar B_\mu - \partial_\mu \phi = 0,\; \qquad
B_\mu + \bar B_\mu + \partial^\nu B_{\nu\mu} = 0.\end{aligned}$$ Thus, the analogue of the Curci-Ferrari restriction \[13\] of the non-Abelian 1-form gauge theory, is hidden in the above coupled Lagrangian densities in the form of the Euler-Lagrange equation of motion (cf. (11) [*vis-[à]{}-vis*]{} (5)).
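Indeed, subtracting and adding the two relations in (10) yields the last two equations of (11):

```latex
\begin{aligned}
B_\mu - \bar B_\mu &= - \tfrac{1}{2} \bigl( \partial^\nu B_{\nu\mu} - \partial_\mu \phi \bigr)
  + \tfrac{1}{2} \bigl( \partial^\nu B_{\nu\mu} + \partial_\mu \phi \bigr) = \partial_\mu \phi, \\
B_\mu + \bar B_\mu &= - \partial^\nu B_{\nu\mu},
\end{aligned}
```

and, since $\partial^\mu \partial^\nu B_{\nu\mu} = 0$ by the antisymmetry of $B_{\nu\mu}$, the divergences of (10) give $\partial \cdot B = \frac{1}{2} \Box \phi = - \partial \cdot \bar B$, which, combined with the $\phi$ equations of motion $\partial \cdot B = \partial \cdot \bar B = 0$ following from (6) and (7), implies $\Box \phi = 0$.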
To capture the above (anti-)BRST invariance in a simpler setting, it can be seen that the Lagrangian densities (6) and (7) can be re-expressed as the sum of the kinetic term and the BRST and anti-BRST exact forms, namely; $${\cal L}_B = \frac{1}{6} H^{\mu\nu\kappa} H_{\mu\nu\kappa} + s_b \Bigl [ - \bar C^\mu \bigl \{ (\partial^\nu B_{\nu\mu} - \partial_\mu \phi)
+ B_\mu \bigr \} + \bar \beta \bigl (\partial \cdot C - 2 \lambda \bigr ) \Bigr ],$$ $${\cal L}_{\bar B} = \frac{1}{6} H^{\mu\nu\kappa} H_{\mu\nu\kappa} +
s_{ab} \Bigl [ + C^\mu \bigl \{ (\partial^\nu B_{\nu\mu} + \partial_\mu \phi)
+ \bar B_\mu \bigr \} + \beta \bigl (\partial \cdot \bar C + 2 \rho \bigr ) \Bigr ].$$ The above equations provide a simple and straightforward proof for the nilpotent symmetry invariance of the Lagrangian densities (6) and (7) because of (i) the nilpotency (i.e. $ s_{(a)b}^2 = 0$) of the transformations $s_{(a)b}$, and (ii) the invariance of the curvature term (i.e. $s_{(a)b} H_{\mu\nu\kappa} = 0$) under $s_{(a)b}$.
It will be noted that the following interesting expressions[^5] $$\begin{aligned}
&&s_b s_{ab} \Bigl [ 2 \beta \bar\beta + \bar C_\mu C^\mu - \frac{1}{4}
B^{\mu\nu} B_{\mu\nu} \Bigr ] =
B^\mu (\partial^\nu B_{\nu\mu}) + B \cdot \bar B
+ \partial_\mu \bar\beta \partial^\mu \beta \nonumber\\
&& + (\partial_\mu \bar C_\nu - \partial_\nu \bar C_\mu) (\partial^\mu C^\nu) +
(\partial \cdot C - \lambda) \rho + (\partial \cdot \bar C + \rho) \lambda,\end{aligned}$$ $$\begin{aligned}
&&- s_{ab} s_b \Bigl [ 2 \beta \bar\beta + \bar C_\mu C^\mu - \frac{1}{4}
B^{\mu\nu} B_{\mu\nu} \Bigr ] =
\bar B^\mu (\partial^\nu B_{\nu\mu}) + B \cdot \bar B
+ \partial_\mu \bar\beta \partial^\mu \beta \nonumber\\
&& + (\partial_\mu \bar C_\nu - \partial_\nu \bar C_\mu) (\partial^\mu C^\nu) +
(\partial \cdot C - \lambda) \rho + (\partial \cdot \bar C + \rho) \lambda,\end{aligned}$$ allow us to express the Lagrangian densities (6) and (7) in yet other forms $${\cal L}_B = \frac{1}{6} H^{\mu\nu\kappa} H_{\mu\nu\kappa} +
s_b s_{ab} \Bigl [ 2 \beta \bar\beta + \bar C_\mu C^\mu - \frac{1}{4}
B^{\mu\nu} B_{\mu\nu} \Bigr ],$$ $${\cal L}_{\bar B} = \frac{1}{6} H^{\mu\nu\kappa} H_{\mu\nu\kappa}
- s_{ab} s_b \Bigl [ 2 \beta \bar\beta + \bar C_\mu C^\mu - \frac{1}{4}
B^{\mu\nu} B_{\mu\nu} \Bigr ],$$ where one has to make use of (5) (or (11)) to express $(B \cdot \bar B)$ either as ($B \cdot B - B^\mu \partial_\mu \phi$) or as ($\bar B \cdot \bar B + \bar B^\mu \partial_\mu \phi$). Once again, one can note the (anti-)BRST invariance of the Lagrangian densities (16) and (17) due to the nilpotency ($ s_{(a)b}^2 = 0$) and the invariance of the curvature term ($s_{(a)b} H_{\mu\nu\kappa} = 0$). It is worthwhile to mention that the Lagrangian densities in (1) and (4) cannot be recast into forms like those of equations (12), (13), (16) and (17). The central obstacle is the Lagrange multiplier term and the kinetic term for the massless scalar field $\phi$ (cf. (1) and (4)).
The following global transformations of the fields $$\begin{aligned}
&&B_{\mu\nu} \to B_{\mu\nu}, \qquad B_\mu \to B_\mu, \qquad \bar B_\mu \to \bar B_\mu,
\qquad \phi \to \phi, \nonumber\\
&& \beta \to e^{+ 2 \Omega} \beta,\; \qquad \bar \beta \to e^{- 2 \Omega} \bar\beta,\; \qquad
C_\mu \to e^{+ \Omega} C_\mu, \nonumber\\ && \bar C_\mu \to e^{- \Omega} \bar C_\mu,\; \qquad
\lambda \to e^{+ \Omega} \lambda,\; \qquad \rho \to e^{-\Omega} \rho,\end{aligned}$$ (where $\Omega$ is an infinitesimal global parameter) leave the Lagrangian densities (6) and (7) invariant. A close look at the above transformations shows that all the ghost terms of (6) and (7) remain invariant under the above transformations. The infinitesimal version of the above global ghost transformations $s_g$ (modulo parameter $\Omega$) is such that $s_g \beta = 2 \beta, s_g \bar \beta = - 2 \bar\beta,
s_g C_\mu = + C_\mu, s_g \bar C_\mu = - \bar C_\mu, s_g \lambda = + \lambda,
s_g \rho = - \rho$. The factors of $\pm 2$ and $\pm 1$, present in the exponentials of equation (18), correspond to the ghost numbers of the corresponding ghost fields which would play very significant roles in the next section where we shall compute some commutators with the ghost charge.
Generators of the Continuous Symmetry\
Transformations: BRST Algebra
======================================
The nilpotent (anti-)BRST symmetry transformations (3) and (2) and the infinitesimal version of the global transformations in (18) lead to the derivation of the Noether conserved currents. These are as follows $$\begin{aligned}
J^\mu_{(ab)} &=& \rho \bar B^\mu
-(\partial^\mu C^\nu - \partial^\nu C^\mu) \partial_\nu \bar \beta
- H^{\mu\nu\kappa} (\partial_\nu \bar C_\kappa - \partial_\kappa \bar C_\nu) \nonumber\\
&-& \lambda \partial^\mu \bar \beta - (\partial^\mu \bar C^\nu
- \partial^\nu \bar C^\mu) \bar B_\nu, \nonumber\\
J^\mu_{(b)} &=& (\partial^\mu \bar C^\nu - \partial^\nu \bar C^\mu) \partial_\nu \beta
- H^{\mu\nu\kappa} (\partial_\nu C_\kappa - \partial_\kappa C_\nu) \nonumber\\
&-& \rho \partial^\mu \beta - \lambda B^\mu - (\partial^\mu C^\nu - \partial^\nu C^\mu) B_\nu,
\nonumber\\
J^\mu_{(g)} &=& 2 \beta \partial^\mu \bar\beta - 2 \bar\beta \partial^\mu \beta
+ \lambda \bar C^\mu - \rho C^\mu \nonumber\\
&+& (\partial^\mu \bar C^\nu - \partial^\nu \bar C^\mu) C_\nu +
(\partial^\mu C^\nu - \partial^\nu C^\mu) \bar C_\nu.\end{aligned}$$ It is straightforward to check that the continuity equation $\partial_\mu J^\mu_{(i)} = 0$ (with $ i = b, ab, g$) is satisfied if we exploit the Euler-Lagrange equations of motion derived from the Lagrangian densities (6) and (7).
The above Noether conserved currents lead to the definition of the conserved and nilpotent ($Q^2_{(a)b} = 0$) (anti-)BRST charges ($Q_{(a)b} = \int d^3 x J^0_{(a)b}$) and the conserved ghost charge ($Q_g = \int d^3 x J^0_{(g)}$) as given below $$\begin{aligned}
Q_{ab} &=& {\displaystyle \int} d^3 x \Bigl [ \rho \bar B^0 - \lambda \partial^0 \bar \beta
- H^{0ij} (\partial_i \bar C_j - \partial_j \bar C_i) \nonumber\\
&-& (\partial^0 \bar C^i - \partial^i \bar C^0) \bar B_i -
(\partial^0 C^i - \partial^i C^0) \partial_i \bar \beta \Bigr ], \nonumber\\
Q_{b} &=& {\displaystyle \int} d^3 x \Bigl [
(\partial^0 \bar C^i - \partial^i \bar C^0) \partial_i \beta
- H^{0ij} (\partial_i C_j - \partial_j C_i) \nonumber\\
&-& (\partial^0 C^i - \partial^i C^0) B_i - \lambda B^0 - \rho \partial^0 \beta
\Bigr ], \nonumber\\
Q_{g} &=& {\displaystyle \int } d^3 x \Bigl [ 2 \beta \partial^0 \bar\beta - 2 \bar \beta
\partial^0 \beta + (\partial^0 \bar C^i - \partial^i \bar C^0) C_i \nonumber\\
&-& \rho C^0 + \lambda \bar C^0 + (\partial^0 C^i - \partial^i C^0) \bar C_i \Bigr ].
\end{aligned}$$ These conserved charges $Q_{(a)b}$ and $Q_g$ obey the following BRST algebra $$\begin{aligned}
&& Q_b^2 = \frac{1}{2} \{ Q_b, Q_b \} = 0,\; \qquad Q_{ab}^2 =
\frac{1}{2} \{ Q_{ab}, Q_{ab} \} = 0, \nonumber\\
&& Q_b Q_{ab} +
Q_{ab} Q_b \equiv \{Q_b, Q_{ab} \}= 0 \equiv \{ Q_{ab}, Q_b \}, \nonumber\\
&& i [Q_g, Q_b] = + Q_b,\; \qquad i [Q_g, Q_{ab}] = - Q_{ab}.\end{aligned}$$ The above algebra plays a key role in the cohomological description of the states of the quantum gauge theory in the quantum Hilbert space (QHS).
The algebra in (21) can be derived by exploiting the infinitesimal transformations $s_{(a)b}$ and $s_g$ and the expressions for $Q_{(a)b}$ and $Q_g$. These are $$\begin{aligned}
&& s_b Q_b = - i \{ Q_b, Q_b \} = 0,\; \qquad s_{ab} Q_{ab} = - i \{Q_{ab}, Q_{ab} \} = 0, \nonumber\\
&& s_b Q_{ab} = - i \{ Q_{ab}, Q_b \} = 0,\; \qquad
s_{ab} Q_b = - i \{ Q_b, Q_{ab} \} = 0, \nonumber\\ && s_{g} Q_{ab} = - i [Q_{ab}, Q_{g}] = - Q_{ab},
\qquad s_g Q_{b} = - i [ Q_{b}, Q_g ] = Q_b, \nonumber\\
&& s_{b} Q_{g} = - i [Q_{g}, Q_{b}] = - Q_{b},\;
\qquad s_{ab} Q_{g} = - i [ Q_{g}, Q_{ab} ] = Q_{ab}.\end{aligned}$$ In the above computations, the factors of $\pm 2$ and $\pm 1$ present in the ghost transformations (18) play a very crucial role. Furthermore, some of the above computations are non-trivial and algebraically involved. In particular, in the proof of $\{Q_b, Q_{ab} \} \equiv \{Q_{ab}, Q_b \} = 0$, one has to exploit the restriction (5) as well as the equations of motion.
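For completeness, we note the standard relation that underlies the computations in (22): the conserved charges act as generators of their corresponding transformations. Schematically, for a generic field $\Phi$ of the theory, $$s_r\,\Phi \;=\; -\,i\,\big[\Phi,\; Q_r\big]_{\pm}, \qquad r = b,\, ab,\, g,$$ where $[\,\cdot\,,\,\cdot\,]_{\pm}$ stands for the (anti)commutator, chosen according to the fermionic (bosonic) nature of the field $\Phi$. Each entry of (22) is obtained by substituting $\Phi = Q_b,\, Q_{ab},\, Q_g$ in this generic relation; this is a textbook statement of the BRST formalism rather than a result specific to our model.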
The physical states of the QHS are defined by the condition $Q_{(a)b} |phys> = 0$. This condition turns out to be consistent with Dirac's prescription for the quantization of theories with first-class constraints \[1,2\]. The detailed constraint analysis has been performed in our earlier work \[4\], where it has been shown that the constrained field equation (5) can be incorporated in the physicality condition $Q_{(a)b} |phys> = 0$ in a subtle manner (see, e.g., \[4\] for details). For our present Abelian 2-form gauge theory, the BRST and anti-BRST charges play their separate and independent roles, as has been established in \[4\] by performing a detailed constraint analysis of this theory.
Conclusions
===========
In our present investigation, we have concentrated on the appropriate Lagrangian densities of the 4D free Abelian 2-form gauge theory that (i) respect the off-shell nilpotent and absolutely anticommuting (anti-)BRST symmetry transformations that were derived in our earlier works \[3-5\], (ii) are free of a specific Lagrange multiplier 4D vector field which was introduced in our earlier endeavours to incorporate the analogue of the CF type restriction \[3-5\], (iii) are endowed with terms that carry the standard meaning of quantum field theory[^6], and (iv) can be generalized so as to prove that the present 4D theory is a field theoretic model for the Hodge theory \[14,15\].
It is pertinent to point out that the Lagrangian densities in (6) and (7) can be recast into different simple and beautiful forms as is evident from equations (12), (13), (16) and (17). This should be contrasted, however, with the Lagrangian densities (1) and (4) which cannot be recast into the above beautiful forms because of (i) the Lagrange multiplier term (i.e. $L^\mu (B_\mu - \bar B_\mu - \partial_\mu \phi)$), and (ii) the kinetic term for the massless scalar field (i.e. $- (1/2) \partial_\mu \phi \partial^\mu \phi$). Thus, it is clear that the Lagrangian densities (6) and (7), that respect the same symmetry transformations as (1) and (4), are more appealing and more economical than their counterparts in (1) and (4).
The anticommutativity property of the nilpotent (anti-)BRST symmetry transformations owes its origin to the analogue of the CF condition (cf. (5), (11)) which describes a constrained surface on the 4D spacetime manifold. The key insight, for the existence of this relation, comes from the superfield approach to BRST formalism in the context of our present theory \[5\]. It is very interesting to note that, despite our present 4D gauge theory being an [*Abelian*]{} 2-form gauge theory, an analogue of the CF condition (which is the hallmark of a [*non-Abelian*]{} 1-form gauge theory) exists for the sanctity of the anticommutativity property of the (anti-)BRST symmetry transformations. Recently, we have been able to show the time-evolution invariance of this restriction in the Hamiltonian formalism \[16\].
There are a few relevant points that have to be emphasized. First, unlike the non-Abelian 1-form theory \[13\], the above CF type restriction does not connect the auxiliary vector fields $B_\mu$ and $\bar B_\mu$ with any kind of (anti-)ghost fields of the theory. Rather, the above condition (5) is a relationship between the auxiliary fields and the scalar field of the theory, which are all bosonic in nature. Second, the analogue of the CF restriction present in our Abelian 2-form gauge theory has been shown \[3\] to have a deep connection with the concept of gerbes. These geometrical objects are, at the moment, one of the very active areas of research in theoretical high energy physics. Finally, it would be nice to establish a connection between the above fermionic (anti-)BRST charges and the twisted supercharges of the extended supersymmetry algebra. We plan to pursue the above cited issues further in our future investigations in the realm of 2-form and higher-form (non-)Abelian gauge theories \[17\].
[**Acknowledgements:**]{} Financial support from DST, Government of India, under the SERC project: SR/S2/HEP-23/2006, is gratefully acknowledged.
[99]{}
P. A. M. Dirac, [*Lectures on Quantum Mechanics (Belfer Graduate School of Science)*]{} (Yeshiva University Press, New York, 1964).
See, for a review, K. Sundermeyer, [*Constrained Dynamics: Lecture Notes in Physics*]{}, Vol. 169 (Springer-Verlag, Berlin, 1982).
L. Bonora and R. P. Malik, [*Phys. Lett.*]{} B [**655**]{}, 75 (2007),\
arXiv: hep-th/0707.3922.
R. P. Malik, [*Eur. Phys. J.*]{} C [**55**]{}, 687 (2008), arXiv: hep-th/ 0802.4129.
R. P. Malik, [*Eur. Phys. J.*]{} C [**60**]{}, 457 (2009), arXiv: hep-th/0702039.
See, e.g., V. I. Ogievetsky and I. V. Palubarinov, [*Yad. Fiz.*]{} [**4**]{}, 216 (1966).
See, e.g., V. I. Ogievetsky and I. V. Palubarinov, [*Sov. J. Nucl. Phys.*]{} [**4**]{}, 156 (1967).
See, e.g., E. Harikumar, R. P. Malik and M. Sivakumar, [*J. Phys. A: Math. Gen.*]{} [**33**]{}, 7149 (2000), arXiv: hep-th/0004145.
R. P. Malik, [*J. Phys. A: Math. Gen.*]{} [**36**]{}, 5095 (2003),\
arXiv: hep-th/0209136.
See, e.g., H. Hata, T. Kugo and N. Ohta, [*Nucl. Phys.*]{} B [**178**]{}, 527 (1981).
See, e.g., T. Kimura, [*Prog. Theor. Phys.*]{} [**64**]{}, 357 (1980).
See, e.g., L. Bonora and M. Tonin, [*Phys. Lett.*]{} B [**98**]{}, 48 (1981).
G. Curci and R. Ferrari, [*Phys. Lett.*]{} B [**63**]{}, 91 (1976).
R. P. Malik, [*Eur. Phys. Lett.*]{} (EPL) [**84**]{}, 31001 (2008),\
arXiv: hep-th/0805.4470.
Saurabh Gupta and R. P. Malik, [*Eur. Phys. J.*]{} C [**58**]{}, 517 (2008),\
arXiv: hep-th/0807.2306.
R. P. Malik, B. P. Mandal and S. K. Rai, [*Int. J. Mod. Phys.*]{} A [**24**]{}, 6157 (2009), arXiv: hep-th/0901.1433.
S. Gupta and R. P. Malik, in preparation.
[^1]: We follow here the standard notations and conventions adopted in our recent works on 4D free Abelian 2-form gauge theory within the framework of BRST formalism \[3-5\].
[^2]: This feature is exactly like the discussion of the absolutely anticommuting (anti-)BRST symmetry transformations in the context of the 4D non-Abelian 1-form gauge theory.
[^3]: We choose the 4D spacetime metric $\eta_{\mu\nu}$ with the signatures $(+ 1, - 1, -1, -1)$ so that $P \cdot Q = \eta_{\mu\nu} P^\mu Q^\nu = P_0 Q_0 - P_i Q_i$ is the dot product between non-null four vectors $P_\mu$ and $Q_\mu$. Here $\mu, \nu, \kappa, \sigma...= 0 , 1, 2, 3$ and $i, j, k.... = 1, 2, 3$. We also adopt, in the whole body of our text, the field differentiation convention: $(\delta B_{\mu\nu}/\delta B_{\kappa\sigma})
= \frac{1}{2!} (\delta_{\mu\kappa} \delta_{\nu\sigma}
- \delta_{\mu\sigma} \delta_{\nu\kappa})$, etc.
[^4]: This restriction comes out from our previous work \[5\] that is devoted to the discussion of the free 4D Abelian 2-form gauge theory within the framework of superfield formalism.
[^5]: These relations are similar to the case of non-Abelian 1-form gauge theory where the CF restriction is [*not*]{} explicitly incorporated in the Lagrangian densities (see, e.g., \[12\]).
[^6]: It will be noted that, in our previous attempts \[3-5\], the kinetic term for the massless scalar field turned out to possess a negative sign due to the constraint field equation.
---
abstract: 'The focus of this paper is to quantify measures of aggregate fluctuations for a class of consensus-seeking multi-agent networks subject to exogenous noise with $\alpha$-stable distributions. This type of noise is generated by a class of random measures with heavy-tailed probability distributions. We define a cumulative scale parameter using scale parameters of probability distributions of the output variables, as a measure of aggregate fluctuation. Although this class of measures can be characterized implicitly in closed-form in steady-state, finding their explicit forms in terms of network parameters is, in general, almost impossible. We obtain several tractable upper bounds in terms of Laplacian spectrum and statistics of the input noise. Our results suggest that relying on Gaussian-based optimal design algorithms will result in non-optimal solutions for networks that are driven by non-Gaussian noise inputs with $\alpha$-stable distributions.'
author:
- 'Christoforos Somarakis and Nader Motee [^1]'
bibliography:
- 'bibliography.bib'
title: 'Aggregate Fluctuations in Networks with Drift-Diffusion Models Driven by Stable Non-Gaussian Disturbances'
---
Consensus, Heavy-tailed noise, Performance Measures, $\alpha$-stable processes, $p$-norm.
Introduction
============
he level of complexity in modern real-world networks can make them vulnerable to environmental or structural disturbances, often with severe, if not catastrophic, consequences. Recent crises in various sectors of our society show specific frailties of dynamical networks due to weaknesses in their structures, e.g., the air traffic congestion problem [@craig88], power outages [@poweroutage03], the financial crisis of 2008 (see Ch. 17-18 in [@fouque13]) and other major disruptions.
Thus the problem of performance and robustness in high dimensional networks is pivotal in designing inter-dependent systems that withstand negative effects of disturbances. Application areas include, but are not limited to, co-operative control of multi-agent systems, collaborative autonomy, transportation networks, power networks, metabolic pathways and financial networks (see for example [@6248170; @Tahbaz13] and references therein).
Standard models of uncertainty in dynamic processes assume underlying probability distributions with well-defined first and second moments. A particularly prominent example is that of white noise, where the underlying distribution is Gaussian. Its main advantage is the classical theory of stochastic differential equations (SDE) [@arnold74], which provides clean and tractable results. This enables engineers to leverage Gaussian-induced sources of perturbation on networked control systems and to design optimal structures that mitigate undesirable noise-related effects [@Bamieh12; @yaserecc16; @Siami16TACa; @somyasnader17].
Despite their elegant formulation, systems perturbed by purely Gaussian disturbances have attracted considerable criticism. The primary dispute rests on the claim that Gaussian approximations fail to model real-world uncertainty, which occasionally exhibits susceptibility to large and unexpected events [@taleb2010black; @mandelbrot2010mis; @schoutens2003levy].
Shortcomings of Gaussian Assumptions
------------------------------------
Systems perturbed by white noise generate stochastic processes that fluctuate around the expected (unperturbed) value in an amorphous yet highly regularized manner. The resulting dynamics essentially preserve the Gaussian nature of the perturbations, along with their light-tailed property. Thus, there is no reasonable possibility for abrupt and outlying values (in other words, faithful signs of large and unexpected fluctuations, or shocks) to emerge in the network. As explained in [@taleb2010black], shock events are ubiquitous in real-world situations. Furthermore, they are identified as such if they lie outside the realm of regular expectations, carry an extreme impact, and have a non-negligible likelihood of happening. It is precisely the light-tailed property of Gaussian measures that hinders the realistic possibility of shocks. To gain a better understanding, the qualitative difference between a solution trajectory perturbed by light-tailed noise and one perturbed by heavy-tailed noise is illustrated in Figure \[fig: orbit\].
Related Literature & Contributions
----------------------------------
Mathematical models driven by non-Gaussian and heavy-tailed disturbances have been proposed in various disciplines [@shoutens05; @duan15]. To the best of our knowledge, the control community lacks relevant studies and results, with the exception of [@7084617].
In this paper, we develop a theoretical framework for heavy-tailed consensus-seeking networks. These are drift-diffusion stochastic differential equations driven by $\alpha$-stable noise. The drift (deterministic) part is chosen to be an average consensus protocol. This is the standard control law for asymptotic agreement among agents, and it enjoys lasting interest in problems of cooperative dynamics, formation control, distributed computation and optimization [@Mesbahi_Egerstedt_2010]. The diffusion part consists of additive $\alpha$-stable random measures that model noise as an exogenous disturbance on the unperturbed (drift) dynamics. The objective of our work is to lay the groundwork for the analysis of this class of systems. It is found that the systemic (i.e., network-wide) response to this class of noise is quantitatively and qualitatively different from Gaussian-induced systemic fluctuations. Furthermore, we highlight the intricate interplay between network topology and noise as a means to understand the manner in which shocks propagate through the network, affecting its ability to remain in equilibrium. In particular, we introduce a measure that quantifies aggregate fluctuation for networks driven by $\alpha$-stable noise. We derive an implicit formulation of this metric for system outputs and explore its basic properties. Moreover, we highlight its connection with the $\mathcal H_2$-based performance measure for linear systems with white-noise inputs [@Siami16TACa], as well as with other $p$-metrics. An explicit expression of the systemic performance metric appears, in general, to be impossible. Exceptions are communication topologies with uniform, all-to-all connectivity, or purely Gaussian noise perturbations. For this reason we obtain tractable bounds on the performance metric, which we believe to be useful in network design problems. Numerical examples are discussed and validate our theoretical results.
We suggest that $\mathcal H_2$-based design algorithms are not only incompatible with the case of heavy-tailed disturbances, but also deliver sub-optimal topologies. It is acknowledged that the present work is an outgrowth of [@som_acc2018]. This version considers more general (i.e., not necessarily symmetric) $\alpha$-stable random measures, and it includes detailed proofs of technical results.
![Simulation of output dynamics of system for $n=7$ agents, in the face of white ($\alpha=2$) and heavy-tailed ($\alpha<2$) noise inputs. The latter type of perturbations results in dynamics with jumps that represent, more realistically, the effect of shocks on the nominal process.[]{data-label="fig: orbit"}](orbit-eps-converted-to.pdf)
Preliminaries {#section: prelim}
=============
By $\mathbb R^n$ we denote the $n$-dimensional Euclidean space, with elements $x=\big[x^{(1)},\dots,x^{(n)}\big]^T \in \mathbb R^n$. For any $x\in \mathbb R^n$, $\|x\|_p:=\sqrt[p]{\sum_{j}|x^{(j)}|^p}$, for $p>0$. The fundamental property on the equivalence of norms in $\mathbb R^n$: $$\| x\|_p \leq \| x\|_r \leq n^{\frac{1}{r}-\frac{1}{p}}\|x \|_p~~~\text{for}~p>r>0.$$
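The norm-equivalence bound above is elementary but easy to get wrong for $p<1$; a minimal numerical sanity check (all function names are ours, for illustration only, not part of the paper) can be sketched in Python:

```python
import random

def p_norm(x, p):
    """||x||_p = (sum_j |x^(j)|^p)^(1/p); a norm for p >= 1, a quasi-norm for 0 < p < 1."""
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

def check_norm_equivalence(x, p, r):
    """Verify ||x||_p <= ||x||_r <= n^(1/r - 1/p) ||x||_p for p > r > 0."""
    n = len(x)
    lo, mid = p_norm(x, p), p_norm(x, r)
    hi = n ** (1.0 / r - 1.0 / p) * p_norm(x, p)
    return lo <= mid + 1e-12 and mid <= hi + 1e-12

# Spot-check the inequality on random vectors of varying dimension.
rng = random.Random(0)
for _ in range(200):
    x = [rng.uniform(-5.0, 5.0) for _ in range(rng.randint(1, 8))]
    assert check_norm_equivalence(x, p=2.0, r=1.0)
    assert check_norm_equivalence(x, p=3.0, r=0.5)
```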
Given a probability space $(\Omega,\mathcal F, \mathbb P)$ we say that a random variable $z(\omega): \Omega\rightarrow \mathbb R$ follows a stable distribution, and we write $$z\sim S_{\alpha}(\sigma,\beta,\mu),$$ if there exist parameters $0<\alpha\leq 2$, $\sigma\geq 0$, $-1\leq \beta \leq 1$ and $\mu \in \mathbb R$, such that its characteristic function is of the form: $$\phi_z(\theta)=\mathbb E\big[e^{i\theta z}\big]=\text{exp}\big\{\sigma^\alpha\big (-|\theta|^\alpha+i\theta \omega(\theta,\alpha,\beta)\big)+i\mu \theta \big\}$$ where $\omega(\theta,\alpha,\beta)$ stands for the function $$\omega(\theta,\alpha,\beta)=\begin{cases}
\beta |\theta|^{\alpha-1} \tan\frac{\pi \alpha}{2}, ~ \hspace{0.2in} \alpha\neq 1\\
-\beta \frac{2}{\pi} \ln |\theta| , ~ \hspace{0.46in} \alpha = 1.
\end{cases}$$ The parameter $\alpha$ is called the stability index of the distribution; it characterizes the impulsiveness (i.e., the frequency and magnitude) of the shocks. The parameter $\sigma$ is the scale of the distribution and is closely related to the standard deviation: the larger the scale parameter, the more spread out the distribution becomes. The parameter $\beta$ is the skewness of the distribution, an indicator of asymmetry. Finally, $\mu$ is the shift of the distribution and plays the role of the mean value[^2]. The following results summarize basic properties of stable random variables. They are drawn from [@samorodnitsky1994stable] and stated below as Propositions \[prop: propstrv\], \[prop: stableindep\] and \[prop: stableintegralprop\] to enhance readability and keep our manuscript self-contained.
\[prop: propstrv\] Let $z\sim S_{\alpha}(\sigma,\beta,\mu)$. It holds that:
[1.]{} For any $a\in \mathbb R$, $z+a\sim S_{\alpha}(\sigma,\beta,\mu+a)$.
[2.]{} For any $a\neq 0$ $$az\sim \begin{cases}S_{\alpha}\big(|a|\sigma,\text{sgn}(a)\beta,a\mu\big), & \alpha\neq 1 \\
S_{\alpha}\big(|a|\sigma,\text{sgn}(a)\beta,a\mu-2a\ln(|a|)\sigma \beta\big), & \alpha=1
\end{cases}$$
[3.]{} If $\alpha<2$ $$\mathbb E\big[|z|^p\big]\begin{cases}
<\infty, &~\text{for}~0<p<\alpha\\
=\infty, &~\text{for}~p\geq \alpha\\
\end{cases}$$ In addition, if $\mu=0$, and $\beta=0$ only if $\alpha=1$, it holds $$\mathbb E[|z|^p]=c^p\sigma^p,$$ where $c=c(\alpha,\beta,p)=\big(\mathbb E[|z_0|^{p}]\big)^{\frac{1}{p}}$ for $z_0\sim S_{\alpha}(1,\beta,0)$.
A closed form expression of the constant $c$ is reported in [@samorodnitsky1994stable]. Its most remarkable property is its limiting behavior as $p\rightarrow \alpha^-$: $$\lim_{p\rightarrow \alpha^-}c(\alpha,\beta,p)=\begin{cases}
+\infty, & \alpha <2\\
\sqrt{2}, & \alpha=2
\end{cases} \hspace{0.2in}~\text{for all}~\beta \in [-1,1].$$
\[prop: stableindep\] Let $z_i\sim S_{\alpha}(\sigma_i,\beta_i,\mu_i),~i=1,2$ be independent. Then $$z_1+z_2\sim S_{\alpha}\bigg((\sigma_1^\alpha+\sigma_2^\alpha\big)^{\frac{1}{\alpha}},\frac{\beta_1\sigma_1^\alpha+\beta_2\sigma_2^\alpha}{\sigma_1^\alpha+\sigma_2^\alpha},\mu_1+\mu_2\bigg).$$
The random variable $z\sim S_{\alpha}(\sigma,0,0)$ is called symmetric $\alpha$-stable, for which we write $z\sim S\alpha S$. Its characteristic function takes the form $$\phi_z(\theta)=e^{-\sigma^\alpha|\theta|^{\alpha}}.$$ A finite collection of $\alpha$-stable random variables $z_i\sim S_{\alpha}(\sigma_i,\beta_i,\mu_i),~i=1,\dots, d$ can form an $\alpha$-stable vector $z=\big[z^{(1)},\dots, z^{(d)}\big]^T$.
A scalar-valued stochastic process $\{z_t,~t\in [0,\infty)\}$ is stable if all its finite-dimensional distributions are stable. A canonical example is the $\alpha$-stable Lévy process $z=\{z_t,~t\geq 0\}$ with the properties:
\[1.\] $z(0)=0$ a.s.
\[2.\] $z$ attains independent increments
\[3.\] $z_t-z_s\sim S_{\alpha}\big( (t-s)^{1/\alpha},\beta,0\big)$, for $0\leq s<t<\infty$.
A vector valued $\alpha$-stable random process $z=\{z_t\}_t$ with $z_t=\big[z_t^{(1)},\dots, z_t^{(d)}\big]^T$, $t\geq 0$ is a family of $\alpha$-stable vectors parametrized by $t$.
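Property 3 above suggests a direct simulation recipe: increments over a step of length $dt$ are scaled draws of a standard stable variate. The sketch below (function names are ours) uses the classical Chambers-Mallows-Stuck sampler for the symmetric case $\beta=0$, $\alpha\neq 1$, which is one standard construction but not the only one:

```python
import math
import random

def sas_sample(alpha, rng):
    """One standard symmetric alpha-stable draw, S_alpha(1, 0, 0), via the
    Chambers-Mallows-Stuck construction (beta = 0; valid for alpha != 1).
    For alpha = 2 this reduces to 2 sin(V) sqrt(W) ~ N(0, 2) = S_2(1, 0, 0)."""
    V = rng.uniform(-math.pi / 2.0, math.pi / 2.0)  # uniform angle on (-pi/2, pi/2)
    W = rng.expovariate(1.0)                        # unit-mean exponential
    return (math.sin(alpha * V) / math.cos(V) ** (1.0 / alpha)
            * (math.cos(V - alpha * V) / W) ** ((1.0 - alpha) / alpha))

def levy_path(alpha, n_steps, dt, rng):
    """Sample path of a symmetric alpha-stable Levy motion z on a uniform grid:
    z_0 = 0, independent increments, z_t - z_s ~ S_alpha((t-s)^{1/alpha}, 0, 0)."""
    scale = dt ** (1.0 / alpha)  # property 3: increment scale over a step of length dt
    z, path = 0.0, [0.0]
    for _ in range(n_steps):
        z += scale * sas_sample(alpha, rng)
        path.append(z)
    return path
```

For $\alpha$ close to 2 the path looks diffusive; for small $\alpha$ it exhibits the jumps discussed around Figure \[fig: orbit\].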
*Stable Integrals*. The building blocks of stable integrals are random measures. Let $(\Omega, \mathcal F, P)$ be a probability space and $L^0(\Omega)$ the set of all real random variables defined on it. Let also $(B, \mathcal B , m)$ be a measure space. Take a measurable function $\beta: B\rightarrow [-1,1]$ and let $\mathcal B_{0}\subset \mathcal B$ denote the collection of sets of finite $m$-measure.
A set function $M: \mathcal B_0\rightarrow L^{0}(\Omega)$ is a random measure, if it satisfies the following properties:
I. It is independently scattered, i.e. for any finite collection of disjoint sets $A_1,\dots,A_k \in \mathcal B_0$, the random variables $M(A_1),\dots,M(A_k)$ are independent.
II\. It is $\sigma$-additive on $\mathcal B_0$.

III\. For every $A\in \mathcal B_0$, $$M(A)\sim S_\alpha\bigg((m(A))^{1/\alpha},\frac{\int_A \beta(y)\, m(dy)}{m(A)},0\bigg)$$
The next example establishes an intimate connection between random measures and stable processes.
\[exmpl: ex1\] Let $M$ be an $\alpha$-stable random measure on $\big( [0,\infty), \mathcal B \big)$ with $m(dx)=\frac{1}{\alpha}dx$ and constant skewness density $\beta$, $0\leq x < \infty$. The process $Z=\{ Z_t, t\geq 0\}$ defined through $Z_t=M([0,t]),~0\leq t<\infty$ is an $\alpha$-stable Lévy motion.
The stable integral, defined as $$I(f):=\int_{B}f(y)M(dy),$$ is taken over integrands that are members of $$\label{eq: spacef}\begin{split}
F_\alpha =\bigg\{ f\in \mathcal B: \int_B |f(y)|^{\alpha}\,m(dy)<\infty\bigg\}.
\end{split}$$
\[prop: stableintegralprop\]The integral $I(f)$ has the following properties:
[1.]{} $I(f)\sim S_{\alpha}(\sigma_f,\beta_f,\mu_f)$ with $ \sigma_f^{\alpha}=\int_B |f(x)|^\alpha\, m(dx)$,\
$\beta_f=\frac{\beta}{\sigma_f^{\alpha}}\int_B f(x)^{<\alpha>}\,m(dx)$, $$\mu_f=\begin{cases} 0, & \alpha \neq 1 \\
-\frac{2}{\pi}\beta\int_{B}f(x)\ln |f(x)|\,m(dx), & \alpha =1.
\end{cases}$$ The notation $q^{<\alpha>}$ stands for $$q^{<\alpha>}=\begin{cases}
|q|^\alpha &~\text{if}~q>0, \\
-|q|^{\alpha} & ~\text{if}~q<0.
\end{cases}$$ [2.]{} $I(a_1f_1+a_2f_2)=a_1I(f_1)+a_2I(f_2)$, for any $f_1,~f_2\in F_\alpha$, and constants $a_1,~a_2\in \mathbb R$.
\[exmp: stableintegral\] Let $M$ be the $\alpha$-stable random measure of Example \[exmpl: ex1\]. Then $f(x)=e^{-\lambda x}$, $\lambda>0$, clearly belongs to $F_\alpha$, $\alpha\in (0,2]$. For fixed $t>0$, the integral $$\int_0^{t}e^{-\lambda(t-s)}\,M(ds)~ \sim ~ S_\alpha(\sigma,\beta_f,\mu_f)$$ defines a stable random variable with $\sigma^\alpha=\frac{1-e^{-\alpha \lambda t}}{\alpha^2 \lambda}$, $\beta_f=\beta$, and $\mu_f=-\frac{2}{\pi}\beta\big[te^{-\lambda t}-\frac{1}{\lambda}(1-e^{-\lambda t} )\big]$ if $\alpha=1$ and $\mu_f=0$ otherwise. As $t \rightarrow \infty$, the stable integral converges, in distribution, to $S_\alpha\big((\alpha^2\lambda)^{-1/\alpha},\beta,0\big)$ for $\alpha\neq 1$, or to $S_1\big(\frac{1}{\lambda},\beta,\frac{2\beta}{\pi\lambda}\big)$ for $\alpha=1$.
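The closed form for $\sigma^\alpha$ in the example can be cross-checked against direct quadrature of $\frac{1}{\alpha}\int_0^t e^{-\alpha\lambda s}\,ds$; a minimal sketch (helper names are ours):

```python
import math

def sigma_alpha_closed(alpha, lam, t):
    """sigma^alpha of int_0^t e^{-lam(t-s)} M(ds) with m(ds) = ds/alpha, as in the
    example: (1 - e^{-alpha lam t}) / (alpha^2 lam)."""
    return (1.0 - math.exp(-alpha * lam * t)) / (alpha ** 2 * lam)

def sigma_alpha_numeric(alpha, lam, t, n=100000):
    """Trapezoidal quadrature of (1/alpha) int_0^t e^{-alpha lam s} ds."""
    h = t / n
    f = lambda s: math.exp(-alpha * lam * s) / alpha
    return h * (0.5 * f(0.0) + sum(f(k * h) for k in range(1, n)) + 0.5 * f(t))
```

As $t$ grows, the closed form approaches the limiting value $1/(\alpha^2\lambda)$ stated in the example.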
[*Algebraic Graph Theory.*]{} The vector of all ones is denoted by $\mathbf {1}$ and the $n \times n$ centering matrix is $$M_n := I_n - \frac{1}{n} \mathbf 1 \mathbf 1^T.$$ An undirected weighted graph $\mathcal{G}$ is defined by the triple $\mathcal{G}=(\mathcal V,\mathcal {E},w)$, where $\mathcal V$ is the set of nodes of ${{\mathcal{G}}}$, $\mathcal{E}$ is the set of links of the graph, and $w: \mathcal{E} \rightarrow \mathbb{R}_{+}$ is the weight function that maps each link to a non-negative scalar $a_{ij}$. The matrix $L=[l_{ij}]$ with $$l_{ij}=\begin{cases} -a_{ij}, & i\neq j \\
\sum_{j=1}^{n}a_{ij}, & i= j \end{cases}$$ is the Laplacian matrix of ${{\mathcal{G}}}$. The following condition holds true throughout the paper.
\[assum0\] The coupling graphs of all networks considered in this paper are simple, undirected, and connected.
A number of important consequences immediately follow. First, $a_{ij}=a_{ji}$ for all $i,j\in \mathcal V$, which makes $L$ symmetric. Its eigenvalues are therefore real and can be ordered as $$0=\lambda_1 <\lambda_2 \leq\dots\leq \lambda_{n}.$$ Furthermore, $L$ can be represented as $L=Q \Lambda Q^T $, where $\Lambda=\text{diag}(\lambda_1,\dots,\lambda_n)$ and $Q=[q_1~|~\dots~|~q_n]$ is the matrix whose $i^{th}$ column is the eigenvector associated with the eigenvalue $\lambda_i$ of $L$. Finally, $\{q_i\}_{i\in [n]}$ can be chosen to satisfy $$q_i^T q_j=\left\{\begin{array}{ccc}
1 & \textrm{if} & i=j \\
0 & \textrm{if} & i\neq j.
\end{array}\right.$$ Under this normalization condition, the eigenvector of the smallest eigenvalue $\lambda_1=0$, takes the form $q_1=\frac{1}{\sqrt{n}} \mathbf 1$. For the sake of convenience, we define below a few graph laplacian related functions: $$\begin{aligned}
f_{ij}(t)&=&\sum_{k=2}^n q_{ik}q_{jk}e^{-\lambda_k t} \label{eq: fij} \\
g(t) &=&\sum_{k=2}^n e^{-\lambda_k t} \label{eq: zeta}\\
G_{\alpha} &=& \int_0^{\infty}g^\alpha(s)\,ds \label{eq: gfunction}\end{aligned}$$ where $\lambda_k$ is the $k^{th}$ eigenvalue of $L=Q\Lambda Q^T$, $q_{ij}$ is the $(i,j)$ element of $Q$, and $\alpha\in (0,2]$. Note that $f_{ij}$ clearly belongs to $F_{\alpha}$. In addition, $|q_{ij}|\in [0,1]$ implies $|f_{ij}(t)|\leq g(t)$. Additionally, we define $$\label{eq: kappaalpha}\begin{split}
\Lambda_{\alpha,p}^{(k)}=\Gamma^\frac{1}{p}(\alpha+1)\bigg[\sum_{m=2}^{k-1}&\frac{(\lambda_k-\lambda_m)^{\frac{\alpha}{p}}}{(\alpha \lambda_m)^{\frac{\alpha+1}{p}}}+\sum_{m=k+1}^{n}\frac{\big(\lambda_m-\lambda_k\big)^\frac{\alpha}{p}}{(\alpha\lambda_k)^{\frac{\alpha+1}{p}}}\bigg].
\end{split}$$ where $\Gamma(z)$ stands for the Gamma function. With a little abuse of notation, we define $$\label{eq: kappaalphasum}\Lambda_{\alpha,p}=\sum_{k=2}^n \Lambda_{\alpha,p}^{(k)}.$$
Problem Statement
=================
Consider a collection of $n$ autonomous agents, labeled $1,\dots,n$, each described by a scalar state $x^{(i)}\in \mathbb R$, $i=1,\dots,n$. The agents execute a consensus algorithm over a network with symmetric couplings in order to align their states. This alignment process is perturbed by $n$ noise sources driven by stable random motions. Each source is attached to node $i$ and acts independently of the other sources. This setting leads to the following system of stochastic differential equations: $$\label{eq: model}
dx_t=-L\,x_t\,dt+dz_t, \hspace{0.2in} t>0$$ where $x_t=\big[x_t^{(1)},\dots,x^{(n)}_{t}\big]^T$ is the state vector, $L$ is the graph laplacian matrix that satisfies Assumption \[assum0\]. Evidently, $dz_t=M(dt)$ is a multi-dimensional stable process under the next condition:
\[assum: noise\] $dz_t=\big[M_1(dt),\dots,M_n(dt)\big]^T$ is a vector of $n$ independent random measures. For every $i=1,\dots,n$, the measure $M_i(dt)$ is defined on the measure space $([0,\infty),\mathcal B\big([0,\infty)\big),|\cdot|\big)$ such that $$M_i(t-s)\sim S_{\alpha}\big(|t-s|^{1/\alpha},\beta_i,0\big),~~\beta_i\in [-1,1]$$ is a random measure.
The initial vector in system , $x_0=\big[x_0^{(1)},\dots,x_0^{(n)}\big]^T$, is arbitrary but fixed and it is chosen independently of $dz_t$. System is the differential form of a multi-dimensional generalized Ornstein-Uhlenbeck process, with integral representation $$\label{eq: integralmodel}
x_t=e^{-Lt}x_0+\int_{0}^t e^{-L(t-s)}dz_s$$ Processes of this type have been studied in the past (see for example [@sato1983] and [@CIS-462223]) for $dz_s$ a generic stable measure and $-L$ being Hurwitz (i.e. $\lim_{t\rightarrow +\infty} e^{-L t}= O_{n\times n}$ ).
The first objective of this paper is to study the fundamental properties of the solution of , define concepts of performance for , and calculate them explicitly, whenever possible. Otherwise we obtain faithful approximations and validate their efficiency.
Output Signal Statistics
=========================
Unlike the models discussed in [@sato1983] and [@CIS-462223], $-L$ in is not Hurwitz. The interest in the study of consensus-seeking systems is in observables that measure various types of state differences. For example, we are interested in the relative agent displacement (i.e., $x^{(i)}-x^{(j)}$), or in the agents’ deviation from the network average $\big($i.e., $x^{(i)}-\frac{1}{n}\sum_{j=1}^n x^{(j)}\big)$. For the latter case, stacking all the elements $i=1,\dots,n$ yields $$\label{eq: output}y=M_n x$$ where $M_n=I_n-\frac{1}{n}\mathbf 1\mathbf 1^T$ is the centering matrix. Applying this transformation renders the marginal (zero) eigenvalue unobservable, so that the noise-free output is asymptotically stable. Moreover, the noisy output process $y=\{ y_t = M_n\,x_t, t\geq 0\}$ enjoys a number of remarkable properties, summarized below.
\[prop: ydist\] Under Assumptions \[assum0\] and \[assum: noise\], the process $y=\{y_t,~t\geq 0\}$ in generated by $x=\{x_t,~t\geq 0\}$ to be the realization of , satisfies: $$\label{eq: outputdynamics}
y_t=Q \Phi(t) Q^T y_0+\int_{0}^{t}Q \Phi(t-s) Q^T dz_s,$$ where $$\Phi(t)=\text{Diag}\big[0,e^{-\lambda_2 t},\dots, e^{-\lambda_n t}\big]$$ and $\{\lambda_i\}_{i=2}^n$ the eigenvalues of $L$. For every fixed $t$, $y_t$ is a stable vector, with the $i^{th}$ element $y_t^{(i)}$ a stable random variable with $t$-dependent distribution parameters. As $t\rightarrow \infty$, the $l^{th}$ element of $\overline{y}=\lim_{t} y_t$, is distributed as $$\overline{y}^{(l)}~\sim~S_{\alpha}\big(\sigma_l,\beta_l,\mu_l\big)$$ where $$\label{eq: stableparameters}
\begin{split}
\sigma_{l}^\alpha&=\sum_{j=1}^n\sigma_{lj}^\alpha\\
\beta_{l}&=\frac{1}{\alpha}\frac{\sum_{j}\beta_j \int_0^{\infty}f_{lj}(s)^{<\alpha>}\,ds}{\sigma_l^\alpha}\\
\mu_l&=\begin{cases}
0, & \alpha\neq 1\\
-\frac{2}{\pi}\sum_{j}\beta_j\int_{0}^\infty f_{lj}(s)\ln |f_{lj}(s) |\,ds , & \alpha =1
\end{cases}
\end{split}$$ with $$\label{eq: sigmaij}
\sigma_{lj}^\alpha=\frac{1}{\alpha}\int_0^{\infty}|f_{lj}(s)|^\alpha\,ds.$$
For the first part of the proof we observe that $M_n$ can be expressed as $M_n=Q E Q^T$, where $Q$ is the eigenvector matrix of $L$, and $E$ the $n\times n$ diagonal matrix with structure $E=\text{Diag}[0,1,\dots, 1]$. For $y_t=M_n x_t$, we have $$\begin{split}
y_t&= QEQ^T Q e^{-\Lambda t} Q^T x_0+QEQ^T \int_{0}^t Q e^{-\Lambda(t-s)}Q^T dz_s\\
&= Q \Phi(t) Q^T x_0+\int_{0}^t Q\Phi(t-s) Q^T dz_s \\
&=Q \Phi(t)\big(EQ^T x_0\big)+\int_{0}^t Q\Phi(t-s) Q^T dz_s \\
&=Q \Phi(t) Q^T y_0+\int_{0}^t Q\Phi(t-s) Q^T dz_s.
\end{split}$$ The second step is due to the linearity of the integral operator in Proposition \[prop: stableintegralprop\]. The $l^{th}$ element of $y_t$, equals[^3] $$y_t^{(l)}=\sum_{j=1}^n f_{lj}(t)y_0^{(j)}+\sum_{j=1}^{n}\int_{0}^t f_{lj}(t-s)\,M_j(ds),$$ where $f_{ij}(t)$ as in . In other words, $y_t^{(l)}$ is equal to a transient constant term plus the sum of $n$ independent $\alpha$-stable integrals, each of which involves an $m$-measurable function. From Proposition \[prop: stableintegralprop\], the $j^{th}$ stable integral $$\int_{0}^t f_{lj}(t-s)\,M_j(ds) \sim S_{\alpha}\big(\sigma_{lj}(t),\beta_{lj}(t),\mu_{lj}(t)\big)$$ with
$\sigma_{lj}(t)^\alpha=\frac{1}{\alpha}\int_0^{t}|f_{lj}(s)|^\alpha\,ds$, $\beta_{lj}(t)=\frac{\beta_j}{\alpha} \frac{\int_{0}^t f_{lj}(s)^{<\alpha>}\,ds}{\sigma_{lj}^\alpha(t)} $, and $\mu_{lj}(t)\equiv 0$ if $\alpha\neq 1$ and $\mu_{lj}(t)=-\frac{2\beta_j}{\pi}\int_0^t f_{lj}(s)\ln|f_{lj}(s)|\,ds$, otherwise. An inductive application of Proposition \[prop: stableindep\] implies that the sum of $n$ independent stable integrals is a stable random variable: $$\sum_{j=1}^{n}\int_{0}^t f_{lj}(t-s)\,M_j(ds)\sim S_{\alpha}\big(\sigma_l(t),\beta_l(t),\mu_l(t)\big)$$ with $\sigma_{l}^\alpha(t)=\sum_{j} \sigma_{lj}^\alpha(t)$, $\beta_l(t)=\frac{\sum_{j}\beta_{lj}(t)\,\sigma_{lj}^\alpha(t)}{\sum_{j}\sigma_{lj}^\alpha(t)}$, and $\mu_{l}(t)=\sum_{j}\mu_{lj}(t)$. The result follows immediately after taking the limit in $t$.
Theorem \[prop: ydist\] shows that the distance of each agent from the network average follows a well-defined stable distribution for all times. It is remarked that the network topology affects the spread of the distribution, its symmetry and, if $\alpha=1$, also the shift parameter. Network topology does not, however, impact the stability index $\alpha$. We conclude that the deterministic process (in our case, the network topology) cannot affect the tail of the distribution. The impulsiveness and frequency of the shocks will continue to affect the system regardless of its structure. The network can, to some extent, manage its ability to remain rigid in the face of these shocks. Another observation due at this point is that the distribution parameters, although valuable, are quite difficult to express in closed form. Unfortunately, $\alpha$-stable processes are not famous for yielding elegant formulas, especially for multi-dimensional systems [@samorodnitsky1994stable]. In an interesting turn of events, there is a remarkable exception to this major difficulty for linear consensus systems.
\[cor: complete\] If for the graph laplacian spectrum, it holds that $\lambda_2=\lambda_n=:\lambda$ then for any $t\geq 0$ $$y_t^{(l)}~\sim~S_{\alpha}\big(\sigma_l(t),\beta_l(t),\mu_l(t)\big)$$ with $$\begin{split}
\sigma_l^\alpha(t)&=\frac{(n-1)+(n-1)^\alpha}{n^\alpha \alpha^2\lambda}\big(1-e^{-\alpha\lambda t}\big)\\
\beta_l(t)&=\frac{(1-e^{-\alpha \lambda t})\big(\beta_l (n-1)^\alpha - \sum_{j\neq l}\beta_j\big)}{n^\alpha \alpha^3 \lambda\big[(n+1)(1+(n-1)^{\alpha-1}) \big]} \\
\mu_l(t)&=\begin{cases}
0, & \alpha \neq 1 \\
2\frac{\lambda^{-1}(1-e^{-\lambda t})-te^{-\lambda t}}{n\pi}\big((n-1)\beta_l - \sum_{j\neq l}\beta_j \big),&\alpha=1.
\end{cases}
\end{split}$$
Condition $\lambda_2=\lambda_n$ implies $\lambda_2=\lambda_3=\dots=\lambda_n=\lambda>0$. Also, by virtue of the symmetry of $L$, the matrix $Q$ consists of unit-length, mutually orthogonal columns as well as rows. In view of $q_1=\frac{1}{\sqrt{n}}\mathbf 1$, it is straightforward that $$f_{ij}(t)=\begin{cases}
-\frac{1}{n}e^{-\lambda t}, & i\neq j \\
\frac{n-1}{n}e^{-\lambda t}, & i=j.
\end{cases}$$Consequently, $$\sigma_{ij}^\alpha(t)=\begin{cases}
\frac{1}{n^\alpha\alpha^2\lambda}\big(1-e^{-\alpha\lambda t}\big), & i\neq j\\
\frac{(n-1)^\alpha}{n^{\alpha}\alpha^2\lambda}\big(1-e^{-\alpha\lambda t}\big), & i=j.
\end{cases}$$ The result follows by straightforward algebra.
The canonical example of a graph with identical non-zero Laplacian eigenvalues is the complete graph with uniform coupling weights[^4]. Although Corollary \[cor: complete\] assumes such a special case of connectivity, one can make a few significant network-related remarks. Corollary \[cor: complete\] suggests that for a fixed number of agents and increased connectivity (i.e. $\lambda>> 1$) the scale, the skew and the shift of the distribution deteriorate as $\mathcal O(\lambda^{-1})$. On the other hand, growth of the network with fixed communication weights (i.e. $n>>1$) reveals essentially $\alpha$-dependent behavior. To see this, let us for a moment focus on symmetric $\alpha$-stable noise (i.e. $\beta=\mu=0$). In such a case, the scale $\sigma_l$ grows as $\mathcal O(n^{1-\alpha})$ when the noise sources do not attain finite first moments (i.e. $\alpha$ in the range of $(0,1)$). On the other hand, the scale converges to $\frac{1-e^{-\alpha \lambda t}}{\alpha^2 \lambda}$ if the noise has finite first moments (i.e. $\alpha$ in the range of $(1,2]$). The direct implication of Corollary \[cor: complete\] is that large-scale networks (in terms of number of nodes) may exhibit higher deviations than small-scale networks, when the additive noise induces shocks of increased frequency and impact (i.e. with infinite expectation). The situation is reversed when the noise is less impulsive (i.e. $\alpha\in [1,2]$).
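As a quick numerical sanity check of Corollary \[cor: complete\] (an illustration, not part of the derivation), one can integrate $\sum_j|f_{lj}(s)|^\alpha$ directly for a unit-weight complete graph, for which $\lambda=n$; the sizes, grid, and tolerance below are assumed for the example.

```python
import numpy as np

def trap(y, x):
    # version-independent trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

n, lam, t_end = 5, 5.0, 2.0  # unit-weight complete graph: lambda = n
for alpha in (0.8, 1.5, 2.0):
    s = np.linspace(0.0, t_end, 200_000)
    # sum over j of |f_lj(s)|^alpha: (n-1) off-diagonal kernels plus the diagonal one
    integrand = (n - 1) * (np.exp(-lam * s) / n) ** alpha \
        + ((n - 1) / n * np.exp(-lam * s)) ** alpha
    numeric = trap(integrand, s) / alpha          # sigma_l^alpha(t) by quadrature
    closed = ((n - 1) + (n - 1) ** alpha) / (n ** alpha * alpha ** 2 * lam) \
        * (1 - np.exp(-alpha * lam * t_end))      # Corollary's closed form
    assert abs(numeric - closed) < 1e-6 * closed
```

The agreement across several $\alpha$ values confirms that the printed expression is the scale raised to the power $\alpha$, consistent with the proof.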
Measures of Aggregate Deviations
================================
For stability index $\alpha=2$, we recover the Gaussian-based stochastic behavior of $y=\{y_t,~t\geq 0\}$. The statistical properties of interest are rendered from their first and second moments, both of which are well-defined and asymptotically constant. For networks like , researchers focus on the aggregate variability of the output, $\mathbb E\big[\| y_t \|^2\big]$, in order to measure its behavior in the face of noise. As Proposition \[prop: propstrv\] explains, this is not possible for stable noise with $\alpha<2$. This raises the question of how one could quantify the impact of noise on a dynamical system hit by heavy-tailed noise. One answer could be the sum of scales in an $\alpha$-stable vector.
\[defn: vectorscale\] The cumulative scale of an $\alpha$-stable vector $y=[y_1,\dots,y_m]^T$ is defined to be $$\Sigma_\alpha(y)=\|\sigma\|_{\alpha}^{\alpha}=\sum_{l=1}^{m}\sigma_l^\alpha$$ where $\sigma=[\sigma_1,\dots,\sigma_m]^T$ and $\sigma_l$ is the scale parameter of the $l^{th}$ element of $y$.
For $\overline{y}$, the long term output vector of , $\Sigma_\alpha(\overline{y})$ can be trivially expressed in terms of the stable integrals :
The steady-state aggregate fluctuations of output dynamics are defined to be $$\label{eq: Sigmaalpha}
\Sigma_\alpha(\overline{y})=\frac{1}{\alpha}\sum_{i=1}^n \sum_{j=1}^n \int_0^\infty |f_{ij}(t)|^\alpha \,dt.$$
Evidently, $\Sigma_{\alpha}(y)$ for $y$ as in , is a measure of the steady-state dispersion of agents around the moving average. The larger $\Sigma_{\alpha}(\overline{y})$ is, the more impulsive and magnified the fluctuation of the agents around the moving average. The spectral functions $f_{ij},~i,j \in \mathcal V$ are as in and represent the network contribution to the steady-state distribution of $\overline{y}$. In other words, $\sigma_{ij}$ contains all the information that is of primary interest to a network analyst. The next result asserts that $\Sigma_{\alpha}(\overline{y})$ decreases with $\alpha$.
\[prop: monotonicity\] Assume the network dynamics of with the output process . Then $$\frac{\partial}{\partial \alpha}\Sigma_{\alpha}(\overline{y})<0.$$
From the definition of $\Sigma_\alpha$ in , it suffices to prove $\frac{\partial}{\partial \alpha}\sigma_{ij}^\alpha<0$. This is equivalent to $$-\frac{1}{\alpha^2}\int_0^{\infty}|f_{ij}(t)|^\alpha\,dt+\frac{1}{\alpha}\int_0^{\infty}\ln \big(|f_{ij}(t)|\big) |f_{ij}(t)|^\alpha\,dt<0.$$ The latter condition is true if $\ln \big(|f_{ij}(t)|\big)<\frac{1}{\alpha}$, which is in turn equivalent to $|f_{ij}(t)|<e^{1/\alpha}$. The latter inequality is, however, true in view of $$\begin{split}|f_{ij}(t)|&\leq e^{-\lambda_2 t} \sum_{k}|q_{ik}| |q_{jk}|\\
&\leq e^{-\lambda_2 t} \sqrt{\sum_{k}|q_{ik}|^2} \sqrt{\sum_{k}|q_{jk}|^2}<1\end{split}$$ by virtue of the Cauchy-Schwarz inequality and the properties of normalized Laplacian eigenvectors.
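Proposition \[prop: monotonicity\] lends itself to a direct numerical spot-check. The sketch below is an illustration (not the authors' code): it approximates $\Sigma_\alpha=\frac{1}{\alpha}\sum_{ij}\int_0^\infty|f_{ij}(t)|^\alpha\,dt$ by truncated trapezoidal integration on a hypothetical 4-node path graph and confirms the strict decrease in $\alpha$.

```python
import numpy as np

def Sigma_alpha_path(alpha, steps=200_000):
    # path graph on 4 nodes: an illustrative topology, not from the paper
    L = np.array([[1, -1, 0, 0], [-1, 2, -1, 0],
                  [0, -1, 2, -1], [0, 0, -1, 1]], float)
    w, Q = np.linalg.eigh(L)
    lam, Qp = w[1:], Q[:, 1:]           # non-zero spectrum, lam[0] = lambda_2
    # truncation horizon chosen so the tail e^{-alpha*lambda_2*t} is negligible
    t = np.linspace(0.0, 80.0 / (alpha * lam[0]), steps)
    F = np.einsum('ik,jk,kt->ijt', Qp, Qp, np.exp(-np.outer(lam, t)))
    y = (np.abs(F) ** alpha).sum(axis=(0, 1))
    return float(np.sum((y[1:] + y[:-1]) * (t[1] - t[0]) / 2.0)) / alpha

vals = [Sigma_alpha_path(a) for a in (0.6, 1.0, 1.4, 1.8, 2.0)]
assert all(u > v for u, v in zip(vals, vals[1:]))  # strictly decreasing in alpha
```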
In conclusion, the more impulsive the noise, the more the states of the network are prone to exhibit large and frequent deviations. For $\alpha=2$, Assumption \[assum0\] and Property \[prop: propstrv\] yield $$\label{eq: h2norm}
\begin{split}\hspace{-0.1in}
\Sigma_{2}(\overline{y})&=\frac{1}{2}\sum_{i,j}\int_0^\infty f_{ij}^{2}(t)\,dt=\frac{1}{2}\sum_{k=2}^n\frac{1}{2\lambda_k}=\frac{1}{2}\mathbb E\big[\|\overline{y}\|_2^2\big],
\end{split}$$ where $\lambda_k$ are the eigenvalues of $L$, and the last step is in view of Property 3 of Proposition \[prop: propstrv\]. $\Sigma_2$ is intimately related to the cumulative variance of the output $\overline{y}$ of system , i.e. the $\mathcal H_2$-norm of the consensus network; a central measure of performance in stochastically driven dynamical systems [@Siami16TACa]. The Gaussian case is unique in its kind, in the sense that it leads to closed-form expressions for $\Sigma_2$. Clearly, the calculation above is not correct when $\alpha<2$. It seems that no other value of the stability index offers this elegance, with the exception of the complete topological graph, for which it can be calculated directly using Corollary \[cor: complete\] as: $$\label{eq: completescale}
\Sigma_{\alpha}(\overline{y})=\frac{(n-1)\big(1+(n-1)^{\alpha-1}\big)}{\alpha^2 n^{\alpha-1} \lambda}$$ where $\alpha\in (0,2]$ and $\lambda:=\lambda_2=\lambda_3,\dots=\lambda_n>0$.
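Both closed-form benchmarks above, the Gaussian identity and the complete-graph formula, can be cross-checked with a short script. The 4-node path, $K_4$, and the truncation horizon are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def Sigma_alpha(L, alpha, steps=200_000):
    """Sigma_alpha = (1/alpha) * sum_ij int_0^inf |f_ij(t)|^alpha dt, approximated
    via the Laplacian eigendecomposition and a truncated trapezoid rule."""
    w, Q = np.linalg.eigh(L)
    lam, Qp = w[1:], Q[:, 1:]                      # non-zero spectrum
    t = np.linspace(0.0, 80.0 / (alpha * lam[0]), steps)
    F = np.einsum('ik,jk,kt->ijt', Qp, Qp, np.exp(-np.outer(lam, t)))
    y = (np.abs(F) ** alpha).sum(axis=(0, 1))
    return float(np.sum((y[1:] + y[:-1]) * (t[1] - t[0]) / 2.0)) / alpha

# check 1: alpha = 2 recovers (1/2) sum_k 1/(2 lambda_k) on a 4-node path
L_path = np.array([[1, -1, 0, 0], [-1, 2, -1, 0],
                   [0, -1, 2, -1], [0, 0, -1, 1]], float)
h2 = 0.5 * np.sum(1.0 / (2.0 * np.linalg.eigvalsh(L_path)[1:]))
s2 = Sigma_alpha(L_path, 2.0)

# check 2: K4 with unit weights (lambda = n = 4) against the closed form
n, a = 4, 1.3
s13 = Sigma_alpha(n * np.eye(n) - np.ones((n, n)), a)
ref = (n - 1) * (1 + (n - 1) ** (a - 1)) / (a ** 2 * n ** (a - 1) * n)
assert abs(s2 - h2) < 1e-3 and abs(s13 - ref) < 1e-3 * ref
```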
Spectral Based Bounds
=====================
Stable integrals as in are indicative of the extent to which $\Sigma_\alpha$ can be calculated in closed form. With the exception of , one may need to rely on estimates of aggregate steady-state scale $\Sigma_\alpha(\overline{y})$ for dynamical networks such as . The purpose of this section is to elaborate on and establish upper estimates on $\Sigma_{\alpha}$. It is desirable to express these estimates as explicit functions of the eigenstructure of $L$, given the feature of noise. Our strategy is to construct estimates that become sharp as $\lambda_2\uparrow \lambda_n$ and/or as $\alpha \uparrow 2$, so as to resonate with the two extreme cases of connectivity and noise.
\[thm: main1\] Assume the network with Assumptions \[assum0\] and \[assum: noise\] in force, let the stability parameter $\alpha \in (0,2]$, and consider the output vector-valued process $y=\{y_t,~t\geq 0\}$ from . The following estimates on $\Sigma_{\alpha}(\overline{y})$ hold:
If $\alpha \in (0,1]$,
$$\Sigma_\alpha(\overline{y}) \leq c_1 \sum_{k=2}^{n}\|q_k\|_{\alpha}^{2\alpha}\Lambda_{\alpha,1}^{(k)}+ c_2 \,G_{\alpha},$$
for $c_1$, $c_2$ the constants $$c_1=\frac{1}{\alpha (n-1)^\alpha}\hspace{0.1in}\text{and}\hspace{0.1in} c_2=\frac{1+(n-1)^{1-\alpha}}{\alpha n^{\alpha -1}}.$$
If $\alpha \in [1,2]$,
$$\begin{split}
\Sigma_{\alpha}(\overline{y})\leq \min\bigg\{&d_1 \Lambda_{\alpha,\alpha}^{\alpha-1}\sum_{k=2}^n \| q_k \|_\alpha^{2\alpha}\Lambda_{\alpha,\alpha}^{(k)}+ d_2 G_\alpha, \\
&\hspace{0.33in}d_3 \Lambda_{\alpha,\alpha}^{\alpha-1} \sum_{k=2}^n \| q_k \|_\alpha^{2\alpha}\Lambda_{\alpha,\alpha}^{(k)}+d_4 G_\alpha\bigg\}
\end{split}$$
for $d_1,d_2,d_3,d_4$ defined to be $$d_1=\frac{2^{\alpha-1}}{\alpha (n-1)^\alpha}, \hspace{0.2in} d_2=\frac{2^{\alpha-1}n^{1-\alpha}}{\alpha}(1+(n-1)^{1-\alpha}),$$ $$d_3=\frac{1}{\alpha (n-1)^\alpha},\hspace{0.3in}d_4=\frac{(1+(n-1)^{1-\alpha})(1+\alpha\Lambda_{\alpha,\alpha}^{\alpha-1})}{n^{\alpha-1}(n-1)^{-\alpha}}.$$ The sums are taken over the non-zero eigenvalues $\lambda_k$ of the graph Laplacian $L$, with $q_{k}$ the $k^{th}$ eigenvector corresponding to the eigenvalue $\lambda_k$. Also, $\Lambda_{\alpha,\alpha}^{(k)}$ is as in , $\Lambda_{\alpha,\alpha}$ as in and $G_\alpha$ as in .
From Definition \[defn: vectorscale\] $$\label{eq: sigmaaexpansion}
\Sigma_\alpha=\sum_{i=1}^n \sigma_i^\alpha=\sum_{i=1}^n \sum_{j=1}^n \sigma_{ij}^\alpha=\sum_{j\neq i=1}^n\sigma_{ij}^\alpha+\sum_{i=1}^n\sigma_{ii}^\alpha$$ as it occurs from Proposition \[prop: propstrv\] and Theorem \[prop: ydist\]. The following Claims are central estimates of $\sigma_{ij}^\alpha$ for $\alpha \in (0,1]$ and $\alpha \in [1,2]$ respectively. Their proof is put in the Appendix.
We begin with the case $\alpha\in (0,1]$.
\[lem: sigmaijestimates\] If $\alpha\in (0,1]$, the following estimates hold true: $$\sigma_{ij}^{\alpha}\leq \begin{cases} c_1\sum_{k}|q_{ik}|^\alpha |q_{jk}|^{\alpha}\Lambda_{\alpha,1}(k)+\overline{c}_2 \,G_\alpha,& i\neq j \\
\\
c_1\sum_{k}|q_{ik}|^{2\alpha}\Lambda_{\alpha,1}(k)+\underline{c}_2\,G_\alpha , & i=j
\end{cases}$$ where $\overline{c}_2=\frac{1}{n^\alpha \alpha (n-1)^{\alpha}}$ and $\underline{c}_2=\frac{1}{\alpha n^{\alpha}}$.
The first part of the result follows by direct application of the bounds on $\sigma_{ij}^\alpha$ of Claim \[lem: sigmaijestimates\] in . We continue with the case $\alpha \in [1,2)$, where we make a similar claim on upper bounds of $\sigma_{ij}^\alpha$.
\[lem: sigmaijestimates2\] If $\alpha \in [1,2]$ then, for $i\neq j$, either $$\sigma_{ij}^\alpha\leq \frac{2^{\alpha-1}}{\alpha(n-1)^\alpha}\bigg[\Lambda_{\alpha,\alpha}^{\alpha-1}\sum_k |q_{ik}|^\alpha |q_{jk}|^\alpha \Lambda_{\alpha,\alpha}(k)+\frac{G_\alpha}{n^\alpha}\bigg]$$ or $$\begin{split}
\sigma_{ij}^\alpha\leq & \frac{1}{\alpha(n-1)^\alpha}\bigg[\Lambda_{\alpha,\alpha}^{\alpha-1}\sum_k |q_{ik}|^\alpha |q_{jk}|^\alpha \Lambda_{\alpha,\alpha}(k)+\\
&\hspace{1.8in}+\frac{G_\alpha}{n^\alpha}\big(1+\alpha \Lambda_{\alpha,\alpha}^{\alpha-1}\big)\bigg]
\end{split}$$ Also, for $i=j$, either
$$\sigma_{ii}^\alpha\leq \frac{2^{\alpha-1}}{\alpha(n-1)^\alpha}\bigg[\Lambda_{\alpha,\alpha}^{\alpha-1}\sum_k |q_{ik}|^{2\alpha} \Lambda_{\alpha,\alpha}(k)+\frac{(n-1)^\alpha}{n^\alpha}G_\alpha\bigg]$$
or $$\begin{split}
\sigma_{ii}^\alpha&\leq \frac{1}{\alpha(n-1)^\alpha}\bigg[\Lambda_{\alpha,\alpha}^{\alpha-1}\sum_k |q_{ik}|^{2\alpha} \Lambda_{\alpha,\alpha}(k)\\
&\hspace{1.4in}+\frac{(n-1)^\alpha}{n^\alpha} G_\alpha\big(1+\alpha \Lambda_{\alpha,\alpha}^{\alpha-1}\big)\bigg]
\end{split}$$
The second part of the result follows in a similar manner to the first.
A technical remark worth mentioning that follows from Theorem \[thm: main1\] is the distinction between estimates obtained for noise sources with finite first moments, i.e. $\alpha \in (1,2]$, and estimates for noise with infinite first moment, i.e. $\alpha \in (0,1]$. In either case, the bounds generally consist of two terms: The first term equals a weighted sum of the $\alpha$-norms of the $n-1$ eigenvectors of $L$. The weight of the $k^{th}$ term in this sum is an eigenvalue-based function that essentially measures the deviation of the $k^{th}$ eigenvalue with respect to the remaining $n-2$. The second term effectively involves the sum of the inverse non-zero eigenvalues of $L$, expressed in integral form. One can sacrifice additional sharpness and use the simple bound $G_\alpha \leq \frac{(n-1)}{\lambda_2}$.
We remark that $\lambda_2 \uparrow \lambda_n$ implies $\Lambda_{\alpha,\alpha} \downarrow 0$, and the estimates of Theorem \[thm: main1\] coincide with the exact value of $\Sigma_\alpha(\overline{y})$ in . However, for $\alpha=2$, the estimates in Theorem \[thm: main1\] do not match the value in . This non-negligible discrepancy motivates an additional upper bound on $\Sigma_{\alpha}(\overline{y})$.
Estimates near $\alpha=2$.
--------------------------
Together with Theorem \[thm: main1\], we propose a different, yet particularly simple, approach to establishing estimates of $\Sigma_\alpha$, via a harmless perturbation of the scale parameter away from the Gaussian case $\alpha=2$.
\[thm: main2\] Assume the network with Assumptions \[assum0\] and \[assum: noise\] in force and stability parameter $\alpha \in (1,2]$. Let the output vector-valued process $y=\{y_t,~t\geq 0\}$ be as in . Then, $$\begin{split}
&\Sigma_\alpha(\overline{y})\leq \frac{1}{\alpha}\sum_{k=2}^{n}\frac{1}{2\lambda_k} +\frac{1}{\alpha}\int_{\alpha}^2 \int_{0}^\infty n^{2-w} g^{w}(s)\big|\ln g(s)\big|\,dsdw.
\end{split}$$ where $g(t)$ is as in .
Rewrite $\sigma_{ij}^\alpha$ as a harmless perturbation of $\frac{1}{\alpha}\int_0^\infty |f_{ij}(s)|^2\,ds$, as follows: $$\begin{split}
&\sigma_{ij}^{\alpha}=\frac{1}{\alpha}\int_{0}^\infty |f_{ij}(s)|^\alpha\,ds\\
&=\frac{1}{\alpha}\int_{0}^\infty |f_{ij}(s)|^2 \,ds+\frac{1}{\alpha}\int_{0}^\infty\int_{2}^\alpha \ln(|f_{ij}(s)|)|f_{ij}(s)|^{w} \,dwds\\
&=\frac{1}{\alpha} A_{ij}+\frac{1}{\alpha} B_{ij}
\end{split}$$ The integrand in $A_{ij}$ reads $$\sum_{k=2}^{n}q_{ik}^2 q_{jk}^2 e^{-2\lambda_k s}+\sum_{k_1\neq k_2=2}^{n}q_{ik_1}q_{jk_1}q_{ik_2}q_{jk_2} e^{-(\lambda_{k_1}+\lambda_{k_2}) s}$$ so that summing over $j$ yields $$\label{eq: I1}\sum_{k=2}^{n}q_{ik}^2 e^{-2\lambda_k s},$$ by the orthonormality of the eigenvectors, i.e. $\sum_{j=1}^{n}q_{jk}^2\equiv 1$ and $\sum_{j=1}^{n}q_{jk_1}q_{jk_2}\equiv 0$ for $k_1\neq k_2$. We proceed with $B_{ij}$. From the convexity of $r(t)=|t|^w$ for every $w\in [\alpha,2]$, Lemma \[lem: jensen\] yields $$|f_{ij}(s)|^{w}\leq g^{w-1}(s)\sum_{k=2}^{n} |q_{ik}|^w |q_{jk}|^w e^{-\lambda_k s}$$ and $|f_{ij}(s)|\leq g(s) \Rightarrow |\ln(|f_{ij}(s)|)|\leq \big| \ln g(s)\big|$ where $g(t)=\sum_{k=2}^n e^{-\lambda_k t}$.$$\begin{split}
B_{ij}&\leq \sum_{k=2}^{n}\int_{0}^\infty\int_{\alpha}^2 |q_{ik}|^w |q_{jk}|^w g^{w-1}(s)\big |\ln g(s)\big|e^{-\lambda_k s} \,dwds\\
\end{split}$$ Taking the double sum over $i$ and $j$, in view of $\sum_{i}q_{ik}^2\equiv 1$, $$\begin{split}
&\Sigma_\alpha=\sum_{i,j}\sigma_{ij}^\alpha \leq \frac{1}{\alpha}\sum_{k=2}^{n}\frac{1}{2\lambda_k} +\\
&+\frac{1}{\alpha}\sum_{k=2}^n\int_{0}^\infty \int_{\alpha}^2 \| q_k\|_{w}^{2w} g^{w-1}(s)\big|\ln g(s)\big|e^{-\lambda_k s}\,dwds
\end{split}$$ The result follows in view of the norm estimate $$\|q_k\|_{w}\leq n^{\frac{1}{w}-\frac{1}{2}}\|q_k\|_2=n^{\frac{1}{w}-\frac{1}{2}}$$
The upper bound above, although true for $\alpha \in (1,2]$, is not expected to provide efficient estimates for values of $\alpha$ far away from 2, due mainly to the way it was obtained.
Spectral-based estimates such as those of Theorem \[thm: main1\] and Theorem \[thm: main2\] could possibly be leveraged when developing optimal design algorithms that reshape the communication parameters of a network, making it more robust to the imposed noise. In order to assess the quality of the estimates, we must validate their efficiency on different network topologies. This is in part the subject of §\[sect: examples\].
Connection with the $p^{th}$-moment, for $p<\alpha$.
----------------------------------------------------
Theorem \[prop: ydist\] asserts that the output dynamics are stable vectors with the same stability parameter as the noise sources. Following Property 3 of Proposition \[prop: propstrv\], the distribution attains moments up to any $p<\alpha$, for $\alpha <2$. In particular, $$\label{eq: hpnorm}
\mathbb E\big[ \|y\|_p^p \big]=c^p(\alpha,\beta,p)\, \|\sigma\|_p^p.$$ When $y$ is Gaussian (i.e. $\alpha=2$), $\mathbb E\big[ \|y\|_p^p \big]$ exists for $p\leq \alpha$. As Proposition \[prop: propstrv\] explains, for the non-Gaussian range of $\alpha$, the $p^{th}$ moments diverge at $p=\alpha$. It is thus unreasonable to try to obtain $\mathcal H_2$-norm based measures of performance for heavy-tailed consensus systems. This is why the cumulative scale $\Sigma_\alpha(\overline{y})$ may be regarded as an extension of the classic input/output $\mathcal H_2$ performance measure. Indeed, for $\alpha=2$ and $p=2$, the statistics of $\overline{y}$ recover the well-known formula .
We conclude this section by reporting the relation between $\mathbb E\big[ \|y\|_p^p \big]$ and $\Sigma_\alpha$, through the basic equivalence properties of Euclidean norms. Straightforward calculations yield, for $p<\alpha$, $$c^p \,\Sigma_{\alpha}(\overline{y})^{\frac{p}{\alpha}} \leq \mathbb E[ \| \overline{y} \|_p^p ] \leq n^{1-\frac{p}{\alpha}}\, c^p \,\Sigma_{\alpha}(\overline{y})^{\frac{p}{\alpha}}$$ where $c=c(\alpha,\beta,p)$ is the constant in Proposition \[prop: propstrv\]. Interestingly enough, the double inequality becomes exact at $\alpha=2$ and in the limit $p\rightarrow 2^{-}$.
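The double inequality above is pure $p$-norm equivalence applied to the vector of scales, so it can be sanity-checked without simulating stable noise. The random scale vector and exponents below are hypothetical examples; the common factor $c^p$ cancels on both sides.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, p = 8, 1.7, 1.2                # hypothetical sizes with p < alpha
sigma = rng.random(n) + 0.1              # a positive vector of scale parameters
Sigma = np.sum(sigma ** alpha)           # cumulative scale ||sigma||_alpha^alpha
pp = np.sum(sigma ** p)                  # ||sigma||_p^p, i.e. E||y||_p^p up to c^p
lower = Sigma ** (p / alpha)
upper = n ** (1 - p / alpha) * Sigma ** (p / alpha)
assert lower <= pp <= upper
```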
Numerical Examples {#sect: examples}
==================
In this section, we discuss four examples related to output . The first three regard elementary network design problems. Their objective is to demonstrate that the basic design strategies (addition/removal of links and re-weighting) are critically affected by the parameter $\alpha$ of the input noise. The fourth example is a validation of the estimates in Theorems \[thm: main1\] and \[thm: main2\]. Our focus is on consensus systems driven by symmetric $\alpha$-stable noise (i.e. $\beta=0$ and $\mu=0$).
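For readers who wish to reproduce experiments of this kind, symmetric $\alpha$-stable variates can be drawn with the standard Chambers-Mallows-Stuck construction, a textbook sampler (valid for $\alpha \neq 1$) that is not part of the paper's machinery; the seed and sample sizes are illustrative. Smaller $\alpha$ produces visibly more frequent large shocks.

```python
import numpy as np

def sas_sample(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for standard symmetric alpha-stable
    variates (alpha != 1); a textbook construction, not from the paper."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos(V - alpha * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(3)
x_heavy = sas_sample(1.2, 1_000_000, rng)   # more impulsive shocks
x_light = sas_sample(1.9, 1_000_000, rng)   # closer to Gaussian
# symmetry: the signs behave like fair coin flips
assert abs(np.mean(np.sign(x_heavy))) < 0.02
# heavier tails for smaller alpha: far more large shocks of size > 10
assert np.mean(np.abs(x_heavy) > 10) > 5 * np.mean(np.abs(x_light) > 10)
```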
\[scale=.6,auto=left,every node/.style=[circle,fill=blue!20]{}\] (n2) at (8,12) [2]{}; (n4) at (5,12) [4]{}; (n5) at (3,13) [5]{}; (n1) at (9,10) [1]{}; (n6) at (8,14) [6]{}; (n3) at (12,9) [3]{}; (n8) at (5,5) \[fill=none\] [$\boldsymbol{\mathcal G_1}$]{}; /in [n2/n3,n2/n6,n3/n5,n5/n2]{} () edge \[bend left\] () \[thick\]; /in [n3/n6]{} () edge \[bend right\] () \[thick\]; /in [n2/n4,n1/n2]{} ()–() \[thick\] ; /in [n3/n1]{} /in [n1/n4]{} () edge \[bend left\]() \[blue,dashed,thick\]; /in [n1/n3]{} () –() \[red,dashed,thick\]; /in [n3/n4]{} () edge \[bend left\]() \[red,dashed,thick\];
\[scale=.6,auto=left,every node/.style=[circle,fill=blue!20]{}\] (n1) at (1,10) [1]{}; (n2) at (3,10) [2]{}; (n3) at (5,10) [3]{}; (n4) at (7,10) [4]{}; (n5) at (9,10) [5]{}; (n6) at (1,5) [6]{}; (n7) at (3,5) [7]{}; (n8) at (5,5) [8]{}; (n9) at (7,5) [9]{}; (n10) at(9,5) [10]{};
(n12) at (5,1) \[fill=none\] [$\boldsymbol{\mathcal G_2}$]{}; /in [n1/n2,n1/n3,n1/n4]{} () edge \[bend left\] ()\[thick\]; /in [n1/n6,n1/n9]{} ()–()\[thick\]; /in [n2/n4]{} () edge \[bend left\] ()\[blue, dashed,very thick\]; /in [n2/n8,n2/n9,n2/n10]{} ()–()\[thick\]; /in [n2/n6]{} ()–()\[green,dashed,very thick\]; /in [n3/n4]{} () edge \[bend left\] ()\[thick\]; /in [n3/n7,n3/n8]{} ()–() \[thick\]; /in [n4/n7,n4/n8,n4/n9,n4/n10]{} ()–() \[thick\]; /in [n5/n6,n5/n8,n5/n9,n5/n10]{} ()–()\[thick\]; /in [n6/n8,n6/n9,n6/n10]{} () edge \[bend right\] ()\[thick\]; /in [n7/n9,n7/n10]{} () edge \[bend right\] ()\[thick\]; /in [n8/n10]{} () edge \[bend right\] ()\[red,dashed,very thick\];
\[scale=.6,auto=left,every node/.style=[circle,fill=blue!20]{}\] (n1) at (1,10) [1]{}; (n2) at (4,10) [2]{}; (n5) at (7,10) [5]{}; (n3) at (4,8) [3]{}; (n4) at (7,8) [4]{}; /in [n1/n2,n2/n3,n2/n5,n5/n4]{} ()–()\[very thick\]; (n1) – (n2) node \[midway,fill=none\] [**1**]{} ; (n2) – (n5) node \[midway,fill=none\] [$a_{25}$]{} ; (n2) – (n3) node \[midway,fill=none\] [$a_{23}$]{} ; (n4) – (n5) node \[midway,fill=none\] [**1**]{} ; (n12) at (5,1) \[fill=none\] [$\boldsymbol{\mathcal G_3}$]{};
\[exmpl: ex2\]\[Design via Expansion\] We consider a network over $n=6$ agents that seek consensus. The communication network, illustrated as $\boldsymbol{\mathcal G_1}$ in Figure \[fig: graph\], is linear time-invariant with unit-weight coupling links. The network is hit by stable noise, forming the dynamics of . In this problem, we have the option to add a new unit-weight link to the network so as to improve its performance. In other words, we look for the link location that, once established, minimizes $\Sigma_{\alpha}$. Numerical explorations indicate that the optimal selection is a function of $\alpha$. From $\alpha=2$ to $\alpha=1.6655$ the optimal location is a link between nodes 1 and 4 (blue dotted curve). From $\alpha=1.6655$ to $\alpha=0.3312$ there appear to be two equivalent alternatives: one is the pair (1,3) and the other is (3,4) (red dashed curves). For stability values below $0.3313$ the optimal pair is (1,3).
\[exmpl: sparsification\]\[Design via Sparsification\] Next, we consider a dense linear network over 10 nodes, depicted as graph $\boldsymbol{\mathcal G_2}$ in Figure \[fig: graph\]. The working hypothesis is that the existence of too many links makes for an expensive communication structure. The problem in this network is to choose the one link that, upon removal, increases $\Sigma_\alpha$ the least. Our findings suggest that within the stability range $\alpha=2$ to $\alpha=1.8932$, the optimal pair is (2,4) (removal of the blue dashed curve). From $\alpha=1.8932$ to $\alpha=0.1971$ the optimal pair is (8,10) (removal of the red dashed curve). Finally, for $\alpha<0.1971$ the optimal pair appears to be (2,6) (removal of the green dashed curve).
\[exmpl: reweighting\]\[Design via Re-weighting\] In this last example, we regard a small network of 5 agents, illustrated as $\boldsymbol{\mathcal G_3}$ in Figure \[fig: graph\]. All links except those between the node pairs (2,3) and (2,5) are fixed and of unit weight. On the other hand, the edges $a_{23}$ and $a_{25}$ are assumed to satisfy $a_{23}=2-b, a_{25}=b$ for some $b\in (0,2)$. In other words, keeping the overall network budget constant and equal to $a_{12}+a_{23}+a_{25}+a_{45}=4$, we seek to calibrate the control parameter $b$ towards the value that minimizes $\Sigma_\alpha$. The simulations are illustrated in Figure \[fig: example3\], where we depict the dependence of the optimal calibration (the black dots) on $\alpha$.
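A minimal reproduction of this sweep is sketched below (an illustration under the stated truncation assumptions, not the authors' code). It builds the Laplacian of $\boldsymbol{\mathcal G_3}$, approximates $\Sigma_\alpha(b)$, and checks three features of the figure: the Gaussian value at $b=1$, which by the $\alpha=2$ identity and a direct effective-resistance computation equals $\frac14\sum_k\lambda_k^{-1}=0.9$ for this tree, the monotonicity in $\alpha$, and the interior optimum.

```python
import numpy as np

def laplacian_G3(b):
    # G3 (1-based labels): edges (1,2)=1, (2,3)=2-b, (2,5)=b, (4,5)=1
    L = np.zeros((5, 5))
    for i, j, w in [(0, 1, 1.0), (1, 2, 2.0 - b), (1, 4, b), (3, 4, 1.0)]:
        L[i, j] -= w; L[j, i] -= w; L[i, i] += w; L[j, j] += w
    return L

def Sigma_alpha(L, alpha, steps=150_000):
    w, Q = np.linalg.eigh(L)
    lam, Qp = w[1:], Q[:, 1:]
    t = np.linspace(0.0, 80.0 / (alpha * lam[0]), steps)  # truncated horizon
    F = np.einsum('ik,jk,kt->ijt', Qp, Qp, np.exp(-np.outer(lam, t)))
    y = (np.abs(F) ** alpha).sum(axis=(0, 1))
    return float(np.sum((y[1:] + y[:-1]) * (t[1] - t[0]) / 2.0)) / alpha

# Gaussian case at b = 1: Sigma_2 = (1/4) sum_k 1/lambda_k = 0.9 for this tree
s2 = Sigma_alpha(laplacian_G3(1.0), 2.0)
assert abs(s2 - 0.9) < 2e-3
# heavier-tailed noise gives a larger cumulative scale (Proposition [monotonicity])
assert Sigma_alpha(laplacian_G3(1.0), 1.2) > s2
# the optimal calibration b* lies in the interior of (0, 2)
grid = [0.2, 0.6, 1.0, 1.4, 1.8]
vals = [Sigma_alpha(laplacian_G3(b), 2.0) for b in grid]
assert min(vals) < vals[0] and min(vals) < vals[-1]
```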
![The cumulative scale parameter $\Sigma_{\alpha}$ as function of the control $b$, in Example \[exmpl: reweighting\]. The different curves correspond to stable noises of various stability parameters. The lowest curve is this of the Gaussian case, $\alpha=2$. The $\Sigma_\alpha$ curves increase monotonically as $\alpha$ varies from 2 to 0, verifying Proposition \[prop: monotonicity\]. The sequence of black dots signify the global minimum in each type of noise.[]{data-label="fig: example3"}](reweighting-eps-converted-to.pdf)
All three network design problems lead to a definitive conclusion: performance evaluation tools that are associated with a particular type of stochastic uncertainty (e.g. the Gaussian and the associated $\mathcal H_2$ performance measure) become obsolete in other types of uncertainty (e.g. non-Gaussian cases).
\[exmpl: estimates\] We test the scale estimates of Theorems \[thm: main1\] and \[thm: main2\]. We choose two graphs, the first of which has a significantly larger eigenvalue ratio than the second. The curves are depicted in Figure \[fig: example3\] and are compared with the exact value. Two remarks are due. The estimates perform better in graphs with ratio $\lambda_n/ \lambda_2$ close to 1. Also, as the noise distribution becomes more and more impulsive (smaller values of $\alpha$), the estimates become less and less efficient.
![Simulation of Example \[exmpl: estimates\]. Graphs with smaller $\frac{\lambda_n}{\lambda_2}$ ratio provide scale estimates closer to the actual value. Estimate 1, regards $\Sigma_\alpha$ with $\alpha\in (0,1]$. Estimate 2, regards $\Sigma_\alpha$ with $\alpha\in [1,2]$. Estimate 3, regards $\Sigma_\alpha$ with $\alpha\in [1,2]$ as in Theorem \[thm: main2\]. ](estimates1-eps-converted-to.pdf "fig:") ![Simulation of Example \[exmpl: estimates\]. Graphs with smaller $\frac{\lambda_n}{\lambda_2}$ ratio provide scale estimates closer to the actual value. Estimate 1, regards $\Sigma_\alpha$ with $\alpha\in (0,1]$. Estimate 2, regards $\Sigma_\alpha$ with $\alpha\in [1,2]$. Estimate 3, regards $\Sigma_\alpha$ with $\alpha\in [1,2]$ as in Theorem \[thm: main2\]. ](estimates2-eps-converted-to.pdf "fig:")
DISCUSSION
==========
Modeling of uncertainty in networked control systems typically assumes noise sources generated by Brownian motion. Albeit popular, such perturbations are not rich enough to capture real-world uncertainties that involve impulsive shocks. In this paper, we considered consensus-seeking systems in the presence of noise sources induced by heavy-tailed probability measures. We defined extensions of measures of performance that quantify the systemic response in the presence of heavy-tailed noise. These were cumulative scale parameters of $\alpha$-stable vectors, which demonstrate close relations with the $p$-norms of the output dynamics. It is argued that heavy-tailed performance measures may be regarded as a generalization of $\mathcal H_2$-norm based measures of performance for linear systems with white noise inputs. Unless certain types of networks or noise are assumed, explicit calculation of heavy-tailed performance measures is not possible. Our estimates perform quite well for networks with the property that the graph Laplacian eigenvalues satisfy $\lambda_n/\lambda_2 \approx 1$. In addition to complete graph connectivity (where $\lambda_n/\lambda_2 = 1$), expander graphs also satisfy $\lambda_n/\lambda_2 \approx 1$, [@spielman]. Finally, we presented simple network design examples on $\alpha$-stable consensus networks, demonstrating that any optimal synthesis strategy must take into account the shock impulsiveness of the infused noise.
Appendix {#appendix .unnumbered}
========
We proceed with reviewing some fundamental inequalities related to the function $s(t)=|t|^p$ for $p>0$. These inequalities play a crucial role in the derivation of the technical results of our paper.
\[lem: ineq\] Let $u,v\in \mathbb R$. If $0<p\leq 1$, then $$|u+v|^{p}\leq |v|^p+|u|^p.$$ If $p\in (1,2)$, then $$|u+v|^p\leq \min\big\{ 2^{p-1}( |u|^p+|v|^p ), |u|^p+|v|^p +p\,|v|^{p-1}\,|u|^p\big\}.$$
For the first inequality, as well as $|u+v|^p \leq 2^{p-1}(|v|^p+|u|^p)$ for $p>1$, we refer to [@samorodnitsky1994stable]. It remains to show that, for $p\in (1,2]$, $|v+u|^p\leq |v|^p+|u|^p+p |v|^{p-1} |u|^p$. For this we write $$\begin{split}
|u+v|^p&=|v|^p+|u+v|^p-|v|^p\\
&\leq |v|^p+p |u| \int_0^1 |q(u+v)+(1-q)v|^{p-1}\,dq \\
&\leq |v|^p+p |u| \int_0^1 |q u|^{p-1}\,dq+ p |u| |v|^{p-1}
\end{split}$$ where the last step is due to the first inequality.
The estimate of $|\cdot|^p$ for $p \in [1,2)$ relies on two inequalities. The first one coincides with the inequality on $p \in (0,1]$, providing sharper estimates. The second inequality becomes exact if and only if either $u$ or $v$ is zero.
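The two inequalities quoted from the literature are easy to spot-check numerically; the random inputs and tolerances below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=100_000)
v = rng.normal(size=100_000)

for p in (0.3, 0.7, 1.0):   # subadditivity of |.|^p on (0, 1]
    assert np.all(np.abs(u + v) ** p <= np.abs(u) ** p + np.abs(v) ** p + 1e-12)

for p in (1.2, 1.7, 2.0):   # convexity bound 2^{p-1}(|u|^p + |v|^p) for p > 1
    assert np.all(np.abs(u + v) ** p
                  <= 2 ** (p - 1) * (np.abs(u) ** p + np.abs(v) ** p) + 1e-12)
```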
\[lem: jensen\] Let $\phi$ be a positive homogeneous of degree $p>1$ and convex function, defined on $\mathbb R$. Let real numbers $y_1,\dots,y_m$ and $b_1,\dots,b_m$ non-negative with $\sum_i b_i>0$. Then $$\phi\bigg(\sum_{i=1}^m b_i y_i\bigg)\leq \bigg(\sum_{i=1}^m b_i\bigg)^{p-1} \sum_{i=1}^m b_i\phi(y_i)$$
We write $$\begin{split}
\phi\bigg(\sum_{i=1}^m b_i y_i\bigg)&=\phi\bigg(w \cdot \frac{\sum_{i} b_i y_i}{w}\bigg)=w^{p}\phi\bigg(\frac{\sum_{i} b_i y_i}{w}\bigg)
\end{split}$$ where $w=\sum_{i} b_i>0$. The result follows by direct application of Jensen’s inequality [@royden] on $\phi\big(\frac{\sum_{i} b_i y_i}{\sum_{i} b_i}\big)$.
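A quick numerical check of Lemma \[lem: jensen\] for the prototypical choice $\phi(t)=|t|^p$; the exponent, problem sizes, and random inputs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 1.6                             # hypothetical exponent with p > 1
phi = lambda t: np.abs(t) ** p      # positively homogeneous of degree p, convex
for _ in range(1000):
    y = rng.normal(size=6)
    b = rng.random(6) + 1e-6        # non-negative weights with positive sum
    lhs = phi(np.dot(b, y))
    rhs = b.sum() ** (p - 1) * np.dot(b, phi(y))
    assert lhs <= rhs + 1e-12
```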
\[lem: minksowski\] Let $p\geq 1$ and $f,~g$ real-valued, integrable functions on $E\subset \mathbb R$. Then $$\bigg(\int_E |f+g|^p\,ds\bigg)^{\frac{1}{p}}\leq \bigg(\int_E |f|^p\,ds\bigg)^{\frac{1}{p}}+ \bigg(\int_E |g|^p\,ds\bigg)^{\frac{1}{p}}.$$
Observe that Assumption \[assum0\] implies, $$\sum_{k=2}^n q_{ik}q_{jk}= \begin{cases} -\frac{1}{n}, & i\neq j\\
\frac{n-1}{n}, & i=j.
\end{cases}$$Based on this property and elementary algebra we observe that $f_{ij}(t)$ can be re-written as: $$f_{ij}(t)=\begin{cases} W_{i,j,n}(t)-\frac{1}{n(n-1)}g(t), & i\neq j \\
W_{i,j,n}(t)+\frac{1}{n}g(t), & i=j
\end{cases}$$ where $$W_{i,j,n}(t)=\frac{1}{n-1}\sum_{k=2}^n\sum_{m\neq 1,k}(e^{-\lambda_k t}-e^{-\lambda_m t})q_{ik}q_{jk},$$ and $g(t)=\sum_{k=2}^n e^{-\lambda_k t}$. We elaborate only for $i\neq j$. Repeated application of Lemma \[lem: ineq\], followed by $e^{-x}\geq 1-x$ gives
$$\begin{split}
&|f_{ij}(t)|^{\alpha}\leq \frac{1}{(n-1)^{\alpha}}\sum_{k=2}^{n}|q_{ik}|^\alpha |q_{jk}|^{\alpha}\times \\
&\bigg[ \sum_{m=2}^{k-1}e^{-\lambda_m t}\big(1-e^{-(\lambda_k-\lambda_m)t}\big) +\\
&\hspace{0.4in}+e^{-\lambda_k t}\sum_{m=k+1}^{n}\big(1-e^{-(\lambda_m-\lambda_k)t}\big) \bigg]^{\alpha}+\frac{g^{\alpha}(t)}{n^{\alpha}(n-1)^{\alpha}}
\end{split}$$
and
$$\begin{split}
|f_{ij}(t)|^{\alpha}&\leq \frac{1}{(n-1)^{\alpha}}\sum_{k=2}^{n}|q_{ik}|^\alpha |q_{jk}|^{\alpha}\times\\&\bigg[\sum_{m=2}^{k-1}e^{-\alpha\lambda_m t}(\lambda_k-\lambda_m)^{\alpha}t^{\alpha}+\\
&+e^{-\alpha\lambda_k t}\sum_{m=k+1}^{n}\big(\lambda_m-\lambda_k\big)^\alpha t^\alpha\bigg]+\frac{g^{\alpha}(t)}{n^{\alpha}(n-1)^{\alpha}}
\end{split}$$
Consequently,
$$\begin{split}
&\int_0^{\infty}|f_{ij}(s)|^{\alpha}\,ds\leq \frac{1}{(n-1)^{\alpha}}\sum_{k=2}^{n}|q_{ik}|^\alpha |q_{jk}|^{\alpha}\times\\
&\Gamma(\alpha+1)\bigg(\sum_{m=2}^{k-1}\frac{(\lambda_k-\lambda_m)^{\alpha}}{(\alpha \lambda_m)^{\alpha}}+\frac{1}{(\alpha\lambda_k)^{\alpha}}\sum_{m=k+1}^{n}\big(\lambda_m-\lambda_k\big)^\alpha\bigg)\\
&+\frac{\int_0^\infty g^\alpha(s)\,ds}{n^\alpha(n-1)^\alpha}=\frac{1}{(n-1)^\alpha}\bigg[\sum_{k=2}^n |q_{ik}|^\alpha |q_{jk}|^\alpha \Lambda_{\alpha,1}(k)+\frac{G_\alpha}{n^\alpha}\bigg]
\end{split}$$
Following similar steps, for $i=j$, we have
$$\begin{split}
&\int_0^{\infty}|f_{ij}(s)|^{\alpha}\,ds\leq\\
&\hspace{0.25in}\leq \frac{1}{(n-1)^\alpha}\bigg[\sum_{k=2}^n |q_{ik}|^{2\alpha} \Lambda_{\alpha,1}(k)+\frac{(n-1)^\alpha}{n^\alpha} G_\alpha\bigg].
\end{split}$$
For $\alpha \in [1,2]$, we invoke Lemma \[lem: minksowski\] and use similar techniques to these in proof of Claim \[lem: sigmaijestimates\] to obtain $$\sigma_{ij}\leq \begin{cases} \frac{1}{\alpha^{\frac{1}{\alpha}}(n-1)}\bigg[\sum_{k} |q_{ik}| |q_{jk}|\Lambda_{\alpha,\alpha}(k)+\frac{1}{n}G_{\alpha}^{\frac{1}{\alpha}}\bigg], & i\neq j \\
\frac{1}{\alpha^{\frac{1}{\alpha}}(n-1)}\sum_{k} |q_{ik}|^2\Lambda_{\alpha,\alpha}(k)+\frac{1}{\alpha^{\frac{1}{\alpha}} n}G_{\alpha}^{\frac{1}{\alpha}}, & i=j
\end{cases}$$ We need, however, estimates of $\sigma_{ij}^\alpha$. So $$\sigma_{ij}^\alpha\leq \begin{cases} \frac{1}{\alpha(n-1)^\alpha}\big[\sum_{k} |q_{ik}| |q_{jk}|\Lambda_{\alpha,\alpha}(k)+\frac{1}{n}G_{\alpha}^{\frac{1}{\alpha}}\big]^\alpha, & i\neq j \\
\frac{1}{\alpha(n-1)^\alpha}\big[\sum_{k} |q_{ik}|^2\Lambda_{\alpha,\alpha}(k)+\frac{(n-1)}{ n}G_{\alpha}^{\frac{1}{\alpha}}\big]^\alpha, & i=j
\end{cases}$$ Now, we use the second inequality of Lemma \[lem: ineq\], part by part. Fix $i\neq j$. Then, either $$\begin{split}
\sigma_{ij}^\alpha&\leq \frac{2^{\alpha-1}}{\alpha(n-1)^\alpha}\bigg[\bigg(\sum_k |q_{ik}| |q_{jk}|\Lambda_{\alpha,\alpha}(k) \bigg)^\alpha+\frac{G_\alpha}{n^\alpha}\bigg]\\
& \leq \frac{2^{\alpha-1}}{\alpha(n-1)^\alpha}\bigg[\Lambda_{\alpha,\alpha}^{\alpha-1}\sum_k |q_{ik}|^\alpha |q_{jk}|^\alpha \Lambda_{\alpha,\alpha}(k)+\frac{G_\alpha}{n^\alpha}\bigg],
\end{split}$$ where the last step is due to Lemma \[lem: jensen\], or $$\begin{split}
\sigma_{ij}^\alpha&\leq \frac{1}{\alpha(n-1)^\alpha}\bigg[\bigg(\sum_k |q_{ik}| |q_{jk}|\Lambda_{\alpha,\alpha}(k) \bigg)^\alpha+\\
&\hspace{0.7in}+\frac{G_\alpha}{n^\alpha}+\alpha \big(\sum_k |q_{ik}| |q_{jk}|\Lambda_{\alpha,\alpha}(k)\big)^{\alpha-1}\frac{G_\alpha}{n^\alpha} \bigg]\\
& \leq \frac{1}{\alpha(n-1)^\alpha}\bigg[\Lambda_{\alpha,\alpha}^{\alpha-1}\sum_k |q_{ik}|^\alpha |q_{jk}|^\alpha \Lambda_{\alpha,\alpha}(k)\\
&\hspace{1.9in}+\frac{G_\alpha}{n^\alpha}\big(1+\alpha \Lambda_{\alpha,\alpha}^{\alpha-1}\big)\bigg]
\end{split}$$ where the last step follows from $(\sum_{k}|q_{ik}| |q_{jk}| \Lambda_{\alpha,\alpha}(k))^{\alpha-1}\leq \Lambda_{\alpha,\alpha}^{\alpha-1}$. Similar steps are taken for $i=j$.
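For reference, the two elementary estimates invoked in the case distinction above (assuming Lemma \[lem: ineq\] takes this standard form; both hold for all $x,y\geq 0$ and $\alpha\in[1,2]$) read $$(x+y)^{\alpha}\leq 2^{\alpha-1}\big(x^{\alpha}+y^{\alpha}\big) \qquad\text{and}\qquad (x+y)^{\alpha}\leq x^{\alpha}+y^{\alpha}+\alpha\,x^{\alpha-1}y.$$ The first follows from convexity of $t\mapsto t^{\alpha}$; for the second, $f(y)=x^{\alpha}+y^{\alpha}+\alpha x^{\alpha-1}y-(x+y)^{\alpha}$ satisfies $f(0)=0$ and $f'(y)=\alpha\big(y^{\alpha-1}+x^{\alpha-1}-(x+y)^{\alpha-1}\big)\geq 0$, by subadditivity of $t\mapsto t^{\alpha-1}$ for $\alpha-1\in[0,1]$.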
[^1]: C. Somarakis and N. Motee are with the Department of Mechanical Engineering & Mechanics, Lehigh University, Bethlehem, PA, 18015, USA e-mail: {csomarak,motee}@lehigh.edu
[^2]: see Property 1.2.19 in [@samorodnitsky1994stable].
[^3]: We consider equality in the sense of distribution, when we refer to stochastic processes.
[^4]: Also called complete topological graph.
---
author:
- 'N. Markova'
- 'J. Puls'
date: 'Received; Accepted '
subtitle: 'IV. Stellar and wind parameters of early to late B supergiants'
title: Bright OB stars in the Galaxy
---
Introduction
============
Hot massive stars are key objects for studying and understanding many exciting phenomena in the Universe such as re-ionisation and $\gamma$-ray bursters. Due to their powerful stellar winds hot massive stars are important contributors to the chemical and dynamical evolution of galaxies, and in the distant Universe they dominate the integrated UV radiation in young galaxies.
While the number of Galactic O and early B stars with reliably determined stellar and wind parameters has progressively increased during the last few years (e.g., @Herrero02 [@repo; @GB04; @bouret05; @martins05; @crowther06]), mid and late B supergiants (SGs) are currently under-represented in the sample of stars investigated so far. Given the fact that B-SGs represent an important phase in the evolutionary sequence of massive stars, any study aiming to increase our knowledge of these stars would be highly valuable, since it would allow several important issues to be addressed (see below).
Compared to O-type stars the B-SG spectra are more complicated due to a larger variety of atomic species being visible, the most important among which is Silicon, the main temperature indicator in the optical domain. Thus, the reproduction of these spectra by methods of [*quantitative spectroscopy*]{} is a real challenge for state-of-the art model atmosphere codes, since it requires a good knowledge of the physics of these objects, combined with accurate atomic data. In turn, any discrepancy that might appear between computed and observed spectral features would help to validate the physical assumptions underlying the model calculations as well as the accuracy of the adopted atomic models and data.
Numerical simulations of the non-linear evolution of the line-driven flow instability (for a review, see @O94), with various degrees of approximation concerning the stabilising diffuse, scattered radiation field (@op96 [@OP99]) as well as more recent simulations concentrating on the outer wind regions [@ro02; @ro05], predict that hot star winds are not smooth but structured, with clumping properties depending on the distance to the stellar surface. However, recent observational studies of clumping in O-SGs have revealed inconsistencies both between results originating from different wind diagnostics, such as UV resonance lines, and the IR-/radio-excess [@FMP06; @puls06], and between theoretical predictions and observed constraints on the radial stratification of the clumping factor [@bouret05; @puls06]. In addition, there are observational results which imply that clumping might depend on wind density. Because of their dense winds, B-SGs might provide additional clues to clarify these points.
Due to their high luminosities, BA-SGs can be resolved and observed, both photometrically and spectroscopically, even in rather distant, extragalactic stellar systems (e.g., @Kud99 [@Bresolin02; @Urb03; @Bianchi06]). This fact makes them potential standard candles, allowing us to determine distances by means of purely spectroscopic tools using the wind-momentum luminosity relationship (WLR, @klp95). Even though certain discrepancies between predicted and observed wind momenta of early B0 to B3 subtypes have been revealed [@crowther06], relevant information about later subtypes is still missing.
During the last years, the quantitative analyses of spectra in the far-UV/UV and optical domains (e.g., @Herrero02 [@BG02; @Crowther02; @bouret03; @repo; @massey04; @heap06]) have unambiguously shown that the inclusion of line-blocking and blanketing and wind effects (if present) significantly modifies the temperature scale of O-stars (for a recent calibration at [*solar*]{} metallicity, see @Markova04 [@martins05]). Regarding B-SGs, particularly of later subtype, this issue has not been addressed so far, mostly due to lacking estimates.
The main goal of this study is to test and to apply the potential of our NLTE atmosphere code FASTWIND [@Puls05] to provide reliable estimates of stellar and wind parameters of SGs with temperatures ranging from 30 to 11 kK. By means of these data and incorporating additional datasets from alternative studies, we will try to resolve the questions outlined above.
In Sects. \[obs\] and \[sample\], we describe the stellar sample and the underlying observational material used in this study. In Sect. 4 we outline our procedure to determine the basic parameters of our targets, highlighting some problems faced during this process. In Sect. 5 the effects of line blocking/blanketing on the temperature scale of B-SGs at solar and SMC metallicities will be addressed, and in Sect. 6 we investigate the wind properties for Galactic B-SGs (augmented by O-and A-SG data), by comparison with theoretical predictions. Particular emphasis will be given to the behaviour of the mass-loss rate over the so-called bi-stability jump. Sect. 7 gives our summary and implications for future work.
----------- ---------- ---------- ------------ ------ --------------- --------- -------------
Star (HD)   Sp. type   Assoc./    d (kpc)      V      B-V             (B-V)_0   M$_{\rm V}$
                       cluster
----------- ---------- ---------- ------------ ------ --------------- --------- -------------
185859      B0.5 Ia                                                             -7.0\*
190603      B1.5 Ia+              1.57$^{c}$   5.62   0.760           -0.16     -8.21
                                               5.62   0.54$\pm$0.02             -7.53
206165      B2 Ib      Cep OB2    0.83$^{a}$   4.76   0.246           -0.19     -6.19
198478      B2.5 Ia    Cyg OB7    0.83$^{a}$   4.81   0.571           -0.12     -6.93
                                               4.84   0.40$\pm$0.01             -6.37
191243      B5 Ib      Cyg OB3    2.29$^{a}$   6.12   0.117           -0.12     -6.41
                                  1.73$^{b}$                                    -5.80
199478      B8 Iae     NGC 6991   1.84$^{d}$   5.68   0.408           -0.03     -7.00
212593      B9 Iab                                                              -6.5\*
202850      B9 Iab     Cyg OB4    1.00$^{a}$   4.22   0.098           -0.03     -6.18
----------- ---------- ---------- ------------ ------ --------------- --------- -------------

: Stellar sample: spectral types, association/cluster membership, adopted distances, photometric data and absolute magnitudes.[]{data-label="log_phot"}

$^{a}$ @Humphreys78; $^{b}$ @GS; $^{c}$ @BC; $^{d}$ @DH\
$^{*}$ from calibrations [@HM]\
Observations and data reduction {#obs}
===============================
High-quality optical spectra were collected for eight Galactic B-type SGs of spectral types B0.5 to B9 using the Coudé spectrograph of the NAO 2-m telescope of the Institute of Astronomy, Bulgarian Academy of Sciences. The observations were carried out using a BL632/14.7 grooves mm$^{-1}$ grating in first order, together with a PHOTOMETRICS CCD (1024 x 1024, 24$\mu$) as a detector.[^1] This configuration produces spectra with a reciprocal dispersion of $\sim$0.2 Å pixel$^{-1}$ and an effective resolution of $\sim$ 2.0 pixels, resulting in a spectral resolution of $\sim$15000.
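As a quick sanity check on the quoted numbers (a sketch of the arithmetic only; the 6000 Å reference wavelength is our assumption, roughly midway between the blue settings and H$_\alpha$):

```python
# Resolving power implied by the instrument configuration quoted above.
recip_dispersion = 0.2   # Å per pixel (BL632 grating, first order)
effective_res = 2.0      # effective resolution in pixels

delta_lambda = recip_dispersion * effective_res   # resolution element, Å
wavelength = 6000.0                               # assumed reference wavelength, Å

R = wavelength / delta_lambda
print(R)  # 15000.0
```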
The signal-to-noise (S/N) ratio, averaged over all spectral regions referring to a given star, has typical values of 200 to 350, being lower in the blue than in the red.
We observed the wavelength range between 4100 and 4900 Å, where most of the strategic lines of H, He and Si ions are located, together with the region around H$_\alpha$. Since our spectra sample about 200 Å, five settings were used to cover the ranges of interest. These settings are as follows:
From 4110 to 4310 Å (covering Si IV $\lambda$4116, Si II $\lambda\lambda$4129, 4131 and He II $\lambda$4200).
From 4315 to 4515 Å (He I $\lambda\lambda$4387, 4471 and H$_\gamma$).
From 4520 to 4720 Å (Si III $\lambda\lambda$4553, 4568, 4575, He I $\lambda$4713 and He II $\lambda\lambda$4541, 4686).
The region around H$_\beta$ including Si III $\lambda\lambda$4813, 4820, 4829 and He I $\lambda$4922.
The region around H$_\alpha$ including He I $\lambda$6678 and He II $\lambda\lambda$6527, 6683.

To minimise the effects of temporal spectral variability (if any), all spectra referring to a given star were taken one after the other, with a time interval between consecutive exposures of about half an hour. Thus, we expect our results to be sensitive only to temporal variability on timescales shorter than about 2 hours.
The spectra were reduced following standard procedures and using the corresponding IRAF[^2] routines.
Sample stars {#sample}
============
Table \[log\_phot\] lists our stellar sample, together with corresponding spectral and photometric characteristics, as well as association/cluster membership and distances, as adopted in the present study. For hotter and intermediate temperature stars, spectral types and luminosity classes (Column 2) were taken from the compilation by @Howarth97, while for the remainder, data from $SIMBAD$ have been used.
Since $HIPPARCOS$-based distances are no longer reliable in the distance range considered here (e.g., @Z99 [@Schroeder04]), we have adopted photometric distances collected from various sources in the literature (Column 4). In particular, for stars which are members of OB associations, we drew mainly from @Humphreys78 but also consulted the lists published by @GS and by @BC. In most cases, good agreement between the three datasets was found, and only for Cyg OB3 did the distance modulus provided by Humphreys turn out to be significantly larger than that provided by Garmany & Stencel. In this latter case two entries for $d$ are given in Table \[log\_phot\].
Apart from those stars belonging to the OB associations, there are two objects in our sample which have been recognised as cluster members: HD 190603 and HD 199478. The former was previously assigned as a member of Vul OB2 (e.g. @Lennon92), but this assignment has been questioned by @McE99 who noted that there are three aggregates at approximately 1, 2 and 4 kpc in the direction of HD 190603. Since it is not obvious to which of them (if any) this star belongs, they adopted a somewhat arbitrary distance of 1.5 kpc. This value is very close to the estimate of 1.57 kpc derived by @BC, and it is this latter value which we will use in the present study. However, in what follows we shall keep in mind that the distance to HD 190603 is highly uncertain. For the second cluster member, HD 199478, a distance modulus to its host cluster as used by @DH was adopted.
Visual magnitudes, $V$, and $B-V$ colours (Columns 5 and 6) have been taken from the [*HIPPARCOS Main Catalogue (I/239)*]{}. While for the majority of sample stars the [*HIPPARCOS*]{} photometric data agree quite well (within 0.01 to 0.04 mag both in $V$ and $B-V$) with those provided by [*SIMBAD*]{}, for two of them (HD 190603 and HD 198478) significant differences between the two sets of $B-V$ values were found. In these latter cases two entries for $B-V$ are given, where the second one represents the mean value averaged over all measurements listed in [*SIMBAD*]{}.
Absolute magnitudes, $M_{\rm V}$ (Column 8), were calculated using the standard extinction law with $R = 3.1$ combined with intrinsic colours, $(B-V)_{\rm 0}$, from @FG (Column 7), and distances, $V$ and $B-V$ magnitudes as described above. For the two stars which do not belong to any cluster/association (HD 185859 and HD 212593), absolute magnitudes according to the calibration by @HM have been adopted.
For the majority of cases, the absolute magnitudes we derived agree within $\pm$0.3 mag with those provided by the Humphreys-McElroy calibration. Thus, we adopted this value as a measure for the uncertainty in $M_{\rm V}$ for cluster members (HD 199478) and members of spatially more concentrated OB associations (HD 198478 in Cyg OB7, see @crowther06). For other stars with known membership, a somewhat larger error of $\pm$0.4 mag was adopted to account for a possible spread in distance within the host association. Finally, for HD 190603 and those two stars with calibrated M$_{\rm V}$, we assumed a typical uncertainty of $\Delta M_{\rm V} = \pm$0.5 mag, representative for the spread in $M_{\rm V}$ of OB stars within a given spectral type [@crowther04].[^3]
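The absolute-magnitude computation described above can be sketched as follows (our own illustration, not the authors' code; the HD 206165 input values are taken from Table \[log\_phot\]):

```python
import math

def absolute_magnitude(V, BV, BV0, d_kpc, R=3.1):
    """M_V = V - 5*log10(d / 10 pc) - R*E(B-V), with E(B-V) = (B-V) - (B-V)_0."""
    ebv = BV - BV0                                  # colour excess
    A_V = R * ebv                                   # visual extinction, standard law
    dist_mod = 5.0 * math.log10(d_kpc * 1000.0 / 10.0)
    return V - dist_mod - A_V

# HD 206165 (B2 Ib, Cep OB2): d = 0.83 kpc, V = 4.76, B-V = 0.246, (B-V)_0 = -0.19
mv = absolute_magnitude(4.76, 0.246, -0.19, 0.83)
print(round(mv, 2))  # -6.19, matching the tabulated value
```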
Determination of stellar and wind parameters {#results}
============================================
The analysis presented here was performed with FASTWIND, which produces spherically symmetric, NLTE, line-blanketed model atmospheres and corresponding spectra for hot stars with winds. While detailed information about the latest version used here can be found in @Puls05, we highlight only those points which are important for our analysis of B stars.
- In addition to H and He, Silicon is used as an explicit element (i.e., by means of a detailed model atom and using a comoving frame transport for the bound-bound transitions). All other elements (e.g., C, N, O, Fe, Ni etc.) are treated as background elements, where the major difference (compared to the explicit ones) relates to the line transfer, which is performed within the Sobolev approximation.
- A detailed description of the Silicon atomic model can be found in @Trundle04.
- Since previous applications of FASTWIND have concentrated on O and early B stars, we briefly note that correct treatment of cooler stars requires sufficiently well described iron group ions of stages II/III, whose lines are dominating the background opacities for these objects. Details of the corresponding model atoms and line-lists (from [*superstructure*]{}, @Eissner74 [@Nussbaumer78], augmented by data from @Kurucz92) can be found in [@Pauldrach01]. In order to rule out important effects from still missing data, we have constructed an alternative dataset which uses [*all*]{} Fe/Ni II/III lines from the Kurucz line-list. Corresponding models (in particular temperature structure and emergent fluxes) turned out to remain almost unaffected by this alteration, so we are confident that our original database is fairly complete, and can be used for calculations roughly down to 10 kK.
- A consistent temperature stratification, utilising a flux-correction method in the lower wind and the thermal balance of electrons in the outer part, is calculated, with a transition point between the two approaches located roughly at a Rosseland optical depth of $\tau_{\rm R} \approx 0.5$ (depending on wind density).
To allow for an initial assessment of the basic parameters, a coarse grid of models was generated using this code (appropriate for the considered targets). The grid comprises 270 models covering the temperature range between 12 and 30 kK (with increments of 2 kK) and $\log g$ values from 1.6 to 3.4 (with increments of 0.2 dex). An extended range of wind densities, as combined in the optical depth invariant $Q$ (= $\dot M/(v_\infty R_\ast)^{1.5}$, cf. @Puls96), has been accounted for as well, to allow for both thin and thick winds.
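The role of $Q$ as a wind-density invariant can be illustrated with a short sketch (our own construction; the $\dot M/(v_\infty R_\ast)^{1.5}$ form is assumed from the Puls et al. 1996-style H$_\alpha$ scaling, and the units are nominal):

```python
import math

def log_q(mdot, vinf, rstar):
    """log10 of the optical-depth invariant Q = Mdot / (vinf * Rstar)^1.5.
    mdot in Msun/yr, vinf in km/s, rstar in Rsun (nominal units)."""
    return math.log10(mdot / (vinf * rstar) ** 1.5)

# Two winds with different parameters but identical Q: doubling vinf while
# scaling Mdot by 2^1.5 leaves Q (hence the Halpha-inferred wind density) unchanged.
q1 = log_q(1.0e-6, 1000.0, 20.0)
q2 = log_q(1.0e-6 * 2.0 ** 1.5, 2000.0, 20.0)
print(abs(q1 - q2) < 1e-9)  # True
```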
All models have been calculated assuming solar Helium ($Y_{\rm He}$ = 0.10, with $Y_{\rm He}=N({\rm He})/N({\rm H})$) and Silicon abundance (log (Si/H) = -4.45 by number[^4], cf. @GS98 and references therein), and a micro-turbulent velocity, $v_{\rm mic}$, of 15 km s$^{\rm -1}$ for hotter and 10 km s$^{\rm -1}$ for cooler subtypes, with a border line at 20 kK.
By means of this model grid, initial estimates of $T_{\rm eff}$, $\log g$ and $\log Q$ were obtained for each sample star. These estimates were subsequently used to construct a smaller subgrid, specific to each target, to derive the final, more exact values of the stellar and wind parameters (including Y$_{\rm He}$, log (Si/H) and $v_{\rm mic}$).
#### Radial velocities
To compare observed with synthetic profiles, radial velocities and rotational speeds of all targets have to be known. We started our analysis with radial velocities taken from the General Catalogue of Mean Radial Velocities (III/213, @BB). These values were then modified to obtain better fits to the analysed absorption profiles. In doing so we gave preference to Silicon rather than to Helium or Hydrogen lines, since the latter might be influenced by (asymmetrical) wind absorption/emission. The finally adopted $V_{\rm r}$-values, which provide the “best” fit to most of the Silicon lines, are listed in Column 3 of Table \[para\_1\]. The accuracy of these estimates is typically $\pm$2 km s$^{\rm -1}$.
----------- ---------- ------------- ------------ --------------- --------------- -----------
Star        Sp. type   $V_{\rm r}$   $v \sin i$   $v_{\rm mac}$   $v_{\rm mic}$   Si abund.
----------- ---------- ------------- ------------ --------------- --------------- -----------
HD 185859   B0.5 Ia    12            62(5)        58(3)           18              7.51
HD 190603   B1.5 Ia+   50            47(8)        60(3)           15              7.46
HD 206165   B2 Ib      0             45(7)        57(3)           8               7.58
HD 198478   B2.5 Ia    8             39(9)        53(3)           8               7.58
HD 191243   B5 Ib      25            38(4)        37(3)           8               7.48
HD 199478   B8 Iae     -12           41(4)        40(3)           8               7.55
HD 212593   B9 Iab     -13           28(3)        25(3)           7               7.65
HD 202850   B9 Iab     13            33(3)        33(3)           7               7.99
----------- ---------- ------------- ------------ --------------- --------------- -----------

: Radial velocities $V_{\rm r}$ (from Si), projected rotational velocities, macro- and micro-turbulent velocities (all in km s$^{\rm -1}$) and Si abundances, given as log \[$N$(Si)/$N$(H)\] + 12, of the sample stars as determined in the present study. The number in brackets refers to the number of lines used to derive $v \sin i$ and $v_{\rm mac}$.[]{data-label="para_1"}
Projected rotational velocities and macro-turbulence
----------------------------------------------------
As a first guess for the projected rotational velocities of the sample stars, $v \sin i$, we used values obtained by means of the spectral type–$v \sin i$ calibration for Galactic B-type SGs provided by @Abt02. However, during the fitting procedure it was found that (i) these values provide poor agreement between observed and synthetic profiles and (ii) an additional line-broadening agent must be introduced to improve the quality of the fits. These findings are consistent with similar results from earlier investigations claiming that absorption line spectra of O-type stars and B-type SGs exhibit a significant amount of broadening in excess of the rotational broadening [@Rosenh70; @CE77; @LDF93; @Howarth97]. Furthermore, although the physical mechanism responsible for this additional line-broadening is still not understood, we shall follow @Ryans and refer to it as “macro-turbulence”.
Since the effects of macro-turbulence are similar to those caused by axial rotation (i.e., they do not change the line strengths but “only” modify the profile shapes), and since stellar rotation is a key parameter, e.g., for stellar evolution calculations (@MM00, @HMM and references therein), it is particularly important to distinguish between the individual contributions of these two processes.
There are at least two possibilities to approach this problem: either exploiting the goodness of the fit between observed and synthetic profiles [@Ryans] or analysing the shape of the Fourier transforms (FT) of absorption lines [@Gray73; @Gray75; @simon]. Since the second method has been proven to provide better constraints [@dft06], we followed this approach to separate and measure the relative magnitudes of rotation and macro-turbulence.
The principal idea of the FT method relates to the fact that in Fourier space the convolutions of the “intrinsic line profile” (which includes the natural, thermal, collisional/Stark and microturbulence broadening) with the instrumental, rotational and macro-turbulent profiles become simple products of the corresponding Fourier components, thus allowing the contributions of the latter two processes to be separated by simply dividing the Fourier components of the observed profile by the components of the thermal and instrumental profile.
The first minimum of the Fourier amplitudes of the obtained residual transform will then fix the value of $v \sin i$, while the shape of the first side-lobe of the same transform will constrain $v_{\rm mac}$.
The major requirements to obtain reliable results from this method are [*high quality*]{} spectra (high S/N ratio and high spectral resolution) and the restriction of the analysis to lines which are free from strong pressure broadening but are still strong enough to allow for reliable estimates.
For the purpose of the present analysis, we have used the implementation of the FT technique as developed by @simon (based on the original method proposed by @Gray73 [@Gray75]) and applied it to a number of preselected absorption lines fulfilling the above requirements. In particular, for our sample of [*early*]{} B subtypes, the Si III multiplet around 4553 Å but also lines of O II and N II were selected, whereas for the rest the Si II doublet around 4130 Å and the Mg II line at $\lambda$4481 were used instead.
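The essence of the first-minimum criterion can be sketched numerically (our own illustration, not the @simon implementation, under idealised assumptions: a pure rotation kernel with linear limb darkening $\epsilon = 0.6$, no macro-turbulence, no noise; the factor 0.660 is the standard first-zero constant for this $\epsilon$):

```python
import numpy as np

def rotation_kernel(dlam, dlam_max, eps=0.6):
    """Rotational broadening profile (Gray) with linear limb darkening eps."""
    x = dlam / dlam_max
    g = np.zeros_like(x)
    m = np.abs(x) < 1.0
    g[m] = (2.0 * (1.0 - eps) * np.sqrt(1.0 - x[m] ** 2)
            + 0.5 * np.pi * eps * (1.0 - x[m] ** 2)) \
        / (np.pi * dlam_max * (1.0 - eps / 3.0))
    return g

c = 299792.458                       # speed of light, km/s
lam0, vsini_in = 4553.0, 45.0        # Si III line, B1.5-like rotation speed
dlam_max = lam0 * vsini_in / c       # maximum Doppler shift, Å

dlam = np.linspace(-10.0, 10.0, 20001)        # 0.001 Å sampling
profile = rotation_kernel(dlam, dlam_max)

# Fourier amplitudes (zero-padded for fine frequency sampling); the first
# minimum sits at sigma_1 ~ 0.660 / dlam_max for eps = 0.6.
npad = 2 ** 18
sigma = np.fft.rfftfreq(npad, d=dlam[1] - dlam[0])   # cycles per Å
amp = np.abs(np.fft.rfft(profile, n=npad))
amp /= amp[0]

is_min = (amp[1:-1] < amp[:-2]) & (amp[1:-1] < amp[2:])
i1 = int(np.argmax(is_min)) + 1                      # first local minimum
vsini_out = 0.660 * c / (lam0 * sigma[i1])
print(abs(vsini_out - vsini_in) < 1.0)  # True: input rotation speed recovered
```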
The obtained pairs of ($v \sin i$, $v_{\rm mac}$), averaged over the measured lines, were then used as input parameters for the fitting procedure and subsequently modified to improve the fits.[^5] The finally adopted values of $v \sin i$ and $v_{\rm mac}$ are listed in Columns 4 and 5 of Table \[para\_1\], respectively. Numbers in brackets refer to the number of lines used for this analysis. The uncertainty of these estimates is typically less than $\pm$10 km s$^{\rm -1}$, being largest for those stars with a relatively low rotational speed, due to the limitations given by the resolution of our spectra ($\sim$35 km s$^{\rm -1}$). Although the sample size is small, the $v \sin i$ and $v_{\rm mac}$ data listed in Table \[para\_1\] indicate that:
$\bullet$ in none of the sample stars is rotation alone able to reproduce the observed line profiles (width and shape).
$\bullet$ both $v \sin i$ and $v_{\rm mac}$ decrease towards later subtypes (lower $T_{\rm eff}$), being about a factor of two lower at B9 than at B0.5.
$\bullet$ independent of spectral subtype, the size of the macro-turbulent velocity is similar to the size of the projected rotational velocity.
$\bullet$ also, in all cases $v_{\rm mac}$ is well beyond the speed of sound.
Compared to similar data from other investigations for stars in common (e.g. @Rosenh70 [@Howarth97]), our $v \sin i$ estimates are always smaller, by up to 40%, which is understandable since these earlier estimates refer to an interpretation in terms of rotational broadening alone.
On the other hand, and within a given spectral subtype, our estimates of $v \sin i$ and $v_{\rm mac}$ are consistent with those derived by @dft06 and @simon (see Figure \[vsini\_vmac\]). From these data it is obvious that both quantities appear to decrease (almost monotonically) in concert when proceeding from early-O to late-B types.
Effective temperatures, $T_{\rm eff}$ {#teff}
------------------------
For B-type stars the primary temperature diagnostic at optical wavelengths is Silicon [@BB90; @Killian91; @McE99; @Trundle04], which shows strong lines from three ionisation stages through all the spectral types: Si III/Si IV for earlier and Si II/Si III for later subtypes, with a “short” overlap at B1.5 - B2. To evaluate $T_{\rm eff}$ (and $\log g$), we largely employed the method of line profile fitting instead of using fit diagrams (based on EWs), since in the latter case the corresponding estimates rely on interpolations and furthermore do not account for the profile shape. Note, however, that for certain tasks (namely the derivation of the Si-abundance together with the micro-turbulent velocity), EW-methods have been applied (see below).
In particular, to determine $T_{\rm eff}$ we used the Si II features at $\lambda\lambda$4129, 4131, the Si III features at $\lambda\lambda$4553, 4568, 4575 and at $\lambda\lambda$4813, 4819, 4828, with a preference for the first triplet (see below), and the Si IV feature at $\lambda$4116.[^6] In addition, for stars of spectral type B2 and earlier the Helium ionisation balance was exploited as an additional check on $T_{\rm eff}$, involving He I transitions at $\lambda\lambda$4471, 4713, 4387, 4922 and He II transitions at $\lambda\lambda$4200, 4541, 4686.
### Micro-turbulent velocities and Si abundances {#vmic}
Though the introduction of a non-vanishing micro-turbulent velocity can significantly improve the agreement between synthetic profiles and observations [@McE98; @SH98], it is still not completely clear whether such a mechanism (operating on scales below the photon mean free path) is really present or whether it is an artefact of some deficiency in the present-day description of model atmospheres (e.g., @McE98 [@Villamariz] and references therein).
Since micro-turbulence can strongly affect the strength of Helium and metal lines, its inclusion into atmospheric models and profile functions can significantly modify the derived stellar abundances, but also effective temperatures and surface gravities (the latter two parameters mostly indirectly, via its influence on line blanketing: stronger $v_{\rm mic}$ implies more blocking/back-scattering, and thus lower $T_{\rm eff}$).
Whereas @Villamariz showed the effects of $v_{\rm mic}$ to be relatively small for O-type stars, for B-type stars this issue has only been investigated for early B1-B2 giants (e.g., @Vrancken) and a few, specific BA supergiants (e.g. @Urb [@Przb06]). Here, we report on the influence of micro-turbulence on the derived effective temperatures[^7] for the complete range of B-type SGs. For this purpose we used a corresponding sub-grid of FASTWIND models with $v_{\rm mic}$ ranging from 4 to 18 km s$^{\rm -1}$ (with increments of 4 km s$^{\rm -1}$) and $\log Q$ values corresponding to the case of relatively weak winds. Based on these models we studied the behaviour of the SiIV4116/SiIII4553 and SiII4128/SiIII4553 line ratios and found these ratios to be almost insensitive to variations in $v_{\rm mic}$ (Fig. \[Si\_ratio\]), except for the case of SiII/SiIII beyond 18 kK, where differences of about 0.3 to 0.4 dex can be seen (and are to be expected, due to the large difference in absolute line-strengths caused by strongly different ionisation fractions). Within the temperature ranges of interest (18 kK $\leq T_{\rm eff} \leq$ 28 kK for SiIV/SiIII and 12 kK $\leq T_{\rm eff} \leq$ 18 kK for SiII/SiIII), however, the differences are relatively small, about 0.15 dex or less, resulting in temperature differences lower than 1000 K, i.e., within the limits of the adopted uncertainties (see below).
Based on these results, we relied on the following strategy to determine $T_{\rm eff}$, $v_{\rm mic}$ and Si abundances. As a first step, we used the FASTWIND model grid as described previously (with $v_{\rm mic}$ = 10 and 15 km s$^{\rm -1}$ and “solar” Si abundance) to put initial constraints on the stellar and wind parameters of the sample stars. Then, by varying $T_{\rm eff}$ (but also $\log g$ and the velocity-field parameter $\beta$) within the derived limits and by changing $v_{\rm mic}$ within $\pm$5 km s$^{\rm -1}$ to obtain a satisfactory fit to most of the strategic Silicon lines, we fixed $T_{\rm eff}$/$\log g$ and derived rough estimates of $v_{\rm mic}$. Si abundances and final values for $v_{\rm mic}$ resulted from the following procedure: for each sample star a grid of 20 FASTWIND models was calculated, combining four abundances and five values of micro-turbulence (ranging from 10 to 20 or from 4 to 12 km s$^{\rm -1}$, to cover hot and cool stars, respectively). By means of this grid, we determined those abundance ranges which reproduce the observed individual EWs (within the corresponding errors) of several previously selected Si lines from different ionisation stages. Subsequently, we sorted out the value of $v_{\rm mic}$ which provides the best overlap between these ranges,
i.e., defines a unique abundance together with appropriate errors (for more details, see e.g. @Urb [@simon1] and references therein).
Our final results for $v_{\rm mic}$ were almost identical (within about $\pm$1 km s$^{\rm -1}$) to those derived from the “best” fit to Silicon. Similarly, for all but one star, our final estimates for the Si abundance are quite similar to the initially adopted “solar” one, within $\leq\pm$0.1 dex, and only for HD 202850 was an increase of 0.44 dex found. Given that Si is not involved in CNO nuclear processing, the latter result is difficult to interpret. On the one hand, fitting/analysis problems are highly improbable, since no unusual results have been obtained for the other late B-SG, HD 212593. Indeed, the overabundance is almost “visible” because the EWs of at least 2 of the 3 strategic Si lines are significantly larger (by about 20 to 40%) in HD 202850 than in HD 212593. On the other hand, the possibility that this star is metal rich seems unlikely given its close proximity to our Sun. Another possibility might be that HD 202850 is a Si star, though its magnetic field does not seem particularly strong (but exceptions are still known, e.g. V 828 Her B9sp, EE Dra B9, @BBM). A detailed abundance analysis may help to solve this puzzling feature.
Finally, we have verified that our newly [*derived*]{} Si abundances (plus $v_{\rm mic}$-values) do not affect the stellar parameters (which refer to the initial abundances), by means of corresponding FASTWIND models. Though an increase of 0.44 dex (the exceptional case of HD 202850) makes the Si-lines stronger, this strengthening does not affect the derived $T_{\rm eff}$, since the latter parameter depends on line [*ratios*]{} from different ionisation stages, being thus almost independent of abundance. In each case, however, the quality of the Silicon line-profile fits has been improved, as expected.
In Columns 6 and 7 of Table \[para\_1\] we present our final values for $v_{\rm mic}$ and the Si abundance. The error of these estimates depends on the accuracy of the measured equivalent widths (about 10%) and is typically about $\pm$2 km s$^{\rm -1}$ and $\pm$0.15 dex for $v_{\rm mic}$ and the logarithmic Si abundance, respectively. A closer inspection of these data indicates that the micro-turbulent velocities of B-type SGs might be closely related to spectral type (see also @McE99), being highest at earlier (18 km s$^{\rm -1}$ at B0.5) and lowest at later B subtypes (7 km s$^{\rm -1}$ at B9). Interestingly, the latter value is just a bit larger than the typical values reported for A-SGs (3 to 8 km s$^{\rm -1}$, e.g., @venn), thus implying a possible decline in micro-turbulence towards even later spectral types.
### Silicon line profiles fits – a closer inspection {#siIII}
During our fitting procedure, we encountered the problem that the strength of the Si III multiplet near 4813 Å was systematically over-predicted (see Fig. \[si\_prof\] for some illustrative examples). Though this discrepancy is not very large (and vanishes if $T_{\rm eff}$ is modified within $\pm$500 K), it might point to some weaknesses in our model assumptions or data. Significant difficulties in reproducing the strength of Si III multiplets near 4553 and 4813 Å have also been encountered by @McE99 and by @BB90. While in the former study both multiplets were found to be [*weaker*]{} in their lowest-gravity models with $T_{\rm eff}$ beyond 22500 K, in the latter study the second multiplet was overpredicted, by a factor of about two.
The most plausible explanation for the problems encountered by @McE99 (which are opposite to ours) is the neglect of line-blocking/blanketing and wind effects in their NLTE model calculations, as already suggested by the authors themselves. The discrepancies reported by @BB90, on the other hand, are in qualitative agreement with our findings, but much more pronounced (a factor of two against 20 to 30%). Since both studies use the same Si III model ion whilst we have updated the oscillator strengths of the multiplet near 4553 Å(!), drawing from the available atomic databases, we suggest that it is these improved oscillator strengths, in conjunction with modern stellar atmospheres, which have reduced the noted discrepancy.
Regardless of these improvements, the remaining discrepancy must have an origin, and there are at least two possible explanations: (i) too small an atomic model for Si III (cutoff effects) and/or (ii) radially stratified micro-turbulent velocities (erroneous oscillator strengths cannot be excluded, but are unlikely, since all atomic databases give similar values).
The first possibility was discussed by @BB90 who concluded that this defect cannot be the sole origin of their problem, since the required corrections are too large and furthermore would affect the other term populations in an adverse manner. Because the discrepancy found by us has significantly decreased since then, the possibility of too small an atomic model can no longer be ruled out though. Future work on improving the complete Si atom will clarify this question.
In our analyses, we have used the same value of $v_{\rm mic}$ for [*all*]{} lines in a given spectrum, i.e., we assumed that this quantity does not follow any kind of stratification throughout the atmosphere, whilst the opposite might actually be true (e.g. @McE99 [@Vrancken; @Trundle02; @Trundle04; @hunter]). If so, a micro-turbulent velocity a factor of two lower than inferred from the “best” fit to Si III 4553 and the Si II doublet would be needed to reconcile calculated and observed strengths of the 4813 Å multiplet. Such a number does not seem unlikely, given the difference in line strengths, but clearly further investigations are necessary (after improving upon the atomic model) to clarify this possibility (see also @hunter and references therein).
Considering the findings from above, we decided to follow @BB90 and to give preference to the Si III multiplet near 4553 Å throughout our analysis. Since this multiplet is observable over the whole B star temperature range while the other (4813 Å) disappears at mid-B subtypes, such a choice has the additional advantage of providing consistent results for the complete sample, from B0 to B9.
### Helium line-profiles fits and Helium abundance {#He}
As pointed out in the beginning of this section, for early-B subtypes the Helium ionisation balance can be used to determine $T_{\rm eff}$. Consequently, for the three hottest stars in our sample we used He I and He II lines to derive independent constraints on $T_{\rm eff}$, assuming helium abundances as discussed below.[^8] Interestingly, in all these cases satisfactory fits to the available strategic Helium lines could be obtained in parallel with Si IV and Si III (within the adopted uncertainties, $\Delta T_{\rm eff}$=$\pm$500 K). This result is illustrated in Fig. \[he\_prof\_1\], where a comparison between observed and synthetic Helium profiles is shown, the latter being calculated at the upper and lower limit of the $T_{\rm eff}$ range derived from the Silicon ionisation balance. Our finding contrasts with that of @Urb, who reported differences in the stellar parameters beyond the typical uncertainties if either Silicon or Helium was used independently.
Whereas no obvious discrepancy between He I singlets and triplets (“He I singlet problem”, @najarro06 and references therein) has been seen in stars of type B1.5 and earlier, we faced several problems when trying to fit Helium in parallel with Silicon in stars of mid and late subtypes (B2 and later).[^9]
In particular, and at “normal” helium abundance ($Y_{\rm He}$=0.1$\pm$0.02), the singlet line at $\lambda$4387 is somewhat over-predicted for all stars in this subgroup, except for the coolest one - HD 202850. At the same time, the triplet transitions at $\lambda\lambda$4471 and 4713 have been under-predicted (HD 206165, B2 and HD 198478, B2.5), well reproduced (HD 191243, B5 and HD 199478, B8) or over-predicted (HD 212593, B9). Additionally, in half of these stars (HD 206165, HD 198478, and HD 199478) the strength of the forbidden component of He I 4471 was over-predicted, whereas in the other half this component was well reproduced. In all cases, however, these discrepancies were not so large as to prevent a globally satisfactory fit to the available He I lines in parallel with Silicon. Examples illustrating these facts are shown in Fig. \[he\_prof\_2\]. Again, there are at least two principal possibilities to solve these problems: to adapt the He abundance or/and to use different values of $v_{\rm mic}$, on the assumption that this parameter varies as a function of atmospheric height (cf. Sect. \[siIII\]).
#### Helium abundance.
For all sample stars a “normal” helium abundance, $Y_{\rm He}$ = 0.10, was adopted as a first guess. Subsequently, this value has been adjusted (if necessary) to improve the Helium line fits. For the two hottest stars with well reproduced Helium lines (and two ionisation stages being present!), an error of only $\pm$0.02 seems to be appropriate because of the excellent fit quality. Among those, an overabundance in Helium ($Y_{\rm He}$ = 0.2) was found for the hypergiant HD 190603, which might also be expected according to its evolutionary stage.
In mid and late B-type stars, on the other hand, the determination of $Y_{\rm He}$ was more complicated, due to the problems discussed above. Particularly for stars where the discrepancies between synthetic and observed triplet and singlet lines were opposite to each other, no unique solution could be obtained by varying the Helium abundance, and we had to increase the corresponding error bars (HD 206165 and HD 198478). For HD 212593, on the other hand (where all available singlet and triplet lines turned out to be over-predicted), a Helium depletion by 30 to 40% would be required to reconcile theory with observations.
All derived $Y_{\rm He}$ values are summarised in Column 8 of Table \[para\_2\], but note that alternative fits of similar quality are possible for those cases where an overabundance/depletion in He has been indicated, namely by using a solar Helium content and a $v_{\rm mic}$ a factor of two larger/lower than inferred from Silicon: Due to the well-known dichotomy between abundance and micro-turbulence (if only one ionisation stage is present), a unique solution is simply not possible, given the capacity of the diagnostic tools used here.
Column 4 of Table \[para\_2\] lists all effective temperatures as derived in the present study. As we have seen, these estimates are influenced by several processes and estimates of other quantities, among which are micro- and macro-turbulence, He and Si abundances, surface gravity and mass-loss rate (where the latter two quantities are discussed in the following). Nevertheless, we are quite confident that, to a large extent, we have consistently and partly independently (regarding $v_{\rm mic}$, $Y_{\rm He}$ and Si abundances) accounted for these influences. Thus, the errors in our estimates should be dominated by uncertainties in the fitting procedure, amounting to about $\pm$500 K. Of course, these are differential errors assuming that physics complies with all our assumptions, data and approximations used within our atmosphere code.
Surface gravity
---------------
Classically, the wings of the Balmer lines are used to determine the surface gravity, $\log g$, where only higher members (H$_\gamma$ and H$_\delta$, when available) have been considered in the present investigation to prevent a bias from potential wind-emission effects in H$_\alpha$ and H$_\beta$. Note that due to stellar rotation the values derived from such diagnostics are only $effective$ values. To derive the [*true*]{} gravities, $\log g_{\rm true}$, required to calculate masses, one has to apply a centrifugal correction (approximated by $(v \sin i)^2/R_\star$), though for all our sample stars this correction was found to be typically less than 0.03 dex.
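The magnitude of this correction is easy to verify numerically. Below is a minimal sketch (Python; the constants and the representative B-supergiant input values are ours for illustration, not entries of Table \[para\_2\]):

```python
import math

R_SUN = 6.957e10      # solar radius in cm

def logg_true(logg_eff, vsini_kms, r_rsun):
    """Centrifugally corrected gravity: g_true ~ g_eff + (v sin i)^2 / R."""
    g_eff = 10.0**logg_eff                              # cm s^-2
    g_cen = (vsini_kms * 1.0e5)**2 / (r_rsun * R_SUN)   # cm s^-2
    return math.log10(g_eff + g_cen)

# Representative values: log g_eff = 2.50, v sin i = 50 km/s, R = 30 R_sun
corr = logg_true(2.50, 50.0, 30.0) - 2.50
print(f"centrifugal correction: {corr:.3f} dex")   # ~0.016 dex, below 0.03
```

For the modest projected rotation rates and large radii of B supergiants, the correction indeed stays below the 0.03 dex quoted above.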
Corresponding values for effective and corrected surface gravities are listed in Columns 5 and 6 of Table \[para\_2\]. The error of these estimates was consistently adopted as $\pm$0.1 dex, due to the rather good quality of the fits and spectra (because of the small centrifugal correction, corresponding errors can be neglected), except for HD 190603 and HD 199478, where an error of $\pm$0.15 dex was derived instead. This point is illustrated in Figs \[Hg\_Ha\_1\] and \[Hg\_Ha\_2\], where our final (“best”) fits to the observed profiles are shown. Note that the relatively large discrepancies in the H$_\alpha$ cores of HD 190603 and HD 199478 might be a result of additional emission/absorption from large-scale structures in their winds [@Rivi; @Mar06], which cannot be reproduced by our models (see also Sect. \[wind\] below). At least for HD 190603, an alternative explanation in terms of too large a mass-loss rate (clumping effects in H$_\alpha$) is possible as well.
Stellar radii, luminosities and masses
--------------------------------------
The input radii used to calculate our model grid have been drawn from evolutionary models. Of course, these radii are somewhat different from the finally adopted ones (listed in Column 7 of Table \[para\_2\]) which have been derived following the procedure introduced by @Kudritzki80 (using the de-reddened absolute magnitudes from Table \[log\_phot\] and the theoretical fluxes of our models). With typical uncertainties of $\pm$500 K in our $T_{\rm eff}$ and of $\pm$0.3 to 0.5 mag in $M_V$, the error in the stellar radius is dominated by the uncertainty in $M_V$, and is of the order of $\Delta \log R_\star$ = $\pm$0.06...0.10, i.e., less than 26% in $R_\star$.
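The quoted radius error budget follows from simple propagation: at fixed theoretical flux (i.e., fixed $T_{\rm eff}$), $R_\star^2 \propto 10^{-0.4 M_V}$, hence $\Delta\log R_\star = 0.2\,\Delta M_V$. A minimal check (Python; the magnitude uncertainties are those quoted in the text):

```python
# Radius from the de-reddened absolute magnitude: at fixed model flux,
# R^2 scales as 10**(-0.4*M_V), hence log R = -0.2*M_V + const.
for dMv in (0.3, 0.5):                      # mag, uncertainty in M_V
    dlogR = 0.2 * dMv                       # resulting uncertainty in log R
    frac = 10.0**dlogR - 1.0                # corresponding fractional error in R
    print(f"dM_V = {dMv}: dlogR = {dlogR:.2f}, dR/R = {frac:.0%}")
```

This reproduces the $\pm$0.06...0.10 dex range and the upper limit of roughly 26% in $R_\star$.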
Luminosities have been calculated from the estimated effective temperatures and stellar radii, while masses were inferred from the “true” surface gravities. These estimates are given in Columns 9 and 10 of Table \[para\_2\], respectively. The corresponding errors are less than $\pm$0.21 dex in $\log L$ and $\pm$0.16 to 0.25 dex in $\log M$.
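These two steps can be cross-checked against Table \[para\_2\] with a short script (Python; solar constants in cgs are ours, using $L = 4\pi R_\star^2 \sigma T_{\rm eff}^4$ and $M = g_{\rm true} R_\star^2 / G$; the HD 185859 entries serve as the test case):

```python
import math

SIGMA = 5.6704e-5     # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
R_SUN = 6.957e10      # cm
L_SUN = 3.828e33      # erg s^-1
GM_SUN = 1.327e26     # cm^3 s^-2

def log_L(teff_K, r_rsun):
    """log(L/Lsun) from L = 4 pi R^2 sigma Teff^4."""
    L = 4.0 * math.pi * (r_rsun * R_SUN)**2 * SIGMA * teff_K**4
    return math.log10(L / L_SUN)

def mass(logg_true, r_rsun):
    """M/Msun from g = G M / R^2."""
    return 10.0**logg_true * (r_rsun * R_SUN)**2 / GM_SUN

# HD 185859 (Table entries: Teff = 26.3 kK, log g_true = 2.96, R = 35 Rsun):
print(round(log_L(26300.0, 35.0), 2))   # -> 5.72, as in Column 9
print(round(mass(2.96, 35.0)))          # -> 41, as in Column 10
```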
The spectroscopically estimated masses of our SG targets range from 7 to 53 $M_\odot$.[^10] Compared to the evolutionary masses from @MM00 and apart from two cases, our estimates are generally lower, by approximately 0.05 to 0.38 dex, with larger differences for less luminous stars. While for some stars the discrepancies are less than or comparable to the corresponding errors (e.g., HD 185859, HD 190603 first entry, HD206165), they are significant for some others (mainly at lower luminosities) and might indicate a “mass discrepancy”, in common with previous findings [@crowther06; @Trundle05].
Wind parameters {#wind}
---------------
#### Terminal velocities.
For the four hotter stars in our sample, individual terminal velocities are available in the literature, determined from UV P Cygni profiles [@LSL95; @PBH; @Howarth97]. For these stars, we adopted the estimates from @Howarth97. Interestingly, the initially adopted value of 470 km s$^{\rm -1}$ for $v_\infty$ of HD 198478 did not provide a satisfactory fit to H$_\alpha$, which in turn required a value of about 200 km s$^{\rm -1}$. This is at the lower limit of the “allowed” range, since the photospheric escape velocity, $v_{\rm esc}$, is of the same order. Further investigations, however, showed that at a different observational epoch the H$_\alpha$ profile of HD 198478 indeed has extended to about 470 km s$^{\rm -1}$ [@crowther06]. Thus, for this object we considered a rather large uncertainty, accounting for possible variations in $v_\infty$.
Regarding the four cooler stars, on the other hand, we were forced to estimate $v_\infty$ by employing the spectral type - terminal velocity calibration provided by @kud00, since no literature values could be found and since the archival data do not show saturated P Cygni profiles which could be used to determine $v_\infty$. In all but one of these objects (HD 191243, first entry), the calibrated $v_\infty$-values were lower than the corresponding escape velocities, and we adopted $v_\infty$ = $v_{\rm esc}$ to avoid this problem.
The set of $v_\infty$-values used in the present study is listed in Column 11 of Table \[para\_2\]. The error of these data is typically less than 100 km s$^{\rm -1}$ [@PBH], except for the last four objects, where an asymmetric error of -25/+50% was assumed instead, allowing for a rather large insecurity towards higher values.
#### Velocity exponent $\beta$.
In stars with denser winds (H$_\alpha$ in emission), $\beta$ can be derived from H$_\alpha$ with relatively high precision and in parallel with the mass-loss rate, due to the strong sensitivity of the profile shape to this parameter (but see below). On the other hand, for stars with thin winds (H$_\alpha$ in absorption) the determination of $\beta$ from optical spectroscopy alone is (almost) impossible, and a typical value of $\beta=1$ was consistently adopted, but lower and larger values have been additionally used to constrain the errors. Note that for two of these stars we actually found indications for values larger than $\beta$=1.0 (explicitly stated in Table \[para\_2\]).
#### Mass-loss rates,
$\dot M$, have been derived from fitting the observed H$_\alpha$ profiles with model calculations. The obtained estimates are listed in Column 13 of Table \[para\_2\]. Corresponding errors, accumulated from the uncertainties in $Q$[^11], $v_\infty$ and $R_\star$, are typically less than $\pm$0.16 dex for the three hotter stars in our sample and less than $\pm$0.26 dex for the rest, due to more insecure values of $v_\infty$ and $Q$.
Columns of Table \[para\_2\]: Star, Sp. Type, $M_V$, $T_{\rm eff}$ (kK), $\log g$, $\log g_{\rm true}$, $R_\star$ ($R_\odot$), $Y_{\rm He}$, $\log (L/L_\odot)$, $M$ ($M_\odot$), $v_\infty$ (km s$^{-1}$), $\beta$, $\log \dot M$ ($M_\odot$ yr$^{-1}$), $\log D_{\rm mom}$ (cgs).

----------- ---------- ------- ------ ------ ------ ---- --------------- ------ -------------------------- ----------- --------------------------------------------- ------------------------------- -------------------------------
HD 185859 B0.5 Ia -7.00 26.3 2.95 2.96 35 0.10$\pm$0.02 5.72 41$^{\rm +27}_{\rm -16}$ 1830 [**1.1**]{}$\pm$0.1$^{a)}$ -5.82$\pm$0.13 29.01$\pm$0.20
HD 190603 B1.5 Ia+ -8.21 19.5 2.35 2.36 80 0.20$\pm$0.02 5.92 53$^{\rm +41}_{\rm -23}$ 485 [**2.9**]{}$\pm$0.2 -5.70$\pm$0.16 28.73$\pm$0.22
-7.53 2.36 58 5.65 28$^{\rm +21}_{\rm -12}$ -5.91$\pm$0.16 28.45$\pm$0.22
HD 206165 B2 Ib -6.19 19.3 2.50 2.51 32 0.10 - 0.20 5.11 12$^{\rm +7}_{\rm -4}$ 640 [**1.5**]{}$^{\rm +0.2}_{\rm -0.1}$ $^{a)}$ -6.57$\pm$0.13 27.79$\pm$0.17
HD 198478 B2.5 Ia -6.93 17.5 2.10 2.12 49 0.10 - 0.20 5.31 11$^{\rm +5}_{\rm -3}$ 200...470 [**1.3**]{}$\pm$0.1 -6.93...-6.39 26.97...27.48
-6.37 2.12 38 5.08 7$^{\rm +3}_{\rm -2}$ -7.00...-6.46 26.84...27.36
HD 191243 B5 Ib -5.80 14.8 2.60 2.61 34 0.09$\pm$0.02 4.70 17$^{\rm +9}_{\rm -6}$ 470 0.8...1.5 -7.52$^{\rm+0.26}_{\rm-0.20}$ 26.71$^{\rm+0.27}_{\rm-0.23}$
-6.41 2.60 46 4.96 31$^{\rm +17}_{\rm -11}$ -7.30$^{\rm+0.25}_{\rm-0.17}$ 27.00$^{\rm+0.25}_{\rm-0.21}$
HD 199478 B8 Iae -7.00 13.0 1.70 1.73 68 0.10$\pm$0.02 5.08 9$^{\rm +5}_{\rm -3}$ 230 0.8...1.5 -6.73...-6.18 27.33...27.88
HD 212593 B9 Iab -6.50 11.8 2.18 2.19 59 0.06 - 0.10 4.79 19$^{\rm +13}_{\rm -8}$ 350 0.8...1.5 -7.04$^{\rm+0.25}_{\rm-0.19}$ 27.18$^{\rm+0.28}_{\rm-0.24}$
HD 202850 B9 Iab -6.18 11.0 1.85 1.87 54 0.09$\pm$0.02 4.59 8$^{\rm +4}_{\rm -3}$ 240 0.8...1.8 -7.22$^{\rm+0.25}_{\rm-0.17}$ 26.82$^{\rm+0.25}_{\rm-0.20}$
----------- ---------- ------- ------ ------ ------ ---- --------------- ------ -------------------------- ----------- --------------------------------------------- ------------------------------- -------------------------------
The errors in $Q$ itself have been determined from the fit quality to H$_\alpha$ and from the uncertainty in $\beta$ (for stars with thin winds), while the contribution from the small errors in $T_{\rm eff}$ has been neglected. Since we assume an unclumped wind, the [*actual*]{} mass-loss rates of our sample stars might, of course, be lower. In case of small-scale clumping, this reduction would be inversely proportional to the square root of the effective clumping factor being present in the H$_\alpha$-forming region (e.g., @puls06 and references therein). In Figures \[Hg\_Ha\_1\] and \[Hg\_Ha\_2\] we present our final (“best”) fits for all sample stars. Apparent problems are:
$\bullet$ The P Cygni profile of H$_\alpha$ seen in HD 190603 (B1.5 Ia+) is not well fitted. The model fails to reproduce the depth and the width of the absorption trough. This (minor) discrepancy, however, has no effect on the derived $\dot M$ and $\beta$, because these parameters are mainly determined by the emission peak and the red emission wing of the line, which are well reproduced.
$\bullet$ In two sample stars, HD 198478 and HD 199478, the observations show H$_\alpha$ in emission whilst the models predict profiles of P Cygni-type. At least for HD 198478, a satisfactory fit to the red wing and the emission peak became possible, since the observed profile is symmetric with respect to the rest wavelength, thus allowing us to estimate $\beta$ and $\dot M$ within our assumption of homogeneous and spherically symmetric winds (see below). For $\dot M$, we provide lower and upper limits in Table \[para\_2\], corresponding to the lower and upper limits of the adopted $v_\infty$ (see also Sect. \[wlr\_comp1\]). For HD 199478, on the other hand, $\beta$ is more insecure due to the strong asymmetry in the profile shape, leading to a larger range in possible mass loss rates and wind momenta.
$\bullet$ The H$_\alpha$ profile of HD 202850, the coolest star in our sample, is not well reproduced: the model predicts more absorption in the core than is actually observed.
The most likely reason for our failure to reproduce certain profile shapes for stars with [*dense*]{} winds is our assumption of smooth and spherically symmetric atmospheres. Besides the open question of small-scale clumping (which can change the morphology quite substantially[^12], cf. @puls06), the fact that in two of the four problematic cases convincing evidence has been found for the presence of time-dependent large-scale structure (HD 190603, @Rivi) or deviations from spherical symmetry and homogeneity (HD 199478, @Mar06) seems to support such a possibility. Note that similar problems in reproducing certain profile shapes in B-SGs have been reported by @Kud99 [ from here on KPL99], @Trundle04, @crowther06 and @Lefever.
A comparison of the present results with those from previous studies [@crowther06; @BC] for three stars in common indicates that the parameters derived by @crowther06 for HD 190603 and HD 198478 are similar to ours (accounting for the fact that higher $T_{\rm eff}$ and $v_\infty$ result in larger $L$ and $\dot M$, respectively, and vice versa). The mass-loss rates from @BC (derived from the IR excess!) for HD 198478 and HD 202850, on the other hand, are significantly larger than ours and those from Crowther et al., a problem already faced by @Kud99 in a similar (though more simplified) investigation. This might be either due to certain inconsistencies in the different approaches, or might point to the possibility that the IR-forming region of these stars is more heavily clumped than the H$_\alpha$-forming one.
The scale for B-SGs – comparison with other investigations {#teffscale}
===========================================================
Line-blanketed analyses
-----------------------
Besides the present study, two other investigations have determined the effective temperatures of [*Galactic*]{} B-type SGs by methods similar to ours, namely from Silicon and Helium (when possible) ionisation balances, employing state-of-the-art techniques of quantitative spectroscopy on top of high-resolution spectra covering all strategic lines. @crowther06 have used the non-LTE, line-blanketed code CMFGEN [@hil98] to determine $T_{\rm eff}$ of 24 supergiants (luminosity classes Ia, Ib, Iab, Ia+) of spectral type B0-B3 with an accuracy of $\pm$1000 K, while @Urb employed FASTWIND (as done here) and determined effective temperatures of five early-B (B2 and earlier) stars of luminosity classes Ia/Ib with an (internal) accuracy of $\pm$500 K. In addition, @Przb06 have recently published very precise temperatures (typical error of $\pm$200 K) of four BA SGs (among which one B8 and two A0 stars), again derived by means of a line-blanketed non-LTE code, in this case in plane-parallel geometry neglecting wind effects.
Motivated by the good correspondence between data from FASTWIND and CMFGEN (which has also been noted by @crowther06), we plotted the effective temperatures of all four investigations as a function of spectral type (left panel of Fig. \[teff\_comp\]). Overplotted (dashed line) is a 3rd-order polynomial regression to these data, accounting for the individual errors in $T_{\rm eff}$, as provided by the different investigations. The grey-shaded area denotes the standard deviation of the regression. Obviously, the correspondence between the different datasets is (more than) satisfactory: for a given spectral subtype, the dispersion of the data does not exceed $\pm$1000 K. There are only three stars (marked with large circles), all from the sample of Crowther et al., which make an exception, showing significantly lower temperatures: HD 190603, HD 152236 and HD 2905. Given their strong P Cygni profiles seen in H$_\alpha$ and their high luminosities - note that the first two stars are actually hypergiants - this result should not be a surprise though (higher luminosity $\rightarrow$ denser wind $\rightarrow$ stronger wind blanketing $\rightarrow$ lower $T_{\rm eff}$).
Very recently, @Lefever published a study with the goal to test whether the variability of a sample of 28 periodically pulsating Galactic B-type SGs is compatible with opacity-driven non-radial pulsations. To this end, they analysed this sample plus 12 comparison objects, also by means of FASTWIND, thus providing additional stellar and wind parameters of such objects. In contrast to both our investigation and those mentioned above, Lefever et al. could not use the Si (He) ionisation [*balance*]{} to estimate $T_{\rm eff}$, but had to rely on the analysis of one ionisation stage alone, [*either*]{} Si II [*or*]{} Si III, plus two more He I lines ($\lambda$4471 and $\lambda$6678). The reason for doing so was the (very) limited spectral coverage of their sample (though at very high resolution), with only one representative Silicon ionisation stage observed per object.
Given the problems we faced during our analysis (which only appeared because we had a much larger number of lines at our disposal) and the fact that Lefever et al. were not able to [*independently*]{} estimate Si abundances and $v_{\rm mic}$ of their sample stars (as we have done here), the results derived during this investigation are certainly prone to larger error bars than those obtained by methods where [*all*]{} strategic lines could be included (see Lefever et al. for more details).
The right-hand panel of Fig. \[teff\_comp\] displays their temperature estimates for stars from the so-called GROUP I (most precise parameters), overplotted by [*our*]{} regression from the left panel. The error bars correspond to $\pm$1000 K quoted by the authors as a nominal error. While most of their data are consistent (within their errors) with our regression, there are also objects (marked again with large circles) which deviate significantly.
Interestingly, all outliers situated below the regression are stars of early subtypes (B0/B1), which furthermore show P Cygni profiles with relatively strong emission components in H$_\alpha$ (except for HD 154043, which exhibits H$_\alpha$ in absorption), a situation that is quite similar to the one observed on the left of this figure. (We return to this point in the next section.)
On the other hand, there are two stars of B5 type with the same $T_{\rm eff}$, which lie above the regression, i.e., seem to show “overestimated” temperatures. The positions of these stars within the $T_{\rm eff}$-spectral type plane have been extensively discussed by @Lefever, who suggested that the presence of a radially stratified micro-turbulent velocity (as also discussed by us) or a Si abundance lower than adopted (solar) might explain the overestimate (if so) of their temperatures. Note, however, that the surface gravity of HD 108659 ($\log g$=2.3), one of the Lefever et al. B5 targets, seems to be somewhat large for a SG but appropriate for a bright giant. Thus, it might still be that the “overestimated” temperature of HD 108659 is a result of its misclassification as a SG, whilst actually it is a bright giant. This possibility, however, cannot be applied to the other B5 target, HD 102997, which has $\log g$ of 2.0 (and $M_V$ of -7.0), i.e., is consistent with its classification as a supergiant.
Interestingly, the surface gravity of “our” B5 star, HD 191243 ($\log g$=2.6), also appears to be larger than what is typical for a supergiant of B5 subtype. With a distance of 2.2 kpc [@Humphreys78], the absolute magnitude of HD 191243 would be more consistent with a supergiant classification, but with d=1.75 kpc [@GS] a luminosity class II is more appropriate. Thus, this star also seems to be misclassified. [^13]
Temperature revisions due to line-blanketing and wind effects {#line block}
-------------------------------------------------------------
In order to estimate now the effects of line-blocking/blanketing together with wind effects in the B supergiant domain (as has been done previously for the O-star domain, e.g., @Markova04 [@repo; @martins05]), we have combined the different datasets as discussed above into one sample, keeping in mind the encountered problems.
Figure \[teff\_dev\], left panel, displays the differences between “unblanketed” and “blanketed” effective temperatures for this combined sample, as a function of spectral type. The “unblanketed” temperatures have been estimated using the $T_{\rm eff}$-spectral type calibration provided by @McE99, based on unblanketed, plane-parallel, NLTE model atmosphere analyses. Objects enclosed by large circles are the same as in Fig. \[teff\_comp\], i.e., three from the analysis by Crowther et al., and seven from the sample by Lefever et al.[^14] As expected, and as noted by previous authors on the basis of smaller samples (e.g., @crowther06 [@Lefever]), the “blanketed” temperatures of Galactic B-SGs are systematically lower than the “unblanketed” ones. The differences range from about zero to roughly 6000 K, with a tendency to decrease towards later subtypes (see below for further discussion).
The most remarkable feature in Figure \[teff\_dev\] is the large dispersion in $\Delta T_{\rm eff}$ for stars of early subtypes, B0-B3. Since the largest differences are seen for stars showing P Cygni profiles with a relatively strong emission component in H$_\alpha$, we suggest that most of this dispersion is related to wind effects.
To investigate this possibility, we have plotted the distribution of the $\Delta T_{\rm eff}$-values of the B0-B3 objects as a function of the distance-invariant optical depth parameter $\log Q$ (cf. Sect. \[results\]). Since the H$_\alpha$ emission strength does not depend on $Q$ alone but also on $T_{\rm eff}$ - for the same $Q$-values cooler objects have more emission due to lower ionisation - stars of individual subclasses were studied separately to diminish this effect. The right-hand panel of Fig. \[teff\_dev\] illustrates our results, where the size of the circles corresponds to the strength of the emission peak of the H$_\alpha$ line. Filled symbols mark data from CMFGEN, and open ones data from FASTWIND. Inspection of these data indicates that objects with stronger emission tend to show larger $\log Q$-values and subsequently higher $\Delta T_{\rm eff}$ - a finding that is model independent. This tendency is particularly evident in the case of B1 and B2 objects. On the other hand, there are at least three objects which appear to deviate from this rule, but this might still be due to the fact that the temperature dependence of $Q$ has not been completely removed (of course, uncertainties in $\beta$, $v_\infty$ and $R_\star$ can also contribute). All three stars (HD 89767, HD 94909 (both B0) and HD 154043 (B1)) are from the Lefever et al. sample and do not exhibit strong emission, but nevertheless show the highest $\Delta T_{\rm eff}$ among the individual subclasses.
In summary, we suggest that the dispersion in the derived effective temperature scale of early B-SGs is physically real and originates from wind effects. Moreover, there are three stars from the Lefever et al. GROUP I sample (spectral types B0 to B1) whose temperatures seem to be significantly underestimated, probably due to insufficient diagnostics. In our follow-up analysis with respect to wind-properties, we will discard these “problematic” objects to remain on the “conservative” side.
Comparison of the temperature scales of Galactic and SMC B supergiants
----------------------------------------------------------------------
To obtain an impression of the influence of metallicity on the temperature scale for B-type SGs, we compared Galactic with SMC data. To this end, we derived a $T_{\rm eff}$-spectral type calibration for Galactic B-SGs on the basis of the five datasets discussed above, discarding only those (seven) objects from the Lefever et al. sample where the temperatures might be particularly affected by strong winds or other uncertainties (marked by large circles in Fig. \[teff\_comp\], right). Accounting for the errors in $T_{\rm eff}$, we obtain the following regression (to a precision of three significant digits) $$T_{\rm eff} = 27800 - 6000\,{\rm SP} + 878\,{\rm SP}^2 - 45.9\,{\rm SP}^3,$$ where “SP” (0-9) gives the spectral type (from B0 to B9), and the standard deviation is $\pm$1040 K. This regression was then compared to estimates obtained by @Trundle04 and @Trundle05 for B-SGs in the SMC.
We decided to compare with these two studies [*only*]{}, because Trundle et al. have used a similar (2004) or identical (2005) version of FASTWIND as we did here, i.e., systematic, model dependent differences between different datasets can be excluded and because the metallicity of the SMC is significantly lower than in the Galaxy, so that metallicity dependent effects should be maximised.
The outcome of our comparison is illustrated in Fig. \[teff\_metal\]: In contrast to the O-star case (cf. @massey04 [@massey05; @mokiem06]), the data for the SMC stars are, within their errors, consistent with the temperature scale for their Galactic counterparts. This result might be interpreted as an indication of small or even negligible metallicity effects (both directly, via line-blanketing, and indirectly, via weaker winds) in the temperature regime of B-SGs, at least for metallicities in between solar and SMC (about 0.2 solar) values. Such an interpretation would somewhat contradict our findings about the strong influence of line-blanketing in the Galactic case (given that these effects should be lower in the SMC), but might be misleading since Trundle et al. (2004, 2005) have used the spectral classification from @lennon97, which already accounts for the lower metallicity in the SMC. To check the influence of this re-classification, we recovered the original (MK) spectral types of the SMC targets using data provided by @lennon97 [Table 2], and subsequently compared them to our results for Galactic B-SGs. Unexpectedly, SMC objects still do not show any systematic deviation from the Galactic scale but are, instead, distributed quite randomly around the Galactic mean. Most plausibly, this outcome results from the large uncertainty in spectral types as determined by @AZO75[^15], such that metallicity effects cannot become apparent for the SMC objects considered here. Nevertheless, we can also conclude that the classification by @lennon97 has been done [**in a perfect way**]{}, namely that Galactic and SMC stars of similar spectral type also have similar physical parameters, as expected.
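For convenience, the cubic $T_{\rm eff}$-spectral type regression derived above (27800 - 6000 SP + 878 SP$^2$ - 45.9 SP$^3$) can be evaluated with a one-line helper (Python; the function name is ours):

```python
def teff_calib(sp):
    """Galactic B-SG regression: Teff in K vs. spectral subtype SP (0 = B0 ... 9 = B9)."""
    return 27800.0 - 6000.0 * sp + 878.0 * sp**2 - 45.9 * sp**3

# Standard deviation of the regression is about +/- 1040 K (see text).
for sp in (0, 3, 6, 9):
    print(f"B{sp}: {teff_calib(sp):.0f} K")
```

The resulting values (e.g., about 27.8 kK at B0 and 11.5 kK at B9) agree with the individual entries of Table \[para\_2\] within the quoted scatter.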
Wind-momentum luminosity relationship {#wlr_comp}
=====================================
Comparison with results from similar studies {#wlr_comp1}
--------------------------------------------
Using the stellar and wind parameters, the modified wind momenta can be calculated (Table \[para\_2\], column 14), and the wind-momentum luminosity diagram constructed. The results for the combined sample (to improve the statistics, but without the “problematic” stars from the Lefever et al. GROUP I sample) are shown on the left of Figure \[wlr\]. Data from different sources are indicated by different symbols. For HD 190603 and HD 198478, both alternative entries (from Table \[para\_1\]) are indicated and connected by a dashed line. Before we consider the global behaviour, we first comment on a few particular objects.
$\bullet$ The position of HD 190603 corresponding to $B-V$=0.540 (lower luminosity) appears to be more consistent with the distribution of the other data points than the alternative position with $B-V$=0.760. In the following, we give more weight to the former solution.
$\bullet$ The positions of the two B5 stars suggested as being misclassified (HD 191243 and HD 108659, large diamonds) fit well the global trend of the data, implying that these bright giants do not behave differently from supergiants.
$\bullet$ The minimum values for the wind momentum of HD 198478 (with $v_\infty$=200 km s$^{\rm -1}$) deviate strongly from the global trend, whereas the maximum ones ($v_\infty$=470 km s$^{\rm -1}$) are roughly consistent with this trend. For our follow-up analysis, we discard this object because of the very unclear situation.
$\bullet$ HD 152236 (from the sample of Crowther et al., marked with a large circle) is a hypergiant with a very dense wind, for which the authors adopted $R_\star$ = 112 $R_\odot$, which makes this object the brightest one in the sample.
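Before turning to the global behaviour: the modified wind momenta in Column 14 of Table \[para\_2\] follow the standard definition $D_{\rm mom} = \dot M\, v_\infty\, (R_\star/R_\odot)^{1/2}$ (cf. @kud00). A short sketch reproducing the tabulated value for HD 185859 (Python; the solar constants in cgs are ours):

```python
import math

M_SUN = 1.989e33      # g
YEAR = 3.156e7        # s

def log_dmom(log_mdot, vinf_kms, r_rsun):
    """log of the modified wind momentum Dmom = Mdot * vinf * sqrt(R/Rsun), cgs units."""
    mdot_cgs = 10.0**log_mdot * M_SUN / YEAR          # Msun/yr -> g s^-1
    return math.log10(mdot_cgs * vinf_kms * 1.0e5 * math.sqrt(r_rsun))

# HD 185859: log Mdot = -5.82 (Msun/yr), vinf = 1830 km/s, R = 35 Rsun
print(round(log_dmom(-5.82, 1830.0, 35.0), 2))   # -> 29.01 (cf. Column 14)
```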
#### Global features.
From the left of Figure \[wlr\], we see that the lower luminosity B-supergiants seem to follow a systematically lower WLR than their higher luminosity counterparts, with a steep transition between both regimes located between $\log L/L_\odot$=5.3 and 5.6. (Admittedly, most of the early type (high L) objects are Ia’s, whereas the later types concentrate around Iab’s with few Ia/Ib’s.) This finding becomes even more apparent when the WLR is extended towards higher luminosities by including Galactic O supergiants (from @repo [@Markova04; @Herrero02]), as done on the right of the same figure.
KPL99 were the first to point out that the offsets in the corresponding WLR of OBA-supergiants depend on spectral type, being strongest for O-SGs, decreasing from B0-B1 to B1.5-B3 and [*increasing*]{} again towards A supergiants. While some of these results have been confirmed by recent studies, others have not [@crowther06; @Lefever].
To investigate this issue in more detail and based on the large sample available now, we have highlighted the early objects (B0-B1.5, 21000 K $\leq \Teffe \leq$ 27500 K) in the right-hand panel of Figure \[wlr\] using filled diamonds. (Very) Late objects with $\Teffe \leq$ 12500 K have been indicated by asterisks, and intermediate temperature objects by open diamonds. Triangles denote O-SGs. Additionally, the theoretical predictions by @Vink00 are provided via dashed-dotted and dashed lines, corresponding to the temperature regimes of O and B-supergiants, respectively (from here on referred to as “higher” and “lower” temperature predictions). Indeed,
$\bullet$ O-SGs show the strongest wind momenta, determining a different relationship than the majority of B-SGs (see below).
$\bullet$ the wind momenta of B0-B1.5 subtypes are larger than those of B1.5-3, and both follow a different relationship. However, a direct comparison with KPL99 reveals a large discrepancy for mid B1.5-B3 subtypes ($\Delta \log D_{\rm mom} \approx$ 0.5 dex), while for B0-B1.5 subtypes their results are consistent with those from our combined dataset.
$\bullet$ Late B4-B9 stars follow the same relationship as mid subtypes.
Thus, the only apparent disagreement with earlier findings relates to the KPL99 mid-B types, previously pointed out by @crowther06, and suggested to be a result of line blocking/blanketing effects not accounted for in the KPL99 analysis.[^16] After a detailed investigation of this issue for one proto-typical object from the KPL99 sample (HD 42087), we are convinced that the neglect of line blocking/blanketing cannot solely account for such lower wind momenta. Other effects must also contribute, e.g., overestimated $\beta$-values, though even after accounting for the latter a considerable discrepancy remains.
On the other hand, a direct comparison of the KPL99 A-supergiant dataset (marked with large plus-signs on the right of Fig. \[wlr\]) with data from the combined sample shows that their wind momenta seem to be quite similar to those of mid and late B subtypes. Further investigations based on better statistics are required to clarify this issue.
Comparison with theoretical predictions and the bistability jump
----------------------------------------------------------------
According to the theoretical predictions by @Vink00, Galactic supergiants with effective temperatures between 12500 and 22500 K (spectral types B1 to B9) should follow a WLR different from that of hotter stars (O-types and early B subtypes), with wind momenta being systematically [*larger*]{}. From Figure \[wlr\] (right), however, it is obvious that the observed behaviour does not follow these predictions. Instead, the majority of O-SGs (triangles – actually those with H$\alpha$ in emission, see below) follow the low-temperature predictions (dashed line), while most of the early B0-B1.5 subtypes (filled diamonds) are consistent with the high-temperature predictions (dashed-dotted), and later subtypes (from B2 on, open diamonds) lie below (!), by about 0.3 dex. Only a few early B-types are located in between both predictions or close to the low-temperature one.
The offset between both theoretical WLRs has been explained by @Vink00 due to the [*increase*]{} in mass-loss rate at the bi-stability jump (more lines from lower iron ionization stages available to accelerate the wind), which is only partly compensated by a drop in terminal velocity. The size of the jump in $\dot M$, about a factor of five, was determined requiring a drop in $\vinfe$ by a factor of two, as extracted from earlier observations [@LSL95].
However, more recent investigations (@crowther06, see also @Evans2) have questioned the presence of such a “jump” in $\vinfe$, and argued in favour of a gradual decrease in $\vinfe/\vesce$, from $\sim$3.4 above 24 kK to $\sim$1.9 below 20 kK.
In the following, we comment on our findings regarding this problem in some detail, (i) because of the significant increase in data (also at lower $\Teffe$), (ii) because we tackle the problem by a somewhat modified approach, and (iii) because recently a new investigation of the bistability jump by means of [*radio*]{} mass-loss rates has been published [@benaglia07], which gives additional impact and allows for further comparison/conclusions.
First, let us define the “position” of the jump by means of the $\vinfe/\vesce$ ratios from the OBA-supergiant sample as defined in the previous section (excluding the “uncertain” object HD 198478). In Figure \[jump\], two temperature regimes with considerably different values of such ratios have been identified, connected by a transition zone. In the high temperature regime ($\Teffe >$ 23 kK), our sample provides $\vinfe/\vesce \approx 3.3 \pm 0.7$, whereas in the low temperature one ($\Teffe <$ 18 kK), we find $\vinfe/\vesce \approx 1.3 \pm 0.4$. (Warning: The latter estimate has to be considered cautiously, due to the large uncertainties at the lower end, where $\vinfe = \vesce$ has been adopted for a few stars due to missing diagnostics.) Note that the individual errors for $\vinfe/\vesce$ are fairly similar, of the order of 33% (for $\Delta M_{\rm V}$ = 0.3, $\Delta \log g$ = 0.15 and $\Delta \vinfe/\vinfe$ = 0.25) to 43% (in the most pessimistic case $\Delta M_{\rm V}$ = 1.0), similar to the corresponding Fig. 8 by Crowther et al.
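For clarity, $\vesce$ denotes the photospheric escape velocity corrected for the radiation pressure due to electron scattering; a compact restatement of the conventional definition (our notation, with $\Gamma_{\rm e}$ the Eddington factor):

```latex
% Effective escape velocity (standard definition, corrected for the
% Thomson-scattering Eddington factor Gamma_e):
\begin{equation}
  v_{\rm esc}
    = \left[\frac{2\, G M\, (1-\Gamma_{\rm e})}{R_\ast}\right]^{1/2}
    = \left(2\, g_{\rm eff}\, R_\ast\right)^{1/2},
  \qquad
  g_{\rm eff} = g\,(1-\Gamma_{\rm e}).
\end{equation}
```

Written this way, it is immediately clear how the uncertainties in $\log g$, in the stellar radius (i.e., in the absolute magnitude) and in $\vinfe$ propagate into the quoted 33–43% errors of $\vinfe/\vesce$.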
In the [*transition zone*]{}, a variety of ratios are present, thus supporting the findings discussed above. Obviously though, large ratios typical for the high temperature region are no longer present from the centre of the transition region on, so we can define a “jump temperature” of $\approx$ 20,000 K. Nevertheless, we have shifted the border of the high-temperature regime to $\Teffe$ = 23 kK, since at least low ratios are present until then (note the dashed vertical and horizontal lines in Fig. \[jump\]). The low temperature border has been defined analogously, as the coolest location with ratios $> 2$ (dotted lines).
By comparing our (rather conservative) numbers with those from the publications as cited above, we find a satisfactory agreement, both with respect to the borders of the transition zone and with the average ratios of $\vinfe/\vesce$. In particular, our high temperature value is almost identical to that derived by @crowther06 (@kud00 provide an average ratio of 2.65 for $\Teffe >$ 21 kK), whereas in the low temperature regime we are consistent with the latter investigation (Kudritzki & Puls: 1.4). The somewhat larger value found by Crowther et al. results from the latest spectral subtypes missing in their sample.
Having defined the behaviour of $v_\infty$, we investigate the behaviour of $\Mdote$, which is predicted to increase more strongly than $\vinfe$ decreases. As we have already seen from the WLR, this most probably is [*not*]{} the case for a statistically representative sample of “normal” B-SGs, but more definite statements become difficult for two reasons. First, both the independent ($\log L$) and the dependent ($\log D_{\rm mom}$) variable depend on the stellar radius, and thus on the adopted distance (remember, the fit quantity is not $\Mdote$ but $Q$), which is problematic for Galactic objects. Second, the wind-momentum rate is a function of $L$, and not of $\Teffe$ alone, such that a division of different regimes becomes difficult. To avoid these problems, let us firstly recapitulate the derivation of the WLR, to see the differences compared to our alternative approach formulated below.
From the scaling relations of line-driven wind theory, we have
$$\Mdote \;\propto\; k^{1/\alpha'}\, L^{1/\alpha'}\, \bigl(M(1-\Gamma)\bigr)^{1-1/\alpha'} \quad \[mdotscal\]$$
$$\vinfe \;=\; \Cinf\, \vesce, \qquad \vesce \;\propto\; \bigl(M(1-\Gamma)/\Rstare\bigr)^{1/2}$$
where $\alpha'$ is the difference between the line force multipliers $\alpha - \delta$ (corresponding to the slope of the line-strength distribution function and the ionisation parameter; for details, see @Puls00), $k$ the force-multiplier parameter proportional to the effective number of driving lines, and $\Gamma$ the (distance independent!) Eddington parameter. Note that the relation for $\Mdote$ is problematic because of its mass-dependence, and that $\Mdote$ itself depends on distance. By multiplying $\Mdote$ with $\vinfe$ and $(\Rstare/R_{\odot})^{1/2}$, we obtain the well-known expression for the (modified) wind-momentum rate,
$$D_{\rm mom} \;=\; \Mdote\, \vinfe\, (\Rstare/R_{\odot})^{1/2} \;\propto\; \Cinf\, k^{1/\alpha'}\, L^{1/\alpha'}\, \bigl(M(1-\Gamma)\bigr)^{3/2 - 1/\alpha'}$$
$$\varepsilon \;=\; 3/2 - 1/\alpha'$$
$$\log D_{\rm mom} \;\approx\; \frac{1}{\alpha'}\, \log L \;+\; \Do$$
$$\Do \;=\; \frac{1}{\alpha'}\, \log k \;+\; \log \Cinf \;+\; {\rm const}$$
where we have explicitly included here those quantities which are dependent on spectral type (and metallicity). Remember that this derivation assumes the winds to be unclumped, and that $\varepsilon$ is small, which is true at least for O-supergiants [@Puls00].
Investigating various possibilities, it turned out that the (predicted) scaling relation for a quantity defined similarly as the optical-depth invariant is particularly advantageous:
$$Q' \;:=\; \frac{\Mdote}{\Rstare^{1.5}}\; \geff\, \vinfe^{-1}, \quad \[lqp\]$$
$$\log Q' \;\approx\; \frac{4}{\alpha'}\, \log \Teffe \;+\; \Do'$$
$$\Do' \;=\; \frac{1}{\alpha'}\, \log k \;-\; \log \Cinf \;+\; {\rm const}'$$
This relation for the [*distance independent*]{} quantity $Q'$ becomes a function of $\log \Teffe$ and $\Do'$ alone if $\alpha'$ were exactly 2/3, i.e., under the same circumstances as the WLR. Obviously, this relation has all the features we are interested in, and we will investigate the temperature behaviour of $\Mdote$ by plotting $\log Q'$ vs. $\log \Teffe$. We believe that the factor $\vinfe/\geff$ is a monotonic function of $\Teffe$ on both sides of and through the transition zone, as is also the case for $\geff$ itself. Thus, the $\log Q' - \log \Teffe$ relation should react only to differences in the effective number of driving lines and in the different ratio $\vinfe/\vesce$ on both sides of the transition region.
Fig. \[teffqp\] displays our final result. At first glance, there is almost no difference between the relation on both sides of the “jump”, whereas inside the transition zone there is a large scatter, even if not accounting for the (questionable) mid-B star data from KPL99.
Initially, we calculated the average slope of this relation by linear regression, and then the corresponding slope by additionally excluding the objects inside the transition region (dashed). Both regressions give similar results, interpreted in terms of $\alpha'$ with values of 0.65 and 0.66 (!), respectively[^17], and with standard errors regarding $\log Q'$ of $\pm 0.33$ and $\pm 0.28$ dex.
If the relations indeed were identical on both sides of the jump, we would also have to conclude that the offset, $\Do'$, is identical on both sides of the transition region. In this case, the decrease in $\vinfe/\vesce$ within the transition zone has to be more or less exactly balanced by the [*same*]{} amount of a decrease in $k^{1/\alpha'}$, i.e., both $\Mdote$ and $\vinfe$ are decreasing in parallel, in complete contradiction to the prediction by Vink et al.
A closer inspection of Fig. \[teffqp\] (in combination with the corresponding WLR of Fig. \[wlr\]) implies an alternative interpretation. At the hottest (high luminosity) end, we find the typical division of supergiants with H$\alpha$ in emission and absorption, where the former display an offset of a factor of 2…3 above the mean relation, a fact which has previously been interpreted as being related to wind-clumping.
Proceeding towards lower temperatures, the $Q'$ relation becomes well defined between roughly 31 kK and the hot side of the transition zone (in contrast to the WLR, which shows more scatter, presumably due to uncertain stellar radii). Inside the transition zone, and also in the WLR around $\log L/L_{\odot} \approx$ 5.45, a large scatter is present, followed by an apparent steep decrease in $\log Q'$ and wind-momentum rate, where the former is located just at the “jump temperature” of 20 kK. Note that the mid-B type objects of the KPL99 sample are located just in this region. From then on, $Q'$ appears to remain almost constant until 14 kK, whereas the WLR is rather flat between 5.1 $< \log L/L_{\odot} <$ 5.4, in agreement with the findings by @benaglia07 [ their Fig. 8]. At the lowest temperatures/luminosities, both $Q'$ and the WLR decrease again, with a similar slope as in the hot star domain. This suggests a possibly discontinuous behaviour, but, again, in contradiction to what is predicted.
We now quantify the behaviour of the mass-loss rate in the low temperature region (compared to the high temperature one), in a more conservative manner than estimated above, by using both the $\log Q'$ relation [*and*]{} the WLR. Accounting for the fact that the corresponding slopes are rather similar on both sides of the transition zone, we define a difference of offsets,
$$\Delta \Do \;\equiv\; \frac{1}{\alpha'}\,\Delta \log k \;+\; \Delta \log \Cinf$$
$$\Delta \Do' \;\equiv\; \frac{1}{\alpha'}\,\Delta \log k \;-\; \Delta \log \Cinf,$$
evaluated with respect to “low” minus “high”. From the WLR, we have $\Delta \Do < 0$, whereas the $Q'$ relation implies $\Delta \Do' \ge 0$, to be cautious. Thus, $(1/\alpha')\, \Delta \log k$ (which expresses the difference in $\log \Mdote$ on both sides of the jump, cf. Eq. \[mdotscal\]) is constrained by
$$\Delta \log \Cinf \;\le\; \frac{1}{\alpha'}\, \Delta \log k \;<\; -\Delta \log \Cinf.$$
To be cautious again, we note that $\Delta \log \Cinf$ should lie in the range log(1.9/2.4)…log(1.3/3.3) = $-0.1$…$-0.4$, accounting for the worst and the average situation (cf. Fig. \[jump\]).
Thus, the scaling factors of mass-loss rates on both sides of the jump (cool vs. hot) differ by
$$0.4\ldots 0.8 \;\le\; \left(\frac{k_{\rm low}}{k_{\rm high}}\right)^{\!1/\alpha'} \;<\; 1.25\ldots 2.5 \quad \[mdotfactor\]$$
i.e., either $\Mdote$ decreases in parallel with $\vinfe/\vesce$, or it increases [*marginally*]{}.
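As a quick numerical cross-check (our own illustration; the bounds follow directly from the adopted range of $\Delta \log \Cinf$), the factors in Eq. \[mdotfactor\] can be reproduced as follows:

```python
import math

# Bracketing values of Delta log C_inf quoted in the text:
# the worst case, log(1.9/2.4), and the average situation, log(1.3/3.3).
for dlog_C in (math.log10(1.9 / 2.4), math.log10(1.3 / 3.3)):
    lower = 10 ** dlog_C      # lower bound on the mass-loss scaling factor
    upper = 10 ** (-dlog_C)   # mirror-image upper bound
    print(f"Delta log C_inf = {dlog_C:+.2f}: {lower:.2f} ... {upper:.2f}")
```

which recovers the quoted brackets 0.4…0.8 and 1.25…2.5 (to rounding).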
#### Wind efficiencies.
Before discussing the implications of these findings, let us come back to the investigations by @benaglia07 who recently reported evidence of the possible presence of a local maximum in the wind efficiency, $\eta = \Mdote \vinfe/(L/c)$, around 21000 K, which would be at least in [*qualitative*]{} agreement with theoretical predictions. In Figure \[eta\], we compare the wind-efficiencies as derived for our combined sample (from H$\alpha$) to corresponding data from their radio measurements (filled dots). The dashed line in the figure displays the theoretical predictions, which, again, are based on the models by @Vink00.
There are eight stars in common with our sample, for which we display the H$\alpha$ results only, so as not to artificially increase the statistics. At least for five of those, all of spectral type B0 to B2, a direct comparison of the H$\alpha$ and radio results is possible, since the same values of $\Teffe$, $\Rstare$ and $\vinfe$ have been used to derive the corresponding wind efficiencies. In all but one of these stars[^18], radio and optical mass-loss rates agree within 0.2 dex, which is comparable to the typical uncertainty of the optical data. Translated to potential wind-clumping, this would mean that the outer and inner wind-regions were affected by similar clumping factors, in analogy to the findings for [*thin*]{} O-star winds [@puls06]. From Fig. \[eta\] now, several issues are apparent:
$\bullet$ As for the wind-momenta and mass-loss rates, the wind-efficiencies of OB-supergiants also do not behave as predicted, at least globally.[^19] Instead, they follow a different trend (for $\alpha'=2/3$, one expects $\eta \propto \Teffe^2 \, (\Rstare^{0.5} k^{1.5} \Cinf)$, i.e., a parabola with a spectral type dependent offset), where, as we have already seen, the offset at the cool side of the jump is much lower than in the simulations by Vink et al. Actually, this is true for almost the complete B-SGs domain (between 27 and 10 kK).
$\bullet$ As in the [*observed*]{} wind-momentum luminosity diagram (Fig. \[wlr\], right panel), some of the O-supergiants do follow the predictions, while others show wind-efficiencies which are larger by up to a factor of two. Note that this result is supported by [*both*]{} H$\alpha$ and radio diagnostics. If this discrepancy were interpreted in terms of small-scale clumping, we would have to conclude that the winds of these objects are moderately clumped, even at large distances from the stellar surface.
$\bullet$ Within the transition zone, a large scatter towards higher values of $\eta$ is observed, which, if not due to systematic errors in the adopted parameters, indeed might indicate the presence of a local maximum, thus supporting the findings of @benaglia07. From a careful investigation of the distribution of stellar radii, terminal velocities and mass-loss rates, we believe that this local bump does not seem to be strongly biased by such uncertainties, but instead reflects a real increase in $\Mdote$.
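For completeness, the parabolic expectation quoted above follows from the same scaling relations as used for $Q'$; in our own rewriting (standard symbols, with $g_{\rm eff}$ the effective gravity):

```latex
% eta = Mdot v_inf c / L. Insert the CAK-type scaling
% Mdot ~ k^{1/a'} Teff^{4/a'} R^2 geff^{1-1/a'}
% (using L ~ R^2 Teff^4 and M(1-Gamma) ~ geff R^2)
% together with v_inf ~ C_inf (geff R)^{1/2}:
\begin{equation}
  \eta \;\propto\; C_\infty\, k^{1/\alpha'}\,
        T_{\rm eff}^{\,4/\alpha' - 4}\,
        R_\ast^{1/2}\,
        g_{\rm eff}^{\,3/2 - 1/\alpha'}
  \;\;\stackrel{\alpha' = 2/3}{\longrightarrow}\;\;
  T_{\rm eff}^{2}\,\bigl(R_\ast^{1/2}\, k^{3/2}\, C_\infty\bigr),
\end{equation}
% i.e., for alpha' = 2/3 the geff-dependence cancels, and eta becomes a
% parabola in Teff with a spectral-type dependent offset (via k and C_inf).
```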
Discussion
----------
Regarding a comparison with theoretical models, the major conclusion to be drawn from the previous section is as follows. In addition to the well-known factor of two discrepancies for dense O-SG winds, the most notable disagreement (discarding local effects within the transition zone for the moment) is found in the low $\Teffe$/low $L$ B-SG domain, confirming the analysis by @crowther06. The predictions by Vink et al. clearly require the decrease in $\vinfe$ to be [*over*]{}compensated by an increase in $\Mdote$ throughout the complete mid/late B-star regime, whereas our analysis has shown that this is not the case. At best, $\Mdote$ increases by the same amount as $\vinfe$ decreases, though a reduction of $\Mdote$ seems to be more likely, accounting for the fact that the upper limit in Eq. \[mdotfactor\] is a rather conservative estimate.
Since the calculation of [*absolute*]{} mass-loss rates and wind-momenta is a difficult task and depends on a number of uncertainties (see below), let us firstly consider the possibility that at least the predictions regarding the [*relative*]{} change in $\Mdote$ (from hot to cool objects) are correct, and that clumping affects this prediction only marginally.
In this case, the most simple explanation for the detected discrepancy is that cooler objects are less clumped than hotter ones. Since Vink et al. predict an increase in $\Mdote$ of a factor of five, this would imply that the clumping factors for hotter objects are larger by factors of 4 (most optimistic case) to 156 (worst case) compared to those of cooler ones.[^20] Given our present knowledge (see @FMP06, @puls06 and references therein), this is not impossible, but raises the question about the physical origin of such a difference. This hypothesis would also imply that [*all*]{} B-SG mass-loss rates are overpredicted, though to a lesser extent for cooler subtypes.
In the alternative, and maybe more reasonable, scenario that the clumping properties of OBA supergiants were not too different, we would have to conclude that at least the low temperature predictions suffer from unknown defects. Note, however, that a potential “failure” of these predictions does not invalidate the radiation driven wind theory itself. The actual mass-loss rates depend on the effective number of driving lines, and, at least in principle, this number should [*decrease*]{} towards lower $\Teffe$, due to an increasing mismatch between the position of these lines and the flux maximum (e.g., @Puls00). In Vink’s models, it increases instead, because Fe [iii]{} has many more lines than Fe [iv]{}, and because these lines are distributed over a significant spectral range. The absolute number of these lines and their strengths, however, depend on details of the available data (not forgetting the elemental abundances, @KK07), a consistent description of the ionisation/excitation equilibrium and also on other, complicating effects (e.g., the diffuse radiation field diminishing the line acceleration in the lower wind, @OP99, and the potential influence of microturbulence, @Lucy07), which makes quantitative predictions fairly ambiguous. Moreover, if the winds were clumped, this would influence the hydrodynamical simulations, due to a modified ionisation structure.
[*That*]{} there is an effect which is most probably related to the principal bistability mechanism [@PP90] remains undisputed, and is evident from the more or less sudden decrease in $\vinfe/\vesce$. Additionally, there is a large probability that at least inside the transition zone a “local” increase of $\Mdote$ ($\eta$) is present, which would partly support the arguments by Vink et al., though not on a global scale. Furthermore, the scatter of $Q'$ (and wind-momentum rate) turned out to be [*much*]{} larger in the transition region than elsewhere. This might be explained by the fact that hydrogen begins to recombine in the wind just in this region, whereas the degree of recombination depends on a multitude of parameters, thus leading to the observed variety of mass-loss rates and terminal velocities. Finally, note that at least the observed hypergiant seems to be consistent with the bistability scenario, which, after all, was originally “invented” for this kind of object.
Summary and future work {#summary}
=======================
In this study, we have presented a detailed investigation of the optical spectra of a small sample of Galactic B supergiants, from B0 to B9. Stellar and wind parameters have been obtained by employing the NLTE, unified model atmosphere code FASTWIND [@Puls05], assuming unclumped winds. The major findings of our analysis can be summarised as follows.
1\. We confirm recent results [@Ryans; @dft06; @simon] of the presence of a (symmetric) line-broadening mechanism in addition to stellar rotation, denoted as “macro-turbulence”. The derived values of $v_{\rm mac}$ are highly supersonic, decreasing from $\approx$ 60 km s$^{-1}$ at B0 to $\approx$ 30 km s$^{-1}$ at B9.
2\. We determined the Si abundances of our sample stars in parallel with their corresponding micro-turbulent velocities.\
(i) For all but one star, the estimated Si abundances are consistent with the corresponding solar value (within $\pm$0.1 dex), in agreement with similar studies [@GL; @Roll; @Urb; @Przb06]. For HD 202850, on the other hand, an overabundance of about 0.4 dex has been derived, suggesting that this late-B supergiant might be a silicon star.\
(ii) The micro-turbulent velocities tend to decrease towards later B subtypes, from 15 to 20 km s$^{-1}$ at B0 (similar to the situation in O-supergiants) to 7 km s$^{-1}$ at B9, which is also a typical value for A-SGs.\
(iii) The effect of micro-turbulence on the derived effective temperature was negligible as long as Si lines from the two major ions are used to determine it.
3\. Based on our estimates and incorporating data from similar investigations [@crowther06; @Urb; @Przb06; @Lefever], we confirm previous results (e.g., @crowther06) on a 10% downwards revision of the effective temperature scale of early B-SGs, required after incorporating the effects of line blocking/blanketing. Furthermore, we suggest a similar correction for mid and late subtypes. When strong winds are present, this reduction can become a factor of two larger, similar to the situation encountered in O-SGs [@Crowther02].
4\. To our surprise, a comparison with data from similar SMC objects [@Trundle04; @Trundle05] did not reveal any systematic difference between the two temperature scales. This result is interpreted as an indication that the re-classification scheme as developed by @lennon97 to account for lower metal line strengths in SMC B-SGs also removes the effects of different degrees of line blanketing.
5\. Investigating the wind properties of a statistically significant sample of supergiants with $\Teffe$ between 10 and 45 kK, we identified a number of discrepancies between theoretical predictions [@Vink00] and observations. In fair accordance with recent results [@Evans2; @crowther06], our sample indicates a gradual decrease in $\vinfe/\vesce$ in the bi-stability (“transition”) region, which is located at lower temperatures than predicted: 18 to 23 kK (present study) against 22.5 to 27 kK.
By means of a newly defined, distance independent quantity, $Q' = \Mdote\, \geff/(\Rstare^{1.5}\, \vinfe)$, we have investigated the behaviour of $\Mdote$ as a function of $\Teffe$. Whereas inside the transition zone a large scatter is present (coupled with a potential [*local*]{} maximum in wind efficiency around 21 kK), $Q'$ remains a well defined function with low scatter in the hot and cool temperature regions outside the transition zone. Combining the behaviour of $Q'$ and the modified wind-momentum rate, the change in $\Mdote$ over the bi-stability jump (from hot to cool) could be constrained to lie within the factors 0.4 to 2.5, to be conservative. Thus, either $\Mdote$ decreases in parallel with $\vinfe/\vesce$ (more probable), or, at most, the decrease in $\vinfe$ is just balanced by a corresponding increase in $\Mdote$ (less probable). This finding contradicts the predictions by @Vink00 that the decrease in $\vinfe$ should be [*over-compensated*]{} by an increase in $\Mdote$, i.e., that the wind-momenta should increase over the jump. Considering potential clumping effects, we have argued that such effects will not change our basic result, unless hotter objects turn out to be substantially more strongly clumped than cooler ones. In any case, at least in the low temperature region, present theoretical predictions for $\Mdote$ are too large!
This finding is somewhat similar to the recent “weak-wind problem” for late O-dwarfs[^21], though probably to a lesser extent. It might thus be that our understanding of radiation driven winds is not as complete as thought only a few years ago, and it is of extreme importance to continue the effort of constructing sophisticated wind models, including the aforementioned effects (wind-clumping, diffuse radiation field, micro-turbulence), both in terms of stationary and time-dependent simulations. With respect to the objects of the present study, a re-analysis of the “peculiar” mid-type B-supergiants from the KPL99 sample is urgently required as well. Finally, let us (once more) point to the unresolved problem of macro-turbulence, which implies the presence of rather deep-seated, statistically distributed and highly supersonic velocity fields. How can we explain such an effect within our present-day atmospheric models of hot, massive stars?
Abt, H.A., Levato, H., Grosso, M. 2002, ApJ 573, 359
Asplund, M., Grevesse, N., Sauval, A.J. 2005, in ASP Conf. Ser. 336: Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis, eds. T.G. Barnes & F.N. Bash, 25
Azzopardi, M., Vigneau, J. 1975, A&AS 19, 271
Barbier-Brossat, M., Figon, P. 2000, A&AS 142, 217
Barlow, M.J., Cohen, M. 1977, ApJ 213, 737
Becker, S.R., Butler, K. 1990, A&A 235, 326
Benaglia, P., Vink, J.S., Marti, J., et al. 2007, A&A, in press, astro-ph/3577
Bianchi, L., Garcia, M. 2002, ApJ 581, 610
Bianchi, L., Efremova, B.V. 2006, AJ 132, 378
Bouret, J-C., Lanz, T., Hillier, D.J., et al. 2003, ApJ 595, 1182
Bouret, J-C., Lanz, T., Hillier, D.J. 2005, A&A 438, 301
Bresolin, F., Gieren, W., Kudritzki, R.-P., et al. 2002, ApJ 567, 277
Bychkov, V.D, Bychkova, L.V., Madej, L. 2003, A&A 407, 631
Conti, P.S., Ebbets, D. 1977, ApJ 213, 438
Crowther, P. 2004, EAS 13, 1
Crowther, P., Hillier, D.J., Evans, C.J., et al. 2002, ApJ 579, 774
Crowther, P.A., Lennon, D.J., Walborn, N.R. 2006, A&A 446, 279
de Koter, A., Heap, S.R., Hubeny, I. 1998, ApJ 509, 879
Denizman, L., Hack, M. 1988, A&AS 75, 79
Dufton, P.L., Ryans, R.S.I., Simon-Diaz, S., et al. 2006, A&A, 451, 603
Eissner, W., Jones, M., Nussbaumer, H. 1974, Comp. Phys. Comm. 8, 270
Evans, C.J., Lennon, D.J., Trundle, C., et al. 2004, ApJ, 607, 451
Fitzpatrick, E.L., Garmany, C.D. 1990, ApJ 363, 119
Fullerton, A., Massa, D.L., Prinja, R. 2006, ApJ 637, 1025
Garcia, M., Bianchi, L. 2004, ApJ 606, 497
Garmany, C.D., Stencel, R.E. 1992, A&AS 94, 211
Gies, D.F., Lambert, D.L. 1992, ApJ 387, 673
Gray, D.F. 1973, ApJ 184, 461
Gray, D.F. 1975, ApJ 202,148
Grevesse, N., Sauval, A.J. 1998, SSRv 85, 161
Heap, S.R., Lanz, T., Hubeny, I. 2006, ApJ 638, 409
Herrero, A., Puls, J., Najarro, F. 2002, A&A 396, 946
Hillier, D.J., Miller, D.L. 1998, ApJ 496, 407
Hirschi, R., Meynet, G., Maeder, A. 2005, A&A 443, 581
Howarth, I.D., Siebert, K.W., Hussain, G.A.J., et al. 1997, MNRAS 284, 265
Humphreys, R. 1978, ApJS 38, 309
Humphreys, R., McElroy, D.B. 1984, ApJ 284, 565
Hunter, I., Dufton, P. L., Smartt, S.J., et al. 2007, A&A 466, 277
Kilian, J., Becker, S.R., Gehren, T., Nissen, et al. 1991, A&A 244, 419
Krticka, J., Kubat, J. 2004, A&A 417, 1003
Krticka, J., Kubat, J. 2007, A&A 464, L17
Kudritzki, R.-P. 1980, A&A 85, 174
Kudritzki, R.-P., Lennon, D.J., Puls, J. 1995, in: “Quantitative Spectroscopy of Luminous Blue Stars in Distant Galaxies”. ESO Astrophysics Symposia, Science with the VLT, eds. J.R. Walsh & I.J. Danziger, Springer, Heidelberg, p. 246
Kudritzki, R.-P., Puls, J., Lennon, D.J., et al. 1999, A&A 350, 970 (KPL99)
Kudritzki, R.-P., Puls, J. 2000, ARA&A 38, 613
Kurucz, R. L. 1992, Rev. Mex. Astron. Astrof. 23, 45
Lamers, H.J.G.M.L., Snow, T., Lindholm, D.M. 1995, ApJ 455, 269
Lefever, K., Puls, J., Aerts, C. 2007, A&A 463, 1093
Lennon, D.J. 1997, A&A 317, 871
Lennon, D.J., Dufton, P.L, Fitzsimmons, A. 1992, A&AS 94, 569
Lennon, D.J., Dufton, P.L, Fitzsimmons, A. 1993, A&AS 97, 559
Lucy, L.B. 2007, A&A, in press, astro-ph 3650
Markova, N., Puls, J., Repolust, T., et al. 2003, A&A 413, 693
Markova, N., Prinja, R., Morrison, N., et al. 2007, in preparation
Martins, F., Schaerer, D., Hillier, D.J., et al. 2004, A&A 420, 1087
Martins, F., Schaerer, D., Hillier, D.J., et al. 2005, A&A 441, 735
Massey, P., Bresolin, F., Kudritzki, R.-P., et al. 2004, ApJ 608, 1001
Massey, P., Puls, P., Pauldrach, A.W.A., et al. 2005, ApJ 627, 477
McErlean, N.D., Lennon, D.J., Dufton, P.L. 1998, A&A 329, 613
McErlean, N.D., Lennon, D.J., Dufton, P.L. 1999, A&A 349, 553
Meynet, G., Maeder, A. 2000, A&A 361,101
Mokiem, M.R., de Koter, A., Evans, C.J., et al. 2006, A&A 456, 1131
Najarro, F., Hillier, D.J., Puls, J., et al. 2006, A&A 456, 659
Nussbaumer, H., Storey, P.J. 1978, A&A 64, 139
Owocki, S.P. 1994, in: proc. of Isle-aux-Coudre Workshop “Instability and Variability of Hot-Star Winds”, Astrophysics and Space Science 221, 3
Owocki, S.P., Puls, J. 1996, ApJ 462, 894
Owocki, S., Puls, J. 1999, ApJ 510, 355
Pauldrach, A.W.A., Puls, J. 1990, A&A 237, 409
Pauldrach, A.W.A., Hoffmann, T.L., Lennon, M. 2001, A&A 375, 161
Prinja, R.K., Barlow, M.J., Howarth, I.D. 1990, ApJ 361, 607
Przybilla, N., Butler, K , Becker, S.R., et al. 2006, A&A 445, 1099
Puls, J., Kudritzki, R.-P., Herrero, A., et al. 1996, A&A 305, 171
Puls, J., Springmann, U., Lennon, M. 2000, A&AS 141, 23
Puls, J., Repolust, T., Hofmann, T., et al. 2003, IAUS 212, 61
Puls, J., Urbaneja, M.A., Venero, R. et al. 2005, A&A 435, 669
Puls, J., Markova, N,, Scuderi, S., et al. 2006, A&A 454, 625
Repolust, T., Puls, J., Herrero, A. 2004, A&A 415, 349
Rivinius, T., Stahl, O., Wolf, B., et al. 1997, A&A 318, 819
Rolleston, W.R.J., Smartt, S.J., Dufton, P.L., et al. 2000, A&A, 363, 537
Rosendhal, J.D. 1970, ApJ 159, 107
Runacres, M.C., Owocki, S.P. 2002, A&A 381, 1015
Runacres, M.C., Owocki, S.P. 2005, A&A 429, 323
Ryans, R.S.I., Dufton, P.L., Rolleston, W.R.J., et al. 2002, MNRAS 336, 577
Santolaya-Rey, A.E., Puls, J., Herrero, A. 1997, A&A 323, 488
Schröder, S.E., Kaper, L., Lamers, H.J.G.L.M., et al., 2004, A&A 428, 149
Simon-Diaz, S., Herrero, A., Esteban, C., et al. 2006, A&A 448, 351
Simon-Diaz, S., Herrero, A. 2007, A&A 468, 1063
Smith, K.C., Howarth, I.D. 1998, MNRAS 299, 1146
Trundle, C., Dufton, P.L., Lennon, D.J., et al. 2002, A&A 395, 519
Trundle, C., Lennon, D.J., Puls, J., et al. 2004, A&A 417, 217
Trundle, C., Lennon, D.J. 2005, A&A 434, 677
Venn K.A. 1995, ApJS 99, 659
Villamariz, M.R., Herrero, A. 2000, A&A 357, 597
Vink, J.S., de Koter, A., Lamers, H.J.G.L.M. 2000, A&A 362, 295
Vrancken, M., Lennon, D.J., Dufton, et al. 2000, A&A 358, 639
Urbaneja, M.A. 2004, PhD Thesis, University of La Laguna, Spain
Urbaneja, M.A., Herrero, A., Bresolin, F., et al. 2003, ApJ 584, L73
de Zeeuw, P.T., Hoogerwerf, R., de Bruijne, J.H.J. et al. 1999, AJ 117, 354
[^1]: This detector is characterised by an $rms$ read-out noise of 3.3 electrons per pixel (2.7 ADU with 1.21 electrons per ADU).
[^2]: The IRAF package is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Sciences Foundation.
[^3]: For a hypergiant such as HD 190603 this value might be even higher.
[^4]: According to latest results [@Asplund], the actual solar value is slightly lower, log (Si/H) = - 4.49, but such a small difference has no effect on the quality of the line-profile fits.
[^5]: Note that in their FT procedure @simon have used a Gaussian profile (with EW equal to that of the observed profile) as “intrinsic profile”. Deviations from this shape due to, e.g., natural/collisional broadening are not accounted for, thus allowing only rough estimates of to be derived, which have to be adjusted during the fit procedure.
[^6]: Si IV $\lambda$4089 is unavailable in our spectra. Given that in early B-SGs this line is strongly blended by O II which cannot be synthesised by FASTWIND with our present model atoms, this fact should not affect the outcome of our analysis.
[^7]: Note that the Balmer lines remain almost unaffected by so that a [*direct*]{} effect of on the derived is negligible.
[^8]: Though He II is rather weak at these temperatures, due to the good quality of our spectra its strongest features can be well resolved down to B2.
[^9]: At these temperatures, only He I is present, and no conclusions can be drawn from the ionisation [*balance*]{}.
[^10]: A mass of 7 as derived for HD 198478 (second entry) seems to be rather low for a SG, suggesting that the B-V colour adopted from [*SIMBAD*]{} is probably underestimated.
[^11]: We do not directly derive the mass-loss rate by means of , but rather the corresponding optical depth invariant $Q$, see @Markova04 [@repo].
[^12]: and might introduce a certain ambiguity between $\beta$ and the run of the clumping factor, if the latter quantity is radially stratified
[^13]: Note that already @Lennon92 suggested that HD 191243 [**is likely**]{} a bright giant, but their argumentation was based more on qualitative rather than on quantitative evidence.
[^14]: Four B1 stars from the Lefever et al. sample have the same and thus appear as one data point in Figs 8 (right) and 9.
[^15]: using low quality objective prism spectra in combination with MK classification criteria, both of which contribute to the uncertainty.
[^16]: These authors have employed the [*unblanketed*]{} version of FASTWIND [@Santolaya] to determine wind parameters/gravities while effective temperatures were adopted using the unblanketed, plane-parallel temperature scale of @McE99.
[^17]: slope of regression should correspond to 4/$\alpha'$, [*if*]{} the relations were unique.
[^18]: HD 41117, with from being 0.37 dex lower than from the radio excess.
[^19]: In contrast to $Q'$, $\eta$ is not completely radius-independent, but includes a dependence $\propto \Rstare^{-0.5}$, both if is measured by and by the radio excess.
[^20]: from Eq. \[mdotfactor\] with ratios of $(5/2.5)^2$ and $(5/0.4)^2$
[^21]: A detailed UV-analysis by @martins04 showed the mass-loss rates of young late-O dwarfs in N81 (SMC) to be significantly smaller (factors 10 to 100) than theory predicts (see also @bouret03). In the Galaxy, the same dilemma applies to the O9V 10 Lac(@Herrero02) and maybe also for $\tau$ Sco (B0.2V), which show very low mass-loss rates.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We study a three-terminal setup consisting of a single-level quantum dot capacitively coupled to a quantum point contact. The point contact connects to a source and drain reservoirs while the quantum dot is coupled to a single base reservoir. This setup has been used to implement a noninvasive, nanoscale thermometer for the bath reservoir by detecting the current in the quantum point contact. Here, we demonstrate that the device can also be operated as a thermal transistor where the average (charge and heat) current through the quantum point contact is controlled via the temperature of the base reservoir. We characterize the performances of this device both as a transistor and a thermometer, and derive the operating condition maximizing their respective sensitivities. The present analysis is useful for the control of charge and heat flow and high precision thermometry at the nanoscale.'
author:
- Jing Yang
- Cyril Elouard
- Janine Splettstoesser
- Björn Sothmann
- Rafael Sánchez
- 'Andrew N. Jordan'
bibliography:
- 'Meine\_Bibliothek.bib'
title: 'Thermal transistor and thermometer based on Coulomb-coupled conductors'
---
Introduction
============
Engineering and controlling heat and its coupling with electricity at the nanoscale is one of the big challenges that the field of nanoelectronics is facing today. The phenomenon of thermoelectric transport at the mesoscopic level has been widely explored in nanostructures to achieve this goal [@sothmann_thermoelectric_2015; @benenti_fundamental_2017]. This includes the design of nanoscale heat engines [@staring_coulomb-blockade_1993; @dzurak_observation_1993; @humphrey_reversible_2002; @entin-wohlman_three-terminal_2010; @sanchez_optimal_2011; @sothmann_rectification_2012; @sothmann_magnon-driven_2012; @bergenfeldt_hybrid_2014; @sanchez_chiral_2015; @hofer_quantum_2015; @roche_harvesting_2015; @hartmann_voltage_2015; @thierschmann_three-terminal_2015; @Whitney2016Jan; @Schulenborg2017Dec; @josefsson_quantum-dot_2018; @Sanchez2018Nov], refrigerators [@giazotto_opportunities_2006; @pekola_normal-metal-superconductor_2007; @edwards_quantum-dot_1993; @prance_electronic_2009; @zhang_three-terminal_2015; @koski_-chip_2015; @hofer_autonomous_2016; @sanchez_correlation-induced_2017], thermal rectifiers [@scheibner_quantum_2008; @ruokola_single-electron_2011; @fornieri_normal_2014; @jiang_phonon_2015; @sanchez_heat_2015; @martinez-perez_rectification_2015], and thermal transistors [@li_negative_2006; @jiang_phonon_2015; @joulain_quantum_2016; @sanchez_single-electron_2017; @sanchez_all-thermal_2017; @zhang_coulomb-coupled_2018; @Tang2019Feb; @guo2018quantum]. Also the detection of heat flows in such devices via nanoscale low-temperature thermometers [@correa_individual_2015; @hofer_quantum_2017; @mehboudi_thermometry_2018; @de_pasquale_quantum_2018] has been addressed. 
Remarkable progress has recently been achieved with different mesoscopic devices for millikelvin [@spietz_primary_2003; @spietz_shot_2006; @gasparinetti2011probing; @mavalankar_non-invasive_2013; @feshchenko_primary_2013; @maradan_gaas_2014; @feshchenko_tunnel-junction_2015; @iftikhar_primary_2016; @Ahmed2018Oct; @karimi2018noninvasive; @halbertal2016nanoscale] or ultrafast [@zgirski2018nanosecond; @wang2018fast; @brange2018nanoscale] thermometry.
There has been growing interest in designing multiterminal devices able to separate charge and heat currents. Proposals include the use of three-terminal configurations of capacitively coupled quantum dots where transport through two (source and drain) terminals responds to charge fluctuations in the third one (the base), involving heat but no charge transfer. Setups of this kind allow for the realization of heat engines [@sanchez_optimal_2011; @sothmann_rectification_2012; @roche_harvesting_2015; @hartmann_voltage_2015; @thierschmann_three-terminal_2015; @Whitney2016Jan; @dare_powerful_2017; @walldorf_thermoelectrics_2017; @strasberg_fermionic_2018], refrigerators [@zhang_three-terminal_2015; @koski_-chip_2015; @sanchez_correlation-induced_2017; @erdman_absorption_2018], thermal transistors [@thierschmann_thermal_2015; @sanchez_single-electron_2017; @sanchez_all-thermal_2017; @zhang_coulomb-coupled_2018] and thermometers [@Zhang2018Nov]. Of particular interest is the case of thermal transistors and non-invasive thermometers, where one seeks to maximize the response of the system (a current from source to drain) while minimizing the injection of heat from the base terminal. In the first case, transport in the system is modulated by changes in the base temperature, $\Theta$. Conversely, in the second case, the current serves as a readout of the temperature of the base. However, the tunneling current through a weakly coupled quantum dot (based on single-electron transitions) is small.
![\[fig:three-terminal-device\] (a) The QPC (beige region) and a single electron quantum dot connected to three terminals. Source and drain reservoirs have chemical potentials $\mu_\text{L},\mu_\text{R}$ and temperatures $\Theta_\text{L},\Theta_\text{R}$. The base reservoir, with temperature $\Theta$ and electrochemical potential $\mu$, is tunnel-coupled to the quantum dot with tunneling rates $W_{\rm on/off}$. The capacitance $C$ mediates the dependence of the current through the QPC, $I_\alpha$, upon the charge state $\alpha$ of the quantum dot, (b) $\alpha=0$ or (c) $\alpha=1$.](QPCQDtransistor.pdf){width="\linewidth"}
To overcome this issue, we consider an alternative structure, shown in Fig. \[fig:three-terminal-device\], consisting of a capacitively coupled quantum dot and a quantum point contact (QPC). The quantum dot is tunnel coupled to the base reservoir, which has a given temperature $\Theta$. The steady-state population of the quantum dot impacts the average (charge and heat) currents through the QPC, which in turn is connected to source and drain reservoirs. This structure has been employed previously to study the full counting statistics of single electron transport in quantum dots both theoretically [@korotkov_continuous_1999; @korotkov_selective_2001; @korotkov_noisy_2002; @pilgram_efficiency_2002; @jordan_continuous_2005; @jordan_quantum_2005; @jordan_qubit_2006; @jordan_leggett-garg_2006; @sukhorukov_conditional_2007; @flindt_universal_2009] and experimentally [@gustavsson_counting_2006; @fujisawa_bidirectional_2006; @ubbelohde_measurement_2012; @kung_irreversibility_2012; @hofmann_equilibrium_2016; @hofmann_measuring_2016; @entin-wohlman_heat_2017], where the QPC acts as a charge detector that monitors the occupation of the dot. This setup was also experimentally demonstrated [@gasparinetti2012nongalvanic; @torresani_nongalvanic_2013; @mavalankar_non-invasive_2013; @maradan_gaas_2014] to behave as a thermometer, enabled by the fact that the population of the dot is sensitive to the temperature in the base reservoir.
In this paper, we show that the structure can be used as a thermal transistor as well. We analyze its performance both as a transistor and a thermometer by deriving the corresponding sensitivities and finding the operating conditions optimizing them. When the device is operated as a thermal transistor, the goal is that a small temperature change in the base reservoir triggers a large change of the average charge or heat current in the QPC. We quantify the device performance by the differential sensitivity of the average charge and heat current to the infinitesimal temperature change in the base, as well as by the power gain.
For the operation of the device as a thermometer, we apply metrological tools to characterize two measurement protocols. The first one involves sequentially coupling the dot to the probed reservoir and to the QPC. This corresponds to the paradigmatic protocol of classical and quantum metrology [@giovannetti_advances_2011]. In the second protocol, both interactions are always turned on. This latter procedure is easier to implement and was actually used in Refs. [@gasparinetti2012nongalvanic; @torresani_nongalvanic_2013; @mavalankar_non-invasive_2013; @maradan_gaas_2014]. Both protocols allow noninvasive temperature measurements since single electron tunneling only involves a very small amount of energy and charge exchange between the measured reservoir (the base) and part of the thermometer (the quantum dot). When optimally operated, we find that the thermometer’s sensitivity is, for both protocols, limited by telegraph noise induced in the QPC by electron tunneling in the dot. Interestingly, the optimal sensitivity of the thermometer occurs for the same quantum-dot parameters as the optimal sensitivity of the thermal transistor. In contrast to the thermal transistor, we find that the sensitivity limits of the thermometer do not depend on the QPC current in the two quantum-dot charge configurations. For the thermal transistor, instead, we find that the power gain is independent of the dot occupation and the base-reservoir temperature.
With respect to the previously studied setup containing two capacitively coupled dots, the device shown in Fig. \[fig:three-terminal-device\] has two definite advantages. First, the average currents flowing through the QPC are much larger than in the quantum-dot case, where the Coulomb blockade strongly reduces the conductance compared to the conductance quantum [@nazarov_quantum_2009]. Furthermore, backaction is suppressed: In a QPC, there is no significant charge buildup in the vicinity of the saddle point potential, which forms the QPC. As a consequence, the quantum-dot state modifies the transmission probability of the QPC without energy exchange between the quantum dot and the QPC. This property makes it possible to obtain a high power gain for the transistor and helps to make the thermometer noninvasive, as only a small bounded amount of energy flows back and forth between the dot and the base without involving the QPC. This situation is different for transport through a quantum dot connecting source and drain leads. In the latter case, random fluctuations of the electrostatic potential caused by the nearby, capacitively coupled gate dot [@sanchez_all-thermal_2017; @sanchez_single-electron_2017] (or simply environmental fluctuations [@sothmann_rectification_2012; @ruokola_theory_2012; @rossello_dynamical_2017; @entin-wohlman_heat_2017]) do induce energy exchange between the source-drain system and the base part in general. Exceptions have been identified involving particular tunneling-rate configurations or strongly coupled dots [@sanchez_all-thermal_2017; @sanchez_single-electron_2017].
The paper is organized as follows. In Sec. \[sec:setup\] we introduce the model of our setup. The operation as a thermal transistor is discussed in Sec. \[sec:thermal-transistor\] while the thermometer configuration is analyzed in Sec. \[sec:thermometer\]. Our results are summarized and conclusions are drawn in Sec. \[sec:conclusion\].
\[sec:setup\]Setup
==================
We consider a three-terminal device as shown in Fig. \[fig:three-terminal-device\]. It contains a spinless single-level quantum dot, which can be either empty, denoted as $0$, or occupied with a single electron, denoted as $1$ (this corresponds to a quantum dot in the spin-split Coulomb-blockaded regime). The addition energy $\varepsilon$ is defined as the energy difference of these two states. The dot is weakly tunnel coupled to the base reservoir with chemical potential $\mu$ and temperature $\Theta$. In what follows we choose the electrochemical potential of the base reservoir as a reference energy, $\mu=0$. Throughout this article, we set $k_\text{B} \equiv 1$. The coupling strength between dot and base reservoir is characterized by the rate $\Gamma$. The population dynamics of the weakly coupled dot, with $\hbar \Gamma/\Theta\ll 1$ is described by the rate equations $$\begin{aligned}
\dot{P}_{0}(t) = -W_{\text{off}}P_{0}(t)+W_{\text{on}}P_{1}(t)
\label{eq:diff-dot}\end{aligned}$$ and $\dot{P}_{1}(t)=-\dot{P}_{0}(t)$, where $P_{0}(t)$ and $P_{1}(t)$ denote the probability to find the dot empty or singly occupied at time $t$, respectively. According to Fermi’s golden rule, the transition rate from dot state $0$ to $1$ is $W_{\text{off}}=\Gamma f_{\Theta}(\varepsilon)$ and the transition rate from dot state $1$ to $0$ is $W_{\text{on}}=\Gamma[1-f_{\Theta}(\varepsilon)],$ where $f_{\Theta}(\varepsilon)=1/(e^{\varepsilon/\Theta}+1)$ is the Fermi function. The dot occupations relax to the steady state on a time scale $\Gamma^{-1}$. The steady state populations are $$\begin{aligned}
P_{0} & = & W_{\text{on}}/\Gamma=1-f_{\Theta}(\varepsilon),\label{eq:P0}\\
P_{1} & = & W_{\text{off}}/\Gamma=f_{\Theta}(\varepsilon).\label{eq:P1}\end{aligned}$$
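The relaxation to this steady state is easy to check with a short numerical sketch (Python, with illustrative parameter values and $k_\text{B}=1$; step size and times are arbitrary choices, not taken from any experiment): integrating the rate equation from an empty dot reproduces Eqs. (\[eq:P0\],\[eq:P1\]) after a few multiples of $\Gamma^{-1}$.

```python
import math

def fermi(eps, theta):
    """Fermi function f_Theta(eps) with k_B = 1 and mu = 0."""
    return 1.0 / (math.exp(eps / theta) + 1.0)

def relax_populations(eps, theta, gamma, t, dt=1e-3):
    """Integrate dP0/dt = -W_off*P0 + W_on*P1 by explicit Euler steps,
    starting from an empty dot (P0 = 1)."""
    w_off = gamma * fermi(eps, theta)          # 0 -> 1 transition rate
    w_on = gamma * (1.0 - fermi(eps, theta))   # 1 -> 0 transition rate
    p0 = 1.0
    for _ in range(int(t / dt)):
        p0 += dt * (-w_off * p0 + w_on * (1.0 - p0))
    return p0, 1.0 - p0

# After many relaxation times 1/Gamma, the populations settle to the
# steady state P0 = 1 - f_Theta(eps), P1 = f_Theta(eps) of Eqs. (P0)-(P1).
eps, theta, gamma = 1.0, 0.4, 1.0
p0, p1 = relax_populations(eps, theta, gamma, t=30.0 / gamma)
assert abs(p0 - (1.0 - fermi(eps, theta))) < 1e-6
assert abs(p1 - fermi(eps, theta)) < 1e-6
```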
The mesoscopic QPC is connected to two macroscopic source and drain reservoirs with electrochemical potentials and temperatures $\mu_\text{L}$, $\Theta_\text{L}$ and $\mu_\text{R}$, $\Theta_\text{R}$ respectively, as shown in Fig. \[fig:three-terminal-device\]. The average current in the QPC sensitively depends on the state of the nearby quantum dot due to Coulomb interactions. For an ideal, saddle shaped potential landscape where transport occurs only via the lowest transverse mode, the QPC transmission takes the form [@buttiker_quantized_1990] $$\mathcal{T}_{\alpha}(E)=\frac{1}{1+\exp\left(-2\pi\frac{E-U_{\alpha}}{\hbar\omega}\right) }.$$ Here $\alpha=0,\,1$ represents the empty or occupied state of the dot, $\omega$ characterizes the curvature of the potential, and $U_{\alpha}$ is the electrostatic potential at the bottom of the saddle point when the dot is empty or occupied. In the nonlinear transport regime, it is important to account for the dependence of $U_{\alpha}$ on the electrochemical potentials of the leads, $U_{\alpha}=U_{\alpha,0}+\lambda \mu_\text{L}+(1-\lambda)\mu_\text{R}$ with $0\leq\lambda \leq1$, to ensure gauge invariance [@christen_gauge-invariant_1996]. Throughout this paper, we assume that $\Theta_{i}$ $(i=\text{L,R})$ is much smaller than all relevant energy scales in the QPC subsystem, in particular, $|U_\alpha-\mu_{i}|$ and $eV=\mu_\text{L}-\mu_\text{R}$, so that $f_{\Theta_{i}}(E-\mu_{i})$ can be approximated by the Heaviside function. Then the energy relevant to electronic and heat transport falls in the range $[\mu_\text{R},\,\mu_\text{L}]$. The average charge and heat currents flowing *out of* a reservoir, for a given dot state, can be calculated from the Landauer-Büttiker formula [@datta_electronic_1997; @blanter_shot_2000]. 
Focusing on currents flowing out of the left reservoir, we find $$\langle I_{\alpha}\rangle=\frac{2e}{h}\int_{\mu_\text{R}}^{\mu_\text{L}}dE\mathcal{T}_{\alpha}(E)=\frac{e\omega}{2\pi^{2}}\ln\left[\frac{1+e^{2\pi(\mu_\text{L}-U_{\alpha})/(\hbar\omega)}}{1+e^{2\pi(\mu_\text{R}-U_{\alpha})/(\hbar\omega)}}\right],\label{eq:Ialph-ave}$$ $$\langle I_{\alpha}^{Q}\rangle=\frac{2}{h}\int_{\mu_\text{R}}^{\mu_\text{L}}dE\mathcal{T}_{\alpha}(E)(E-\mu_\text{L}),\label{eq:IalphQ-ave}$$ where, for simplicity, we have suppressed the label of reservoir L, in which we always assume the currents to be detected. The subscript $\alpha$ stands for the dot state. Provided that the dot stays in state $\alpha$, the current noise in the QPC is fully characterized by shot noise. Since we assume the low-temperature limit, thermal noise can be neglected. The zero-frequency shot-noise power spectral density of the charge current (considering auto-correlations in reservoir L) is [@blanter_shot_2000] $$\begin{aligned}
S_{\alpha} &=& \frac{e^{2}}{\pi\hbar}\int_{\mu_\text{R}}^{\mu_\text{L}}dE\ \mathcal{T}_{\alpha}(E)[1-\mathcal{T}_{\alpha}(E)]\nonumber\\
&=&\frac{e^{2}\omega}{4\pi^{2}}\frac{\sinh\left[\frac{\pi(\mu_\text{L}-\mu_\text{R})}{\hbar\omega}\right]}{\cosh\left[\frac{\pi(\mu_\text{L}-U_{\alpha})}{\hbar\omega}\right]\cosh\left[\frac{\pi(\mu_\text{R}-U_{\alpha})}{\hbar\omega}\right]},\label{eq:Salpha}\end{aligned}$$ where, again, the subscript $\alpha$ indicates the dot state.
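These analytical expressions can be checked against direct numerical quadrature of the energy integrals. The following Python sketch uses illustrative placeholder parameters in units $e=h=1$; for the noise, the closed form is the one obtained by carrying out the integral of $\mathcal{T}_\alpha(1-\mathcal{T}_\alpha)$ exactly.

```python
import math

def transmission(E, U, hw):
    """Saddle-point QPC transmission T_alpha(E); hw stands for hbar*omega."""
    return 1.0 / (1.0 + math.exp(-2.0 * math.pi * (E - U) / hw))

def integrate(f, a, b, n=20000):
    """Simple trapezoidal rule."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Illustrative values; units with e = h = 1 (so hbar = 1/(2*pi)).
mu_L, mu_R, U, hw = 1.0, 0.0, 0.5, 0.3
e, h = 1.0, 1.0
hbar = h / (2.0 * math.pi)
omega = hw / hbar

# Charge current: numerical integral vs the closed form of Eq. (Ialph-ave).
I_num = (2 * e / h) * integrate(lambda E: transmission(E, U, hw), mu_R, mu_L)
I_closed = (e * omega / (2 * math.pi**2)) * math.log(
    (1 + math.exp(2 * math.pi * (mu_L - U) / hw))
    / (1 + math.exp(2 * math.pi * (mu_R - U) / hw)))
assert abs(I_num - I_closed) < 1e-6

# Shot noise: numerical integral vs the closed form that follows from
# int T(1-T) dE = (hbar*omega/4pi) * sinh(...)/(cosh(...)cosh(...)).
S_num = (e**2 / (math.pi * hbar)) * integrate(
    lambda E: transmission(E, U, hw) * (1 - transmission(E, U, hw)),
    mu_R, mu_L)
S_closed = (e**2 * omega / (4 * math.pi**2)) * math.sinh(
    math.pi * (mu_L - mu_R) / hw) / (
    math.cosh(math.pi * (mu_L - U) / hw)
    * math.cosh(math.pi * (mu_R - U) / hw))
assert abs(S_num - S_closed) < 1e-6
```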
\[sec:thermal-transistor\]Thermal transistor
============================================
Sensitivity
-----------
![\[fig:current-sens\]The normalized current $(\langle I\rangle_{\Theta}-\langle I_{1}\rangle)/\Delta I$ and the normalized differential sensitivity $(\big|\varepsilon\big|/\Delta I)\big|d\langle I\rangle_{\Theta}/d\Theta\big|=\xi(\varepsilon/\Theta)$ versus the normalized temperature $\Theta/\big|\varepsilon\big|$. The blue and the red lines are for the cases $\varepsilon>0$ and $\varepsilon<0$ respectively. ](transistor)
When the device under study acts as a thermal transistor in the steady state, analogous to the electric transistor, we would like to see a large change of average charge or heat current in the QPC due to a small change in the temperature of the base reservoir. Control of charge and heat currents via temperature gradients can be useful if, e.g., a certain device operation should only be performed as long as a certain temperature is not exceeded.
When the dot reaches its stationary state at a given base-reservoir temperature $\Theta$, the average charge current flowing in the QPC is $$\langle I\rangle_{\Theta}=P_{0}\langle I_{0}\rangle+P_{1}\langle I_{1}\rangle=\langle I_{1}\rangle+[1-f_{\Theta}(\varepsilon)]\Delta I,\label{eq:I-ave}$$ where $$\Delta I\equiv\langle I_{0}\rangle-\langle I_{1}\rangle.\label{eq:DeltaI}$$ For the average heat current one just needs to replace $\langle I\rangle_{\Theta}$, $\langle I_{i}\rangle$ and $\Delta I$ with $\langle I^{Q}\rangle_{\Theta}$, $\langle I_{\alpha}^{Q}\rangle$ and $\Delta I^{Q}\equiv\big|\langle I_{0}^{Q}\rangle-\langle I_{1}^{Q}\rangle\big|$ respectively. We see from Eq. (\[eq:I-ave\]) that both the steady-state average charge and heat current depend on the temperature of the base reservoir through the steady-state population of the dot. The differential sensitivity of the average charge current in the QPC to the temperature of the base reservoir is[^1] $$\Bigg|\frac{d\langle I\rangle_{\Theta}}{d\Theta}\Bigg|=\Bigg|\frac{df_{\Theta}(\varepsilon)}{d\Theta}\Bigg|\Delta I=\xi\left(\varepsilon/\Theta\right)\frac{\Delta I}{\big|\varepsilon\big|},\label{eq:dIdTheta}$$ where we have introduced the function for the normalized differential sensitivity $$\xi(x)\equiv\frac{x^{2}}{2[1+\cosh(x)]}.$$ We observe that the asymmetric function $\xi(x)$ reaches its maximum at $\big|x\big|=2.4$ and decreases to half of its value at $\big|x\big|=1$ and $\big|x\big|=4.5$. Thus for fixed $\varepsilon$, the prefactor on the right hand side of Eq. (\[eq:dIdTheta\]) reaches its maximum $0.44$ at $\Theta\approx0.4\big|\varepsilon\big|$, with left width $0.2\big|\varepsilon\big|$ and right width $0.6\big|\varepsilon\big|$, as shown in Fig. \[fig:current-sens\]. 
From this, we derive the maximum differential sensitivity as $$\max_{\Theta}\Bigg|\frac{d\langle I\rangle_{\Theta}}{d\Theta}\Bigg|=\frac{0.44\Delta I}{\big|\varepsilon\big|}.\label{eq:trans-max-sens}$$ For the differential sensitivity of the average heat current, one just needs to replace $\Delta I$ in Eqs. (\[eq:dIdTheta\], \[eq:trans-max-sens\]) with $\Delta I^{Q}$.
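The quoted numbers for $\xi(x)$ (maximum $\approx 0.44$ at $\big|x\big|=2.4$, half-maximum points near $\big|x\big|=1$ and $\big|x\big|=4.5$) are easy to verify with a minimal numerical scan; the grid below is an arbitrary choice made for illustration.

```python
import math

def xi(x):
    """Normalized differential sensitivity xi(x) = x^2 / (2[1 + cosh(x)])."""
    return x * x / (2.0 * (1.0 + math.cosh(x)))

# Scan xi on a fine grid for x > 0 (xi is even in x) to locate the
# maximum and the half-maximum points quoted in the text.
xs = [i * 1e-4 for i in range(1, 100000)]
vals = [xi(x) for x in xs]
x_max = xs[vals.index(max(vals))]
assert abs(x_max - 2.4) < 0.01          # maximum at |x| ~ 2.4 ...
assert abs(max(vals) - 0.44) < 0.005    # ... with value ~ 0.44

half = 0.5 * max(vals)
left = min(x for x, v in zip(xs, vals) if v >= half)
right = max(x for x, v in zip(xs, vals) if v >= half)
assert abs(left - 1.0) < 0.15           # half-maximum near |x| ~ 1 ...
assert abs(right - 4.5) < 0.15          # ... and near |x| ~ 4.5
```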
Power gain
----------
![\[fig:MaxDI\]QPC transmission in the regime where the different dot occupations correspond to fully closed and open channels. The requirement to reach this regime is to have $(\mu_\text{R}-U_{0})\gg \hbar\omega$ and $(U_{1}-\mu_\text{L})\gg \hbar\omega$.](TransFunc)
In general, the power gain of a transistor is defined as the ratio between output and input power $\mathcal{G}_{P}\equiv P_{\text{out}}/P_{\text{in}}$. Here, the output power is given by the power difference as the temperature is changed from $\Theta_{1}$ to $\Theta_{2}$ at a given applied voltage $V$, i.e., $$P_{\text{out}}=(\langle I\rangle_{\Theta_{2}}-\langle I\rangle_{\Theta_{1}})V=\Delta I\big|f_{\Theta_{2}}(\varepsilon)-f_{\Theta_{1}}(\varepsilon)\big|V.$$ The definition of the input power is less straightforward, since within the considered model no energy flows from the quantum dot into the QPC circuit. Instead, the relevant energy flow is here the energy transferred from the base reservoir into the quantum dot within a time duration $\Gamma^{-1}$ set by the characteristic time scale of the quantum-dot tunneling dynamics, when the reservoir temperature is changed from an initial value $\Theta_{1}$ to a final value $\Theta_{2}$. Therefore, we find $$P_{\text{in}}=\Gamma \varepsilon\big|f_{\Theta_{2}}(\varepsilon)-f_{\Theta_{1}}(\varepsilon)\big|.$$ With this the result for the power gain is found to be $$\mathcal{G}_{P}=\frac{V\Delta I}{\Gamma \varepsilon}.\label{eq:Gp}$$ Interestingly, the information about the occupation of the dot and about the temperature of the base reservoir drops out.
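The cancellation of the Fermi-function difference in Eq. (\[eq:Gp\]) can be made explicit numerically; in the sketch below (Python, with arbitrary illustrative parameters and $k_\text{B}=1$), the ratio $P_{\text{out}}/P_{\text{in}}$ is independent of the chosen temperature step.

```python
import math

def fermi(eps, theta):
    return 1.0 / (math.exp(eps / theta) + 1.0)

def power_gain(V, dI, gamma, eps, theta1, theta2):
    """Ratio P_out/P_in for a temperature step theta1 -> theta2."""
    df = abs(fermi(eps, theta2) - fermi(eps, theta1))
    p_out = dI * df * V          # output power at fixed bias V
    p_in = gamma * eps * df      # energy injected from the base per 1/Gamma
    return p_out / p_in

# The Fermi-function difference cancels, so G_P = V*dI/(gamma*eps)
# regardless of the initial and final base temperatures.
V, dI, gamma, eps = 2.0, 0.5, 0.1, 1.0
g_ref = V * dI / (gamma * eps)
for t1, t2 in [(0.2, 0.3), (0.5, 1.5), (0.1, 5.0)]:
    assert abs(power_gain(V, dI, gamma, eps, t1, t2) - g_ref) < 1e-12
```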
For fixed applied bias $V$, one can further maximize the sensitivity, Eq. (\[eq:trans-max-sens\]), and the power gain, Eq. (\[eq:Gp\]), over $\Delta I$ or $\Delta I^{Q}$. It is readily found from Eqs. (\[eq:Ialph-ave\]) and (\[eq:IalphQ-ave\]) that the maxima of both quantities are reached when the different dot occupations (empty or occupied) result in a completely closed or open QPC channel, corresponding to $\mathcal{T}_{0}(E)\approx1$ and $\mathcal{T}_{1}(E)\approx0$, respectively. This regime can be reached by tuning the parameters $U_{\alpha}$ and $\omega$ such that one has $(\mu_\text{R}-U_{0})/\hbar\omega\gg1$ and $(U_{1}-\mu_\text{L})/\hbar\omega\gg1$, as shown in Fig. \[fig:MaxDI\]. In this regime, we have $\langle I_{1}\rangle=\langle I_{1}^{Q}\rangle=0$, but $\langle I_{0}\rangle=2e(\mu_\text{L}-\mu_\text{R})/h$, and $\langle I_{0}^{Q}\rangle=-(\mu_\text{L}-\mu_\text{R})^{2}/h$.
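These limiting values follow from evaluating the integrals of Eqs. (\[eq:Ialph-ave\]) and (\[eq:IalphQ-ave\]) in the saturated regime of Fig. \[fig:MaxDI\]; a small Python sketch with $e=h=1$ and illustrative placeholder parameters:

```python
import math

def transmission(E, U, hw):
    return 1.0 / (1.0 + math.exp(-2.0 * math.pi * (E - U) / hw))

def integrate(f, a, b, n=20000):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# e = h = 1; bias window [mu_R, mu_L]. Choose (mu_R - U0) >> hw and
# (U1 - mu_L) >> hw so the channel is fully open (closed) for dot
# state 0 (1), as in Fig. (MaxDI).
mu_L, mu_R, hw = 1.0, 0.0, 0.05
U0, U1 = mu_R - 10 * hw, mu_L + 10 * hw

I0 = 2 * integrate(lambda E: transmission(E, U0, hw), mu_R, mu_L)
I1 = 2 * integrate(lambda E: transmission(E, U1, hw), mu_R, mu_L)
IQ0 = 2 * integrate(lambda E: transmission(E, U0, hw) * (E - mu_L),
                    mu_R, mu_L)

assert abs(I0 - 2 * (mu_L - mu_R)) < 1e-6    # <I_0> -> 2e(mu_L - mu_R)/h
assert abs(I1) < 1e-6                        # <I_1> -> 0
assert abs(IQ0 + (mu_L - mu_R) ** 2) < 1e-6  # <I_0^Q> -> -(mu_L - mu_R)^2/h
```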
\[sec:thermometer\]Thermometer
==============================
We now turn to the operation of the setup as a thermometer. We analyze two different types of protocols with the aim of sensing the temperature $\Theta$ of the base reservoir. The standard protocol of metrology, shown in Fig. \[fig:The-general-protocol\] and described in detail in the figure caption, corresponds to a sequence of discrete measurements: a physical system (the quantum dot, in our case) is used as a probe which is prepared in some known initial state and then undergoes some physical process (charging/uncharging) which depends on the true value of the estimation parameter of interest (the temperature of the base reservoir, $\Theta$). In this way, the information about the estimation parameter is encoded in the state of the probe. Finally, one uses some measuring apparatus (the QPC) to measure the state of the probe. When repeating the above procedure $N$ times, the standard precision of the measurement scales as $1/\sqrt{N}$. An alternative but more practical protocol (the one actually used in the experiments of Refs. [@mavalankar_non-invasive_2013; @maradan_gaas_2014]) is to keep both interactions always on, avoiding the sequential coupling and decoupling procedures. The QPC as a measurement apparatus or detector of the state of the dot has been widely discussed in the context of mesoscopic measurement processes [@korotkov_continuous_1999; @korotkov_selective_2001; @korotkov_noisy_2002; @pilgram_efficiency_2002; @jordan_continuous_2005; @jordan_quantum_2005; @jordan_qubit_2006; @jordan_leggett-garg_2006; @sukhorukov_conditional_2007; @flindt_universal_2009; @gustavsson_counting_2006; @fujisawa_bidirectional_2006; @ubbelohde_measurement_2012; @kung_irreversibility_2012; @hofmann_equilibrium_2016; @hofmann_measuring_2016; @entin-wohlman_heat_2017]. Let us first, for simplicity, assume that the quantum dot is fixed to be either empty or filled, and that we want to determine the state of the dot by looking at the current flowing in the QPC.
Although the on and off states of the dot correspond to different average currents $\langle I_{1}\rangle$ and $\langle I_{0}\rangle$ in the QPC, we cannot resolve these two current levels instantaneously, due to the shot noise. Therefore, to measure the state of the dot, one has to switch on the QPC circuit for some time duration. Suppose we measure for a time duration $\tau_\alpha$, which is much longer than the correlation time of the shot noise in the QPC and in principle depends on the dot state $\alpha$. Due to the central limit theorem, the distribution of the time-averaged current $(1/\tau_\alpha)\int I_\alpha(t')dt'$ conditioned on the dot state $\alpha$ is a Gaussian with mean $\langle I_{\alpha}\rangle$ and variance $S_{\alpha}/\tau_{\alpha}$, where $S_{\alpha}$ is the zero-frequency conditioned shot-noise spectral density defined in Eq. (\[eq:Salpha\]). To reach a signal-to-noise ratio of at least unity for distinguishing the two Gaussians, the measurement must be turned on for at least a time duration of $\max\{\tau_0,\tau_1\}$, where $\Delta I$ is defined in Eq. (\[eq:DeltaI\]) and $$\tau_{\alpha}\equiv\frac{S_{\alpha}}{(\Delta I)^{2}}\label{eq:tau-alph}$$ is the measurement time. If the QPC circuit is switched on for a time duration much longer than $\max\{\tau_0,\tau_1\}$, it effectively performs an ideal measurement of the quantum-dot occupation.
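The role of the measurement time $\tau_\alpha$ can be illustrated with a simple Monte Carlo sketch (Python; the current levels and noise powers below are placeholders, not taken from any device): drawing the time-averaged current from the two conditioned Gaussians shows that averaging for about $\tau_\alpha$ leaves the dot states barely distinguishable, while averaging much longer resolves them essentially without error.

```python
import math, random

random.seed(1)

# Illustrative QPC current levels and conditioned shot-noise powers for
# the two dot states (placeholder values in arbitrary units).
I0, I1 = 1.0, 0.4
S0, S1 = 0.05, 0.03
dI = I0 - I1
tau = max(S0, S1) / dI**2   # measurement time of Eq. (tau-alph)

def classify(alpha, c, n=20000):
    """Fraction of correct state assignments when the time-averaged QPC
    current (Gaussian, mean <I_alpha>, variance S_alpha/(c*tau)) is
    compared against a threshold midway between the two levels."""
    mean, S = (I0, S0) if alpha == 0 else (I1, S1)
    sigma = math.sqrt(S / (c * tau))
    thr = 0.5 * (I0 + I1)
    hits = sum((random.gauss(mean, sigma) > thr) == (alpha == 0)
               for _ in range(n))
    return hits / n

# Averaging for just tau leaves the Gaussians overlapping; averaging a
# hundred times longer resolves the occupation essentially perfectly.
acc_short = classify(0, 1)
acc_long0, acc_long1 = classify(0, 100), classify(1, 100)
assert 0.6 < acc_short < 0.8
assert acc_long0 > 0.999 and acc_long1 > 0.999
```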
![\[fig:The-general-protocol\] A typical cycle of a metrological process consists of the following steps: (i) The probe (blue rectangle) is initially prepared in some known state. (ii) The probe interacts with some physical process (magenta rectangle) that depends on the true value of the estimation parameter of interest. The interaction is turned off before moving to the next step. (iii) The interaction between the probe and the measuring apparatus is turned on to perform a noiseless projective measurement. When the measurement is done, the interaction is turned off and the measurement outcome is sent to the computer for later processing. ](sensing.pdf){width="\linewidth"}
\[subsec:Standard-protocol\]Standard (discrete) protocol
--------------------------------------------------------
We now discuss the three steps of the protocol of Fig. \[fig:The-general-protocol\]. Therefore, (i) we initially prepare the quantum dot—the probe of our setup—in state $0$, meaning that no excess electrons are in the dot. Next, (ii) we turn on the interaction with the base reservoir for a time duration $\Delta t^{(ii)}=c^{(ii)}\Gamma^{-1}$, where the constant $c^{(ii)}\gg1$ is sufficiently large to guarantee that the dot reaches the stationary state. We then (iii) turn off the interaction with the base reservoir and at the same time turn on the measurement by the QPC to determine whether there is an electron on the dot or not. We measure for time duration $\Delta t^{(iii)}=c^{(iii)}\max\{\tau_0,\tau_1\}$ to perform an effectively noiseless ideal occupation measurement, where $c^{(iii)}\gg1$ is chosen to obtain a sufficiently large signal-to-noise ratio here. The measurement gives a binary outcome statistically described by the probability mass function $\{P_{\alpha}\}$, Eqs. (\[eq:P0\],\[eq:P1\]), corresponding to the two outcomes $\alpha=0,\,1$. The corresponding sensitivity of the measurement about the temperature $\Theta$ associated with a single cycle is quantified by the Fisher information, which is given by $$F_{\Theta}=\sum_{\alpha=0,1}\frac{(\partial_{\Theta}P_{\alpha})^{2}}{P_{\alpha}}=\frac{\xi\left(\varepsilon/\Theta\right)}{\Theta^{2}}.\label{eq:FTheta}$$ Note that the inverse of the Fisher information sets the lower bound of the variance of any temperature estimator, known as the Cramér-Rao bound [@kay_fundamentals_1993].
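Equation (\[eq:FTheta\]) can be confirmed by computing the Fisher information of the binary outcome directly, with the derivative of $P_\alpha$ taken by finite differences; the test points in this Python sketch are arbitrary.

```python
import math

def fermi(eps, theta):
    return 1.0 / (math.exp(eps / theta) + 1.0)

def fisher_numeric(eps, theta, d=1e-5):
    """Fisher information of the binary outcome {P0, P1} about Theta,
    with the derivative taken by central finite differences."""
    p1 = fermi(eps, theta)
    dp1 = (fermi(eps, theta + d) - fermi(eps, theta - d)) / (2 * d)
    # dP0/dTheta = -dP1/dTheta, so both terms carry the same (dp1)^2.
    return dp1**2 / p1 + dp1**2 / (1 - p1)

def xi(x):
    return x * x / (2.0 * (1.0 + math.cosh(x)))

# Agreement with F_Theta = xi(eps/Theta)/Theta^2 at a few test points.
for eps, theta in [(1.0, 0.4), (2.0, 1.0), (-1.5, 0.6)]:
    assert abs(fisher_numeric(eps, theta) - xi(eps / theta) / theta**2) < 1e-6
```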
In a given time $t$, one can repeat the cycle $N=t/(\Delta t^{(ii)}+\Delta t^{(iii)})$ times. We denote the measurement outcome for the $n$-th cycle as $x_{n}$, where $x_{n}=0$ if the dot is empty and $x_{n}=1$ if the dot is occupied. Based on a series of measurement outcomes $\{x_{n}\}$, we propose to apply the asymptotically unbiased maximum likelihood estimator $\hat{\Theta}_{\text{MLE}}(\{x_{n}\})=\varepsilon/\ln[1/\hat{f}(\{x_{n}\})-1]$ to estimate the temperature, where $\hat{f}(\{x_{n}\})=\sum_{n=1}^{N}x_{n}/N$. When $N$ is sufficiently large, the maximum likelihood estimator can saturate the Cramér-Rao bound asymptotically [@kay_fundamentals_1993], which is given by $$\begin{aligned}
\text{Var}(\hat{\Theta}_{\text{MLE}}) & = & \frac{1}{NF_{\Theta}} \nonumber\\
&=& \frac{c}{t}\frac{\Gamma^{-1}+\max(\tau_{\alpha})}{F_{\Theta}}.\label{eq:VarTemp-noApprox}\end{aligned}$$ For simplicity, we here assumed $c^{(iii)}=c^{(ii)}\equiv c$. In a typical experiment, one can resolve the state of the dot on a time scale much shorter than the lifetime of the state, which is limited by electron tunneling. This indicates that the time scale of the tunneling, $\Gamma^{-1}$, is much longer than the time scale required to resolve the two current levels, $\max\{\tau_0,\tau_1\}$. With this approximation, Eq. (\[eq:VarTemp-noApprox\]) reduces to $$\text{Var}(\hat{\Theta}_{\text{MLE}})=\frac{c\Theta^{2}}{\Gamma t\xi(\varepsilon/\Theta)},\label{eq:std-var}$$ which is independent of the QPC parameters. Now, from the properties of the function $\xi(x)$ discussed in Sec. \[sec:thermal-transistor\], we see that $$[\text{Var}(\hat{\Theta}_{\text{MLE}})]_{\min}=\frac{2.3c\Theta^{2}}{\Gamma t},\label{eq:std-min-var}$$ and the variance ranges between $[1,\,2]\times$ the minimum variance when $\big|\varepsilon\big|$ is tuned between $[\Theta,\,4.5\Theta]$. Equation (\[eq:std-min-var\]) is confirmed by the Monte Carlo simulation shown in Fig. \[fig:MC\], where we generate the noiseless measurement signals (electric current) in the QPC according to the probability mass function $\{P_{\alpha}(t)\}$ described by Eq. (\[eq:diff-dot\]). The simulation shows that one needs to take $c\gtrsim10$ in order to obtain a numerical variance (square markers) of the maximum likelihood estimator that approaches the Cramér-Rao bound (\[eq:std-min-var\]) (red solid line) in the long time limit.
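A stripped-down version of such a Monte Carlo check can be written in a few lines of Python (parameters are illustrative, and the QPC readout is taken to be noiseless): each run draws $N$ binary occupation outcomes and compares the sample variance of $\hat{\Theta}_{\text{MLE}}$ with the per-outcome Cramér-Rao bound $\Theta^{2}/[N\xi(2.4)]\approx 2.3\,\Theta^{2}/N$ at the optimal point $\big|\varepsilon\big|=2.4\,\Theta$.

```python
import math, random

random.seed(2)

def fermi(eps, theta):
    return 1.0 / (math.exp(eps / theta) + 1.0)

def mle_variance(eps, theta, N, trials=2000):
    """Sample variance of the maximum-likelihood temperature estimate
    Theta_hat = eps / ln(1/f_hat - 1) over many runs of N binary
    occupation measurements."""
    f = fermi(eps, theta)
    ests = []
    for _ in range(trials):
        k = sum(random.random() < f for _ in range(N))
        f_hat = k / N
        if 0 < f_hat < 1:                  # discard degenerate samples
            ests.append(eps / math.log(1.0 / f_hat - 1.0))
    m = sum(ests) / len(ests)
    return sum((e - m) ** 2 for e in ests) / len(ests)

# At the optimal point eps/Theta = 2.4 the bound per binary outcome
# reads Var = Theta^2 / (N * xi(2.4)) ~ 2.3 * Theta^2 / N.
theta, eps, N = 1.0, 2.4, 400
var_mc = mle_variance(eps, theta, N)
var_crb = 2.3 * theta**2 / N
assert 0.8 * var_crb < var_mc < 1.3 * var_crb
```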
Since the optimal value of $\big|\varepsilon\big|$ depends on the true value of the estimated temperature, the maximum sensitivity given by Eq. (\[eq:std-min-var\]) can only be reached by adaptive measurements consisting of multiple rounds, where one needs to adjust the value of $\big|\varepsilon\big|$ according to the temperature estimate from the previous round. If $\big|\varepsilon\big|$ is kept fixed all the time, as shown in Fig. \[fig:sens-temp\], the normalized standard deviation $\sqrt{\mathrm{Var}(\hat{\Theta})}/\Theta$ of the temperature estimator diverges exponentially at low temperature and linearly at high temperature.
![\[fig:MC\]Comparison of the analytical results for the standard protocol, Eq. (\[eq:std-min-var\]), and for the always-on protocol, Eq. (\[eq:always-min-var\]), with Monte Carlo (MC) simulations. We ignore the shot noise in the QPC for both protocols. We take $\varepsilon/\Theta$ to be the optimal value $2.4$ for both protocols. In general, if our prior knowledge about $\Theta$ is very loose and $\varepsilon$ can be far detuned from its optimal value, then the optimal sensitivity shown here can still be achieved by adaptive measurements.](thermometer)
Always-on estimation {#sec_always_on}
--------------------
Let us now consider the alternative protocol, where the measurement by the QPC and the interaction with the electron reservoir are always turned on. We estimate the temperature of the base reservoir using the transferred charge in the QPC. When the dot reaches its steady state, the average current in the QPC is described by Eq. (\[eq:I-ave\]), where the impact of the quantum dot enters via its steady-state occupation probabilities. This QPC current has statistical fluctuations, which can be attributed to two different sources of noise. The first is the already mentioned shot noise, described by Eq. (\[eq:Salpha\]), due to the partially open channel in the QPC. The other source is telegraph noise, which stems from stochastic switching of the dot states caused by electrons tunneling on and off the quantum dot. We define the transferred charge in the QPC between the beginning of the measurement at $t'=0$ up to time $t$ as $$Q=\int_{0}^{t}I(t')dt'.\label{eq:charge}$$ The full counting statistics gives the cumulants of $Q$ [@levitov_electron_1996; @jordan_transport_2004; @sukhorukov_conditional_2007; @singh_distribution_2016], see also Appendix \[sec:FCS\]. The first and second cumulants of $Q$ are $$\langle Q\rangle=t\langle I\rangle_{\Theta}=t[\langle I_{0}\rangle-f_{\Theta}(\varepsilon)\Delta I],\label{eq:Q-ave}$$ $$\text{Var}(Q)=t\left[\frac{2(\Delta I)^{2}W_{\text{on}}W_{\text{off}}}{\Gamma^{3}}+\sum_{\alpha=0,\,1}S_{\alpha}P_{\alpha}\right].\label{eq:Q-var}$$ Now we estimate the occupation $f$ by some specific sampling of $Q$ according to Eq. (\[eq:Q-ave\]). This results in the following estimator $$\hat{f}=\frac{1}{\Delta I}\left[\frac{Q}{t}-\langle I_{0}\rangle\right].\label{eq:fhat}$$ This estimator has a simple physical interpretation: the occupation of the dot is estimated from the duration of time spent on the dot, divided by the total time of the experiment. Note that in the long time limit, the distribution of $Q$ is approximately Gaussian. 
Then the occupation estimator (\[eq:fhat\]) is also the maximum likelihood estimator. According to Eq. (\[eq:Q-ave\]), it is an unbiased estimator, i.e., $\langle\hat{f}\rangle=f_\Theta(\varepsilon)$. The variance of such an estimator is $$\begin{aligned}
\text{Var}(\hat{f}) & =\frac{1}{t}\left[\frac{2}{\Gamma}f_{\Theta}(\varepsilon)[1-f_{\Theta}(\varepsilon)]+\sum_{\alpha=0,\,1}P_{\alpha}\tau_{\alpha}\right].\end{aligned}$$ For sufficiently long $t$ such that $\text{Var}(\hat{f})$ is small, we can calculate the variance of the temperature estimator through the following error propagation relation $$\delta\hat{\Theta}=\left(\frac{df_\Theta(\varepsilon)}{d\Theta}\right)^{-1}\delta\hat{f}.\label{eq:err-propag}$$ From Eq. (\[eq:err-propag\]), we find $$\text{Var}(\hat{\Theta})=\frac{2\Theta^{2}}{\xi(\varepsilon/\Theta)t}\left[\frac{1}{\Gamma}+\frac{\tau_{0}}{2f_\Theta(\varepsilon)}+\frac{\tau_{1}}{2[1-f_\Theta(\varepsilon)]}\right].\label{eq:full-var}$$ Direct minimization of Eq. (\[eq:full-var\]) is tedious but can be done numerically. As in Sec. \[subsec:Standard-protocol\], we assume the practical situation $\tau_{\alpha}\Gamma\ll1$ to simplify the minimization. With this assumption, we see that as long as the detuning $\big|\varepsilon\big|$ is not much larger than $\Theta$, such that $f_\Theta(\varepsilon)$ is neither close to $0$ nor $1$, the contributions from the shot noise in Eq. (\[eq:full-var\]) can be safely ignored. Therefore Eq. (\[eq:full-var\]) reduces to $$\text{Var}(\hat{\Theta})=\frac{2\Theta^{2}}{\xi(\varepsilon/\Theta)\Gamma t},\label{eq:always-var}$$ which is independent of the QPC parameters as in the standard protocol. When $\xi(x)$ is maximized at $\big|\varepsilon\big|/\Theta=2.4$, and consequently $f_\Theta(\varepsilon)\approx 10^{-2}$, the shot noise can be neglected as long as $\tau_{\alpha}\Gamma\ll10^{-2}$. In this case the minimum variance is $$[\text{Var}(\hat{\Theta})]_{\min}=\frac{4.6\Theta^{2}}{\Gamma t}.\label{eq:always-min-var}$$ Eq. (\[eq:always-min-var\]) is confirmed by Monte Carlo simulation as shown in Fig. \[fig:MC\], where we simulate the telegraph process to generate the measurement signals in the QPC. 
As in the standard protocol, the optimal regime requires the knowledge of the true value of $\Theta$ and therefore adaptive measurements are required in general. The behavior of the temperature estimator in a non-adaptive measurement is qualitatively the same as the standard protocol, as shown in Fig. \[fig:sens-temp\]. However, near the optimal regime the always-on protocol has a factor of 5 smaller variance than the standard protocol, as shown in Fig. \[fig:MC\].
![\[fig:sens-temp\]The normalized standard deviation of the thermometer against the normalized temperature when $\big|\varepsilon\big|$ is kept fixed. The blue and red lines are plotted according to Eqs. (\[eq:std-var\],\[eq:always-var\]) respectively, where $c=10$ in Eq.(\[eq:std-var\]). We see that for both protocols, the normalized standard deviation $\sqrt{\mathrm{Var}(\hat{\Theta})}/\Theta$ scales as $\exp(\big|\varepsilon \big|/\Theta)$ at low temperature and $\Theta/\big| \varepsilon \big|$ at high temperature. ](thermometer_fixeps)
\[sec:conclusion\]Conclusion
============================
We have analyzed a setup consisting of a quantum dot capacitively coupled to a QPC, as shown in Fig. \[fig:three-terminal-device\], used as a nanoscale thermal transistor and noninvasive thermometer. The basic operation principle relies on the sensitivity of the average charge and heat current through the QPC to the average occupation of the quantum dot. The average dot occupation in turn depends on the temperature of the base reservoir. We characterized the performance of the thermal transistor by its power gain as well as the differential sensitivity of the average charge current through the QPC to a variation of the temperature of the base reservoir. The performance of the thermometer was characterized by the variance of the temperature estimator. Furthermore, the thermometer is noninvasive since reading out the temperature only involves single electron tunneling and it is assumed that there is no energy exchange between the base and the QPC.
Interestingly, as a consequence of the common operation principles, we have found that for both types of devices the maximal sensitivity occurs when the addition energy of the dot and the temperature of the base reservoir are related as $\varepsilon=2.4\Theta$. However, while for the transistor the base temperature is optimized at fixed addition energy, for the thermometer the optimization has to be performed over the addition energy keeping the base temperature fixed. Furthermore, while the sensitivity of the transistor depends on the difference $\Delta I$ of the average QPC currents, the sensitivity of the thermometer, characterized by the variance of the temperature estimator, is independent of $\Delta I$.
The setup proposed here has already been implemented experimentally [@gasparinetti2012nongalvanic; @torresani_nongalvanic_2013; @mavalankar_non-invasive_2013; @maradan_gaas_2014] for thermometers. Our work shows its interest for the purpose of controlling charge or heat flow at the nanoscale and sets its theoretical ideal performance.
We thank the KITP for hosting the program Thermodynamics of Quantum Systems: Measurement, engines, and control, where this work was initiated. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. JY, CE, and ANJ acknowledge the support from the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC-0017890 and the National Science Foundation under Grant No. NSF PHY-1748958. JS acknowledges funding from the Swedish VR and the Knut and Alice Wallenberg foundation through an Academy Fellowship. BS acknowledges financial support from the Ministry of Innovation NRW via the “Programm zur Förderung der Rückkehr des hochqualifizierten Forschungsnachwuchses aus dem Ausland”. RS acknowledges support from the Ramón y Cajal program RYC-2016-20778 and the “María de Maeztu Programme for Units of Excellence in R&D” (MDM-2014-0377).
\[sec:FCS\]Full counting statistics of the transmitted charge in the QPC
========================================================================
In this appendix we give a brief overview of the full counting statistics method applied to the current through the QPC, used in Sec. \[sec\_always\_on\]. We note that the statistical dependence of the transferred charges in consecutive time intervals, which are much larger than the typical correlation time of the current flowing in the QPC, can be neglected. Namely, for two consecutive time intervals $\delta t_{1}$, $\delta t_{2}$ where $\delta t_{1},\,\delta t_{2}\gg\Gamma^{-1}$, the charges transferred in the QPC denoted as $\delta Q_{1}$ and $\delta Q_{2}$ respectively can be treated as statistically independent. We define the total charge transferred during the two intervals as $\delta Q=\delta Q_{1}+\delta Q_{2}$ and denote the probability distributions of $\delta Q_{1}$, $\delta Q_{2}$ and $\delta Q$ as $P(\delta Q_{1},\,\delta t_{1})$, $P(\delta Q_{2},\,\delta t_{2})$ and $P(\delta Q,\,\delta t)$ respectively, where $\delta t=\delta t_{1}+\delta t_{2}$. Then $P(\delta Q,\,\delta t)$ is a convolution between $P(\delta Q_{1},\,\delta t_{1})$ and $P(\delta Q_2,\,\delta t_2)$ due to the statistical independence of $\delta Q_{1}$ and $\delta Q_{2}$ .
The moment and cumulant generating functions associated with $P(\delta Q,\,\delta t)$ are $$G(\lambda,\,\delta t)=\int d(\delta Q)\exp(\text{i}\lambda\delta Q)P(\delta Q,\,\delta t),$$ $$F(\lambda,\,\delta t)=\ln G(\lambda,\,\delta t).$$ Thus we have $$G(\lambda,\,\delta t)=G(\lambda,\,\delta t_{1})G(\lambda,\,\delta t_{2}),$$ $$F(\lambda,\,\delta t)=F(\lambda,\,\delta t_{1})+F(\lambda,\,\delta t_{2}).$$ As a consequence, the cumulant generating function of $P(Q,\,t)$, where $Q$ is defined in Eq. (\[eq:charge\]) takes the form $$F(\lambda,\,t)\equiv tH(\lambda).$$ We Taylor expand $F(\lambda,\,t)$ as $$F(\lambda,\,t)= \sum_{n=0}^{\infty}\frac{\lambda^{n}}{n!}{\langle\!\langle}Q^{n}{\rangle\!\rangle},$$ and define $${\langle\!\langle}I^{n}{\rangle\!\rangle}\equiv {\langle\!\langle}Q^{n}{\rangle\!\rangle}/t.\label{eq:culmQ}$$ Then, $H(\lambda)$ can be written as $$H(\lambda)=\sum_{n=0}^{\infty}\frac{\lambda^{n}}{n!}{\langle\!\langle}I^{n}{\rangle\!\rangle}.\label{eq:H(lamb)}$$
We denote the probability $P_{\alpha}(Q,\,t)$ as the probability of transferring charge $Q$ up to time $t$ and having the dot in state $\alpha$ at time $t$. We note that if there is no tunneling between the dot and the base reservoir, $$\begin{aligned}
P_{\alpha}(Q,\,t) & =\int\dfrac{d\lambda}{2\pi}\exp[-i\lambda Q+F_{\alpha}(\lambda,\,t)]\nonumber \\
& =\int\dfrac{d\lambda}{2\pi}\exp[-i\lambda Q+tH_{\alpha}(\lambda)],\end{aligned}$$ which is equivalent to writing the time-derivative as $$\dot{P}_{\alpha}(Q,\,t)=\int\dfrac{d\lambda}{2\pi}\exp[-i\lambda Q]G_{\alpha}(\lambda,\,t)H_{\alpha}(\lambda).$$ On top of this effect, we have to take into account the effect due to tunneling, which yields the master equation $$\begin{aligned}
\dot{P}_{0}(Q,\,t) & =\int\dfrac{d\lambda}{2\pi}\exp[-i\lambda Q]G_{0}(\lambda,\,t)H_{0}(\lambda)\nonumber \\
& -W_{\text{off}}P_{0}(Q,\,t)+W_{\text{on}}P_{1}(Q,\,t),\end{aligned}$$ $$\begin{aligned}
\dot{P}_{1}(Q,\,t) & =\int\dfrac{d\lambda}{2\pi}\exp[-i\lambda Q]G_{1}(\lambda,\,t)H_{1}(\lambda)\nonumber \\
& -W_{\text{on}}P_{1}(Q,\,t)+W_{\text{off}}P_{0}(Q,\,t).\end{aligned}$$ Rewriting both sides in terms of the generation functions $G_{\alpha}(\lambda,\,t)=\exp[tH_{\alpha}(\lambda)]=\int dQ\exp(\text{i}\lambda Q)P_{\alpha}(Q,\,t)$ gives $$\begin{aligned}
\dot{G}_{0}(\lambda,\,t) & = [H_{0}(\lambda)-W_{\text{off}}]G_{0}(\lambda,t)+W_{\text{on}}G_{1}(\lambda,\,t),\\
\dot{G}_{1}(\lambda,\,t) & = W_{\text{off}}G_{0}(\lambda,t)+[H_{1}(\lambda)-W_{\text{on}}]G_{1}(\lambda,t).\end{aligned}$$ The above equation can be rewritten as $$\dot{\boldsymbol{G}}=\boldsymbol{H}\boldsymbol{G},\label{eq:G-diff}$$ where $$\boldsymbol{G}(\lambda,\,t)\equiv\begin{bmatrix}G_{0}(\lambda,\,t)\\
G_{1}(\lambda,\,t)
\end{bmatrix},$$ and $$\boldsymbol{H}(\lambda)\equiv\begin{bmatrix}H_{0}(\lambda)-W_{\text{off}} & W_{\text{on}}\\
W_{\text{off}} & H_{1}(\lambda)-W_{\text{on}}
\end{bmatrix}.$$ Thus the solution to Eq. (\[eq:G-diff\]) is $$\boldsymbol{G}(\lambda,\,t)=\exp[t\boldsymbol{H}(\lambda)]\boldsymbol{G}(\lambda,\,0).$$ When $t$ is sufficiently large, the unconditional cumulant generating function $$H(\lambda)=\lim_{t\to\infty}\frac{\ln[\sum_{\alpha}G_{\alpha}(\lambda,\,t)]}{t}$$ approaches the maximum eigenvalue of $\boldsymbol{H}(\lambda)$, which gives $$\begin{aligned}
H(\lambda) & =\frac{1}{2}[\sum_{\alpha}H_{\alpha}(\lambda)-\Gamma]\nonumber \\
& +\sqrt{[H_{1}(\lambda)-H_{0}(\lambda)-\Delta\Gamma]^{2}/4+W_{\text{on}}W_{\text{off}}}.\end{aligned}$$ From this equation one can find $${\langle\!\langle}I{\rangle\!\rangle}=\partial H(\lambda)/\partial\lambda\big|_{\lambda=0}=\sum_{\alpha}P_{\alpha}{\langle\!\langle}I_{\alpha}{\rangle\!\rangle},\label{eq:Iculm}$$ $$\begin{aligned}
{\langle\!\langle}I^{2}{\rangle\!\rangle}& =\partial^{2}H(\lambda)/\partial\lambda^{2}\big|_{\lambda=0}\nonumber \\
& =2(\Delta I)^{2}W_{\text{on}}W_{\text{off}}/\Gamma^{3}+\sum_{\alpha=0,\,1}{\langle\!\langle}I_{\alpha}^{2}{\rangle\!\rangle}P_{\alpha\Theta},\label{eq:I2culm}\end{aligned}$$ where ${\langle\!\langle}I_{\alpha}{\rangle\!\rangle}=\partial H_{\alpha}(\lambda)/\partial\lambda\big|_{\lambda=0}$ and ${\langle\!\langle}I_{\alpha}^{2}{\rangle\!\rangle}=\partial^{2}H_{\alpha}(\lambda)/\partial\lambda^{2}\big|_{\lambda=0}$ . In the long time limit, we can easily obtain $${\langle\!\langle}I_{\alpha}{\rangle\!\rangle}=\lim_{t\to\infty}\frac{{\langle\!\langle}Q_\alpha {\rangle\!\rangle}}{t}=\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\langle I_\alpha(\tau)\rangle d\tau=\langle I_{\alpha}\rangle,\label{eq:ll-I-alpha-rr}$$ and $$\begin{aligned}
{\langle\!\langle}I_{\alpha}^{2}{\rangle\!\rangle}= & \lim_{t\to\infty}\frac{{\langle\!\langle}Q_\alpha^{2}{\rangle\!\rangle}}{t}\nonumber \\
= & \lim_{t\to\infty}\frac{1}{t}\left[\int_{0}^{t}\int_{0}^{t}\langle I_\alpha(\tau_{1})I_\alpha(\tau_{2})\rangle-\langle I_{\alpha}\rangle^{2}t^{2}\right]\nonumber \\
= & \lim_{t\to\infty}\int_{-t}^{t}\langle\delta I_\alpha(\tau)\delta I_\alpha(0)\rangle d\tau\nonumber \\
= & S_{\alpha},\label{eq:ll-Ialpha2-rr}\end{aligned}$$ where $\delta I_\alpha(\tau)\equiv I_\alpha(\tau)-\langle I_{\alpha}\rangle$, and $\langle I_{\alpha}\rangle$ and $S_{\alpha}$ are the conditional average electric current in the QPC and the shot noise power spectral density defined in Eqs. (\[eq:Ialph-ave\], \[eq:Salpha\]), respectively. With Eqs. (\[eq:ll-I-alpha-rr\], \[eq:ll-Ialpha2-rr\], \[eq:Iculm\], \[eq:I2culm\]), one can easily obtain Eqs. (\[eq:Q-ave\], \[eq:Q-var\]) in the main text.
[^1]: Note that we exclude the case $\varepsilon=0$, where the device is fully insensitive to temperature changes.
---
abstract: 'Consider the goal of visiting every part of a room that is not blocked by obstacles. Doing so efficiently requires both sensors and planning. Our findings suggest a method of inexpensive optical range finding for robotic room traversal. Our room traversal algorithm relies upon the approximate distance from the robot to the nearest obstacle in 360 degrees. We then choose the path with the furthest approximate distance. Since millimeter-precision is not required for our problem, we have opted to develop our own laser range finding solution, in lieu of using more common, but also expensive solutions like light detection and ranging (LIDAR). Rather, our solution uses a laser that casts a visible dot on the target and a common camera (an iPhone, for example). Based upon where in the camera frame the laser dot is detected, we may calculate an angle between our target and the laser aperture. Using this angle and the known distance between the camera eye and the laser aperture, we may solve all sides of a trigonometric model which provides the distance between the robot and the target.'
author:
- |
(TR2018-991)\
\
Cole Smith\
Eric Lin\
Dennis Shasha
bibliography:
- 'refs.bib'
date: 'October 31, 2018'
nocite: '[@*]'
title: Robotic Room Traversal using Optical Range Finding
---
Problem Statement
=================
How can a robot make an efficient traversal of a room with the fewest number of passes over the room, and how do we measure distances to obstacles such that the robot traverses all human-accessible areas within the room?
Related Work
============
The complete traversal of robotics through different terrain is not a new problem, and there has been similar work done before. However, these approaches require advanced hardware. We instead propose a cost-efficient manner of room traversal using more common materials. In a paper from the IEEE 2000 International Conference on Intelligent Robots and Systems, C. Eberst et al. showed that a robot could successfully travel through doorways and avoid obstacles using a multiple-camera array. In addition to optical methods, Eberst et al. also utilizes ultra-sonic sensors and laser scanning for increased navigation reliability [@20000575611]. Our implementation differs from these solutions in that we minimized the variety and cost of required sensors.
Another such approach was conducted by Jonathan Klippenstein and Hong Zhang from the University of Alberta, Canada. Klippenstein and Zhang performed research in feature extraction from visual simultaneous localization and mapping solutions (vSLAM) [@894673]. Similarly, Alpen et al. at the 8th IFAC Symposium on Intelligent Autonomous Vehicles in 2013 explored SLAM features for Unmanned Autonomous Vehicle (UAV) flight for indoor robotic traversal [@ALPEN2013268], and Sergio García et al. from University of Alcala proposes a solution for aerial vSLAM in a single-camera approach [@7781977]. Our approach uses simpler image transformations and filtering, meaning that our methods are affected less by low computational power, and low camera resolution.
Regardless of the sensory approach for the complete traversal of spaces by robots, Edlinger and Puttkamer at the University of Kaiserslautern propose a solution for an autonomous vehicle to build an internal, two-dimensional map of the traversal space with no prior knowledge about the traversal space itself. In addition to their traversal approach, Edlinger and Puttkamer also leverage optical range finding for navigation [@EDLINGER]. Our approach is only concerned with the traversal geometry of the robot, so the room geometry need not be stored for our algorithm.
Materials
=========
Hardware
--------
- ARM, x86-64 OS for Go programs
- iPhone 6s Plus
- Generic iPhone Suction Car Mount
- Stepper Motor (Any step count, 12v)
- iRobot Roomba model 600
Software
--------
- Room Traversal Algorithm
- EasyDriver board for Stepper Motor
- GPIO Driver for Roomba
Cost Analysis
-------------
The system is designed to be as cost effective as possible. Our camera solution total cost was less than \$90. The cost, without camera or robotic platform, can be broken down as:
- Raspberry Pi Model 3B: \$35
- Generic Stepper Motor: \$12
- EasyDriver Stepper Motor Driver: \$14
- Generic iPhone Suction Car Mount: \$8
- Plastic housing: \$11
- **Total Approximate Cost: \$80**
An iPhone is not required to use the camera interface. Any camera can be used so long as it can export PNG files to our system. For example, the Raspberry Pi Camera retails for \$26 as of 2018. Comparable LIDAR solutions can cost > \$300[@huang].
Hardware Implementation
=======================
Robotic Testing Platform
------------------------
For our robotic platform, we are using an iRobot Roomba Model 600. Direct control is assumed over the Roomba using the provided serial port at the top of the unit. Our system implements a module that sends debug commands over the Raspberry Pi GPIO pins as serial output.
The Roomba is set to manual-drive mode using a specific serial command, and then subsequent move commands are sent to the Roomba when required. Since the Roomba is always set to move at a constant speed, location can be measured by counting the encoder values of the Roomba’s wheels and comparing the value to the total time that the wheels are turning.
Camera Testing Platform
-----------------------
The flow of image processing is as follows:
1. Image is streamed from iPhone to program as PNG, using HTTP server and client programs. (CameraStreamer iPhone Application)
2. PNG is converted to 2D pixel array in HSV color space.
3. Array is passed through luminosity thresholding filter.
4. Array is passed through color thresholding filter.
5. Blob detector is run on array, and the centroid of the blobs are detected.
6. Ovals are rejected from the system, leaving only the laser dot centroid.
7. The offset from the vertical center of the camera plane and laser dot is modeled as an angle, and used to calculate distance from obstacle to robot.
Laser Dot Detection
===================
Data Flow and Filtering
-----------------------
PNG files are streamed from the CameraStreamer iPhone application to our program and converted into a 2D matrix in HSV (Hue, Saturation, Value) color space. Pixel values are stored as a struct, and each Hue, Saturation, and Value variable are normalized to be in the range \[0,1\]. This matrix will undergo a series of filtering steps to convert it into a binary image mask. Three filters are used to detect a laser dot within an image: Luminosity, Color, and Oval Rejection. Each filter defines a set of thresholds and target values for which to convert the HSV matrix into a boolean matrix.
Luminosity Filtering
--------------------
The first pass through the HSV matrix filters away pixels of undesired luminosity. For our purposes, we select only the brightest pixels within the image, since those are likely to represent the laser dot in the image frame. The conversion function creates a mask where “true” values are defined for pixels above or equal to the threshold value, and “false” for pixels below the threshold value.
``` {.Go}
func (image ImageMatrix) ConvertToMonoImageMatrixFromValue(valueThreshold float64)
*MonoImageMatrix
```
The conversion function defines a method on the `ImageMatrix` struct and takes its Value threshold as an argument, `valueThreshold`. This float defines the minimum cutoff for the Value (of HSV) of a pixel, in the normalized range \[0,1\]. The function then returns a pointer to a `MonoImageMatrix` struct, which masks out pixels of insufficient luminosity.
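As a concrete illustration, the luminosity filter might look like the following minimal sketch, where the real `ImageMatrix` and `MonoImageMatrix` types are reduced to plain slices (the type and function names here are ours):

``` {.Go}
package main

import "fmt"

// HSV is a pixel in HSV color space, each channel normalized to [0, 1].
type HSV struct {
	H, S, V float64
}

// valueMask returns a boolean mask that is true for pixels whose
// Value channel meets or exceeds valueThreshold.
func valueMask(image [][]HSV, valueThreshold float64) [][]bool {
	mask := make([][]bool, len(image))
	for y, row := range image {
		mask[y] = make([]bool, len(row))
		for x, px := range row {
			mask[y][x] = px.V >= valueThreshold
		}
	}
	return mask
}

func main() {
	img := [][]HSV{{{0.3, 1, 0.2}, {0.33, 1, 0.95}}}
	fmt.Println(valueMask(img, 0.9)) // only the bright pixel survives
}
```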
Color Filtering
---------------
A second pass is made over the HSV matrix to filter Hue values within the HSV color space. A target hue and a hue threshold are defined in which pixels are masked if the absolute value of the difference between the hue and the target hue exceeds the hue threshold.
``` {.Go}
func (image ImageMatrix) ConvertToMonoImageMatrixFromHue(hueTarget, hueThreshold float64)
*MonoImageMatrix
```
The function defines another method on `ImageMatrix`. A pointer to a `MonoImageMatrix` is returned, as in the previous luminosity filtering step, with the pixels whose hue deviates further than `hueThreshold` from `hueTarget` masked, and represented visually as black.
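A matching sketch of the hue filter, together with the combination of two masks (again with simplified types and names of our choosing; the production filter may additionally handle hue wrap-around at 1.0, which this sketch ignores):

``` {.Go}
package main

import (
	"fmt"
	"math"
)

// HSV is a pixel in HSV color space, channels normalized to [0, 1].
type HSV struct {
	H, S, V float64
}

// hueMask is true for pixels whose hue lies within hueThreshold of
// hueTarget (plain absolute difference, no wrap-around).
func hueMask(image [][]HSV, hueTarget, hueThreshold float64) [][]bool {
	mask := make([][]bool, len(image))
	for y, row := range image {
		mask[y] = make([]bool, len(row))
		for x, px := range row {
			mask[y][x] = math.Abs(px.H-hueTarget) <= hueThreshold
		}
	}
	return mask
}

// andMask keeps only pixels that passed both filters.
func andMask(a, b [][]bool) [][]bool {
	out := make([][]bool, len(a))
	for y := range a {
		out[y] = make([]bool, len(a[y]))
		for x := range a[y] {
			out[y][x] = a[y][x] && b[y][x]
		}
	}
	return out
}

func main() {
	img := [][]HSV{{{0.33, 1, 1}, {0.80, 1, 1}}} // green and purple pixels
	fmt.Println(hueMask(img, 0.33, 0.05))        // only the green pixel passes
}
```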
The two masks, luminosity and color, are then combined into one `MonoImageMatrix`. Rendered, a masked image of a laser dot will appear like the following:
![A filtered image showing a green laser dot and reflection[]{data-label="fig:mask"}](test1.png){height="2in"}
Blob Detection
--------------
The masked matrix is then traversed using a 4-connected blob detection algorithm. Small artifacts can be rejected by defining a minimum blob size constant in number of pixels. The blobs are then returned as an array of pixel coordinate groups (X,Y) which are connected and “true” within the boolean image matrix. In figure \[fig:mask\], two groups of pixels will be returned.
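A 4-connected blob detector of this kind can be sketched with a breadth-first flood fill (an illustrative sketch with simplified types, not our production code):

``` {.Go}
package main

import "fmt"

type coord struct{ x, y int }

// findBlobs returns the 4-connected groups of true pixels in mask,
// discarding groups smaller than minPixels (artifact rejection).
func findBlobs(mask [][]bool, minPixels int) [][]coord {
	h := len(mask)
	if h == 0 {
		return nil
	}
	w := len(mask[0])
	seen := make([][]bool, h)
	for y := range seen {
		seen[y] = make([]bool, w)
	}
	var blobs [][]coord
	for y := 0; y < h; y++ {
		for x := 0; x < w; x++ {
			if !mask[y][x] || seen[y][x] {
				continue
			}
			// Breadth-first flood fill over the 4-neighborhood.
			var blob []coord
			queue := []coord{{x, y}}
			seen[y][x] = true
			for len(queue) > 0 {
				c := queue[0]
				queue = queue[1:]
				blob = append(blob, c)
				for _, d := range []coord{{1, 0}, {-1, 0}, {0, 1}, {0, -1}} {
					nx, ny := c.x+d.x, c.y+d.y
					if nx >= 0 && nx < w && ny >= 0 && ny < h &&
						mask[ny][nx] && !seen[ny][nx] {
						seen[ny][nx] = true
						queue = append(queue, coord{nx, ny})
					}
				}
			}
			if len(blob) >= minPixels {
				blobs = append(blobs, blob)
			}
		}
	}
	return blobs
}

func main() {
	mask := [][]bool{
		{true, true, false, false},
		{true, false, false, true},
	}
	fmt.Println(len(findBlobs(mask, 2))) // the lone pixel is rejected
}
```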
Oval Rejection
--------------
The filtering passes will successfully mask out all light that does not conform to the luminosity and color profile defined as thresholds to the filtering methods. This will leave a boolean mask of the laser dot and any reflections of the laser dot in the image frame. Since these reflections will often appear less circular than the laser dot itself, we may reject the blobs that do not conform to a defined circular ratio.
``` {.Go}
// Given a series of connected coords, take the difference of
// min and max values for X and Y. The differences for X and Y
// are made a ratio as:
// [ abs(minX) - abs(maxX) ] / [ abs(minY) - abs(maxY) ]
// or
// [ abs(minY) - abs(maxY) ] / [ abs(minX) - abs(maxX) ]
// A ratio of 1.0 denotes a perfectly square bounding rectangle,
// (a circle blob). Anything less, denotes the oval ratio
func getCircleRatio(blob []*coord) float64
```
In figure \[fig:mask\], the leftmost oval will be rejected from consideration as a laser dot, leaving only the rightmost blob. The centroid of this pixel coordinates group is then calculated. The pixel distance of the centroid to the vertical center of the camera plane is then used to determine the physical distance between the laser dot and the laser diode.
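One way to realize the bounding-box ratio described by the comment above is the following sketch (the `coord` type is simplified, and pixel widths are taken inclusively; the exact arithmetic of our implementation may differ):

``` {.Go}
package main

import "fmt"

type coord struct{ x, y int }

// getCircleRatio returns the ratio of the shorter to the longer side of
// the blob's bounding rectangle. A ratio of 1.0 denotes a square
// bounding box (a circular blob); smaller values denote ovals.
func getCircleRatio(blob []coord) float64 {
	minX, maxX := blob[0].x, blob[0].x
	minY, maxY := blob[0].y, blob[0].y
	for _, c := range blob[1:] {
		if c.x < minX {
			minX = c.x
		}
		if c.x > maxX {
			maxX = c.x
		}
		if c.y < minY {
			minY = c.y
		}
		if c.y > maxY {
			maxY = c.y
		}
	}
	dx := float64(maxX - minX + 1)
	dy := float64(maxY - minY + 1)
	if dx > dy {
		return dy / dx
	}
	return dx / dy
}

func main() {
	line := []coord{{0, 0}, {1, 0}, {2, 0}, {3, 0}} // a 4x1 oval
	fmt.Printf("%.2f\n", getCircleRatio(line))      // 0.25
}
```

Blobs whose ratio falls below a chosen cutoff are rejected as reflections.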
Range Finding Model
===================
Calibration
-----------
The range finding program must first be calibrated before it may be used. Two methods may be used: (1) One calibration step is used, with the camera rotated with every distance reading during normal operation, to determine the angle needed to match the laser dot to the vertical center of the camera plane. (2) Multiple calibrations steps are used, with the camera not rotating while taking distance readings during normal operation, to determine the rate at which the laser dot moves away from the vertical center of the camera plane.
Using method (1), the user will place the robot such that the laser diode is 1 unit of distance (1 meter, 1 foot, etc.) away from the laser dot projected on a clean surface. The camera will then rotate using the stepper motor until the laser dot converges to the vertical center of the camera plane. The angle of rotation found in this calibration step defines a triangle in which the side from the laser diode to the laser dot is one unit of measure. Subsequent measurements during normal operation will be defined as a continuous proportion of this unit measure.
Using method (2), the user will originally place the robot as they would in method (1). More than one calibration step is used: the user will then place the robot at 2, 3, and 4 units of measure away from the laser dot. At each calibration step, the distance of the laser dot to the vertical center of the camera plane is recorded, and the rate of change in pixel distance is determined as the robot moves further away from the laser dot. This method of calibration has the limitation that the physical distance reading during normal operation is only as accurate as the approximated rate of change in pixel distance determined by this calibration method. For distances greater than the N units of measure covered during calibration, the distance measured will be extrapolated from this approximated rate of change. For this reason, our implementation supports method (1) only.
![Triangular Distance Model[]{data-label="fig:tri"}](Range_Finding_Math_Model.jpg){height="4in"}
Range Calculation
-----------------
For each distance reading, the camera is rotated until the laser offset from the vertical center of the camera plane is minimized, or ideally zero. The angle of rotation required is recorded, and given to a simple function to solve the triangular model in Figure \[fig:tri\] using the Sine Law. The camera rotation is performed using the stepper motor and iPhone mount seen in Figure \[fig:robot\]
``` {.Go}
// Returns the distance from the laser diode to the target based upon the
// provided angle, at which the pixel offset was corrected by rotating the camera
// such that the laser dot was in the center plane of the camera.
func GetLaserDistance(angle float64, triangleBase float64) float64 {
sineLawBase := (triangleBase / math.Sin(angle))
sineOfAngleC := math.Sin(math.Pi/2 - angle) // note: radians, not degrees
return sineLawBase * sineOfAngleC
}
```
##### Limitations
The constraints of our range finding model have theoretical limits based upon camera resolution and stepper motor resolution (how many discrete “steps” are available in 360 continuous degrees). As camera resolution increases, the model can gauge distance further as the increased pixel count allows for greater room between the laser dot and vertical camera plane, such that the vanishing point (where the laser dot and vertical camera plane will naturally converge as distance increases) will be further from the laser diode. Greater resolution will, however, increase computation time polynomially. More possible steps within the stepper motor allow for a more accurate angle when rotating the vertical camera plane to the laser dot.
Room Traversal Method
=====================
{height="4in"}
Based on the above image, the user will pick a starting point in a given room. The robotic vehicle will then scan the surrounding area in 60 degree increments for the direction in which it can travel the furthest, giving 6 possible directions of travel. Once a direction has been determined, it will start its navigation in that direction, maintaining a predetermined threshold distance between the vehicle and any potential obstacles.
The vehicle continues down the direction until the threshold eventually stops the vehicle from traveling in that direction, and then it scans the room again for the furthest direction to travel without backtracking.
The vehicle will eventually reach a point where it cannot move forward without backtracking. Once that point is reached, it will first decrease the obstacle threshold and determine whether the lower threshold allows the vehicle to move into additional spaces it has not visited before. An algorithm can be defined as follows:
1. START at doorway or accessible entrance
2. Take 6 distance readings in 360 degrees and begin traversing the path of maximum distance
3. Stop when distance to obstacle in the forward travel direction is less than threshold distance
4. Take 6 distance readings in 360 degrees
5. If all distance readings are below threshold, temporarily lower threshold to maximum distance of previous reading
6. If threshold has been lowered to less than the width of the robot (the robot can no longer traverse into a space), backtrack out of space by following previous line of travel, goto 4.
7. Else if forward movement crossed a line of previous traversal (cycle detected), stop, find path to starting position using previously traversed paths, follow path, END.
8. Else, goto 2
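The direction-selection core of steps 2–6 can be sketched as a small pure function (the function and parameter names are ours):

``` {.Go}
package main

import "fmt"

// chooseDirection picks the index of the largest of the six distance
// readings. If no reading clears threshold, the threshold is lowered to
// the best available reading (step 5); if even that is below robotWidth,
// the robot must backtrack (step 6) and ok is false.
func chooseDirection(readings []float64, threshold, robotWidth float64) (best int, newThreshold float64, ok bool) {
	best = 0
	for i, r := range readings {
		if r > readings[best] {
			best = i
		}
	}
	newThreshold = threshold
	if readings[best] < threshold {
		newThreshold = readings[best]
	}
	if newThreshold < robotWidth {
		return best, newThreshold, false
	}
	return best, newThreshold, true
}

func main() {
	dir, th, ok := chooseDirection([]float64{5, 12, 3, 8, 2, 4}, 20, 1)
	fmt.Println(dir, th, ok) // farthest direction wins; threshold lowered to 12
}
```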
Once all available space has been traversed, the vehicle returns to the starting position through the nearest path it can find.
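The direction-selection and threshold-lowering logic of steps 2, 5, and 6 can be sketched as follows; the function name and return convention are our own, and `readings` holds the six distance measurements:

```python
def pick_direction(readings, threshold, robot_width):
    """Choose a travel direction from the six distance readings.

    Returns (direction_index, threshold_to_use), or (None, threshold) when
    even a lowered threshold would be narrower than the robot (backtrack).
    """
    best = max(range(len(readings)), key=lambda i: readings[i])
    if readings[best] >= threshold:
        return best, threshold           # step 2: travel the longest clear ray
    lowered = readings[best]             # step 5: lower threshold to best reading
    if lowered < robot_width:
        return None, threshold           # step 6: too tight, backtrack instead
    return best, lowered
```

For example, `pick_direction([3, 9, 4, 2, 5, 1], threshold=6, robot_width=2)` keeps the threshold and heads toward the second reading.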
Results
=======
We compared our approach to a naive-bounce approach, in which the robot will make 90 degree turns when bumping into an obstacle. The traversal concludes when the robot reaches its starting position.
We found our approach to offer improved traversal since it prevents the robot from entering infinite bounce-loops, or ending its traversal early, as seen in the figure below.
The results below compare our algorithm to the naive bounce approach. The grey blocks represent obstacles. Yellow lines denote the robot's traversal path, which begins in the lower-right corner. For our algorithm, the initial obstacle threshold was set to 20 units.
![Robot Traversal using Naive Bounce (90 Degrees)[]{data-label="fig:bounce"}](bounce.png)
![Robot Traversal using our algorithm (threshold: 20 units)[]{data-label="fig:my_label"}](robot.png)
Limitation: No Outdoor Robotic Traversal
========================================
Our current robotic platform will not perform well in rough terrain outside of an office or home setting. Our system assumes flat ground with distance determined in 2 dimensions around the robot. As the robot encounters rougher terrain, a pitch in the Y dimension will be introduced. In order to properly handle outdoor situations, our model will have to be adjusted to incorporate distance calculation in 3 dimensions. This could be achieved by measuring the pixel distance between the laser dot and the horizontal center of the camera plane.
Parallel Camera Support
=======================
Our language choice (Go) naturally allows for more than one camera to operate in parallel. As a further step, more cameras and mounts can be added behind, or to the sides of the robot to decrease the need for the robot itself to rotate to gauge distance in 360 degrees. Currently, the laser itself is fixed, so the robot must rotate its entire assembly to point the laser at a different obstacle. Additionally, distance can be determined in 3 dimensions by adding cameras and lasers pointing upwards or at a pitched angle.
Conclusion
==========
The current mono-camera SLAM has been tested and shown to be successful in flat indoor areas. Our system provides an autonomous and cost-effective solution to room traversal in stable environments. For further considerations, we would like to improve the ease of threshold tuning for the machine vision pipeline model, and expand the CameraStreamer iOS application to provide control over the entire system. In doing so, we would allow our system the first steps into less stable environments such as outdoor scenarios with high brightness. In extremely bright environments, our model will support the use of IR lasers and cameras. Additionally, the pitch of the camera in the Y dimension would need to be considered for non-flat terrain. Provided these issues are addressed, this research enables future expansion into areas such as unmanned aerial vehicle navigation, since our long term considerations include abstracting our model to a more general method of environment traversal.
We provide a basic SLAM scaffold for any robotic vehicle using a single camera setup on our repository home page:
<https://github.com/NYU-Efficient-Room-Traversal>
Acknowledgements
================
This research was supported in part by the funding of NYU College of Arts and Sciences Dean’s Undergraduate Research Fund and NYU WIRELESS.
---
abstract: 'We propose an L-BFGS optimization algorithm on Riemannian manifolds using minibatched stochastic variance reduction techniques for fast convergence with constant step sizes, without resorting to linesearch methods designed to satisfy Wolfe conditions. We provide a new convergence proof for strongly convex functions without using curvature conditions on the manifold, as well as a convergence discussion for nonconvex functions. We discuss a couple of ways to obtain the correction pairs used to calculate the product of the gradient with the inverse Hessian, and empirically demonstrate their use in synthetic experiments on computation of Karcher means for symmetric positive definite matrices and leading eigenvalues of large scale data matrices. We compare our method to VR-PCA for the latter experiment, along with Riemannian SVRG for both cases, and show strong convergence results for a range of datasets.'
author:
- |
Anirban Roychowdhury\
Department of Computer Science and Engineering\
Ohio State University\
Columbus, OH 43210\
`[email protected]`\
bibliography:
- 'refpaper.bib'
title: 'Accelerated Stochastic Quasi-Newton Optimization on Riemannian Manifolds'
---
Introduction
============
Optimization algorithms are a mainstay in machine learning research, underpinning solvers for a wide swath of problems ranging from linear regression and SVMs to deep learning. Consequently, scaling such algorithms to large scale datasets while preserving theoretical guarantees is of paramount importance. An important challenge in this field is designing scalable algorithms for optimization problems in the presence of constraints on the search space, a situation all too often encountered in real life. One approach to handling such constrained optimization problems on vector spaces is to reformulate them as optimization tasks on a suitable Riemannian manifold, with the constraints acting as manifold parametrization. Often, the problems can be shown to possess desirable geometric properties like convexity with respect to distance-minimizing geodesics on the manifold, leading to provably efficient optimization algorithms [@absilbook; @zhangcolt16; @conicsuvrit; @ring_wirth]. These ideas can then be combined with stochastic optimization techniques influenced by [@robbins_monro], to deal with large datasets with theoretical convergence guarantees. See [@bonnabel; @zhangnips16] for recent examples. For instance, we can consider the problem of computing leading eigenvectors in the PCA setting [@shamiricml15] with unit-norm constraints. Projection-based strategies are normally used for this kind of problems [@oja], but alternating between solving and projecting can be prohibitively expensive in high dimensions. However, the unit-norm constraint can be used to cast the eigenvector problem into an unconstrained optimization scenario on the unit sphere, which happens to be one of the most well-behaved Riemannian manifolds.
Once the problems have been cast onto manifolds, one would want fast optimization algorithms that potentially use stochastic minibatches to deal with very large datasets. Such algorithms operating in Euclidean space have been widely researched in the optimization literature, but their development for Riemannian manifolds has been limited so far. In particular, one should note that the convergence speed limitations of unconstrained stochastic algorithms in the Euclidean case apply to manifold optimization as well; for instance a straightforward port of stochastic gradient descent to Riemannian manifolds [@zhangcolt16] attains the same sublinear convergence seen in Euclidean space. There has been extensive work in the Euclidean domain using variance-reduced gradients to address this issue, with the aim of improving convergence rates by explicitly reducing the variance of stochastic gradients with suitably spaced full-gradient evaluations [@shamiricml15; @rienips13]. Another nice advantage of this technique is the removal of the need for decaying learning rates for proving convergence, thereby solving the sublinearity issue as well as sidestepping the nontrivial task of selecting an appropriate decay rate for SGD-like algorithms in large-scale optimization scenarios. Researchers have begun porting these methods to the manifold optimization domain, with a stochastic first-order variance reduced technique [@zhangnips16] showing robust convergence guarantees for both convex and nonconvex problems on geodesically complete manifolds.
Another complementary approach to improving convergence rates is of course using second-order updates for the iterates. In the Euclidean setting, one can show quadratic convergence rates for convex problems using Newton iterations, but these tend to be prohibitively expensive in high-dimensional big-data settings due to the need to store and invert the Hessian matrix. This limitation has led to the development of quasi-Newton methods, most notably L-BFGS [@liulbfgs], which uses lower-order terms to approximate the inverse Hessian. The curvature information provided by the Hessian estimate allows superlinear convergence in ideal settings [@nocedal_wright]. While widely used for small-to-medium scale problems, adoption of these methods for big data problems has been limited, since the second order updates can be prohibitively expensive to compute in these situations. However, most optimization algorithms in the literature that use stochastic minibatching techniques to deal with large datasets are modifications of first order gradient-descent [@bottounips04; @bottouiccs10] with relatively slower convergence in practical situations. This has recently begun to be addressed, with researchers devising stochastic variants of the L-BFGS technique [@byrdarxiv14], with straightforward convergence analyses. This has also been combined with variance reduction techniques and shown to have a linear convergence rate for convex problems in Euclidean space [@moritzaist16]. Our work in this paper is in a similar vein: we study quasi-Newton L-BFGS updates with stochastic variance reduction techniques for optimization problems on Riemannian manifolds, and analyze their convergence behavior for convex and nonconvex functions.
[**Contributions:**]{} The main contributions of this work may be summarized as follows:
**1.** We propose a stochastic L-BFGS method for Riemannian manifolds using stochastic variance reduction techniques for the first-order gradient estimates, and analyze the convergence for both convex and nonconvex functions under standard assumptions.
**2.** Our proof for strongly convex functions is different from those of recently proposed stochastic L-BFGS algorithms using variance-reduced gradients in Euclidean space [@moritzaist16] due to different bounds on the stochastic gradients. We do not use sectional curvature bounds in our proof for the convex case, making it structurally different from that of Riemannian SVRG [@zhangnips16].
**3.** We show strong experimental results on Karcher mean computations and calculation of leading eigenvalues, with noticeably better performance than Riemannian SVRG and VR-PCA; the latter is one of the best performing Euclidean algorithms for finding dominant eigenvalues, and it also uses stochastic variance-reduced gradients.
Preliminaries {#prelims}
=============
Riemannian geometry {#prelimone}
-------------------
We begin with a brief overview of the differential geometric concepts we use in this work. We consider $C^{\infty}$ (smooth) manifolds that are locally homeomorphic to open subsets of $\mathbb{R}^{D}$, in the sense that the neighborhood of each point can be assigned a system of coordinates of appropriate dimensionality. Formally, this is defined with the notion of a *chart* $c:U\rightarrow\mathbb{R}^{D}$ at each $x\in\mathcal{M}$, where $U\subset\mathcal{M}$ is an open subspace containing $x$. Smooth manifolds are ones with covering collections of differentiable ($C^{\infty}$) charts. A *Riemannian metric* $g(\cdot,\cdot)$ is a bilinear $C^{\infty}$ tensor field of type $0\choose 2$, that is also symmetric and positive definite. A manifold endowed with such a metric is called a Riemannian manifold. The tangent space $T_{x}\mathcal{M}$ at every $x\in\mathcal{M}$ is a vector space, with the Riemannian metric $g:T_{x}\mathcal{M}\times T_{x}\mathcal{M}\rightarrow\mathbb{R}$ as the attendant metric. $g$ then induces a norm for vectors in the tangent space, which we denote by $\|\cdot\|$.
Riemannian manifolds are endowed with the Levi-Civita connection, which induces the notion of parallel transport of vectors from one tangent space to another along a geodesic, in a metric preserving way. That is, we have an operator $\Gamma_{\gamma}:T_{x}\mathcal{M}\rightarrow T_{y}\mathcal{M}$ where, informally speaking, $\gamma$ joins $x\text{ and }y$, and for any $u,\nu\in T_{x}\mathcal{M}$, we have $g(u,\nu)=g(\Gamma(u), \Gamma(\nu))$ . The parallel transport can be shown to be an isometry.
For every smooth curve $\gamma:[0,1]\rightarrow\mathcal{M}$ lying in $\mathcal{M}$, we denote its velocity vector as $\dot{\gamma}(t)\in T_{x}\mathcal{M}$ for each $t\in[0,1]$, with the “speed” given by $\|\dot{\gamma}(t)\|$. The length of such a curve is usually measured as $L(\gamma) = \int\limits_{0}^{1}\|\dot{\gamma}(t)\|dt.$ Denoting the covariant derivative along $\gamma$ of some $\nu\in T_{x}\mathcal{M}$, with respect to the Riemannian (Levi-Civita) connection by $A\nu$, we call $A\dot{\gamma}$ the *acceleration* of the curve. Curves with constant velocities ($A\dot{\gamma}\equiv 0$) are called *geodesics*, and can be shown to generalize the notion of Euclidean straight lines. We assume that every pair $x,y\in\mathcal{M}$ can be connected by a geodesic $\gamma$ s.t. $\gamma(0)=x\text{ and }\gamma(1)=y$. Immediately we have the notion of “distance” between any $x,y\in\mathcal{M}$ as the minimum length of all geodesics connecting $x\text{ and }y$, assuming the manifolds are *geodesically complete* as mentioned above, in that every countable decreasing sequence of lengths of geodesics connecting a pair of points has a well-defined limit. The geodesic induces a useful operator called the *exponential map*, defined as $\text{Exp}_{x}:T_{x}\mathcal{M}\rightarrow\mathcal{M}\text{ s.t. }{\ensuremath{\text{Exp}_{x}(\nu)}}=\gamma(1)\text{ where }\gamma(0)=x,\gamma(1)=y\text{ and }\dot{\gamma}(0)=\nu$. If there is a unique geodesic connecting $x\text{ and }y$, then the exponential map has an inverse, denoted by ${\ensuremath{\text{Exp}^{-1}_{x}(y)}}$. The length of this geodesic can therefore be seen to be $\|{\ensuremath{\text{Exp}^{-1}_{x}(y)}}\|$.
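On the unit sphere, which is used for the eigenvalue experiments later, the exponential map and its inverse have closed forms; a minimal sketch in plain Python, assuming unit-norm inputs:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def exp_map(x, v):
    """Exp_x(v) on the unit sphere: move along the great circle with velocity v."""
    t = norm(v)
    if t < 1e-12:
        return list(x)
    return [math.cos(t) * xi + math.sin(t) * vi / t for xi, vi in zip(x, v)]

def inv_exp_map(x, y):
    """Exp_x^{-1}(y): tangent vector at x whose norm is the geodesic distance."""
    c = max(-1.0, min(1.0, dot(x, y)))
    theta = math.acos(c)                        # geodesic distance
    if theta < 1e-12:
        return [0.0] * len(x)
    u = [yi - c * xi for xi, yi in zip(x, y)]   # component of y tangent at x
    s = norm(u)
    return [theta * ui / s for ui in u]
```

The two maps are mutually inverse along the unique geodesic: `exp_map(x, inv_exp_map(x, y))` recovers `y` whenever `x` and `y` are not antipodal.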
The derivative $D$ of a differentiable function is defined using the Riemannian connection by the following equivalence: $Df(x)\nu=\nu f$, where $\nu\in T_{x}\mathcal{M}$. Then, by the Riesz representation theorem, there exists a gradient $\nabla f(x)\in T_{x}\mathcal{M}$ s.t. $\forall\nu\in T_{x}\mathcal{M},\>Df(x)\nu=g_{x}(\nabla f(x),\nu)$. Similarly, we can denote the Hessian as follows: $D^{2}f(x)(\cdot,\cdot):T_{x}\mathcal{M}\times T_{x}\mathcal{M}\rightarrow\mathbb{R}$. We denote the mapping from $\nu\in T_{x}\mathcal{M}$ to the Riesz representation of $D^{2}f(x)(\nu,\cdot)$ by $\nabla^{2}f(x)$. One can consult standard textbooks on differential geometry [@leebook; @boothby] for more details.
Convexity and Lipschitz smoothness on manifolds
-----------------------------------------------
Similar to [@zhangcolt16; @conicsuvrit; @zhangnips16], we define manifold (or geodesic) convexity concepts analogous to the Euclidean baselines, as follows: a set $U\subset\mathcal{M}$ is convex on the manifold if $\forall x,y\in U$ there exists a geodesic $\gamma$ connecting $x,y$ that completely lies in $U$, i.e. $\gamma(0)=x,\gamma(1)=y$. Then, a function can be defined as convex w.r.t. geodesics if $\forall x,y\in U\text{ where }\exists\gamma$ connecting $x,y$ on the manifold, we have: $$f(\gamma(t))\leq (1-t)f(x)+tf(y)\>\forall t\in[0,1].$$
We can also define a notion of strong convexity as follows: a function $f$ is called $S-$strongly convex if for any $x,y\in U$ and (sub)gradient $\nabla_{x}$, we have $$\label{eqn:sconvex}
f(y)\geq f(x)+g_{x}(\nabla_{x},{\ensuremath{\text{Exp}^{-1}_{x}(y)}})+\frac{S}{2}{\ensuremath{\|{\ensuremath{\text{Exp}^{-1}_{x}(y)}}\|^{2}}}.$$
We define Lipschitz smoothness of a function $f$ by imposing Lipschitz continuity on the gradients, as follows: $\forall x,y\in U,$ $$\|\nabla(x)-\Gamma_{\gamma}\nabla(y)\|\leq L\|{\ensuremath{\text{Exp}^{-1}_{x}(y)}}\|,$$ where $L$ is the smoothness parameter. Analogous to the Euclidean case, this property can also be formulated as: $$\label{eqn:lsmooth}
f(y)\leq f(x)+g_{x}(\nabla_{x},{\ensuremath{\text{Exp}^{-1}_{x}(y)}})+\frac{L}{2}{\ensuremath{\|{\ensuremath{\text{Exp}^{-1}_{x}(y)}}\|^{2}}}.$$
Stochastic Riemannian L-BFGS
============================
In this section we present our stochastic variance-reduced L-BFGS algorithm on Riemannian manifolds and analyze the convergence behavior for convex and nonconvex differentiable functions on Riemannian manifolds. We assume these manifolds to be $L$-Lipschitz smooth, as defined above, with existence of unique distance-minimizing geodesics between every two points, i.e. our manifolds are geodesically complete; this allows us to have a well-defined inverse exponential map that encodes the distance between a pair of points on the manifold. For the convergence analysis, we also assume $f$ to have a unique minimum at $x^{*}\in U$, where $U$ is a compact convex subset of the manifold.
The Algorithm
-------------
The pseudocode is shown in Algorithm \[alg:srlbfgs\]. We provide a brief discussion of the salient properties, and compare it to similar algorithms in the Euclidean domain, for example [@byrdarxiv14; @moritzaist16], as well as those on Riemannian manifolds, for example [@conicsuvrit]. To begin, note that $\nabla$ denotes the Riesz representation of the gradient $D$, as defined in $\S$\[prelimone\]. We denote full gradients by $\nabla$ and stochastic gradients by $\tilde{\nabla}$. Similar to other stochastic algorithms with variance-reduction, we use two loops: each iteration of the inner loop corresponds to drawing one minibatch from the data and performing the stochastic gradient computations (Steps $10$, $11$), whereas each outer loop iteration corresponds to two passes over the full dataset, one to compute the full gradient (Step $6$) and another to make multiple minibatch runs (Steps $8$ through $30$). Compared to the Euclidean setting, note that the computation of the variance-reduced gradient in Step $11$ involves an extra step: the gradients ($\nabla f(x)$-s) reside in the tangent spaces of the respective iterates, therefore we have to perform parallel transport to bring them to the same tangent space before performing linear combinations.
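A minimal sketch of this variance-reduced step on the unit sphere, substituting projection onto the current tangent space for the exact Levi-Civita transport (a common vector-transport approximation, not the operator used in the algorithm):

```python
def project_tangent(x, v):
    """Project v onto T_x of the unit sphere: a cheap stand-in for transport."""
    c = sum(a * b for a, b in zip(x, v))
    return [vi - c * xi for xi, vi in zip(x, v)]

def vr_gradient(grad_i_x, grad_i_snap, full_grad_snap, x):
    """nu = grad f_i(x) - Gamma(grad f_i(x_snap) - g_snap), mirroring Step 11."""
    diff = [a - b for a, b in zip(grad_i_snap, full_grad_snap)]
    correction = project_tangent(x, diff)   # bring the correction into T_x
    return [a - b for a, b in zip(grad_i_x, correction)]
```

When the minibatch gradient at the snapshot coincides with the full gradient, the correction vanishes and `nu` reduces to the plain stochastic gradient.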
To avoid memory issues and complications arising from Hessian-vector computations in the Riemannian setting, we chose to update the second correction variable $y_{r}$ using a simple difference of gradients approximation: $y_{r}=\tilde{\nabla}f(u_{r})-\Gamma_{\gamma}\tilde{\nabla}f(u_{r-1})$. We should note here that the parallel transport parametrization should be clear from the context; $\Gamma_{\gamma}$ here denotes transporting the vector $\tilde{\nabla}f(u_{r-1})\in T_{u_{r-1}}\mathcal{M}$ to $T_{u_{r}}\mathcal{M}$ along the connecting geodesic $\gamma$. We omit any relevant annotations from the transport symbol to prevent notational overload. We calculate the first correction element $z_{r}$ in one of two ways: (a) as $z_{r}=\Gamma_{\gamma}\left(\eta_{2}\rho_{t-1}\right)$, or (b) as $z_{r}=\Gamma_{\gamma}\left(-\eta_{1}\nu_{\text{prev}}\right)$. We denote these by **Option** $\mathbf{1}$ and **Option** $\mathbf{2}$ respectively in Alg. \[alg:srlbfgs\]. Note that in both cases, $\Gamma_{\gamma}$ denotes the parallel transport of the argument to the tangent space at $x_{t}^{s+1}$. In our experiments, we noticed faster convergence for the strongly convex centroid computation problem with **Option** $\mathbf{1}$, along with computation of the correction pairs every iteration and a low memory pool. For calculating dominating eigenvalues on the unit-radius sphere, **Option** $\mathbf{2}$ yielded better results. Once the correction pairs $z_{r}, y_{r}$ have been computed, we compute the descent step in Step 21 using the standard two-loop recursion formula given in [@nocedal_wright], using the $M$ correction pairs stored in memory. Note that we use fixed stepsizes in the update steps and in computing the correction pairs, and do not impose or perform calculations designed to have them satisfy Armijo or Wolfe conditions.
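The product $H_{r}\nu$ used for the descent step can be sketched with the standard two-loop recursion, assuming all stored correction pairs have already been transported into the current tangent space so that the vector algebra is ordinary:

```python
def two_loop(grad, pairs):
    """Approximate H * grad with the L-BFGS two-loop recursion.

    pairs: list of (z, y) correction pairs, oldest first, all expressed in
    the current tangent space; grad: the variance-reduced gradient there.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    q = list(grad)
    stack = []
    for z, y in reversed(pairs):                  # newest pair first
        rho = 1.0 / dot(y, z)
        alpha = rho * dot(z, q)
        stack.append((alpha, rho, z, y))
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
    z_new, y_new = pairs[-1]
    gamma = dot(z_new, y_new) / dot(y_new, y_new)  # initial H_0 = gamma * I
    r = [gamma * qi for qi in q]
    for alpha, rho, z, y in reversed(stack):       # oldest pair first
        beta = rho * dot(y, r)
        r = [ri + (alpha - beta) * zi for ri, zi in zip(r, z)]
    return r
```

As a sanity check, a single pair with $y = 2z$ (curvature roughly $2$) scales the gradient by about $1/2$, as expected of an inverse-Hessian estimate.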
**Input:** initial value $x^{0}$, parameters $M$ and $R$, learning rates $\eta_{1},\eta_{2}$, minibatch size $mb$. Initialize $c = 1$.

For each outer iteration $t$:

- Set $r=0$; initialize $H_{0}$; set $x_{0}^{t+1}=x^{t}$.
- Compute the full gradient $g^{t+1}=N^{-1}\sum_{i=1}^{N}\nabla f_{i}(x^{t})$.
- For each inner iteration $i$:
    - Sample a minibatch $I_{i,mb}\subset\{1,\ldots,N\}$.
    - Compute $\tilde{\nabla}f(x_{i}^{t+1})$ and $\tilde{\nabla}f(x^{t})$ using $I_{i,mb}$.
    - Set $\nu_{i}^{t+1}=\tilde{\nabla}f(x_{i}^{t+1})-\Gamma_{\gamma}(\tilde{\nabla}f(x^{t})-g^{t+1})$.
    - Every $R$ iterations, update the correction pairs: set $r=r+1$ and $u_{r}^{t+1}=x_{i}^{t+1}$; compute $z_{r}^{t+1}=\Gamma_{\gamma}\left(\eta_{2}\rho_{i-1}^{t+1}\right)$ (**Option 1**) or $z_{r}^{t+1}=\Gamma_{\gamma}\left(-\eta_{1}\nu_{\text{prev}}\right)$ (**Option 2**); compute $y_{r}^{t+1}=\tilde{\nabla}f(u_{r}^{t+1})-\Gamma_{\gamma}\tilde{\nabla}f(u_{r-1}^{t+1})$ using $I_{i,mb}$; store the pair $(z_{r}^{t+1},y_{r}^{t+1})$, using $r$ to maintain memory depth $M$; set $x_{\text{prev}}=x_{i}^{t+1}$ and $\nu_{\text{prev}}=\nu_{i}^{t+1}$.
    - For the first $2R$ iterations, set $x_{i+1}^{t+1}={\ensuremath{\text{Exp}_{x_{i}^{t+1}}(-\eta_{1}\nu_{i}^{t+1})}}$; afterwards, compute $\rho_{i}^{t+1}=H_{r}^{t+1}\nu_{i}^{t+1}$ via the two-loop recursion and set $x_{i+1}^{t+1}={\ensuremath{\text{Exp}_{x_{i}^{t+1}}(\eta_{2}\rho_{i}^{t+1})}}$.
    - Set $c=c+1$.
- Set $x^{t+1}=x_{m}^{t+1}$.
Compared to the Euclidean algorithms [@byrdarxiv14; @moritzaist16], Alg.\[alg:srlbfgs\] has some key differences: 1) we did not notice any significant advantage from using separate minibatches in Steps $10$ and $18$, therefore we use the same minibatch to compute the VR gradient and the correction elements $y_{r}$; 2) we do not keep a running average of the iterates for computing the correction element $z_{r}$ (Steps $15$ through $17$); 3) we use constant stepsizes throughout the whole process, in contrast to [@byrdarxiv14] that uses a decaying sequence. Note that, as seen in Step $24$, we use the first-order VR gradient to update the iterates for the first $2R$ iterations; this is because we calculate correction pairs every $R$ steps and evaluate the gradient-inverse Hessian product (Step $26$) once at least two pairs have been collected. Similar to [@nocedal_wright], we drop the oldest pair to maintain the memory depth $M$. Compared to the algorithms in [@conicsuvrit; @reshadnips15], ours uses stochastic VR gradients, with all the attendant modifications and advantages, and does not use linesearch techniques to satisfy Wolfe conditions.
Analysis of convergence
-----------------------
In this section we provide the main convergence results of the algorithm. We analyze convergence for finite-sum empirical risk minimization problems of the following form: $$\label{eq:fsum}
\min_{x\in \mathcal{M}} f(x) = \frac{1}{N}\sum\limits_{i=1}^{N}f_{i}(x),$$ where the Riemannian manifold is denoted by $\mathcal{M}$. Note that the iterates are updated in Algorithm \[alg:srlbfgs\] by taking the exponential map of the descent step multiplied by the stepsize, with the descent step computed as the product of the inverse Hessian estimate and the stochastic variance-reduced gradient using the standard two-loop recursion formula. Thus, to bound the optimization error using the iterates, we will need bounds on both the stochastic gradients and the inverse Hessians. As mentioned in [@zhangnips16], the methods used to derive the former bounds for Euclidean algorithms cannot be ported directly to manifolds due to metric nonlinearities; see the proof of Proposition \[prop:one\] for details. For the latter, we follow the standard template for L-BFGS algorithms in the literature [@ring_wirth; @nocedal_wright; @byrdarxiv14]. To begin, we make the following assumptions:
**Assumption 1.** The function $f$ in is strongly convex on the manifold, whereas the $f_{i}$s are individually convex.
**Assumption 2.** There exist $\lambda,\Lambda\in (0,\infty),\>\lambda<\Lambda$ s.t. $\lambda\|\nu\|_{x}^{2}\leq D^{2}f\leq \Lambda\|\nu\|_{x}^{2}\quad\forall\nu\in T_{x}\mathcal{M}$.
These two assumptions allow us to (a) guarantee that $f$ has a unique minimizer $x^{*}$ in the convex sublevel set $U$, and (b) derive bounds on the inverse Hessian updates using BFGS update formulae for the Hessian approximations. Similar to the Euclidean case, these can be written as follows: $$\label{eqn:bfgsone}
\hat{B}_{r}=\Gamma_{\gamma}\left[\hat{B}_{r-1}-\frac{B_{r-1}(s_{r-1},\cdot)\hat{B}_{r-1}s_{r-1}}{B_{r-1}(s_{r-1},s_{r-1})}\right]\Gamma_{\gamma}^{-1},$$ and by the Sherman-Morrison-Woodbury lemma, that of the inverse: $$H_{r} = \Gamma_{\gamma}\left[G^{-1}H_{r-1}G+\frac{g_{x_{r-1}}(s_{r-1},\cdot)s_{r-1}}{y_{r-1}s_{r-1}}\right]\Gamma_{\gamma}^{-1},$$ where $G = I-\frac{g_{x_{r-1}}(s_{r-1},\cdot)\hat{y}_{r-1}}{y_{r-1}s_{r-1}}$, and $\hat{B}_{r}$ is the Lax-Milgram representation of the Hessian. Details on these constructs can be found in [@ring_wirth], in addition to [@leebook; @boothby].
### Trace and determinant bounds
To start off our convergence discussions for both convex and nonconvex cases, we derive bounds for the trace and determinants of the Hessian approximations, followed by those for their inverses. The techniques used to do so are straightforward ports of the Euclidean originals [@nocedal_wright], with some minor modifications to account for differential geometric technicalities. Using the assumptions above, we can prove the following bounds [@ring_wirth]:
\[lem:trdet\] Let $B_{r}^{s+1}=\left(H_{r}^{s+1}\right)^{-1}$ be the approximation of the Hessian generated by Algorithm \[alg:srlbfgs\], and $\hat{B}_{r}^{s+1}$ and $\hat{H}_{r}^{s+1}$ be the corresponding Lax-Milgram representations. Let $M$, the memory parameter, be the number of correction pairs used to update the inverse Hessian approximation. Then, under Assumptions 1 and 2, we have: $$\begin{aligned}
\mathrm{tr}(\hat{B}_{r}^{s+1})\leq \mathrm{tr}(\hat{B}_{0}^{s+1})+M\Lambda,\quad \det(\hat{B}_{r}^{s+1})&\geq \det(\hat{B}_{0}^{s+1})\frac{\lambda^{M}}{(\mathrm{tr}(\hat{B}_{0}^{s+1})+\Lambda M)^{M}}.\end{aligned}$$ Also, $\gamma I\preceq \hat{H}_{r}^{s+1}\preceq \Gamma I $, for some $\Gamma\geq\gamma>0$.
From a notational perspective, recall that our notation for the parallel transport operator is $\Gamma_{\gamma}$, with the subscript denoting the geodesic. The symbols $\gamma$ and $\Gamma$ in Lemma \[lem:trdet\] above are unrelated to these geometric concepts, merely being the derived bounds on the eigenvalues of inverse Hessian approximations. The proof is given in the supplementary for completeness.
### Convergence result for strongly convex functions
Our convergence result for strongly convex functions on the manifold can be stated as follows:
\[prop:one\] Let the Assumptions 1 and 2 hold. Further, let the $f(\cdot)$ in be S-strongly convex, and each of the $f_{i}$ be L-smooth, as defined earlier. Define the following constants: $p=\left[LS^{-1}+2\eta_{2}S^{-1}\left\lbrace2\eta L^{3}\Gamma^{2}-S\kappa\gamma\right\rbrace\right]$, and $q^{\prime}=6\eta^{2}L^{3}\Gamma^{2}S^{-1}$. Denote the global optimum by $x^{*}$. Then the iterate $x^{T+1}$ obtained after $T$ outer loop iterations will satisfy the following condition: $$\begin{aligned}
\mathbb{E}\left[f(x^{T+1})-f(x^{*})\right]\leq LS^{-1}\beta^{T}\mathbb{E}\left[f(x^{0})-f(x^{*})\right],\end{aligned}$$ where the constants are chosen to satisfy $\beta=\left(1-p\right)^{-1}\left(q^{\prime}+p^{T}(1-p-q^{\prime})\right)<1$ for linear convergence.
For proving this statement, we will use the $L$-smoothness and $S$-strong convexity conditions mentioned earlier. As in the Euclidean case [@rienips13; @moritzaist16], we will also require a bound on the stochastic variance-reduced gradients. These can be bounded using triangle inequalities and $L$-smoothness on Riemannian manifolds, as shown in [@zhangnips16]. This alternative is necessary since the Euclidean bound first derived in [@rienips13], using the difference of the objective function at the iterates, cannot be ported directly to manifolds due to metric nonlinearities. Thus we take a different approach in our proof compared to the Euclidean case of [@moritzaist16], using the interpoint distances defined with the norms of inverse exponential maps. We do not use trigonometric distance inequalities [@conicsuvrit; @bonnabel] for the convex case either, making the overall structure different from the proof of Riemannian SVRG as well. The details are deferred to the supplementary due to space limitations. However we do use the trigonometric inequality along with assumed lower bounds on sectional curvature for showing convergence for nonconvex functions, as described next.
### Convergence for the nonconvex case
Here we provide a convergence result for nonconvex functions satisfying the following condition: $f(x^{t})-f(x^{*})\leq \kappa^{-1}\|\nabla f(x^{t})\|^{2}$, which automatically holds for strongly convex functions. We assume this to hold even if $f$ is nonconvex, since it allows us to show convergence of the iterates using $\|\nabla f(x^{t})\|^{2}$. Further, similar to [@zhangcolt16; @zhangnips16] we assume that the sectional curvature of the manifold is lower bounded by $c_{\delta}$. This allows us to derive a trigonometric inequality analogous to the Euclidean case, where the sides of the “triangle” are geodesics [@bonnabel]. The details are given in the supplementary. Additionally, we assume that the eigenvalues of the inverse Hessian are bounded by $(\gamma,\Gamma)$ within some suitable region around an optimum. The main result of this section may be stated as follows:
Let the sectional curvature of the manifold be bounded below by $c_{\delta}$, and the $f_{i}$ be $L$-smooth. Let $x^{*}$ be an optimum of $f(\cdot)$ in . Assume the eigenvalues of the inverse Hessian estimates are bounded. Set $\eta_{2}=\mu_{0}/\left(\Gamma Ln^{\alpha_{1}}\eta^{\alpha_{2}}\right)$, $K=mT$, and $m=\lfloor n^{\nicefrac{3\alpha_{1}}{2}}/\left(3\mu_{0}\zeta^{1-2\alpha_{2}}\right)\rfloor$, where $\alpha_{1}\in(0,1]$ and $\alpha_{2}\in[0,2]$. Then, for suitable choices of the inverse Hessian bounds $\gamma,\Gamma$, we can find values for the constants $\mu_{0}>0$ and $\epsilon>0$ so that the following holds: $$\begin{aligned}
\mathbb{E}\|\nabla f(x^{T})\|^{2}\leq(K\epsilon)^{-1}L\eta_{2}^{\alpha_{1}}\zeta^{\alpha_{2}}\left(f(x^{0})-f(x^{*})\right).\end{aligned}$$
$\zeta$ is defined as $\zeta=\left(\tanh\left(d\sqrt{|c_{\delta}|}\right)\right)^{-1}d\sqrt{|c_{\delta}|}$ if $c_{\delta}<0$, and $1$ otherwise; $d$ is an upper bound on the diameter of the set $U$ mentioned earlier, containing an optimum $x^{*}$. The proof is inspired by similar results from both Euclidean [@reddinonconvex] and Riemannian [@zhangnips16] analyses, and is given in the supplementary. One way to deal with negative curvature in Hessians in Euclidean space is by adding some suitable positive $\alpha$ to the diagonal, ensuring bounds on the eigenvalues. Investigation of such “damping” methods in the Riemannian context could be an interesting area of future work.
Experiments
===========
Karcher mean computation for PD matrices
----------------------------------------
We begin with a synthetic experiment on learning the Karcher mean (centroid) [@bhatiabook] of positive definite matrices. For a collection of matrices $\left\lbrace X_{i}\right\rbrace_{i=1}^{N}$, the optimization problem can be stated as follows: $$\begin{aligned}
\operatorname*{\arg\!\min}_{\mathbf{W}\succeq 0}\left\lbrace \sum\limits_{i=1}^{N}\|\log\left(\mathbf{W}^{-\nicefrac{1}{2}}X_{i}\mathbf{W}^{-\nicefrac{1}{2}}\right)\|_{F}^{2}\right\rbrace.\end{aligned}$$ We compare our minibatched implementation of the Riemannian SVRG algorithm from [@zhangnips16], denoted as rSVRG, with the stochastic variance-reduced L-BFGS procedure from Algorithm \[alg:srlbfgs\], denoted rSV-LBFGS. We implemented both algorithms using the Manopt [@manopt] and Mixest [@mixest] toolkits. We generated three sets of random positive definite matrices, each of size $100\times 100$, with condition numbers $10, 1e2,$ and $1e3$, and computed the ground truths using code from [@binietal]. Matrix counts were $100$ for condition number $1e2$, and $1000$ for the rest. Both algorithms used equal batch sizes of $50$ for the first and third datasets, and $5$ for the second, and were initialized identically. Both used learning rates satisfying their convergence conditions. In general we found rSV-LBFGS to perform better with frequent correction pair calculations and a low retention rate, ostensibly due to the strong convexity; therefore we used $R=1$, $M=2$ for all three datasets. As mentioned earlier, the $z_{r}$ correction element was calculated using **Option 1**: $z_{r}=\Gamma_{\gamma}\left(\eta_{2}\rho_{t}\right)$ where $\rho_{t}$ is calculated using the two-loop recursion. We used standard retractions to approximate the exponential maps. The retraction formulae for both symmetric PD and sphere manifolds used in the next section are given in the supplementary.
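As a toy illustration of this objective, for $1\times 1$ matrices the SPD manifold reduces to the positive reals with distance $|\log(w/x)|$, where the Karcher mean is exactly the geometric mean; the sketch below runs gradient descent in log coordinates (a scalar stand-in, not the matrix-valued Manopt setup used in the experiments):

```python
import math

def karcher_mean_positive(xs, lr=0.25, iters=200):
    """Karcher mean of positive scalars: minimizes sum_i log(w / x_i)^2.

    In log coordinates u = log(w) the affine-invariant geometry is flat,
    so the objective is quadratic and plain gradient descent converges
    to the geometric mean of the inputs.
    """
    u = math.log(xs[0])                   # start at the first data point
    targets = [math.log(x) for x in xs]
    for _ in range(iters):
        grad = 2.0 * sum(u - t for t in targets)
        u -= lr * grad / len(xs)          # averaged gradient step
    return math.exp(u)
```

For instance, `karcher_mean_positive([1.0, 4.0])` converges to the geometric mean `2.0`.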
We calculated the error of iterate $\mathbf{W}$ as $\|\mathbf{W}-\mathbf{W}^{*}\|_{F}^{2}$, with $\mathbf{W}^{*}$ being the ground truth. The log errors are plotted vs the number of data passes in Fig.\[figpdkm\]. Comparing convergence speed in terms of \# data passes is often the preferred approach for benchmarking ML algorithms since it is an implementation-agnostic evaluation and focuses on the key bottleneck (I/O) for big data problems. Comparisons of rSVRG with Riemannian gradient descent methods, both batch and stochastic, can be found in [@zhangnips16]. From Fig.\[figpdkm\], we find rSV-LBFGS to converge faster than rSVRG for all three datasets.
Leading eigenvalue computation
------------------------------
Next we conduct a synthetic experiment on calculating the leading eigenvalue of matrices. This is a common problem in machine learning, and takes the form of a unit-norm-constrained nonconvex optimization problem in Euclidean space. It can be written as: $$\begin{aligned}
\min_{\mathbf{z}\in\mathbb{R}^{d}:\mathbf{z}^{T}\mathbf{z}=1} -\mathbf{z}^{T}\left(\frac{1}{N}\sum\limits_{i=1}^{N}d_{i}d_{i}^{T}\right)\mathbf{z},\end{aligned}$$ where $D\in\mathbb{R}^{d\times N}$ is the data matrix, and $d_{i}$ are its columns. We can transform this into an unconstrained manifold optimization problem on the sphere defined by the norm constraint. To that end, we generated four sets of datapoints, for eigengaps $0.005$, $0.05$, $0.01$ and $0.1$, using the techniques described in [@shamiricml15]. Each dataset contains $100,000$ vectors of dimension $1000$. We used a minibatch size of $100$ for the two Riemannian algorithms. As before, learning rates for rSVRG were chosen according to the bounds in [@zhangnips16]. Selecting appropriate values for the four parameters in rSV-LBFGS (the first and second-order learning rates, $L$ and $M$) was a nontrivial task; after careful grid searches within the bounds defined by the convergence conditions, we chose $\eta_{1}=0.001, \eta_{2}=0.1,$ and $M=10$ for all four datasets. $L$ was set to $5$ for the dataset with eigengap $0.005$, and $10$ for the rest. The $z_{r}$ correction pair was calculated using **Option 2**: $z_{r}=\Gamma_{\gamma}\left(-\eta_{1}\nu_{\text{prev}}\right)$. We plot the performance of rSV-LBFGS, rSVRG and VR-PCA in Fig.\[fig:eigvsynth\]. Extensive comparisons of VR-PCA with other Euclidean algorithms have been conducted in [@shamiricml15]; we do not repeat them here. We computed the error of iterate $\mathbf{z}$ as $1-\left(Ne^{*}\right)^{-1}\|D^{T}\mathbf{z}\|_{2}^{2}$, $e^{*}$ being the ground truth obtained from Matlab’s $eigs$.
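The structure of the Riemannian SVRG loop on the sphere can be sketched compactly. The following is my own minimal sketch (not the Manopt-based implementation used in the experiments); vector transport is approximated by tangent-space projection, and the retraction is renormalization:

```python
import numpy as np

def rsvrg_leading_eig(D, eta=0.01, epochs=30, batch=50, seed=0):
    """Sketch of Riemannian SVRG on the unit sphere for the leading
    eigenvector of C = (1/N) D D^T."""
    rng = np.random.default_rng(seed)
    d, N = D.shape
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)

    def rgrad(w, cols):
        # Euclidean gradient of -w^T C w, projected onto the tangent space at w
        g = -2.0 * cols @ (cols.T @ w) / cols.shape[1]
        return g - (g @ w) * w

    for _ in range(epochs):
        x_snap = x.copy()
        full = rgrad(x_snap, D)            # full gradient at the snapshot
        for _ in range(N // batch):
            idx = rng.integers(0, N, batch)
            # variance-reduced direction; parallel transport is approximated
            # by projecting everything onto the tangent space at x
            v = rgrad(x, D[:, idx]) - rgrad(x_snap, D[:, idx]) + full
            v -= (v @ x) * x
            x = x - eta * v
            x /= np.linalg.norm(x)         # retraction back onto the sphere
    return x
```

At an eigenvector the variance-reduced direction vanishes exactly, so the iterates settle without stepsize decay, which is the point of the fixed-stepsize comparison above.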
We see that the rSV-LBFGS method performs well on all four datasets, reaching errors of the order of $1e-15$ well before VR-PCA and rSVRG in the last three cases. The performance delta relative to VR-PCA is particularly noticeable in each of the four cases; we consider this to be a noteworthy result for fixed-stepsize algorithms on Riemannian manifolds.
Conclusion
==========
We propose a novel L-BFGS algorithm on Riemannian manifolds with variance reduced stochastic gradients, and provide theoretical analyses for strongly convex functions on manifolds. We conduct experiments on computing Riemannian centroids for symmetric positive definite matrices, and calculation of leading eigenvalues, both using large scale datasets. Our algorithm outperforms other Riemannian optimization algorithms with fixed stepsizes in both cases, and performs noticeably better than one of the fastest stochastic algorithms in Euclidean space, VR-PCA, for the latter case.
Appendices
==========
In this section we present proofs of the convergence results stated in Propositions 1 and 2 of the main text.
Analysis of convergence
-----------------------
We analyze convergence for finite-sum empirical risk minimization problems of the following form: $$\label{eq:fsum}
\min_{x\in \mathcal{M}} f(x) = \frac{1}{N}\sum\limits_{i=1}^{N}f_{i}(x),$$ where the Riemannian manifold is denoted by $\mathcal{M}$. Note that the iterates are updated in Algorithm \[alg:srlbfgs\] by taking the exponential map of the descent step multiplied by the stepsize, with the descent step computed as the product of the inverse Hessian estimate and the stochastic variance-reduced gradient using the standard two-loop recursion formula. Thus, to bound the optimization error using the iterates, we will need bounds on both the stochastic gradients and the inverse Hessians. As mentioned in [@zhangnips16], the methods used to derive the former bounds for Euclidean algorithms cannot be ported directly to manifolds due to metric nonlinearities; see the proof of Proposition \[prop:one\] for details. For the latter, we follow the standard template for L-BFGS algorithms in the literature [@ring_wirth; @nocedal_wright; @byrdarxiv14]. To begin, we make the following assumptions:
**Assumption 1.** The function $f$ in \[eq:fsum\] is strongly convex on the manifold, whereas the $f_{i}$s are individually convex.
**Assumption 2.** There exist $\lambda,\Lambda\in (0,\infty),\>\lambda<\Lambda$ s.t. $\lambda\|\nu\|_{x}^{2}\leq D^{2}f(\nu,\nu)\leq \Lambda\|\nu\|_{x}^{2}\quad\forall\nu\in T_{x}\mathcal{M}$.
These two assumptions allow us to (a) guarantee that $f$ has a unique minimizer $x^{*}$ in the convex sublevel set $U$, and (b) derive bounds on the inverse Hessian updates using BFGS update formulae for the Hessian approximations. Similar to the Euclidean case, these can be written as follows: $$\label{eqn:bfgsone}
\hat{B}_{r}=\Gamma_{\gamma}\left[\hat{B}_{r-1}-\frac{B_{r-1}(s_{r-1},\cdot)\hat{B}_{r-1}s_{r-1}}{B_{r-1}(s_{r-1},s_{r-1})}+\frac{y_{r-1}(\cdot)\,\hat{y}_{r-1}}{y_{r-1}s_{r-1}}\right]\Gamma_{\gamma}^{-1},$$ and, by the Sherman-Morrison-Woodbury lemma, that of the inverse: $$H_{r} = \Gamma_{\gamma}\left[G^{*}H_{r-1}G+\frac{g_{x_{r-1}}(s_{r-1},\cdot)s_{r-1}}{y_{r-1}s_{r-1}}\right]\Gamma_{\gamma}^{-1},$$ where $G = I-\frac{g_{x_{r-1}}(s_{r-1},\cdot)\hat{y}_{r-1}}{y_{r-1}s_{r-1}}$ and $G^{*}$ denotes its adjoint. The $\hat{B}_{r}$ is the Lax-Milgram representation of the Hessian [@ring_wirth].
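In the Euclidean specialization ($\Gamma_{\gamma}=I$, $g$ the dot product, $G = I-\rho\, y s^{T}$ with $\rho = 1/(y^{T}s)$), the duality between the direct Hessian update and its inverse is easy to verify numerically; in matrix form the left factor of the inverse update is the transpose (adjoint) of $G$. A quick sketch (all variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# a random SPD Hessian approximation and a curvature pair (s, y) with y.s > 0
B = rng.standard_normal((n, n))
B = B @ B.T + n * np.eye(n)
s = rng.standard_normal(n)
y = B @ s + 0.1 * s                 # guarantees positive curvature y.s > 0
rho = 1.0 / (y @ s)

# direct BFGS update of the Hessian approximation
B_new = B - np.outer(B @ s, B @ s) / (s @ B @ s) + rho * np.outer(y, y)

# inverse update in Sherman-Morrison-Woodbury form:
# H_new = (I - rho s y^T) H (I - rho y s^T) + rho s s^T
H = np.linalg.inv(B)
G = np.eye(n) - rho * np.outer(y, s)
H_new = G.T @ H @ G + rho * np.outer(s, s)
```

The two updates are exact inverses of each other, which is what lets the two-loop recursion avoid storing $B_{r}$ at all.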
### Trace and determinant bounds
To start off our convergence discussions for both convex and nonconvex cases, we derive bounds for the trace and determinants of the Hessian approximations, followed by those for their inverses. The techniques used to do so are straightforward ports of the Euclidean originals [@nocedal_wright], with some minor modifications to account for differential geometric technicalities. Using the assumptions above, we can prove the following bounds [@ring_wirth]:
\[lem:trdet\] Let $B_{r}^{s+1}=\left(H_{r}^{s+1}\right)^{-1}$ be the approximation of the Hessian generated by Algorithm \[alg:srlbfgs\], and $\hat{B}_{r}^{s+1}$ and $\hat{H}_{r}^{s+1}$ be the corresponding Lax-Milgram representations. Let $M$, the memory parameter, be the number of correction pairs used to update the inverse Hessian approximation. Then, under Assumptions 1 and 2, we have: $$\begin{aligned}
\mathrm{tr}(\hat{B}_{r}^{s+1})\leq \mathrm{tr}(\hat{B}_{0}^{s+1})+M\Lambda,\quad \det(\hat{B}_{r}^{s+1})&\geq \det(\hat{B}_{0}^{s+1})\frac{\lambda^{M}}{(\mathrm{tr}(\hat{B}_{0}^{s+1})+\Lambda M)^{M}}.\end{aligned}$$ Also, $\gamma I\preceq \hat{H}_{r}^{s+1}\preceq \Gamma I $, for some $\Gamma\geq\gamma>0$.
For brevity of notation we temporarily drop the $(s+1)$ superscript. The proof for the Euclidean case [@byrdarxiv14; @mokhtarijmlr] can be generalized to the Riemannian scenario in a straightforward way, as follows. Define the average Hessian $G_{r}$ by $$G_{r}(\cdot,\cdot)=\int\limits_{0}^{1}D^{2}[f(tz_{r})](\cdot,\cdot)dt,$$ such that $y_{r}=G_{r}(z_{r},\cdot)$. Then, it can be easily shown that $G_{r}$ satisfies the bounds in Assumption 2. Furthermore, we have the following useful inequalities $$\label{eqn:useful}
\frac{y_{r}z_{r}}{{\ensuremath{\|z_{r}\|^{2}}}}=\frac{G_{r}(z_{r},z_{r})}{{\ensuremath{\|z_{r}\|^{2}}}}\geq\lambda,\qquad\frac{{\ensuremath{\|y_{r}\|^{2}}}}{y_{r}z_{r}}\leq\Lambda.$$ Let $\hat{y}_{r}$ be the Riesz representation of $y_{r}$. Recall that parallel transport is an isometry along the unique geodesics, which implies invariance of the trace operator. Then using the L-BFGS update and \[eqn:useful\], we can bound the trace of the Lax-Milgram representation of the Hessian approximations as follows: $$\begin{aligned}
\mathrm{tr}(\hat{B}_{r})&=\mathrm{tr}(\Gamma_{\gamma}\hat{B}_{r-1}\Gamma_{\gamma}^{-1})-\frac{\|\Gamma_{\gamma}\hat{B_{r-1}}s_{r-1}\|^{2}}{B_{r-1}(s_{r-1},s_{r-1})}+\frac{\|\Gamma_{\gamma}\hat{y}_{r-1}\|^{2}}{y_{r-1}s_{r-1}} \\
&\leq\>\mathrm{tr}(\Gamma_{\gamma}\hat{B}_{r-1}\Gamma_{\gamma}^{-1})+\frac{\|\Gamma_{\gamma}\hat{y}_{r-1}\|^{2}}{y_{r-1}s_{r-1}} \\
&\leq\>\mathrm{tr}(B_{0})+M\Lambda.\end{aligned}$$ This therefore proves boundedness of the largest eigenvalue of the $\hat{B}_{r}$ estimates.
Similarly, to get a lower bound for the minimum eigenvalue, we bound the determinant as follows: $$\begin{aligned}
\det(\hat{B}_{r})=\>&\det(\Gamma_{\gamma}B_{r-1}\Gamma_{\gamma}^{-1})\cdot\det\left(I-\frac{\hat{B}_{r-1}s_{r-1}s_{r-1}}{B_{r-1}(s_{r-1},s_{r-1})} +\hat{B}_{r-1}^{-1}\frac{y_{r-1}y_{r-1}}{y_{r-1}s_{r-1}}\right) \\
=\>&\det(\Gamma_{\gamma}B_{r-1}\Gamma_{\gamma}^{-1})\frac{y_{r-1}s_{r-1}}{B_{r-1}(s_{r-1},s_{r-1})} \\
=\>&\det(\Gamma_{\gamma}B_{r-1}\Gamma_{\gamma}^{-1})\frac{y_{r-1}s_{r-1}}{\|s_{r-1}\|^{2}}\cdot\frac{\|s_{r-1}\|^{2}}{B_{r-1}(s_{r-1},s_{r-1})}\\
\geq\>&\det(\Gamma_{\gamma}B_{r-1}\Gamma_{\gamma}^{-1})\frac{\lambda}{\lambda_{max}(B_{r-1})},\end{aligned}$$ where we use $\lambda_{\text{max}}$ to denote the maximum eigenvalue of $B_{r-1}$, and use \[eqn:useful\]. Since $\lambda_{\text{max}}$ is bounded above by the trace of $\hat{B}_{r-1}$, we can telescope the inequality above to get $$\det(\hat{B}_{r})\geq\>\det(B_{0})\frac{\lambda^{M}}{(\mathrm{tr}(B_{0})+M\Lambda)^{M}}.$$ The bounds on the maximum and minimum eigenvalues of $B_{r}$ thus derived allow us to infer corresponding bounds for those of $H_{r}$ as well, since by definition $H_{r}=\hat{B}_{r}^{-1}$.
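The trace bound of Lemma \[lem:trdet\] can be spot-checked numerically in the Euclidean case, where the transports are identities. The sketch below (assumed parameters, not part of the paper's experiments) runs $M$ BFGS updates with exact curvature pairs from a quadratic whose Hessian spectrum lies in $[\lambda,\Lambda]$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 8, 5
lam, Lam = 0.5, 4.0

# quadratic objective with Hessian eigenvalues in [lam, Lam]
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(rng.uniform(lam, Lam, n)) @ Q.T

B = np.eye(n)
tr0 = np.trace(B)
for _ in range(M):
    s = rng.standard_normal(n)
    y = A @ s                        # exact curvature pair for a quadratic
    # one BFGS update of the Hessian approximation
    B = B - np.outer(B @ s, B @ s) / (s @ B @ s) + np.outer(y, y) / (y @ s)
```

Each update subtracts a nonnegative quantity from the trace and adds $\|y\|^{2}/(y\cdot s)\leq\Lambda$, so after $M$ updates the trace grows by at most $M\Lambda$, and $B$ stays positive definite.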
### Convergence results for the strongly convex case
Next we provide a brief overview of the bounds necessary to prove our convergence result. First, note the following bound implied by the Lipschitz continuity of the gradients: $$\begin{aligned}
f(x_{t+1}^{s+1}) \leq f(x_{t}^{s+1}) +g(\nabla f(x_{t}^{s+1}), {\ensuremath{\text{Exp}^{-1}_{x_{t}^{s+1}}(x_{t+1}^{s+1})}}) + \frac{L}{2}{\ensuremath{\|{\ensuremath{\text{Exp}^{-1}_{x_{t}^{s+1}}(x_{t+1}^{s+1})}}\|^{2}}}.\end{aligned}$$ Note the update step fom line 11 of Algorithm \[alg:srlbfgs\]: $x_{t+1}^{s+1}={\ensuremath{\text{Exp}_{x_{t}^{s+1}}(-\eta H_{r}^{s+1}\nu_{t}^{s+1})}}$. We can replace the inverse exponential map in the inner product above by the quantity in the parentheses. In order to replace $H_{r}^{s+1}$ by the eigen-bounds from Lemma \[lem:trdet\], we invoke the following result (Lemma 5.8 from [@leebook]:
For any $D\in{\ensuremath{T_{x}\mathcal{M}}}$ and $c,t\in\mathbb{R}$, $\gamma_{cD}(t)=\gamma_{D}(ct)$,
where the subscript denotes the initial velocity of the geodesic. This allows us to write ${\ensuremath{\text{Exp}_{x}(c\nu)}}=\gamma_{\nu}(c)=\gamma_{c\nu}(1)$. Recall that Riemannian geodesics have constant speed: $\|\dot{\gamma}(t)\|=\bar{s}$ for all $t\in[0,1]$.
\[prop:one\] Let the Assumptions 1 and 2 hold. Further, let the $f(\cdot)$ in \[eq:fsum\] be S-strongly convex, and each of the $f_{i}$ be L-smooth, as defined earlier. Define the following constants: $p=\left[LS^{-1}+2\eta_{2}S^{-1}\left\lbrace2\eta_{2} L^{3}\Gamma^{2}-S\kappa\gamma\right\rbrace\right]$, and $q^{\prime}=6\eta_{2}^{2}L^{3}\Gamma^{2}S^{-1}$. Denote the global optimum by $x^{*}$. Then the iterate $x^{T+1}$ obtained after $T$ outer loop iterations will satisfy the following condition: $$\begin{aligned}
\mathbb{E}\left[f(x^{T+1})-f(x^{*})\right]\leq LS^{-1}\beta^{T}\mathbb{E}\left[f(x^{0})-f(x^{*})\right],\end{aligned}$$ where the constants are chosen to satisfy $\beta=\left(1-p\right)^{-1}\left(q^{\prime}+p^{T}(1-p-q^{\prime})\right)<1$ for linear convergence.
From the $L$-smoothness condition, we have the following: $$\begin{aligned}
f(x_{i+1}^{t+1})&\leq f(x_{i}^{t+1})+g\left(\nabla f(x_{i}^{t+1}),{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x_{i+1}^{t+1})}}\right)+\frac{L}{2}\|{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x_{i+1}^{t+1})}}\|^{2} \\
&=f(x_{i}^{t+1})-\eta_{2}\cdot g\left(\nabla f(x_{i}^{t+1}), H_{r}^{t+1}\nu_{i}^{t+1}\right)+\frac{L\eta_{2}^{2}}{2}\|H_{r}^{t+1}\nu_{i}^{t+1}\|^{2},\end{aligned}$$ where we have omitted subscripts from the metric. Taking expectations, and using the bounds on the inverse Hessian estimates derived in Lemma \[lem:trdet\], we have the following: $$\begin{aligned}
\label{eqn:temp1}
\mathbb{E}f(x_{i+1}^{t+1})\leq \mathbb{E}f(x_{i}^{t+1})-\eta_{2}\gamma\|\nabla f(x_{i}^{t+1})\|^{2}+\eta_{2}^{2}L^{3}\Gamma^{2}\left[2\|{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x^{*})}}\|^{2}+3\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x^{*})}}\|^{2}\right],\end{aligned}$$ where we have used the following bound on the stochastic variance-reduced gradients derived in [@zhangnips16]: $$\begin{aligned}
\mathbb{E}\|\nu_{i}^{t+1}\|^{2}\leq 4L^{2}\|{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x^{*})}}\|^{2}+6L^{2}\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x^{*})}}\|^{2}.\end{aligned}$$ This can be derived using triangle inequalities and the $L$-smoothness assumption. Note that the bound is different from the Euclidean case [@rienips13], due to technicalities introduced by the Riemannian metric not being linear in general.
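As a sanity check, the analogous Euclidean bound (with $L$ taken as the largest smoothness constant among the $f_{i}$) can be verified numerically on a least-squares instance; everything below is an illustrative sketch with assumed names:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 5
A = rng.standard_normal((N, d))
b = rng.standard_normal(N)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
L = 2 * np.max(np.sum(A**2, axis=1))        # smoothness constant of the f_i

def grad_i(x, i):
    # gradient of f_i(x) = (a_i.x - b_i)^2
    return 2 * A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return 2 * A.T @ (A @ x - b) / N

x  = x_star + 0.3 * rng.standard_normal(d)  # current iterate
xs = x_star + 0.5 * rng.standard_normal(d)  # snapshot iterate
g_snap = full_grad(xs)

# E over i of the squared norm of the variance-reduced gradient
sq = np.mean([np.sum((grad_i(x, i) - grad_i(xs, i) + g_snap) ** 2)
              for i in range(N)])
bound = 4 * L**2 * np.sum((x - x_star) ** 2) \
      + 6 * L**2 * np.sum((xs - x_star) ** 2)
```

The measured second moment sits well inside the bound, which is what the proof exploits: the noise shrinks with the distances of iterate and snapshot to the optimum.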
Now, recall the condition $f(x^{t})-f(x^{*})\leq (2\kappa)^{-1}\|\nabla f(x^{t})\|^{2}$, which follows from strong convexity. Using this, we can derive a bound on the gradient term in \[eqn:temp1\] as follows: $$\begin{aligned}
\|\nabla f(x_{i}^{t+1})\|^{2}\geq 2\kappa\left(f(x_{i}^{t+1})-f(x^{*})\right)\geq S\kappa\|{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x^{*})}}\|^{2},\end{aligned}$$ where the second inequality follows from $S$-strong convexity, since $\nabla f(x^{*})=0$. Plugging this into \[eqn:temp1\], we have the following: $$\begin{aligned}
\label{eqn:temp2}
\begin{split}
\mathbb{E}f(x_{i+1}^{t+1})\leq f(x_{i}^{t+1})&+\eta_{2}\left[2\eta_{2} L^{3}\Gamma^{2}-S\kappa\gamma\right]\|{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x^{*})}}\|^{2} \\
&+3\eta_{2}^{2}L^{3}\Gamma^{2}\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x^{*})}}\|^{2}.
\end{split}\end{aligned}$$ Now, note that $S$-strong convexity allows us to write the following: $$\begin{aligned}
\frac{S}{2}\|{\ensuremath{\text{Exp}^{-1}_{x_{i+1}^{t+1}}(x^{*})}}\|^{2}&\leq f(x_{i+1}^{t+1})-f(x^{*}) \\
&=\left[f(x_{i+1}^{t+1})-f(x_{i}^{t+1})\right] + \left[f(x_{i}^{t+1}) -f(x^{*})\right] \\
&\leq\left[f(x_{i+1}^{t+1})-f(x_{i}^{t+1})\right] + \frac{L}{2}\|{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x^{*})}}\|^{2},\end{aligned}$$ where the last step follows from $L$-smoothness. Taking expectations of both sides, and using \[eqn:temp2\] for the first term on the right, we have $$\begin{aligned}
\label{eqn:temp3}
\begin{split}
\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x_{i+1}^{t+1}}(x^{*})}}\|^{2}\leq &\left[\frac{L}{S}+\frac{2\eta_{2}}{S}\left\lbrace2\eta_{2} L^{3}\Gamma^{2}-S\kappa\gamma\right\rbrace\right]\|{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x^{*})}}\|^{2} \\
&+ \frac{6\eta_{2}^{2}L^{3}\Gamma^{2}}{S}\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x^{*})}}\|^{2}.
\end{split}\end{aligned}$$ Now, we denote $p=\left[\frac{L}{S}+\frac{2\eta_{2}}{S}\left\lbrace2\eta_{2} L^{3}\Gamma^{2}-S\kappa\gamma\right\rbrace\right]$, and $q^{\prime}= \frac{6\eta_{2}^{2}L^{3}\Gamma^{2}}{S}$. Then, taking expectations over the sigma algebra of all the random variables up to minibatch $m$, and after some algebra, it can be shown that: $$\begin{aligned}
\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x_{m}^{t+1}}(x^{*})}}\|^{2}-q\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x_{0}^{t+1}}(x^{*})}}\|^{2}\leq p^{m}\left(1-q\right)\|{\ensuremath{\text{Exp}^{-1}_{x_{0}^{t+1}}(x^{*})}}\|^{2},\end{aligned}$$ where $q=(1-p)^{-1}q^{\prime}$. Note that this provides a bound on the iterate at the end of the inner minibatch loop. Telescoping further, we have the bound $$\begin{aligned}
\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x^{T+1}}(x^{*})}}\|^{2}\leq\beta^{T}\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x^{0}}(x^{*})}}\|^{2},\end{aligned}$$ where $\beta=\frac{q^{\prime}+p^{T}(1-p-q^{\prime})}{1-p}$. Then, using this result with a final appeal to the $L$-Lipschitz and $S$-strong convexity conditions, we have the bounds $$\begin{aligned}
\mathbb{E}\left[f(x^{T+1})-f(x^{*})\right]&\leq\frac{L}{2}\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x^{T+1}}(x^{*})}}\|^{2} \\
&\leq\frac{L}{S}\beta^{T}\left[f(x^{0})-f(x^{*})\right],\end{aligned}$$ thereby completing the proof.
### Convergence results for the nonconvex case
We begin with the following inequality involving the side lengths of a geodesic “triangle” [@conicsuvrit; @bonnabel]:
\[lem:trig\] Let the sectional curvature of a Riemannian manifold be bounded below by $c_{\delta}$. Let $A$ be the angle between sides of length $b$ and $c$ in a triangle on the manifold, with the third side of length $a$, as usual. Then the following holds: $$\begin{aligned}
a^{2}\leq \frac{c\sqrt{|c_{\delta}|}}{\tanh\left(c\sqrt{|c_{\delta}|}\right)}b^{2}+c^{2}-2bc\cos A.\end{aligned}$$
The cosine is defined using inner products, as in the Euclidean case, and the distances using inverse exponential maps, as seen above. The following sequence of results and proofs are inspired by the basic structure of [@reddinonconvex], with suitable modifications involving the inverse Hessian estimates from the L-BFGS updates.
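This inequality can be spot-checked numerically at curvature $c_{\delta}=-1$, where the hyperbolic law of cosines gives the side $a$ exactly. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
violation = -np.inf
for _ in range(1000):
    b, c = rng.uniform(0.1, 3.0, size=2)
    A = rng.uniform(0.0, np.pi)      # angle between the sides of length b and c
    # side a from the hyperbolic law of cosines (sectional curvature -1)
    a = np.arccosh(np.cosh(b) * np.cosh(c)
                   - np.sinh(b) * np.sinh(c) * np.cos(A))
    zeta = c / np.tanh(c)            # the curvature factor of the lemma at |c_delta| = 1
    lhs = a ** 2
    rhs = zeta * b ** 2 + c ** 2 - 2 * b * c * np.cos(A)
    violation = max(violation, lhs - rhs)
```

The inequality is tight as $b\to 0$ or as the curvature vanishes, where it reduces to the Euclidean law of cosines.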
\[lem:lyap\] Let the assumptions of Proposition 2 hold. Define the following functions: $$\begin{aligned}
c_{i}&=c_{i+1}\left(1+\beta\eta_{2}\Gamma+2\zeta L^{2}\eta_{2}^{2}\Gamma^{2}\right)+L^{3}\eta_{2}^{2}\Gamma^{2}, \\
\delta_{i}&=\eta_{2}\gamma - \frac{c_{i+1}\eta_{2}\Gamma}{\beta}-L\eta_{2}^{2}\Gamma^{2}-2c_{i+1}\zeta\eta_{2}^{2}\Gamma^{2}>0,\end{aligned}$$ where $c_{i},c_{i+1},\beta,\eta_{2}>0$. Further, for $0\leq t\leq T-1$, define the Lyapunov function $R_{i}^{t+1}=\mathbb{E}\left[f(x_{i}^{t+1})+c_{i}\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x_{i}^{t+1})}}\|^{2}\right]$. Then we have the following bound: $$\begin{aligned}
\mathbb{E}\|\nabla f(x_{i}^{t+1})\|^{2}\leq \frac{R_{i}^{t+1}-R_{i+1}^{t+1}}{\delta_{i}}.\end{aligned}$$
As with the proof of Proposition 1, we begin with the following bound derived from $L$-smoothness: $$\begin{aligned}
\mathbb{E}f(x_{i+1}^{t+1})\leq\mathbb{E}\left[f(x_{i}^{t+1})-\eta_{2}\gamma\|\nabla f(x_{i}^{t+1})\|^{2}+\frac{L\eta_{2}^{2}\Gamma^{2}}{2}\|\nu_{i}^{t+1}\|^{2}\right],\end{aligned}$$ where we have used the bounds on the inverse Hessian derived earlier. Then using Lemma \[lem:trig\] above, we have: $$\begin{aligned}
\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x_{i+1}^{t+1})}}\|^{2}&\leq\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x_{i}^{t+1})}}\|^{2}+\zeta\|{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x_{i+1}^{t+1})}}\|^{2}-2g\left({\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x_{i+1}^{t+1})}},{\ensuremath{\text{Exp}^{-1}_{x_{i}^{t+1}}(x^{t})}}\right) \\
&\leq\mathbb{E}\left[\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x_{i}^{t+1})}}\|^{2}+\zeta\eta_{2}^{2}\Gamma^{2}\|\nu_{i}^{t+1}\|^{2}\right]\\
&\quad+2\eta_{2}\Gamma\left[(2\beta)^{-1}\|\nabla f(x_{i}^{t+1})\|^{2}+\frac{\beta}{2}\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x_{i}^{t+1})}}\|^{2}\right],\end{aligned}$$ where we have used $g(a,b)\leq\frac{1}{2\beta}\|a\|^{2}+\frac{\beta}{2}\|b\|^{2}$. Note that we have used the norm of the inverse exponential maps as the side lengths in Lemma \[lem:trig\]. Using these last two results, we can derive the following bound for the Lyapunov functions $R_{i+1}^{t+1}$: $$\begin{aligned}
R_{i+1}^{t+1}&\leq\mathbb{E}\left[f(x_{i}^{t+1})-\left\lbrace\eta_{2}\gamma-\frac{c_{i+1}\eta_{2}\Gamma}{\beta}\right\rbrace\|\nabla f(x_{i}^{t+1})\|^{2}\right]+\Gamma^{2}\left\lbrace c_{i+1}\zeta\eta_{2}^{2}+\frac{L\eta_{2}^{2}}{2}\right\rbrace\mathbb{E}\|\nu_{i}^{t+1}\|^{2} \\
&\quad+c_{i+1}\left\lbrace 1+\eta_{2}\Gamma\beta\right\rbrace\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x_{i}^{t+1})}}\|^{2}.\end{aligned}$$ The norm of the stochastic variance reduced gradient can be bounded as follows [@zhangnips16; @reddinonconvex]: $$\begin{aligned}
\mathbb{E}\|\nu_{i}^{t+1}\|^{2}\leq 2L^{2}\mathbb{E}\|{\ensuremath{\text{Exp}^{-1}_{x^{t}}(x_{i}^{t+1})}}\|^{2}+2\mathbb{E}\|\nabla f(x_{i}^{t+1})\|^{2}.\end{aligned}$$ This allows us to bound the Lyapunov function above as: $$\begin{aligned}
R_{i+1}^{t+1}\leq R_{i}^{t+1}-\left\lbrace\eta_{2}\gamma-\frac{c_{i+1}\eta_{2}\Gamma}{\beta}-L\eta_{2}^{2}\Gamma^{2}-2c_{i+1}\zeta\eta_{2}^{2}\Gamma^{2}\right\rbrace\mathbb{E}\|\nabla f(x_{i}^{t+1})\|^{2},\end{aligned}$$ which completes the proof.
Next we present a bound on $\|\nabla f(\cdot)\|^{2}$ using the $\delta_{i}$’s defined above (Thm 6 of [@zhangnips16]):
\[lem:lastbutone\] Let the conditions of Lemma \[lem:lyap\] hold, and define the quantities therein. Let $\delta_{i}>0$ $\forall i\in[0,m]$, and $c_{m}=0$. Let $\delta_{\delta}=\min_{i}\delta_{i}$, and $K=mT$. Then if we randomly return one of the iterates $\left\lbrace x_{i}^{t+1}\right\rbrace_{i=1}^{m}$ as $x^{t+1}$, then: $$\begin{aligned}
\mathbb{E}\|\nabla f(x^{T})\|^{2}\leq \frac{f(x^{0})-f(x^{*})}{K\delta_{\delta}}.\end{aligned}$$
This result can be shown by telescoping the bound derived in the previous lemma for the Lyapunov functions, using $c_{m}=0$.
Let the sectional curvature of the manifold be bounded below by $c_{\delta}$, and the $f_{i}$ be $L$-smooth. Let $x^{*}$ be an optimum of $f(\cdot)$ in \[eq:fsum\]. Assume the eigenvalues of the inverse Hessian estimates are bounded. Set $\eta_{2}=\mu_{0}/\left(\Gamma Ln^{\alpha_{1}}\zeta^{\alpha_{2}}\right)$, $K=mT$, and $m=\lfloor n^{\nicefrac{3\alpha_{1}}{2}}/\left(3\mu_{0}\zeta^{1-2\alpha_{2}}\right)\rfloor$, where $\alpha_{1}\in(0,1]$ and $\alpha_{2}\in[0,2]$. Then, for suitable choices of the inverse Hessian bounds $\gamma,\Gamma$, we can find values for the constants $\mu_{0}>0$ and $\epsilon>0$ so that the following holds: $$\begin{aligned}
\mathbb{E}\|\nabla f(x^{T})\|^{2}\leq(K\epsilon)^{-1}Ln^{\alpha_{1}}\zeta^{\alpha_{2}}\left(f(x^{0})-f(x^{*})\right).\end{aligned}$$
We define $\beta=L\zeta^{1-\alpha_{2}}/\left(n^{\nicefrac{\alpha_{1}}{2}}\Gamma\right)$. Also, as mentioned in the proposition, $\eta_{2}=\mu_{0}/\left(\Gamma Ln^{\alpha_{1}}\zeta^{\alpha_{2}}\right)$, with appropriate $\alpha_{1}$, $\alpha_{2}$. Note that we need a bound for $\delta_{\delta}$ to plug into the denominator of the bound in Lemma \[lem:lastbutone\] above. This quantity can be lower bounded as follows: $$\begin{aligned}
\delta_{\delta}&=\min_{i}\delta_{i}\\
&=\min_{i}\left\lbrace \eta_{2}\gamma-\frac{c_{i+1}\eta_{2}\Gamma}{\beta}-L\eta_{2}^{2}\Gamma^{2}-2c_{i+1}\zeta\eta_{2}^{2}\Gamma^{2}\right\rbrace \\
&\geq\left\lbrace \eta_{2}\gamma-\frac{c_{0}\eta_{2}\Gamma}{\beta}-L\eta_{2}^{2}\Gamma^{2}-2c_{0}\zeta\eta_{2}^{2}\Gamma^{2}\right\rbrace.\end{aligned}$$ Now we need to bound $c_{0}$. To that end, telescoping the $c_{i+1}$ function defined in Lemma \[lem:lyap\] above with $c_{m}=0$, and denoting $\theta=\eta_{2}\beta\Gamma+2\zeta\eta_{2}^{2}L^{2}\Gamma^{2}$, we get the following: $$\begin{aligned}
c_{0}=\frac{L\mu_{0}^{2}\left\lbrace(1+\theta)^{m}-1\right\rbrace}{n^{2\alpha_{1}}\zeta^{2\alpha_{2}}\theta}.\end{aligned}$$ Using the definitions of $\eta_{2}$ and $\beta$ above, we note that $\theta <1/m$, implying $c_{0}\leq\frac{L\mu_{0}}{\zeta n^{\nicefrac{\alpha_{1}}{2}}}(e-1)$. Plugging this into the bound above, we posit that $\delta_{\delta}$ can be bounded below as follows: $$\begin{aligned}
\delta_{\delta}&\geq \eta_{2}\left\lbrace \gamma-\frac{\mu_{0}\Gamma(e-1)}{\zeta^{2-\alpha_{2}}}-\frac{\mu_{0}}{n^{\alpha_{2}}\zeta^{\alpha_{2}}}-\frac{2\mu_{0}^{2}(e-1)}{n^{\nicefrac{3\alpha_{1}}{2}}\zeta^{\alpha_{2}}}\right\rbrace \\
&\geq \frac{\epsilon}{Ln^{\alpha_{1}}\zeta^{\alpha_{2}}},\end{aligned}$$ for some sufficiently small $\epsilon$, and suitable choices of the inverse Hessian bounds $\gamma,\Gamma$ and the rest of the parameters. Using this bound in the denominator of the right hand side of Lemma \[lem:lastbutone\] above completes the proof.
Retractions
-----------
We approximated the exponential maps with retractions from the Manopt [@manopt] toolbox. We used the following formulae: $\mathbb{R}_{x}(\eta\rho)=x\cos\|\eta\rho\|_{F}+\frac{\eta\rho}{\|\eta\rho\|_{F}}\sin\|\eta\rho\|_{F}$ for the sphere manifold, and $\mathbb{R}_{x}(\eta\rho)=x\cdot M_{x}(x \setminus \eta\rho)$ for the manifold of symmetric PD matrices, where $M_{x}$ denotes the matrix exponential, and $\setminus$ is matrix division. Here $x\in\mathcal{M}$ is some point on the manifold, $\rho\in T_{x}\mathcal{M}$ is some descent step evaluated at $x$, and $\eta$ is the stepsize.
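For concreteness, the two retractions can be sketched in NumPy/SciPy as follows (this is my own sketch, not the Manopt code itself):

```python
import numpy as np
from scipy.linalg import expm, solve

def retract_sphere(x, v):
    """Exponential map on the unit sphere: follow the great circle from x
    in the direction of the tangent vector v (v @ x == 0)."""
    nv = np.linalg.norm(v)
    if nv < 1e-16:
        return x
    return np.cos(nv) * x + np.sin(nv) * v / nv

def retract_spd(X, V):
    """Retraction on the SPD manifold: R_X(V) = X expm(X^{-1} V),
    with V a symmetric tangent matrix."""
    return X @ expm(solve(X, V))     # solve(X, V) computes X^{-1} V
```

For the sphere the retraction is in fact the exact exponential map; for SPD matrices $X\,\text{expm}(X^{-1}V)$ equals $X^{1/2}\text{expm}(X^{-1/2}VX^{-1/2})X^{1/2}$ by similarity, so the result stays symmetric positive definite.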
---
abstract: 'Features of the emission of jets by driven [Bose–Einstein]{} condensates, discovered by [Clark et al. (Nature 551, 356–359)]{}, can be understood by drawing analogies with particle and nuclear physics. In particular, the widening of the [$\Delta\phi=\pi$]{} peak in the angular correlation function is due to a dijet acollinearity, which I estimate to be about $5\degree$ RMS. I also propose new correlation studies using observables commonly used in studies of the quark-gluon plasma.'
address: 'Department of Physics, University of California, Berkeley, CA 94720, USA.'
author:
- Miguel Arratia
title: 'On the jets emitted by driven Bose–Einstein condensates'
---
\[sec:level1\]Introduction
==========================
Clark et al. [@Clark] recently discovered a new phenomenon in which a stimulated Bose–Einstein condensate emits a burst of collimated jets of atoms. The data analyzed by means of a second-order angular correlation function show two peaks at [$\Delta\phi=0$]{} and [$\Delta\phi=\pi$]{}. These were attributed to back-to-back emission of jets, reflecting momentum conservation in the primary atom–atom scattering that triggers the dijet runaway formation.
However, their data deviate from the predicted correlation, especially in the region of the peak at [$\Delta\phi=\pi$]{}. This peak is much wider than the peak at [$\Delta\phi=0$]{} and contains only about 70–85% of its integral. This difference is not fully understood, and the authors state in Ref. [@Clark] that [*“further investigation into the differences between the two peaks is required”*]{}.
Here, I offer an explanation for the broadening of the peak at [$\Delta\phi=\pi$]{} by drawing analogies with observations in high-energy particle and nuclear physics. I also propose new measurements on this jet phenomenon inspired by studies of the quark–gluon plasma.
This article is organized as follows: section \[sec:acoplanarity\] shows an estimate of the dijet acollinearity present in the Clark et al. experiment by drawing an analogy with proton–proton collisions with “intrinsic parton $k_{T}$”; section \[sec:vertical\] discusses the vertical dimension of the Bose–Einstein condensate; section \[sec:proposals\] shows proposals of new observables; and section \[sec:conclusions\] describes the conclusions.
\[sec:acoplanarity\]Dijet acollinearity
=======================================
Parton–parton scattering
------------------------
In the mid-seventies, the production of roughly back-to-back sprays of collimated hadrons (jets) in proton collisions was attributed to collinear parton–parton scattering with large momentum transfer. However, this model failed to describe the data from experiments at the CERN Intersecting Storage Rings—the world’s first hadron collider.
In 1977, Feynman et al. [@Feynman] modified the collinear parton–parton scattering by introducing an [“extra kick”]{} to the partons, intrinsic parton $k_{T}$, that yielded a dijet acollinearity. This allowed them to explain, among other things, the data from two-particle correlations that showed a peak at [$\Delta\phi=\pi$]{} that was broader than the peak at [$\Delta\phi=0$]{}.
Atom–atom scattering
--------------------
Clark et al. compared their measured second-order angular correlation function, $g^{2}$, with an analytical calculation given by: $$\label{corr}
g^{2}(\Delta\phi) = 1 + \left|\frac{2J_{1}(k_{f}R\Delta\phi)}{k_{f}R\Delta\phi}\right|^{2}+\left|\frac{2J_{1}(k_{f}R[\Delta\phi-\pi])}{k_{f}R[\Delta\phi-\pi]}\right|^{2},$$ where $J_{1}$ is the first Bessel function (resulting from the Fourier transform of the density of a [two-dimensional]{} uniform disk), $k_{f}$ is the wavenumber of the ejected atoms, and $R$ is the radius of the [Bose–Einstein]{} condensate.
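Equation \[corr\] is straightforward to evaluate numerically; the sketch below (the value of $k_{f}R$ is illustrative, not taken from the experiment) uses SciPy's Bessel function $J_{1}$:

```python
import numpy as np
from scipy.special import j1

def g2(dphi, kfR):
    """Two-peak correlation function of the form above; kfR = k_f * R."""
    def peak(x):
        # 2*J1(x)/x -> 1 as x -> 0, so guard the removable singularity
        x = np.where(np.abs(x) < 1e-12, 1e-12, x)
        return np.abs(2.0 * j1(x) / x) ** 2
    return 1.0 + peak(kfR * dphi) + peak(kfR * (dphi - np.pi))
```

By construction the function evaluates to identical Airy-type peaks at $\Delta\phi=0$ and $\Delta\phi=\pi$, each of half-width set by $1/(k_{f}R)$; this symmetry is exactly what the measured data break.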
This function shows two identical peaks at [$\Delta\phi=0$]{} and [$\Delta\phi=\pi$]{}, reflecting the assumption of exactly back-to-back emission of jets that is based on [*“conservation of momentum in the underlying pair-scattering process"*]{}[@Clark]. I suggest that the deviation of data from equation \[corr\] arises from a small dijet acollinearity.
Estimate of dijet acollinearity
-------------------------------
The dijet acollinearity can be estimated from the widths of [$\Delta\phi=0$]{} and [$\Delta\phi=\pi$]{} peaks in the measured $g^{2}(\Delta\phi)$, following a method first used in particle physics by the CCOR collaboration [@CCORS] about 40 years ago, and more recently by the PHENIX collaboration [@PHENIX; @JJ]. This method relies on a Gaussian approximation for the jet transverse spread and basic trigonometry to obtain an average angle of dijet acollinearity.
For the case of the jets observed in the Clark et al. experiment, the equations involved are simplified because all the jet constituents (atoms) have roughly the same momentum, instead of being power-law distributed like the hadrons in QCD jets. Moreover, the average atom transverse momentum relative to the jet axis is the same for all jets, because the jets result from a bosonic enhancement and their angular spread reflects the size of the source (i.e., Hanbury Brown–Twiss bunching). The average transverse momentum can be estimated from the width of the [$\Delta\phi=0$]{} peak of the correlation function, $\sigma_{\mathrm{N}}$. This can be combined with the width of the [$\Delta\phi=\pi$]{} peak, $\sigma_{\mathrm{A}}$, to extract the average dijet acollinearity:
$$\langle \phi\rangle \approx \frac{\langle k_{T}\rangle}{k_{f}}
= \frac{1}{\sqrt{2}}\sqrt{\sin^{2}\left(\sqrt{2} \frac{\sigma_{A}}{\sqrt{\pi}}\right) - \left(\frac{\sigma_{\mathrm{N}}}{\sqrt{\pi}}\right)^{2}}.
\label{eq:kt}$$
From Ref. [@Clark], we know that the half-width at half-maximum of the [$\Delta\phi=0$]{} peak is about [$2\degree$]{}, for [$R$=8.5 $\mu$m]{} and [$f$=2 kHz]{}, and the width of the [$\Delta\phi=\pi$]{} peak is about three times larger. It follows from Equation \[eq:kt\] that $\langle \phi\rangle$ is about $5\degree$. Note that this small angle has a large effect on the width of the [$\Delta\phi=\pi$]{} peak (which measures inter-jet correlations) but no effect on the [$\Delta\phi=0$]{} peak (which measures intra-jet correlations).
Numerical calculation of $g^{2}$ with dijet acollinearity
---------------------------------------------------------
To illustrate the effect of a dijet acollinearity on the measured $g^{2}$ function, I used a simple numerical simulation in which the azimuthal angle between the jet centers is drawn from a Gaussian with a standard deviation of $5\degree$; the jet angular density, $n(\phi)$, is approximated by a Gaussian with a standard deviation of $1.5\degree$. The $g^{2}$ function is calculated as:
$$g^{2}(\Delta\phi) = \frac{\langle\int d\theta n(\theta)n(\theta+\Delta\phi)\rangle}{\langle \int d\theta n(\theta)\rangle^{2}},$$
where the average is taken over 1000 draws of different acollinearity angles, minimizing the statistical uncertainty on the calculation.
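A minimal re-implementation of this procedure (my own sketch, with wrapped-Gaussian jet profiles on a $0.5\degree$ grid and an FFT-based circular autocorrelation; the overall normalization is arbitrary) reads:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 720                                    # 0.5-degree bins over [0, 2*pi)
theta = np.linspace(0.0, 2 * np.pi, M, endpoint=False)

def wrapped_gauss(x, mu, sigma):
    # Gaussian in the angular distance, wrapped into (-pi, pi]
    d = (x - mu + np.pi) % (2 * np.pi) - np.pi
    return np.exp(-0.5 * (d / sigma) ** 2)

sigma_jet  = np.radians(1.5)               # intra-jet angular spread
sigma_acol = np.radians(5.0)               # RMS dijet acollinearity

num = np.zeros(M)
for _ in range(1000):
    delta = rng.normal(0.0, sigma_acol)    # acollinearity of this dijet
    n = wrapped_gauss(theta, 0.0, sigma_jet) \
        + wrapped_gauss(theta, np.pi + delta, sigma_jet)
    # circular autocorrelation of n, i.e. the integral in g2, via FFT
    num += np.real(np.fft.ifft(np.abs(np.fft.fft(n)) ** 2))
g2 = num / num.mean()
```

Averaging over the acollinearity angle leaves the $\Delta\phi=0$ peak untouched (each jet is correlated with itself) but convolves the $\Delta\phi=\pi$ peak with the acollinearity distribution, broadening and lowering it.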
Figure \[simulation\] shows the result of this calculation with data from the Clark et al. experiment. The calculated $g^{2}$ is multiplied by a constant factor such that the height of the $\Delta\phi=0$ peak roughly matches the data. The width of the $\Delta\phi=0$ peak in the calculation matches the width of the measured peak by construction; the discrepancy in the tails arises due to the Gaussian approximation of the jet densities. The width of the $\Delta\phi=\pi$ peak of the calculation matches the data well.
![Angular correlation function, $g^{2}$, from Ref. [@Clark] (black) and numerical calculations assuming back-to-back dijets (gray) and acollinear dijets with an acollinearity of 5$\degree$ RMS (orange).[]{data-label="simulation"}](ToyMC.pdf){width="0.78\columnwidth"}
This result is based on general assumptions and it simply states that the data in Ref. [@Clark] can be explained with a dijet acollinearity of $5\degree$ RMS. The explanation of the origin of such dijet acollinearity lies beyond the scope of this work. In Ref. [@AngularMomentum], this was attributed to [*“the destructive interferences between atoms with different angular momenta”*]{}.
\[sec:vertical\]Vertical direction
==================================
The dimensions of the Bose–Einstein condensate described in Ref. [@Clark] are given by a typical $R$ value of 8.5 $\mu$m and a vertical extent of 0.5 $\mu$m (root-mean-square). Thus, most of the atoms scattered with a polar angle smaller than [$\arcsin(0.5/8.5)\approx3 \degree$]{} will traverse most of the [Bose–Einstein]{} condensate, and thus form an observable dijet.
As noted by Clark et al., some atoms within a jet might lie outside the field-of-view of the experiment (in particle physics jargon, this is an acceptance loss due to limited pseudorapidity coverage). This can explain part of the 15–30$\%$ difference between the integrals of the peaks at [$\Delta\phi=0$]{} and [$\Delta\phi=\pi$]{}, and the discrepancy between Equation \[corr\] and data at the [$\Delta\phi=0$]{} peak.
Here, I suggest that this loss could be corrected for, or at least taken into account when calculating the predicted correlation function, by considering the vertical structure of the Bose–Einstein condensate. The improved calculation might describe the data better and serve as a baseline to search for anomalous effects, such as those described in Ref. [@Ogren] and Section \[sec:proposals\].
\[sec:proposals\]Proposed new studies
=====================================
Clark et al. suggested that [*“one could probe excitations that are present in more exotic states of matter by amplifying them to form detectable jets"*]{} [@Clark]. That would not be the first time that “jets" are used as “probes" of exotic states of matter. Here I suggest measurements inspired by angular correlations and jet studies that probe the quark–gluon plasma, which is also a strongly interacting system.
The events shown in Ref. [@Clark] have multiple dijets. The authors claim that the dijet directions are random. However, I note that the spacing between dijets looks suspiciously uniform. Given that driven Bose–Einstein condensates are a quantum many-body system, it is not unreasonable to expect an overall pattern caused by a collective behaviour. This might be even more evident when probing the appearance of vortices, solitons, and other exotic effects alluded to in Ref. [@Clark].
To further study this, and to search for correlations more complex than those caused by momentum conservation and HBT bunching, I suggest performing a multi-particle correlation study like the ones described in Refs. [@Cumulant1; @Cumulant2; @Cumulant3]. These “cumulant” techniques were designed to study the collective behaviour caused by the hydrodynamical flow of the quark-gluon plasma, which manifests itself as an anisotropy in the particle emission. These techniques suppress “non-flow” correlations that arise from momentum conservation, jets, and HBT correlations.
While in principle these sources of correlations can be suppressed using higher-order correlation functions, $g^{n}$ with large $n$, in practice the calculations quickly become cumbersome. In contrast, the cumulant analysis can use all particles in the event in an efficient way. More importantly, it can reveal true collective behaviour that might be obscured by strong correlations among a small number of atoms[^1].
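As an illustration of the Q-vector machinery behind these techniques, here is a minimal sketch of the two-particle cumulant $c_{n}\{2\}$ in the style of Ref. [@Cumulant1]; the toy event generator and all names are my own choices, and real analyses add particle weights, subevents, and higher orders.

```python
import numpy as np

def two_particle_cumulant(events, n=2):
    """Event-averaged two-particle azimuthal cumulant c_n{2}.

    events: list of 1-D arrays of azimuthal angles, one array per event.
    Q-vectors give every pair in O(M) per event instead of O(M^2):
        Q_n = sum_j exp(i n phi_j),   <2> = (|Q_n|^2 - M) / (M (M - 1)),
    and the event average is weighted by the number of pairs."""
    num, den = 0.0, 0.0
    for phis in events:
        M = len(phis)
        if M < 2:
            continue
        Qn = np.exp(1j * n * np.asarray(phis)).sum()
        num += abs(Qn) ** 2 - M          # = M (M - 1) <2>
        den += M * (M - 1)
    return num / den

# toy usage: events with a pure elliptic modulation of strength v2,
# sampled by accept-reject from dN/dphi ∝ 1 + 2 v2 cos(2 phi)
rng = np.random.default_rng(1)

def sample_event(M=200, v2=0.1):
    out = []
    while len(out) < M:
        phi = rng.uniform(-np.pi, np.pi)
        if rng.uniform(0.0, 1.0 + 2 * v2) < 1.0 + 2 * v2 * np.cos(2 * phi):
            out.append(phi)
    return np.array(out)

c2 = two_particle_cumulant([sample_event() for _ in range(500)])
# for pure flow c_2{2} ≈ v2^2 = 0.01, up to statistical fluctuations
```

For a driven condensate one would run the same estimator on the per-event atom angles, where a nonzero $c_{n}\{2\}$ surviving the non-flow suppression would signal an overall pattern.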
\[sec:conclusions\]Conclusions
==============================
In conclusion, this work explains features of the novel phenomenon of atom jets emitted by driven Bose–Einstein condensates. The broader peak in the angular correlation function at [$\Delta\phi=\pi$]{} can be explained by a dijet acollinearity of about [5$\degree$]{}. I have also suggested novel observables for the study of driven Bose–Einstein condensates inspired by studies of the quark-gluon plasma. This is among the first papers on the phenomenology of jets in driven Bose–Einstein condensates.
\[sec:level1\]Acknowledgements
==============================
I thank the Chicago group for providing the data of Ref. [@Clark] and useful discussions, and to the LBNL RNC group for useful discussions.
References {#references .unnumbered}
==========
[9]{} L. W. Clark et al., Nature 551, 356–359 (2017).
R. P. Feynman et al., Nucl. Phys. B 128 (1977) 1–65.
T. Sjostrand et al., JHEP 0605:026 (2006).
CCOR Collaboration, Phys. Scr. 19 (1979) 116.
PHENIX Collaboration, Phys. Rev. D 74 (2006) 072002.
J. Jia, J. Phys. G 31 (2005) S521–S532.
M. Ogren, K. V. Kheruntsyan, Phys. Rev. A 79, 021606(R) (2009).
Z. Wu, H. Zhai, arXiv:1804.08251v2.
L. Feng et al., arXiv:1803.01786.
A. Bilandzic et al., Phys. Rev. C 83 (2011) 044913.
R. S. Bhalerao et al., Nucl. Phys. A 727 (2003) 373.
N. Borghini et al., J. Phys. G 30 (2004) S1213.
[^1]: After the release of the first draft of this manuscript, the Chicago group independently released a preprint with a measurement of higher-order correlations [@ComplexCorrelations]; they indeed observed higher-order correlations indicating a pattern more complex than that reported in Ref. [@Clark].
---
abstract: 'We describe the generation of powerful dispersive waves that are observed when pumping a dual concentric core microstructured fiber by means of a sub-nanosecond laser emitting at the wavelength of 1064 nm. The presence of three zeros in the dispersion curve, their spectral separation from the pump wavelength, and the complex dynamics of solitons originated by the pump pulse break-up, all contribute to boost the amplitude of the dispersive wave on the long-wavelength side of the pump. The measured conversion efficiency towards the dispersive wave at 1548 nm is as high as 50%. Our experimental analysis of the output spectra is completed by the acquisition of the time delays of the different spectral components. Numerical simulations and an analytical perturbative analysis identify the central wavelength of the red-shifted pump solitons and the dispersion profile of the fiber as the key parameters for determining the efficiency of the dispersive wave generation process.'
author:
- 'D. Modotto'
- 'M. Andreana'
- 'K. Krupa'
- 'G. Manili'
- 'U. Minoni'
- 'A. Tonello'
- 'V. Couderc'
- 'A. Barthélémy'
- 'A. Labruyère'
- 'B. M. Shalaby'
- 'P. Leproux'
- 'S. Wabnitz'
- 'A. B. Aceves'
title: Efficiency of dispersive wave generation in dual concentric core microstructured fiber
---
Introduction
============
Dispersive waves (DWs), also known as Cherenkov or nonsolitonic radiations, are generated in fibers when optical soliton pulses are perturbed by higher-order dispersion terms. As initially predicted by Menyuk and co-workers for standard fibers, an intense pulse whose input spectrum is close to the zero-dispersion wavelength (ZDW) and in the presence of third-order dispersion (TOD) sheds power in the form of a long pulse, whose spectrum is a narrow peak [@Wai1986; @Wai1987]. By assuming that the TOD and the fourth-order dispersion coefficients are known, and by relying on an accurate formula for the perturbed soliton, Akhmediev and Karlsson have shown that it is possible not only to calculate the DW frequency, but also its temporal shape and the amount of radiated energy [@Akhmediev1995]. Even when the full dispersion curve is taken into account, the DW wavelength can be readily calculated by imposing a matching condition between the phases of the soliton and the DW. In fact it can be shown that, depending on the shape of the dispersion curve, the DW may be generated at shorter or longer wavelengths than that of the pump. Moreover, two or more DWs can be simultaneously generated [@Roy2009; @Roy2010; @Stark2011].
Microstructured fibers (MFs) allow for a great freedom in the design of the dispersion curve and mode effective area [@Eggleton2001; @Russell2006; @Poli2010]: the former feature is essential to control the DW spectral position and the latter helps to maximize the nonlinear coefficient. In fact, many groups have reported an efficient DW generation (for instance: [@Tartara2003; @Chang2010; @Yuan2011]) when the MF is pumped by pulses whose duration is of a few hundreds femtoseconds or even shorter. When sufficiently intense input pulses are used, the DW generation is accompanied by the formation of a supercontinuum (SC) spectrum, whose bandwidth can reach the extension of a full octave [@Genty2004; @Travers2008; @Kudlinski2009]. Different phenomena contribute to SC generation: self-phase modulation (SPM) and modulation instability (MI) lead to the initial spectral broadening, which can be further enlarged by four-wave mixing (FWM) [@Wadsworth2004; @Stone2008; @Lesvigne2007; @Manili2011], soliton fission [@Herrmann2002; @Demircan2007; @Driben2013] and Raman soliton self-frequency shift [@Mitschke1986]. Moreover, the large number of pulses fostered by the break-up of a long pump pulse can also interact with the DW, leading to the onset of new spectral peaks and contributing to determine the extension and flatness of the output spectrum [@Genty2004bis; @Skryabin2005; @Gorbach2006; @Driben2010].
The development of practical DW sources is generally limited by the small efficiency of the energy transfer from the pump to the DW. In most of the experiments reported to date, the power carried by the DW is only a small percentage of the input pump power; it is possible to increase the conversion efficiency (up to 65% in recent experiments) only by resorting to a femtosecond laser source (like a Ti:sapphire laser) [@Tartara2003; @Chang2010; @Yuan2011; @Zhang2013; @Zhang2015]. In this case, intense DWs have been observed in the visible range (even in the violet [@Zhang2013bis]) as well as in the near and mid-infrared [@Yuan2013; @Zhang2013]. Indeed, with ultrashort pump pulses their spectrum may be so wide that it incorporates the resonant condition for DW generation.
There is a long way ahead in order to achieve high conversion efficiencies when using nanosecond pulses emitted by low-cost, widely used sources, such as Nd:YAG microchip lasers [@Wadsworth2004]. Nanosecond pulses cannot directly generate DWs, owing to their extremely narrowband spectra. However high energy nanosecond pulses may break, after a first stage of nonlinear propagation in a MF, into a bunch of ultrashort pulses, which in turn can generate DWs.
It must be emphasized that a pulse that sheds light into a DW is not necessarily restricted to a fundamental or higher-order soliton. In fact, it has been shown that intense pulses traveling in the normal dispersion regime of the fiber and satisfying a phase-matching relation may also effectively build up a DW [@Roy2010; @Webb2013]. Moreover, DWs have also been observed in a line-defect photonic crystal waveguide [@Colman2012] with a length of only 1.5 mm: the measured 30% conversion efficiency could be explained by means of the locking of the velocities of the pump soliton and the DW.
We have recently reported the experimental observation of a gigantic DW, generated inside a dual concentric core MF pumped by a microchip laser, and carrying up to 50% of the input pump power at the fiber output [@Manili2012]. In the present work, we further clarify, by means of an extensive experimental analysis, the physical mechanism behind such a huge energy transfer into the DW. Section 2 gives an account of the spectra measured when pumping at the wavelengths of 1064 nm and 1030 nm, whereas Section 3 describes the spectro-temporal analysis of a 200 nm wide bandwidth centered around the emitted DW. Section 4 is devoted to the numerical and analytical study of the DWs: it also unveils the roles of the pump soliton central wavelength and of the fiber dispersion profile. Finally, Section 5 briefly summarizes the results of our study.
Experiments with two different laser sources
============================================
Our dual concentric core MF has an inner core and a second external concentric annular core. These two cores are obtained by filling some of the holes in a triangular lattice; the holes have a radius of 0.65 $\mu m$ and are separated by a pitch of 2.6 $\mu m$. A scanning electron microscope (SEM) image of the fiber cross section is displayed in the inset of Fig. \[fig1\]: the two glass hexagonal cores are easily recognized. The central core is also doped by Germanium in order to increase both local refractive index and nonlinearity [@Nakajima2002; @Yatsenko2009; @Labruyere2010]. This kind of double core structure has been proposed and demonstrated to be a very effective design to control the magnitude and sign of the group velocity dispersion (GVD) in a selected spectral range [@Gerome2004; @Gerome2006].
The linear guiding properties of our double core MF can be calculated from the SEM picture through a numerical mode solver: Fig. \[fig1\](a) and Fig. \[fig1\](b) show the group velocity ($1/\beta_1$) and its dispersion ($\beta_2$), respectively. Looking at the GVD it is interesting to observe the presence of a large positive dispersion peak: as a result, wavelengths in the spectral region around 1650 nm are expected to overtake spectral components at 1450 nm by 31.5 ps/m. This condition is quite unusual, since in conventional solid core microstructured fibers wavelengths around 1650 nm are in the anomalous dispersion region. Hence typical SC generation exhibits wavelength components around 1650 nm that appear in the trailing edge of the output pulse. According to our numerical results, the normal dispersion peak reaches its maximum amplitude at 1515 nm, and it is bounded by two ZDWs at 1353 nm and at 1669 nm, respectively. It is worth noticing that in our fiber the dispersion is anomalous in the range between 1018 nm and 1353 nm: as a consequence, solitons generated by the break-up of a pulse centered at 1064 nm (or at 1030 nm) may only exist in this wavelength range. [ The fundamental mode of our MF is well confined inside the central core for wavelengths shorter than 1400 nm, whereas for wavelengths greater than 1400 nm an important fraction of the optical power is confined by the external hexagonal core (as for the fundamental supermode of the dual concentric core fiber discussed in Ref. [@Gerome2006]). It is this abrupt change of the mode profile, and consequently of its dispersion relation, that gives rise to the dispersion peak. For instance, the calculated effective area at 1064 nm is 6.2 $\mu m^2$ and it increases up to 41.6 $\mu m^2$ at 1550 nm. These numerical results were experimentally confirmed by inspecting the mode profile at the fiber output by means of an infrared camera.]{}
![Numerically calculated parameters of the double core MF: (a) group velocity and (b) its dispersion. The vertical solid red line indicates the 1064 nm wavelength; vertical dashed green lines indicate the three ZDWs (1018 nm, 1353 nm, 1669 nm). The inset shows a SEM photograph of the fiber cross section.[]{data-label="fig1"}](fig1.eps){width=".8\columnwidth"}
By inserting the numerically calculated effective refractive index profile in the well-known soliton-DW phase-matching relation (see for instance Refs. [@Akhmediev1995; @Genty2004; @Chang2010; @Zhang2013bis]), the wavelengths of the DW radiation peaks can be easily calculated. For our dual core MF, all resonant DW wavelengths are plotted as a function of the soliton wavelength in Fig. \[fig2\]. Here the small contribution due to the soliton nonlinear phase shift has been neglected. We verified the validity of this approximation by evaluating the nonlinear contribution to the DW wavelength: for a soliton with a peak power smaller than 10 kW, the longer DW wavelength (upper curve of Fig. \[fig2\]) increases by less than 5 nm.
In the wavelength range considered in Fig. \[fig2\] there are always two resonant DWs: for a pump soliton at 1064 nm, the calculated DW wavelengths are 928 nm ($DW_1$, blue curve) and 1587 nm ($DW_2$, green curve). A noticeable feature is that the spectral position of the $DW_1$ spans a window which is two times wider than that of the mate $DW_2$.
![Calculated positions of $DW_1$ (blue line) and $DW_2$ (green line) versus the pump soliton wavelength. The vertical lines indicate the wavelengths of 1064 nm (dashed red) and 1100 nm (solid red).[]{data-label="fig2"}](fig2.eps){width=".8\columnwidth"}
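The qualitative root structure of Fig. \[fig2\] can be sketched with a toy dispersion profile. The scales below are pinned only to numbers quoted in this paper (the first ZDW at 1018 nm, a second ZDW at 1353 nm, the peak position at 1515 nm, and $\beta_2(1064\,{\rm nm})=-5.88\times10^{-27}\ {\rm s^2/m}$ from Section 4); the Gaussian peak width is an assumed parameter, so the roots only illustrate the $DW_1$/$DW_2$ pair and do not reproduce the exact 928/1587 nm values.

```python
import numpy as np

C = 299792458.0                      # speed of light, m/s
w_of = lambda lam: 2 * np.pi * C / lam

# Toy GVD: a linear anomalous background crossing zero at the first ZDW
# (1018 nm) plus a Gaussian normal-dispersion peak near 1515 nm.  The slope
# is calibrated to beta2(1064 nm) = -5.88e-27 s^2/m and the peak height to a
# ZDW at 1353 nm; SIGMA is an assumed width.  This is an illustration of the
# root structure, not the actual fiber dispersion.
W1, WP, SIGMA = w_of(1018e-9), w_of(1515e-9), 0.07e15
SLOPE = -5.88e-27 / (w_of(1064e-9) - W1)
PEAK = -SLOPE * (w_of(1353e-9) - W1) \
    / np.exp(-(w_of(1353e-9) - WP) ** 2 / (2 * SIGMA ** 2))

def beta2(w):
    return SLOPE * (w - W1) + PEAK * np.exp(-(w - WP) ** 2 / (2 * SIGMA ** 2))

def dw_wavelengths(lambda_s, lam_min=700e-9, lam_max=2000e-9, npts=2000):
    """Roots of the linearized phase-matching condition
       D(w) = beta(w) - beta(w_s) - beta1(w_s) (w - w_s)
            = integral from w_s to w of (w - w') beta2(w') dw'
    (the soliton nonlinear phase shift is neglected, as in the text)."""
    w_s = w_of(lambda_s)
    lam = np.linspace(lam_min, lam_max, npts)
    D = np.empty(npts)
    for i, lam_i in enumerate(lam):
        wp = np.linspace(w_s, w_of(lam_i), 801)
        f = (w_of(lam_i) - wp) * beta2(wp)
        D[i] = (f.sum() - 0.5 * (f[0] + f[-1])) * (wp[1] - wp[0])  # trapezoid
    return lam[:-1][D[:-1] * D[1:] < 0]
```

For a soliton at 1064 nm this yields one root blue of the first ZDW and one inside the normal-dispersion band, i.e., the $DW_1$ and $DW_2$ candidates; the exact positions depend on the assumed profile.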
In this work, two different laser sources are used as the pumps to obtain spectral broadening and DW generation in the same 4 m long dual core MF. The first source is a compact microchip Nd:YAG laser (similar to the one used in Refs. [@Wadsworth2004; @Stone2008]), whose pulses centered at 1064 nm have a duration of 900 ps. The input beam was carefully focused on the fiber facet in order to maximize the coupling with the fundamental mode. The second source is a mode-locked fiber laser emitting 120 ps pulses at 1030 nm: its output fiber pigtail was directly spliced to the MF. In both cases, the peak power injected inside the MF can be varied up to a maximum value of a few kilowatts.
The spectra obtained by using the microchip laser at 1064 nm are shown in Fig. \[fig3\], for four different values of the peak power injected inside the MF. At the lowest power level of 0.1 kW, a broad red-shifted spectral peak appears: the maximum of this peak is 17 dB lower than the residual pump, and it is located at about 1100 nm; this provides a first evidence that the broadening mechanism is quite asymmetric around the pump, thus favoring the energy transfer towards longer wavelengths. Since this spectrum is measured after 4 m of propagation, the two MI peaks are only scarcely discernible. On the other hand, when measuring the spectra after a propagation of less than 1 m the initial growth of two almost symmetric MI peaks is apparent (as in the experimental data reported in Ref. [@Manili2012]). By increasing the input pump power, the output spectrum broadens further, and a DW (henceforth $DW_2$) peak grows around 1510 nm. The peak at 1100 nm is still present, and it is roughly 5 dB higher than the spectral density dip at the long wavelength side of the pump. Note that spectral broadening towards shorter wavelengths abruptly stops at about 800 nm. At the maximum injected pump power of 4 kW, the $DW_2$ peak is shifted to 1548 nm, with a spectral intensity that is only 4 dB lower than the residual pump peak. Quite strikingly, the $DW_2$ carries an impressive 50% of the total output average power. We also underline that the beam image at the fiber output (observed with an infrared camera around 1500 nm) did agree quite well with the profile predicted by the mode solver.
![Measured output spectra when the source is the Nd:YAG microchip laser at 1064 nm for four different levels of injected input peak power:(a) 0.1 kW, (b) 0.6 kW, (c) 2 kW, (d) 4 kW.[]{data-label="fig3"}](fig3.eps){width=".8\columnwidth"}
In the measured spectra, there is always a spectral hump around 1100 nm which corresponds to the largest fraction of solitons generated by the input pulse break-up. By considering an average soliton central wavelength of 1100 nm, the $DW_2$ peak is expected to be at 1575 nm (see Fig. \[fig2\]), which is reasonably close to the measured value; the residual discrepancy is ascribed to a small error in our estimation of the MF refractive index transverse profile. The other resonant dispersive wave peak $DW_1$, which is predicted to occur at 874 nm, was not observed in the measured spectra, at least in the form of a sharp and isolated peak. In fact the spectrum broadens towards shorter wavelengths in a rather uniform way: its short-wavelength edge, located at around 800 nm, is limited by the condition of group-velocity matching with the infrared part of the SC (see Fig. \[fig1\](a)). Hence its location is only slightly affected by variations in the input pump power. We expect that in our experiments the DWs are shed by a large number of solitons with a distribution of peak powers and time widths: this situation is markedly different from the case of DWs that are emitted by means of stable femtosecond laser sources.
Solitons generated by the 1064 nm pump do not remain confined around 1100 nm, but travel towards longer wavelengths, and give rise to a long spectral tail in the 1100-1400 nm range (see Fig. \[fig3\](b)): this spectral red-shift is due to TOD, and is enhanced by the Raman effect [@Mitschke1986]. The more the pulses are red-shifted, the more they slow down (see Fig. \[fig1\](a)): for this reason, solitons interact and frequently overlap in time. Colliding solitons may significantly enhance the efficiency of DW generation with respect to the single soliton case, see Refs. [@Erkintalo2010; @Erkintalo2010bis; @Tonello2015]. In our experiments, the large number of collisions among the solitons originating from pump pulse break-up will thus largely contribute to reinforce the energy transfer towards the DWs.
Note that the soliton tunneling mechanism [@Kibler2007; @Poletti2008; @Guo2013] may also lead to a high DW conversion efficiency. However, we may rule out the soliton tunneling mechanism for the generation of $DW_2$. In fact, the $DW_2$ spectral peak is located well inside the normal dispersion region, and it starts growing well before solitons approach the ZDW at the border of the normal dispersion barrier (whose calculated value is 1353 nm). The initial stages of the spectral broadening process have been reported in Ref. [@Manili2012]. An experimental spectrum similar to the one that we obtain at high powers (Fig. \[fig3\](c)-(d)) was previously reported for a solid core, double zero dispersion wavelength MF by Chapman et al. [@Chapman2010], who measured a broad and intense peak at 1.98 $\mu m$ beyond a 5 dB flat SC, but in that experiment the DW peak only grew after the red-shifted solitons had approached the barrier of normal dispersion.
We also observe that our spectra exhibit a hump close to 1400 nm (this is clearly visible in Fig. \[fig3\](b)-(c)), which could be due to the accumulation of solitons, if we suppose that the actual ZDW in our MF is situated more than 50 nm above the numerically estimated value. The growth of a DW in the infrared was also observed by Kudlinski et al. [@Kudlinski2009] by using a MF with two zero dispersion wavelengths: however in those experiments the DW was about 15 dB lower than the residual pump, likely because of the low energy or the limited spectrum of the accumulated solitons.
Once again, it must be underlined that the DW that is observed in Fig. \[fig3\] is so intense because it is fueled by the solitons as soon as they are formed from the break-up of the pump pulse. Moreover, these solitons carry a large fraction of the total energy that is coupled inside the MF. The mechanisms that are responsible for the exceptional growth of $DW_2$ at 1.548 $\mu m$ as well as for the negligible growth of the spectral intensity of $DW_1$ around 0.9 $\mu m$ will be more deeply investigated in the following sections.
For comparison, we report in Fig. \[fig4\] the spectra that are measured at the MF output by using the fiber laser emitting at 1030 nm. As can be seen, even at very high input powers a drastic reduction in DW conversion efficiency is observed. At the lowest level of pump power that is shown in Fig. \[fig4\](a), two small MI peaks on both sides of the residual pump can be identified at the wavelengths of 1065 nm and 994 nm, respectively. The spectral density distribution preserves the same qualitative shape even when the peak pump power is increased up to the value of 2.7 kW (Fig. \[fig4\](b)), but an isolated DW peak appears at the wavelength of 1504 nm when the injected peak power reaches the value of 5.3 kW (Fig. \[fig4\](c)). At such power the output spectrum extends from 828 nm to 1418 nm (when considering the -40 dB level). In Fig. \[fig4\](c) the DW peak amplitude is 39 dB lower than the residual pump at 1030 nm, and the DW peak rises up to -25 dB at the maximum available power of 8 kW (Fig. \[fig4\](d)).
![Measured output spectra when the source is the mode-locked fiber laser at 1030 nm for four different levels of injected input peak power:(a) 1.37 kW, (b) 2.7 kW, (c) 5.3 kW, (d) 8 kW.[]{data-label="fig4"}](fig4.eps){width=".8\columnwidth"}
The laser operating at 1030 nm has a pulse duration that is about 7 times shorter than that of the microchip laser at 1064 nm. Although it is reasonable to suppose that a reduction in pulse duration causes a reduction in the spectral broadening [@Andreana2012] and hence in the population of solitons, with a consequent drop in the emission of DWs, this cannot entirely explain the dramatic drop of DW generation efficiency which is observed when using the fiber laser. We may thus conclude that the DW emission efficiency and its spectral position are strongly connected with both the carrier wavelengths of the solitons and the MF dispersion profile.
Spectro-temporal characterization of the DW
===========================================
We limit our spectro-temporal analysis to the case where a maximum conversion efficiency is obtained, [*i.e.*]{}, when pumping the MF at 1064 nm. Figure \[fig5\] presents the experimental setup that has been used to measure the relative time-delay among the different spectral components of the DW. A cube polarizer (CP) splits the optical beam of the microchip laser in two directions in order to obtain, by means of a photodiode (A), a trigger signal to be used in an electrical 16 GHz bandwidth digital sampling oscilloscope (DSO). Light is injected into the inner core of the 4 m long MF by means of a micro-lens (L2); the outgoing spectrum is collimated through a micro-lens (L3), dispersed by means of a diffractive grating (DG) and an aperture of 1 mm: this structure works as a tunable bandpass filter with a bandwidth of 6 nm. Each spectrum slice is then first measured by an optical spectrum analyzer (OSA) and then by a photodiode (B), which can be interchanged by means of a linear translation stage. It is thus possible to measure the relative time delay between the arrival time of a given spectrum slice in B and the trigger signal in A. It is worth recalling that the microchip laser operates in Q-switched mode; the pulse-to-pulse timing jitter therefore requires self-referencing to the emission time of each pulse, and this is the role of photodiode A. Both photodiodes A and B are InGaAs PIN diodes, with 12.5 GHz bandwidth.
![Experimental setup for the spectro-temporal characterization of the DWs emitted by the dual concentric core MF; L1,L2,L3,L4 are micro-lenses and the $\lambda/2$ elements are half-wavelength plates to control the polarization state (see the text for a detailed description of the setup).[]{data-label="fig5"}](fig5.eps){width=".8\columnwidth"}
We show in Fig. \[fig6\](a) a collection of temporal profiles corresponding to different slices of the DW; panels (b) and (c) of the same figure report summary graphs for the pulse full-width at half-maximum (FWHM) and the pulse delay recorded at different central wavelengths. The spectral region at 1650 nm leads the wavelengths around 1450 nm by 163 ps after 4 m of fiber, which fits fairly well with the 31.5 ps/m $\times$ 4 m=126 ps of time delay predicted by the numerical analysis of the MF guiding properties. However, we underline that the pulse duration that is used in our experiment is longer than the time delay under test. Therefore our conclusions are drawn by estimating the shift of the center of mass of the recorded temporal profiles. From our measurements, it is confirmed that the infrared part of the spectrum is present in the leading edge of the pulse, owing to the normal dispersion region of the MF. To conclude this section, we may note that the measured time delay remains nearly unchanged for all considered power levels, as the delay is related to the linear guiding properties of the dual concentric core MF.
![Spectro-temporal experimental characterization of $DW_2$ when the pump is the Nd:YAG laser: (a) spectrogram around the central wavelength of $DW_2$; (b) $DW_2$ pulse FWHM versus wavelength; (c) $DW_2$ pulse delay versus wavelength (the choice of the time origin is arbitrary). The filled circles in (b) and (c) are the measured values and the continuous lines are guides for the eye.[]{data-label="fig6"}](fig6.eps){width=".8\columnwidth"}
Dependence of the DW intensity on the pump wavelength
=====================================================
In this Section we present a detailed analytical and numerical study of the generation of dispersive waves in our dual-core MF, in order to further clarify the physical mechanisms which may explain the observed large conversion efficiencies and their dependence upon the pump wavelength and MF dispersion profile. By numerically solving the Generalized Nonlinear Schrödinger Equation (GNLSE) by means of the split-step Fourier method, the input pulse break-up and the resulting formation of solitons can be calculated both in the time and in the frequency domains. In our numerical simulations we did not include the variation of the mode effective area, since it has only a minor influence on the qualitative dynamics of soliton propagation in the 1064–1400 nm region.
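As a reference for the method, here is a minimal split-step Fourier sketch for the scalar NLSE only; the full GNLSE used in this work additionally contains higher-order dispersion and the Raman term, and the grid and step sizes below are illustrative choices.

```python
import numpy as np

def ssfm_nlse(A0, dt, beta2, gamma, L, nz=2000):
    """Minimal split-step Fourier solver for the scalar NLSE
        dA/dz = -(i*beta2/2) d2A/dt2 + i*gamma*|A|^2 A ,
    i.e. second-order dispersion and Kerr nonlinearity only (no Raman,
    self-steepening or higher-order dispersion).  Strang splitting:
    half dispersion step / full nonlinear step / half dispersion step."""
    n = len(A0)
    w = 2 * np.pi * np.fft.fftfreq(n, dt)        # angular frequency grid
    dz = L / nz
    half_disp = np.exp(0.25j * beta2 * w ** 2 * dz)
    A = A0.astype(complex)
    for _ in range(nz):
        A = np.fft.ifft(half_disp * np.fft.fft(A))
        A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)
        A = np.fft.ifft(half_disp * np.fft.fft(A))
    return A

# usage: a fundamental soliton (P0 = |beta2| / (gamma T0^2)) should keep
# its shape; the parameter values echo the estimates of this Section
beta2, gamma, T0 = -5.88e-27, 21e-3, 6e-15
P0 = abs(beta2) / (gamma * T0 ** 2)
t = (np.arange(1024) - 512) * (40 * T0 / 1024)
A0 = np.sqrt(P0) / np.cosh(t / T0)
z_sol = np.pi * T0 ** 2 / (2 * abs(beta2))       # soliton period
A1 = ssfm_nlse(A0, t[1] - t[0], beta2, gamma, 2 * z_sol)
```

The symmetrized splitting keeps the local error second order in the step size, which is why it is the workhorse for this class of propagation problems.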
Let us first consider for simplicity a long square shaped pulse with peak power P=1.5 kW and width of $T_L$=500 ps: this [*ansatz*]{} could represent the internal part of a sub-nanosecond laser pulse. If we consider a central wavelength of 1064 nm, the numerically calculated group velocity dispersion is $\beta_2=-5.88\times 10^{-27}$ s$^2$/m. By assuming a nonlinear coefficient $\gamma=21\times 10^{-3}$ W$^{-1}$m$^{-1}$, we may estimate a MI frequency $f_{MI}=16.4$ THz, hence a modulation period $T_{MI}=61$ fs. From the ratio $T_L/T_{MI}$, we expect to obtain about $8000$ solitons, with individual energy equal to $E_S=P/f_{MI}=91$ pJ. When considering only first-order solitons, we may calculate a corresponding reference time duration parameter $T_0=2|\beta_2|/(\gamma E_S)=6$ fs, which corresponds to a soliton FWHM of $10.6$ fs, and peak power $P_S=|A_S|^2=E_S/(2T_0)=7.7$ kW. Although this reasoning is only approximate (the pulse widths are of the order of just a few optical cycles), the previous values may provide a first-order estimation of the magnitude and number of solitons that are generated from the decay of the initial long pump pulse.
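These order-of-magnitude estimates can be checked directly; the small differences with respect to the quoted values come from rounding in the text.

```python
import numpy as np

# soliton census from the square-pulse ansatz above
P, T_L = 1.5e3, 500e-12          # pump peak power (W) and duration (s)
beta2 = -5.88e-27                # s^2/m at 1064 nm (mode-solver value)
gamma = 21e-3                    # W^-1 m^-1

f_MI = np.sqrt(2 * gamma * P / abs(beta2)) / (2 * np.pi)  # peak MI gain freq.
T_MI = 1 / f_MI                  # modulation period, ~61 fs
n_sol = T_L / T_MI               # ~8000 solitons
E_S = P / f_MI                   # energy per soliton, ~91 pJ
T0 = 2 * abs(beta2) / (gamma * E_S)       # ~6 fs (fundamental soliton)
fwhm = 2 * np.log(1 + np.sqrt(2)) * T0    # sech FWHM, ~10.8 fs
P_S = E_S / (2 * T0)             # ~7.5 kW peak power
```

Note that $E_S/(2T_0)$ and the fundamental-soliton power $|\beta_2|/(\gamma T_0^2)$ coincide by construction, so the estimate is self-consistent.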
From our full numerical solutions of the GNLSE with either single sub-nanosecond input pump pulses or with trains of femtosecond solitons (not shown here), we may conclude that the formation of the observed huge DW peak cannot be explained if the input solitons are individually considered. In other words, the experimentally observed DW peak cannot be reproduced by simply adding all the DWs that are generated by each single soliton. Actually, when numerically simulating the formation and evolution of a bunch of these solitons along the dual concentric core MF, it is apparent that they reach different peak amplitudes and central wavelengths. Hence, due to their different group velocities, solitons interact and undergo multiple collisions. Indeed it was already pointed out that soliton-soliton collisions may increase the DW intensity by orders of magnitude [@Erkintalo2010; @Erkintalo2010bis; @Tonello2015]. Thus we focus our analysis here on the role of the soliton parameters in determining the relative amplitude of the two predicted DWs: this problem is relevant since only one peak (namely, $DW_2$) is clearly visible in our measured spectra.
As discussed in the previous Section, for an input pump pulse centered at 1064 nm, during the first stage of propagation the generated solitons have wavelengths that are mostly located around 1100 nm (Fig. \[fig3\](a)). As the measured spectra show, at the end of the 4 m long MF these solitons entirely fill the spectral region where dispersion is anomalous (Fig. \[fig3\](b)-(d)). Numerical simulations confirm this picture, where solitons slow down and red-shift; the Raman effect gives a contribution to this spectral shift, however our numerical results show that a considerable red-shift is present even if the Raman coefficient is set to zero.
The blue thick curve of Fig. \[fig7\] shows the numerically computed spectrum at the output of the 4 m long double core MF, computed by assuming an input pump pulse at 1064 nm, with a duration of 200 ps and a peak power of 1 kW. The Raman effect was included in our simulations. This spectrum can be compared with the measured values in Fig. \[fig3\](d): both the numerical result and the experiment show a residual MI peak around 1100 nm, an intense $DW_2$ and a region of wide spectral broadening in the 800-1600 nm range (see also the numerical results reported in Ref. [@Manili2012]). The red thick curve in Fig. \[fig7\] is obtained by shifting the carrier wavelength of the input pump pulse to 1030 nm. The output spectrum is qualitatively similar to the previous one: it exhibits a decrease in $DW_2$, but it does not agree with the experiment of Fig. \[fig4\]. In fact, the pulses emitted by the fiber laser at 1030 nm are seven times shorter than the pulses emitted by the microchip laser at 1064 nm. We thus decreased the pump pulse duration to 30 ps: the black thin curve of Fig. \[fig7\] shows a marked drop in $DW_2$, in satisfactory agreement with the experimental data of Fig. \[fig4\](d). Additionally, note that the black curve also exhibits a sharp sideband peak at 788 nm which is generated by four-wave mixing in the early stages of the propagation (the associated sideband is at 1486 nm), i.e., before the pulse break-up. Fig. \[fig4\](d) shows the presence of a small sideband at the short wavelength edge of the SC, and a similar feature is also weakly visible in Fig. \[fig3\](b).
![ (Color online) Numerically calculated output spectra (the Raman effect is included) with different input pulses with a common peak power of 1 kW. Blue thick curve: pulse duration of 200 ps and central wavelength of 1064 nm; red thick curve: pulse duration of 200 ps and central wavelength of 1030 nm; black thin curve: pulse duration of 30 ps and central wavelength of 1030 nm. []{data-label="fig7"}](fig7.eps){width=".8\columnwidth"}
The sub-nanosecond pump pulse may decay into a significant number of solitons; nevertheless, we focus our attention on the propagation of just one of them, since we expect the soliton central wavelength to have an impact not only on the DW position, but also on its amplitude.
As a first example, we show in Fig. \[fig8\] the spectral evolution of a 30 fs soliton initially centered at 1100 nm. The color code represents the spectral intensity on a logarithmic scale, and again the Raman effect is included in these simulations. Both dispersive waves $DW_1$ and $DW_2$ are clearly evident, and the soliton exhibits only a marginal red-shift. Most of the radiated energy is carried by $DW_1$, in agreement with the theory that we illustrate later on; however, this situation is far from what we observed in the experiments.
Figure \[fig9\] shows the evolution of the same 30 fs soliton, but with a central wavelength of 1200 nm. Now the soliton and DW generation dynamics are completely different from the previous case, and can be decomposed into two steps: a first stage leading to a net red-shift of about 30 THz is followed by a strong emission of $DW_2$ that inhibits further red-shift. Surprisingly, such a huge soliton self-frequency shift induces a change in the intensity of $DW_2$, while leaving its spectral position nearly unchanged.
Larger energy leakage into DWs can substantially reduce the soliton red-shift: this is the case of Fig. \[fig10\], where the soliton was initially centered at 1300 nm. Figures \[fig9\] and \[fig10\] show the asymmetry in favor of $DW_2$ that is observed in our experiments. More specifically, Fig. \[fig9\] shows how this effect is intensified after the initial stage of red-shift of the soliton central wavelength.
From the measured spectra of Fig. \[fig3\] we may expect that the pump pulse break-up eventually leads to a significant population of red-shifted solitons between 1100 nm and about 1400 nm. Simulations, such as those of Fig. \[fig9\], prove that this particular spectral distribution of solitons contributes to generate a red-shifted $DW_2$ that is much larger than the blue-shifted $DW_1$.
![ Numerically calculated spectral evolution of an input soliton of 30 fs centered at 1100 nm. $DW_1$ and $DW_2$ are both clearly identifiable.[]{data-label="fig8"}](fig8.eps){width=".8\columnwidth"}
![ Numerically calculated spectral evolution of an input soliton of 30 fs centered at 1200 nm. The soliton follows a rapid red-shift and then $DW_2$ is emitted.[]{data-label="fig9"}](fig9.eps){width=".8\columnwidth"}
![ Numerically calculated spectral evolution of an input soliton of 30 fs centered at 1300 nm. There is no noticeable soliton red-shift, and $DW_2$ is emitted from the very beginning.[]{data-label="fig10"}](fig10.eps){width=".8\columnwidth"}
It would be interesting to extrapolate the dependence of the DW amplitude upon the soliton wavelength, in order to understand the amount of $DW_2$ that one can potentially generate. To this end, we applied the theory developed by Akhmediev and Karlsson [@Akhmediev1995] to the specific case of our MF. We evaluated the frequencies of resonant coupling and the initial DW spectrum for different values of the soliton carrier wavelength. To simplify the approach, we do not take into account the soliton self-frequency shift that originates from either the Raman effect or higher-order dispersion terms; still, we consider the full dispersion profile in order to properly locate the dispersive waves. In fact, Fig. \[fig9\] shows how, to a first approximation, red-shift and resonant emission can be considered separately.
Let us start from the GNLSE (with Raman effect now neglected) written in the frequency domain as $$\frac{\partial \hat A}{\partial z}-i\kappa(\omega)\hat A -i\gamma{\cal F}[|A|^2A]=0
\label{gnlse}$$ where $A(t,z)$ is the complex envelope of the optical field and $\hat A(\omega,z)={\cal F}[A]=\int_{-\infty}^{+\infty}A(t,z)\exp(i\omega t) \,dt$ is its Fourier transform. Equation (\[gnlse\]) is written in a reference frame moving at the group velocity associated with the reference angular frequency $\omega_0$. In what follows we set $\omega_0$ equal to the carrier angular frequency of a soliton, so that $\omega$ measures the detuning of the field angular frequency $\omega+\omega_0$ from $\omega_0$. The full dispersion profile of the MF is described by the function $\kappa(\omega)=\beta(\omega+\omega_0)-\beta(\omega_0)
-\beta_{1}(\omega_0) \,\omega$, where $\beta_1(\omega_0)$ is the reciprocal of the group velocity at $\omega_0$. Note that varying the reference angular frequency $\omega_0$ leads to different dispersion profiles $\kappa(\omega)$.
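As a concrete illustration, Eq. (\[gnlse\]) can be integrated with the standard split-step Fourier method. The sketch below is illustrative only: the Raman term is neglected, and the cubic dispersion coefficient $\beta_3$, the time grid and the step size are assumed values rather than the computed dual-core profile. It propagates a 30 fs fundamental soliton and checks that the scheme conserves the pulse energy:

```python
import numpy as np

# Split-step Fourier sketch for Eq. (gnlse), Raman neglected.
# beta3 is an assumed illustrative value, NOT the computed dual-core profile.
beta2, beta3 = -5.88e-27, 1e-41      # [s^2/m], [s^3/m]
gamma = 21e-3                        # [1/(W m)]
T0 = 30e-15 / 1.763                  # sech duration for a 30 fs FWHM soliton
P0 = abs(beta2) / (gamma * T0**2)    # fundamental soliton peak power (~1 kW)

N = 2**12
t = np.linspace(-2e-12, 2e-12, N, endpoint=False)
dt = t[1] - t[0]
w = 2 * np.pi * np.fft.fftfreq(N, dt)        # detuning grid [rad/s]
kappa = beta2 * w**2 / 2 + beta3 * w**3 / 6  # linear operator of Eq. (gnlse)

A = np.sqrt(P0) / np.cosh(t / T0)            # first-order soliton ansatz
E_in = dt * np.sum(np.abs(A)**2)             # input pulse energy

dz, L = 1e-3, 0.1                            # step and fiber length [m]
half = np.exp(1j * kappa * dz / 2)           # half linear propagator
for _ in range(round(L / dz)):
    A = np.fft.ifft(half * np.fft.fft(A))        # linear half-step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)  # full nonlinear step
    A = np.fft.ifft(half * np.fft.fft(A))        # linear half-step

E_out = dt * np.sum(np.abs(A)**2)
```

Since both sub-steps are phase-only, the scheme conserves the discrete energy to machine precision, and for this weak $\beta_3$ the soliton peak power remains close to $P_0$ after two dispersion lengths.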
The relative time delay between different spectral components of the field and the reference frame is proportional to $$\kappa'(\omega)=\frac{\partial \kappa}{\partial\omega}=\beta_{1}(\omega+\omega_0)
-\beta_{1}(\omega_0)
\label{gvm}$$ From Eq. (\[gvm\]) we see that $\kappa'(\omega)$ measures the group velocity mismatch (i.e., the difference in group delays) between the wave at $\omega+\omega_0$ and the reference frame at $\omega_0$.
In this work, we mainly focus our attention on solitons that are generated by the break-up of the long pulse, so it is reasonable to write the linear operator of the GNLSE as $\kappa(\omega)=\beta_{2}(\omega_0) \omega^2 /2+\epsilon \hat H(\omega)$, where $\beta_{2}(\omega_0)$ is the group velocity dispersion at the reference frequency, $\hat H(\omega)$ describes the higher-order dispersion terms, and $\epsilon$ is a small parameter that allows us to separate short and long length scales in the solution of Eq. (\[gnlse\]).
We may thus study the solution of Eq. (\[gnlse\]) by using the [*ansatz*]{} provided by a first-order soliton plus an additional small perturbation $\hat A(\omega,z)=\hat A_0(\omega)\exp[i\kappa_Sz]+\epsilon F(\omega,z)$. In our notation, $\hat A_0(\omega)$ is the first-order soliton solution, $\kappa_S=\gamma |A_S|^2 /2$ is the nonlinear contribution to the soliton wavenumber, and $F(\omega,z)$ is the perturbation. Our [*ansatz*]{} neglects the soliton frequency shift that may be induced by $\hat H(\omega)$, hence its validity is limited to the early stages of the DW emission process (details on the soliton dynamics in the presence of third-order dispersion can be found in Refs. [@Wabnitz1994; @Akhmediev1995; @Gaeta2002; @Mussot2010]). Now, since the soliton solves Eq. (\[gnlse\]) at leading order (i.e., for $\epsilon=0$), we may collect all terms proportional to $\epsilon$ and obtain a linear forced equation for the perturbation $F(\omega,z)$ $$\frac{\partial F}{\partial z}-i\kappa(\omega) F=i \hat H(\omega) \hat A_0(\omega)\exp[i\kappa_Sz]
\label{lsa}$$ The solution of Eq. (\[lsa\]), with zero initial condition, can be written as $$F(\omega,z)=\frac{\hat H(\omega) \hat A_0(\omega)}{\kappa_S -\kappa(\omega)}\left[e^{i\kappa_Sz} -e^{i\kappa(\omega)z}\right]
\label{lsa:solution}$$ Equation (\[lsa:solution\]) expresses the well-known property that DWs are fed by the spectrum of the soliton, and that their amplitude is enhanced at those frequencies $\omega_R$ which satisfy the resonance condition $\kappa(\omega_R)=\kappa_S$ [@Akhmediev1995].
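Equation (\[lsa:solution\]) is straightforward to evaluate numerically. In the sketch below we take a toy profile with a single higher-order term, $\hat H(\omega)=\beta_3\omega^3/6$; the value of $\beta_3$ is assumed for illustration, and this toy profile supports a single resonance, unlike the two resonances of the actual dual-core MF. The script checks that the perturbation spectrum indeed peaks at the root of $\kappa(\omega_R)=\kappa_S$:

```python
import numpy as np

# Evaluate Eq. (lsa:solution) for a sech soliton and a pure third-order
# perturbation H(w) = beta3*w^3/6. beta3 is an assumed toy value.
beta2, beta3, gamma = -5.88e-27, 1e-40, 21e-3
T0 = 30e-15 / 1.763                  # 30 fs FWHM first-order soliton
P_S = abs(beta2) / (gamma * T0**2)   # soliton peak power
kappa_S = gamma * P_S / 2            # nonlinear soliton wavenumber [1/m]
z = 4.0                              # propagation length [m]

w = np.linspace(1e11, 3e14, 6000)    # positive detunings [rad/s]
kappa = beta2 * w**2 / 2 + beta3 * w**3 / 6
H = beta3 * w**3 / 6
A0 = np.sqrt(P_S) * np.pi * T0 / np.cosh(np.pi * w * T0 / 2)  # sech spectrum

# |exp(i*kS*z) - exp(i*k*z)| = |kS - k| * z * |sinc((kS - k) z / 2pi)|,
# so |F| can be written without dividing by (kS - k):
Fmag = np.abs(H * A0) * z * np.abs(np.sinc((kappa_S - kappa) * z / (2 * np.pi)))
w_peak = w[np.argmax(Fmag)]

# Analytic resonance: real root of beta2/2*w^2 + beta3/6*w^3 = kappa_S
s = 1e14                             # frequency scale for well-conditioned roots
r = np.roots([beta3 / 6 * s**3, beta2 / 2 * s**2, 0.0, -kappa_S])
w_R = s * max(x.real for x in r if abs(x.imag) < 1e-3 * abs(x.real))
print(f"spectral peak at {w_peak/(2*np.pi)/1e12:.1f} THz, "
      f"resonance at {w_R/(2*np.pi)/1e12:.1f} THz")
```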
When dealing with a large number of unequal solitons, an important issue is the sensitivity of the resonance condition with respect to variations of the soliton peak power $P_S$. Expanding around the resonance frequency $\omega_R$ gives $\kappa(\omega_R+d\omega_R)\simeq \kappa(\omega_R)+d\omega_R\,\kappa'(\omega_R)$; since $\kappa_S=\gamma P_S/2$, a standard error analysis for a variation $dP_S$ of the soliton peak power yields the sensitivity of the resonant frequency to a small change in soliton power with respect to a reference value $$\frac{d\omega_R}{dP_S}=\frac{\gamma}{2\kappa'(\omega_R)}
\label{sensitivityP}$$ Equation (\[sensitivityP\]) shows that the variation of the resonant frequency resulting from a small variation of the soliton power is determined by the group velocity mismatch $\kappa'(\omega_R)=\beta_1(\omega_R+\omega_0)-\beta_1(\omega_0)$ at the resonant frequency $\omega_R+\omega_0$. Large values of the group velocity mismatch, which lead to long temporal delays between the DW and the soliton, thus result in a small sensitivity of $\omega_R$ to changes of the soliton peak power.
Equation (\[sensitivityP\]) also describes an important property of the generated DWs: soliton amplitude jitter may lead to fluctuations in the DW frequency, because a change in peak power results in a variation of the zeros of the phase-matching condition. Note that a similar change is also observed if a red-shifting soliton is subject to an amplitude reshaping. Since DW frequency fluctuations are inversely proportional to $\kappa'(\omega_R)$, in the presence of a distribution of soliton powers, large values of $\kappa'(\omega_R)$ will drastically reduce the standard deviation of the DW frequencies. This may explain why the $DW_2$ peak at 1548 nm is narrow-band ($\kappa'(\omega_R)$ has a large modulus there), while there is virtually no peak but only a flat plateau around 800-950 nm for $DW_1$.
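Equation (\[sensitivityP\]) can be cross-checked by finite differences on a toy cubic dispersion $\kappa(\omega)=\beta_2\omega^2/2+\beta_3\omega^3/6$ (again with an assumed $\beta_3$): perturbing the soliton power and recomputing the resonance root reproduces $\gamma/2\kappa'(\omega_R)$:

```python
import numpy as np

# Finite-difference check of Eq. (sensitivityP) for a toy cubic dispersion
# kappa(w) = beta2*w^2/2 + beta3*w^3/6 (beta3 assumed for illustration).
beta2, beta3, gamma = -5.88e-27, 1e-40, 21e-3

def omega_R(P_S):
    """Positive real root of kappa(w) = gamma*P_S/2."""
    s = 1e14                 # frequency scale for well-conditioned roots
    r = np.roots([beta3 / 6 * s**3, beta2 / 2 * s**2, 0.0, -gamma * P_S / 2])
    return s * max(x.real for x in r if abs(x.imag) < 1e-3 * abs(x.real))

P0, dP = 1e3, 10.0                                # soliton power and step [W]
numeric = (omega_R(P0 + dP) - omega_R(P0 - dP)) / (2 * dP)
wR = omega_R(P0)
kprime = beta2 * wR + beta3 * wR**2 / 2           # kappa'(omega_R)
analytic = gamma / (2 * kprime)                   # Eq. (sensitivityP)
print(f"d(omega_R)/dP_S: numeric {numeric:.4e}, analytic {analytic:.4e} rad/(s W)")
```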
A similar conclusion can also be drawn for the sensitivity of the resonance frequency as a result of changes in the soliton central wavelength. From a close inspection of Fig. \[fig2\], it is clear that the slope of the curve for $DW_1$ (lower blue curve) around 1064 nm is more than twice the slope of the curve for $DW_2$ (upper green curve). Numerical simulations (not shown here) attest that small fluctuations of the soliton wavelength indeed lead to a much larger spreading for the $DW_1$ resonance than for the $DW_2$ resonance.
In Fig. \[fig11\] we illustrate the spectrum of the perturbation as predicted by Eq. (\[lsa:solution\]), for different pump soliton wavelengths and a propagation length of 4 m: the generation of $DW_1$, of $DW_2$ and of the red-shifted spectral tail (between 1100 nm and 1400 nm) is clearly recognizable. Owing to the perturbative nature of $F(\omega,z)$, the spectrum of Fig. \[fig11\] provides an estimate of the output spectrum at all frequencies, except for values close to the soliton reference frequency. This analysis confirms that for soliton carrier wavelengths larger than about 1150 nm the intensity of the $DW_2$ peak grows larger than that associated with $DW_1$. The DW amplitude depends not only on the soliton wavelength, but also on its FWHM (equal to 30 fs in the simulations of Fig. \[fig11\]): this is not surprising, since the soliton spectrum appears as a multiplying factor in Eq. (\[lsa:solution\]).
Since higher-order terms were neglected in deriving Eq. (\[lsa:solution\]), we have verified its validity by computing the output spectrum through the numerical solution of the GNLSE (see Fig. \[fig12\]). The numerically computed DW spectra exhibit a good agreement with the theoretically predicted spectra of Fig. \[fig11\]. Thus we may conclude that the approximate analytical solution of Eq. (\[lsa:solution\]) provides a quick first estimate of the DW spectrum, which is useful to explain its dependence on the soliton parameters and the fiber dispersion profile. Nevertheless, numerical simulations are still necessary for an accurate evaluation of the DW amplitude, as well as the precise shape of the output spectrum. By moving along the different soliton wavelengths of Fig. \[fig11\] or Fig. \[fig12\] we can see how a red-shift of the soliton central wavelength can gradually intensify $DW_2$.
![Output normalized spectra (in logarithmic scale) as given by Eq. (\[lsa:solution\]) versus the frequency detuning $\omega/2 \pi$ for different central wavelengths of the input first order soliton having a FWHM of 30 fs and for 4 m of propagation. The region where $DW_1$ ($DW_2$) is generated is highlighted by a dashed line (solid line) ellipse.[]{data-label="fig11"}](fig11.eps){width=".8\columnwidth"}
![Output normalized spectra (in logarithmic scale) versus the frequency detuning $\omega/2 \pi$ for different central wavelengths of the input first order soliton (FWHM=30 fs) calculated by numerically solving the GNLSE for 4 m of propagation. The region where $DW_1$ ($DW_2$) is generated is highlighted by a dashed line (solid line) ellipse.[]{data-label="fig12"}](fig12.eps){width=".8\columnwidth"}
Conclusions
===========
We presented an extensive experimental and numerical study of the highly efficient generation of a DW spectral peak at telecom wavelengths in a dual-core microstructured optical fiber, pumped by a near infrared microchip laser. Both experiments and simulations agree in their prediction that the conversion efficiency into the DW spectrum strongly depends on the value of the pump wavelength, in combination with the dispersion profile of the fiber. Moreover, in spite of the fact that the phase-matching condition between solitons and DWs provides multiple resonances, we found that the conversion is by far more efficient for the DW that is closest to the pump solitons. We further presented an approximate but analytical expression for the DW spectrum which explicitly contains the dispersion profile of the fiber, and thus provides a useful tool for estimating the DW position and amplitude. Moreover, the intensity of the analytical DW spectrum is proportional to the amplitude of the soliton spectral tail at the resonance wavelength, which confirms the observation that stronger DWs are generated when the resonance occurs closer to the center wavelength of the pumping soliton ensemble.
We believe that our results can help in understanding the process of highly efficient DW generation in dispersion-engineered optical fibers. In particular, our analysis may be used as a guideline for the optimization and reverse engineering of specialty fibers with the purpose of emitting a target and high energy DW peak in any desired spectral region of interest for a particular application.
S.W. is also with the Istituto Nazionale di Ottica of the Consiglio Nazionale delle Ricerche. We acknowledge partial support from the Région Limousin (C409-SPARC) and from the Italian Ministry of University and Research (MIUR, Project No. 2012BFNWZ2).
[99]{}
P. K. A. Wai, C. R. Menyuk, Y. C. Lee, and H. H. Chen, “Nonlinear pulse propagation in the neighborhood of the zero-dispersion wavelength of monomode optical fibers,” Opt. Lett. [**11**]{}, 464-466 (1986).
P. K. A. Wai, C. R. Menyuk, and Y. C. Lee, “Soliton at the zero-group-dispersion wavelength of a single-mode fiber,” Opt. Lett. [**12**]{}, 628-630 (1987).
N. Akhmediev and M. Karlsson,“Cherenkov radiation emitted by solitons in optical fibers,” Phys. Rev. A [**51**]{}, 2602-2607 (1995).
S. Roy, S. K. Bhadra, and G. P. Agrawal, “Effects of higher-order dispersion on resonant dispersive waves emitted by solitons,” Opt. Lett. [**34**]{}, 2072-2074 (2009).
S. Roy, D. Ghosh, S. K. Bhadra, and G. P. Agrawal, “Role of dispersion profile in controlling emission of dispersive waves by solitons in supercontinuum generation,” Opt. Commun. [**283**]{}, 3081-3088 (2010).
S. P. Stark, F. Biancalana, A. Podlipensky, and P. St. J. Russell, “Nonlinear wavelength conversion in photonic crystal fibers with three zero-dispersion points,” Phys. Rev. A, [**83**]{}, 023808 (2011).
B. J. Eggleton, C. Kerbage, P. S. Westbrook, R. S. Windeler, and A. Hale, “Microstructured optical fiber devices,” Opt. Express, [**9**]{}, 698-713 (2001).
P. St. J. Russell, “Photonic crystal fibers,” J. Lightwave Technol., [**24**]{}, 4729-4749 (2006).
F. Poli, A. Cucinotta, and S. Selleri, [*Photonic Crystal Fibers: Properties and Applications*]{} (Springer, 2010).
L. Tartara, I. Cristiani, and V. Degiorgio, “Blue light and infrared continuum generation by soliton fission in a microstructured fiber,” Appl. Phys. B [**77**]{}, 307-311 (2003).
G. Chang, L. J. Chen, and F. X. Kärtner, “Highly efficient Cherenkov radiation in photonic crystal fibers for broadband visible wavelength generation,” Opt. Lett. [**35**]{}, 2361-2363 (2010).
J. Yuan, X. Sang, C. Yu, Y. Han, G. Zhou, S. Li, and L. Hou, “Highly efficient and broadband Cherenkov radiation at the visible wavelength in the fundamental mode of photonic crystal fiber,” IEEE Photon. Technol. Lett. [**23**]{}, 786-788 (2011).
G. Genty, M. Lehtonen, and H. Ludvigsen, “Enhanced bandwidth of supercontinuum generated in microstructured fibers,” Opt. Express [**12**]{}, 3471-3480 (2004).
J. C. Travers, A. B. Rulkov, B. A. Cumberland, S. V. Popov, and J. R. Taylor, “Visible supercontinuum generation in photonic crystal fibers with a 400 W continuous wave fiber laser,” Opt. Express [**16**]{}, 14435-14447 (2008).
A. Kudlinski, G. Bouwmans, M. Douay, M. Taki, A. Mussot,“Dispersion-engineered photonic crystal fibers for CW-pumped supercontinuum sources,” J. Lightwave Technol. [**27**]{}, 1556-1564 (2009).
W. J. Wadsworth, N. Joly, J. C. Knight, T. A. Birks, F. Biancalana, and P. St. J. Russell, “Supercontinuum and four-wave mixing with Q-switched pulses in endlessly single-mode photonic crystal fibres,” Opt. Express [**12**]{}, 299-309 (2004).
J. M. Stone and J. C. Knight, “Visibly “white” light generation in uniform photonic crystal fiber using a microchip laser,” Opt. Express [**16**]{}, 2670-2675 (2008).
C. Lesvigne, V. Couderc, A. Tonello, P. Leproux, A. Barthélémy, S. Lacroix, F. Druon, P. Blandin, M. Hanna, and P. Georges, “Visible supercontinuum generation controlled by intermodal four-wave mixing in microstructured fiber,” Opt. Lett. [**32**]{}, 2173-2175 (2007).
G. Manili, D. Modotto, U. Minoni, S. Wabnitz, C. De Angelis, G. Town, A. Tonello, and V. Couderc, “Modal four-wave mixing supported generation of supercontinuum light from the infrared to the visible region in a birefringent multi-core microstructured optical fiber,” Opt. Fiber Technol. [**17**]{}, 160-167 (2011).
J. Herrmann, U. Griebner, N. Zhavoronkov, A. Husakou, D. Nickel, J. C. Knight, W. J. Wadsworth, P. St. J. Russell, and G. Korn, “Experimental evidence for supercontinuum generation by fission of higher-order solitons in photonic fibers,” Phys. Rev. Lett. [**88**]{}, 173901 (2002).
A. Demircan and U. Bandelow, “Analysis of the interplay between soliton fission and modulation instability in supercontinuum generation,” Appl. Phys. B [**86**]{}, 31-39 (2007).
R. Driben, B. A. Malomed, A. V. Yulin, and D. V. Skryabin, “Newton’s cradles in optics: from N-soliton fission to soliton chains,” Phys. Rev. A [**87**]{}, 063808 (2013).
F. M. Mitschke and L. F. Mollenauer, “Discovery of the soliton self-frequency shift,” Opt. Lett. [**11**]{}, 659-661 (1986).
G. Genty, M. Lehtonen, and H. Ludvigsen, “Effect of cross-phase modulation on supercontinuum generated in microstructured fibers with sub-30 fs pulses,” Opt. Express [**12**]{}, 4614-4624 (2004).
D. V. Skryabin and A. V. Yulin, “Theory of generation of new frequencies by mixing of solitons and dispersive waves in optical fibers,” Phys. Rev. E [**72**]{}, 016619 (2005).
A. V. Gorbach, D. V. Skryabin, J. M. Stone, and J. C. Knight, “Four-wave mixing of solitons with radiation and quasi-nondispersive wave packets at the short-wavelength edge of a supercontinuum,” Opt. Express [**14**]{}, 9854-9863 (2006).
R. Driben, F. Mitschke, and N. Zhavoronkov, “Cascaded interactions between Raman induced solitons and dispersive waves in photonic crystal fibers at the advanced stage of supercontinuum generation,” Opt. Express [**18**]{}, 25993-25998 (2010).
L. Zhang, S. G. Yang, Y. Han, H. W. Chen, M. H. Chen, and S. Z. Xie, “Simultaneous generation of tunable giant dispersive waves in the visible and mid-infrared regions based on photonic crystal fibers,” J. Opt. [**15**]{}, 1-5 (2013).
T. Cheng, D. Deng, X. Xue, L. Zhang, T. Suzuki, and Y. Ohishi, “Highly efficient tunable dispersive wave in a tellurite microstructured optical fiber,” IEEE Photonics Journ. [**7**]{}, 2200107 (2015).
X. B. Zhang, X. Zhu, L. Chen, F. G. Jiang, X. B. Yang, J. G. Peng, and J. Y. Li, “Enhanced violet Cherenkov radiation generation in GeO$_2$-doped photonic crystal fiber,” Appl. Phys. B [**111**]{}, 273-277 (2013).
J. H. Yuan, X. Z. Sang, Q. Wu, C. X. Yu, K. R. Wang, B. B. Yan, X. W. Shen, Y. Han, G. Y. Zhou, Y. Semenova, G. Farrell, and L. T. Hou, “Efficient red-shifted dispersive wave in a photonic crystal fiber for widely tunable mid-infrared wavelength generation,” Laser Phys. Lett., [**10**]{}, 045405 (2013).
K. E. Webb, Y. Q. Xu, M. Erkintalo, and S. G. Murdoch, “Generalized dispersive wave emission in nonlinear fiber optics,” Opt. Lett. [**38**]{}, 151-153 (2013).
P. Colman, S. Combrié, G. Lehoucq, A. de Rossi, and S. Trillo, “Blue self-frequency shift of slow solitons and radiation locking in a line-defect waveguide,” Phys. Rev. Lett. [**109**]{}, 093901 (2012).
G. Manili, A. Tonello, D. Modotto, M. Andreana, V. Couderc, U. Minoni, and S. Wabnitz, “Gigantic dispersive wave emission from dual concentric core microstructured fiber,” Opt. Lett. [**37**]{}, 4101-4103 (2012).
K. Nakajima and M. Ohashi, “Dopant dependence of effective nonlinear refractive index in GeO$_2$- and F-doped core single-mode fibers,” IEEE Photon. Technol. Lett. [**14**]{}, 492-494 (2002).
Y. P. Yatsenko, A. F. Kosolapov, A. E. Levchenko, S. L. Semjonov, and E. M. Dianov, “Broadband wavelength conversion in a germanosilicate-core photonic crystal fiber,” Opt. Lett. [**34**]{}, 2581-2583 (2009).
A. Labruyère, P. Leproux, V. Couderc, V. Tombelaine, J. Kobelke, K. Schuster, H. Bartelt, S. Hilaire, G. Huss, and G. Mélin, “Structured-core GeO$_2$-doped photonic-crystal fibers for parametric and supercontinuum generation,” IEEE Photon. Technol. Lett. [**22**]{}, 1259-1261 (2010).
F. Gérôme, J. L. Auguste, and J.M. Blondy, “Design of dispersion-compensating fibers based on a dual-concentric-core photonic crystal fiber,” Opt. Lett. [**29**]{}, 2725-2727 (2004).
F. Gérôme, J. L. Auguste, J. Maury, J. M. Blondy, and J. Marcou, “Theoretical and experimental analysis of a chromatic dispersion compensating module using a dual concentric core fiber,” J. Lightwave Technol. [**24**]{}, 442-448 (2006).
M. Erkintalo, G. Genty, and J. M. Dudley, “Experimental signatures of dispersive waves emitted during soliton collisions,” Opt. Express [**18**]{}, 13379-13384 (2010).
M. Erkintalo, G. Genty, and J. M. Dudley, “Giant dispersive wave generation through soliton collision,” Opt. Lett. [**35**]{}, 658-660 (2010).
A. Tonello, D. Modotto, K. Krupa, A. Labruyère, B. M. Shalaby, V. Couderc, A. Barthélémy, U. Minoni, S. Wabnitz, and A. B. Aceves, “Dispersive wave emission in dual concentric core fiber: the role of soliton-soliton collisions,” IEEE Photon. Technol. Lett. [**27**]{}, 1145-1148 (2015).
B. Kibler, P. A. Lacourt, F. Courvoisier, and J. M. Dudley, “Soliton spectral tunnelling in photonic crystal fibre with sub-wavelength core defect,” Electron. Lett. [**43**]{}, 967-968 (2007).
F. Poletti, P. Horak, and D. J. Richardson, “Soliton spectral tunneling in dispersion-controlled holey fibers,” IEEE Photon. Technol. Lett. [**20**]{}, 1414-1416 (2008).
H. Guo, S. Wang, X. Zeng, and M. Bache, “Understanding soliton spectral tunneling as spectral coupling effect,” IEEE Photon. Technol. Lett. [**25**]{}, 1928-1931 (2013).
B. H. Chapman, J. C. Travers, S. V. Popov, A. Mussot, and A. Kudlinski, “Long wavelength extension of CW-pumped supercontinuum through soliton-dispersive wave interactions,” Opt. Express [**18**]{}, 24729-24734 (2010).
M. Andreana, A. Labruyère, A. Tonello, S. Wabnitz, P. Leproux, V. Couderc, C. Duterte, A. Cserteg, A. Bertrand, Y. Hernandez, D. Giannone, S. Hilaire, and G. Huss, “Control of near-infrared supercontinuum bandwidth by adjusting pump pulse duration,” Opt. Express [**20**]{}, 10750-10760 (2012).
Y. Kodama, M. Romagnoli, S. Wabnitz, and M. Midrio, “Role of third-order dispersion on soliton instabilities and interactions in optical fibers,” Opt. Lett. [**19**]{}, 165-167 (1994).
A. L. Gaeta, “Nonlinear propagation and continuum generation in microstructured optical fibers,” Opt. Lett. [**27**]{}, 924-926 (2002).
A. Mussot, A. Kudlinski, E. Louvergneaux, M. Kolobov, and M. Taki, “Impact of the third-order dispersion on the modulation instability gain of pulsed signals,” Opt. Lett. [**35**]{}, 1194-1196 (2010).
---
abstract: |
Let $\pi : X\map{}Y$ be a good quotient of a smooth variety $X$ by a reductive algebraic group $G$ and $1\leq k\leq\dim Y$ an integer. We prove that if, locally, any invariant horizontal differential $k$-form on $X$ (resp. any regular differential $k$-form on $Y$) is a Kähler differential form on $Y$ then $\codim{Y_{\sing}}>k+1$. We also prove that the dualizing sheaf on $Y$ is the sheaf of invariant horizontal $\dim
Y$-forms.
author:
- Guillaume Jamet
bibliography:
- 'mybib.bib'
title: |
Differential forms and smoothness\
of quotients by reductive groups
---
Introduction {#introduction .unnumbered}
============
Let $\pi : X\map{}Y$ be a good quotient of a smooth variety $X$ by a reductive algebraic group $G$. How can one bound the dimension of the singular locus of $Y$? Since there exists no natural embedding of $Y$ in some smooth variety, it seems difficult to describe the $n$-th Fitting ideal of the sheaf $\Omega_{Y}^{1}$. J. Fogarty suggests a different approach to this problem by raising in [@MR90a:14014] the following question (all schemes are assumed to be of finite type over a field of characteristic 0) :
[Question]{} Let $G$ be a finite group acting on a smooth variety $X$ and $\pi : X\map{}Y$ the quotient. Is the natural morphism $$\Omega^{1}_{Y}\map{}\inv{G}{\Omega^{1}_{X}}$$ surjective if and only if $Y$ is smooth?
In that article J. Fogarty verifies that the surjectivity condition is indeed necessary. He also proves that, when the group $G$ is abelian, this condition is sufficient ([@MR90a:14014 Lemma 5]).
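The phenomenon behind the necessity statement can be seen on a minimal, standard example (not drawn from the cited references; the notation is that of the text):

```latex
% Worked example: G=\mu_{2} acting on X=\mathbb{C}^{2} by
% (x,y)\mapsto(-x,-y), so that
% Y=X/G=\operatorname{Spec}\mathbb{C}[u,v,w]/(uw-v^{2}),
% with u=x^{2}, v=xy, w=y^{2} (the A_{1} cone).
\begin{itemize}
\item $\inv{G}{\Omega^{1}_{X}}$ is generated over $\mathcal{O}_{Y}$ by
      $x\,dx,\ y\,dx,\ x\,dy,\ y\,dy$ (the invariant $1$-forms are exactly
      those with odd polynomial coefficients);
\item the image of $\Omega^{1}_{Y}$ is generated by
      $du=2x\,dx$, $dv=x\,dy+y\,dx$ and $dw=2y\,dy$;
\item $y\,dx$ is not in this image: writing
      $y\,dx=a\,x\,dx+b\,y\,dy+c\,(x\,dy+y\,dx)$ with $a,b,c$ invariant
      gives $ax+cy=y$ and $by+cx=0$; setting $x=0$ in the first identity
      forces $c(0,y)=1$, while setting $y=0$ in the second forces
      $c(x,0)=0$, a contradiction at the origin.
\end{itemize}
% Hence $\Omega^{1}_{Y}\to\inv{G}{\Omega^{1}_{X}}$ is not surjective, in
% accordance with the singularity of $Y$ at the vertex (and with Lemma 5
% of Fogarty, since G is abelian).
```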
Observe that the module $\inv{G}{\Omega^{1}_{X}}$ is naturally isomorphic to $\dd{\Omega^{1}_{Y}}$ and, the variety $Y$ being normal, also isomorphic to the module $\omega^{1}_{Y}$ of regular 1-forms (cf. appendix \[rardiff\]) and to the module $i_{*}\Omega^{1}_{Y_{\smooth}}$ (here $i$ denotes the inclusion $Y_{\smooth}\subset Y$). It is also easily checked that this problem reduces to the case where $X$ is a rational representation of $G$. In particular when $G\subset\SL{\C^{2}}$, then $Y={\C^{2}}/G$ is a complete intersection and one can give an affirmative answer to the question above. However, already in dimension 2 (i.e. $G\subset\GL{\C^{2}}$) this question appears to be quite tricky.
Recently M.Brion proved the following result :
[Theorem ([[@MR98k:14067 Theorem 1]]{})]{} Let $G$ be a reductive algebraic group acting on a smooth affine variety $X$, and let $\pi : X\map{}Y$ be the quotient. If $Y$ is smooth then the natural morphism $$\begin{aligned}
\inv{G}{d\pi} & : &\Omega_{Y}\map{}\inv{G}{\Omega_{X,G}}\end{aligned}$$ is an isomorphism.
Here $\inv{G}{\Omega_{X,G}}$ is the differential graded algebra of [*invariant horizontal differential forms*]{} and $\inv{G}{d\pi}$ is the morphism of differential graded algebras induced by the cotangent morphism $d\pi$ (see section \[hordiff\]). When $G$ is finite, it is isomorphic to $\inv{G}{\Omega_{X}}$. This last theorem clearly suggests reformulating and investigating Fogarty’s question in the more general context of quotients by reductive groups.
The main theorem we prove in this paper is the following, thus giving a partial answer to Fogarty’s question and also a strong converse to Brion’s theorem :
[Theorem (\[smcrit1\])]{} Let $G$ be a reductive algebraic group acting on a smooth affine variety $X$, with quotient map $\pi :X\map{} Y$ and let $k$ be an integer with $1\leq k\leq\dim Y$. The morphism $\inv{G}{d\pi^{k}}$ is surjective in codimension $k+1$ if and only if $Y$ is smooth in codimension $k+1$.
We stated these results for affine $G$-schemes, but it is easy to see that they generalize immediately to the case of good quotients (i.e. affine uniform categorical quotient morphisms $\pi :
X\map{}Y$, in the terminology of [@MR86a:14006]).
In the case of finite abelian groups we also prove :
[Theorem (\[fgroup3\])]{} Let $G$ be a finite abelian group acting on a smooth affine scheme $X$ with quotient $\pi : X\map{} Y$ and let $k$ be an integer with $1\leq k\leq\dim X$. The morphism $\inv{G}{d\pi^{k}}$ is surjective if and only if $Y$ is smooth.
This improves the previous result of Fogarty and also shows that, with the hypotheses of (\[smcrit1\]), smoothness in codimension $k+1$ does not imply that $\inv{G}{d\pi^{k}}$ (or $c_{Y}^{k}$, see below) is surjective.
In order to prove these theorems it is important to understand how $\inv{G}{\Omega_{X,G}}$ compares to other sheaves of differentials on $Y$, in particular to the sheaves $\tilde{\Omega}_{Y}$ and $\omega_{Y}$ (respectively, the sheaves of [*absolutely regular*]{} and [*regular differential forms*]{}. Cf. appendix \[rardiff\]). In his article [@MR98k:14067], M. Brion observed that, as a corollary to his theorem and under the additional condition that [*no invariant divisor is mapped by $\pi$ onto a closed subscheme of codimension $\geq 2$ in $Y$*]{}, there are isomorphisms $\inv{G}{\Omega_{X,G}}\simeq\dd{\Omega_{Y}}\simeq\omega_{Y}$. This comparison problem is also closely related to the more classical problem of describing the dualizing sheaf of a quotient (by a reductive group) variety as a sheaf of invariants. It has been extensively studied by F. Knop in [@MR90k:14053], but the expression he obtains for $\omega_{Y}^{n}$ (the canonical sheaf if $n=\dim Y$) is again dependent on the existence of the preceding “bad divisors".
Here, using a general machinery of Kähler (resp. absolutely regular) horizontal differential forms (sections \[hordiff\] and \[arhdiff\]) we obtain the following comparison statement :
[Proposition (\[arhcomp\])]{} Let $G$ be a reductive algebraic group, $X$ be a smooth affine $G$-scheme and $\pi : X\map{}Y$ the quotient. There is a sequence of inclusions : $$\bar\Omega_{Y}\subseteq\tilde\Omega_{Y}\subseteq\inv{G}{\Omega_{X,G}}
\subseteq\omega_{Y}$$ which are equalities on the smooth locus of $Y$.
This together with a theorem of Boutot ([@MR88a:14005]) leads to the following simple description of the dualizing sheaf :
[Corollary (\[arhdual\])]{} Let $G$ be a reductive algebraic group, $X$ be a smooth affine $G$-scheme with quotient map $\pi : X\map{}Y$ and let $n=\dim Y$. Then the dualizing sheaf on $Y$, $\omega^{n}_{Y}$, is isomorphic to $\inv{G}{\Omega^{n}_{X,G}}$.
Our Proposition \[arhcomp\] also leads to a more intrinsic version of (\[smcrit1\]) :
[Theorem (\[smcrit2\])]{} Let $Y$ be the quotient of a smooth affine variety by a reductive algebraic group and let $k$ be an integer with $1\leq k\leq\dim Y$. The fundamental class morphism $c_{Y}^{k}$ is surjective in codimension $k+1$ if and only if $Y$ is smooth in codimension $k+1$.
Note that this result applies in particular when $Y$ is a variety with toroidal singularities. Indeed, it is proved in [@MR95i:14046] that any toric variety can be realized as the good quotient of an open subset of an affine space $\Aff^{n}$ by a torus. In fact, for quotients by tori, we expect that a statement similar to (\[fgroup3\]) might hold.
A smoothness criterion much like (\[smcrit2\]) also holds when $Y$ is locally a complete intersection ([@MR42:255] or [@mythesis]; note by the way that quotient singularities which are complete intersections are “exceptional” and must be singular in codimension 2). Even more generally, one may conjecture that for a variety $Y$ with reasonable singularities (see [@MR90a:14021 5.22, p107] in appendix \[rardiff\]) [*$c_{Y}^{k}$ is surjective in codimension $k$ if and only if $Y$ is smooth in codimension $k$*]{} (the “$k+1$” in (\[smcrit2\]) is clearly a gift of the local quasi-homogeneous structure).
Finally, combined with results of H. Flenner ([@MR89j:14001], and van Straten-Steenbrink [@MR87j:32025] in the case of isolated singularities) proposition \[arhcomp\] implies that for $0\leq k<\codim{Y_{\sing}}-1$, we have $\tilde\Omega^{k}_{Y}\simeq\inv{G}{\Omega^{k}_{X,G}}\simeq\omega^{k}_{Y}$. However, the following question (as far as we know) remains open : [*Under the hypotheses of (\[arhcomp\]) do we have in general isomorphisms $\tilde\Omega_{Y}\simeq\inv{G}{\Omega_{X,G}}\simeq\omega_{Y}$ or at least $\inv{G}{\Omega_{X,G}}\simeq\omega_{Y}$?*]{}
#### Acknowledgements {#acknowledgements .unnumbered}
This work reproduces parts of my Ph.D. Thesis, worked out at the Institut de Mathématiques de Jussieu. Many thanks to my Ph.D. advisor, C. Peskine.
#### Notation and conventions {#notation-and-conventions .unnumbered}
We work over a fixed field $\field$ of characteristic 0 with algebraic closure $\bar\field$. All the schemes we consider are of finite type over $\field$. For such a scheme $X$, we denote by $\Omega_{X}$ the differential graded algebra $\oplus_{k\geq 0}\Omega^{k}_{X/\field}$ of Kähler differentials, and write $\Omega^{k}_{X}$ for $\Omega^{k}_{X/\field}$.
For $G$ an algebraic group and a $G$-scheme $X$, we denote by $G$-$\O_{X}$-mod the category of $G$-equivariant $\O_{X}$-modules.
An affine $\Gm$-scheme $X$ is said to be quasi-conical (this is ugly terminology, but we believe it is consistent with the algebraic definitions of homogeneous and quasi-homogeneous ideals) if $\O_{X}$ is generated by homogeneous sections of non-negative weights. We recall that $X$ is said to be conical when $\O_{X}$ is generated by homogeneous sections of weight 1.
By differential operator, we mean differential operator relative to $\field$ in the sense of [@MR39:220 16.8].
We denote by $\Gamma$ the decreasing filtration by codimension of the support : Let $c$ be an integer. For any $\O_{X}$-module $M$ and $U\subset X$ an open subset, $\Gamma_{c}M(U)$ is the subgroup of $M(U)$ consisting of the sections having support of codimension $\geq c$ in $X$. We write $\Gamma_{(c)}$ for $\Gamma_{c}/\Gamma_{c+1}$ and $\bar{M}$ for $\Gamma_{(0)}M$. In particular, when $X$ is integral, $\Gamma_{1}M$ is the submodule of torsion elements and $\bar{M}=\Gamma_{(0)}M$ is $M$ modulo torsion. We recall that this filtration is preserved by differential operators and in particular by $\O_{X}$-linear morphisms. These definitions extend to categories of complexes in the obvious way.
By a desingularisation of $X$, we always mean a desingularisation of $X_{\red}$. We take ([@MR99i:14020]) as a general reference for resolution of singularities, in particular for the existence of equivariant resolutions.
Horizontal differentials {#hordiff}
========================
Let $G$ be an algebraic group, $\lieg$ its Lie algebra considered as a $G$-module via the adjoint representation, and $X$ a $G$-scheme. We will also consider $G$ as a $G$-scheme by the action of $G$ on itself by inner automorphism. We have the following diagram of equivariant maps : $$\xymatrix{
G & G\times X \ar[l]^{p}\ar[d]^{q}\ar@<.5ex>[r]^{\mu} & X \ar@<.5ex>[l]^{s} \\
& X &
}$$ where $p$ and $q$ are the projections, $\mu$ is the action map and $s$ is the section of $\mu$ defined by $x\mapsto (e,x)$. This induces the following diagram of $G$-equivariant coherent modules on $G\times X$ : $$\xymatrix{
\mu^{*}\Omega^{1}_{X}\ar[r]^<<<{d\mu}\ar[rd] &
\Omega^{1}_{G\times X}=p^{*}\Omega^{1}_{G}\oplus
q^{*}\Omega^{1}_{X}\ar[d]\\
& p^{*}\Omega^{1}_{G}
}$$ Taking the pull-back by $s$ of the diagonal morphism above, we obtain a morphism $$\begin{aligned}
d\mu^{1}_{X,G} & : & \Omega^{1}_{X}\map{}s^{*}p^{*}\Omega^{1}_{G}=\lieg^{\vee}\otimes\O_{X}\end{aligned}$$ We then define a morphism $d\mu_{X,G} : \Omega_{X}\map{}\Omega_{X}\otimes\lieg^{\vee}$ as follows $$\begin{aligned}
d\mu^{k}_{X,G} & : &
\Omega^{k}_{X}\map{}\Omega^{k-1}_{X}\otimes\lieg^{\vee} \\
d\mu^{k}_{X,G}(df_{1}\wedge\ldots\wedge
df_{k}) &= & \sum_{i=1}^{k}(-1)^{k-i}df_{1}\wedge\ldots\wedge\widehat{df_{i}}\wedge\ldots\wedge
df_{k}\otimes d\mu^{1}_{X,G}(df_{i}),\end{aligned}$$
For an alternative and more rigorous construction of the morphisms above, using ‘multilinear homological algebra’, we refer to [@mythesis].
The $G$-equivariant module $\Omega^{k}_{X,G}=\Ker(d\mu^{k}_{X,G})$ is called the module of horizontal $k$-forms. We denote by $\Omega_{X,G}$ the graded algebra $\oplus_{k\geq 0}\Omega^{k}_{X,G}$.
The sections of $\Omega_{X,G}$ consist of those forms whose interior product with any vector field induced by the group action vanishes.
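For example (an illustration of ours, not taken from the text), let $G=\Gm$ act on $X=\Aff^{2}$ by $t\cdot(x,y)=(tx,ty)$. The vector field induced by the action is the Euler field $x\partial_{x}+y\partial_{y}$, and, in the basis of $\lieg^{\vee}$ dual to $\lambda\frac{\partial}{\partial\lambda}$, the morphism $d\mu^{1}_{X,G}$ is contraction with this field:

```latex
% Illustration (our example): G = \Gm acting on X = \Aff^2 by t.(x,y) = (tx,ty).
\[
  d\mu^{1}_{X,G}(f\,dx + g\,dy) \;=\; (xf + yg)\otimes
  \Bigl(\lambda\tfrac{\partial}{\partial\lambda}\Bigr)^{\!\vee}.
\]
% Hence f\,dx + g\,dy is horizontal if and only if xf + yg = 0 :
% for instance x\,dy - y\,dx lies in \Omega^1_{X,G}, while dx and dy do not.
```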
The preceding construction is natural in $X$. Thus, for any equivariant map $f
: X\map{}Y$ the cotangent morphism induces morphisms $f^{*}\Omega^{k}_{Y,G}\map{}\Omega^{k}_{X,G}$. It is also clear from the construction that if the action of $G$ is trivial then $d\mu^{1}_{X,G}=0$ and consequently we have $\Omega^{k}_{X,G}=\Omega^{k}_{X}$. From these remarks, we deduce :
\[invmorph\] Let $\pi : X\map{}Y$ be a $G$-invariant morphism, then the cotangent morphism $d\pi : \pi^{*}\Omega_{Y}\map{}\Omega_{X}$ factors through $\Omega_{X,G}\subset\Omega_{X}$.
\[torsionkernel\] This last proposition applies in particular when $\pi$ is a categorical quotient of $X$. Assume that $X$ is affine and that $G$ is a reductive linear group. Let $\pi : X\map{}Y$ be the quotient of $X$. By (\[invmorph\]) there is a morphism $\pi^{*}\Omega_{Y}\map{}\Omega_{X,G}$ and therefore a morphism $$\begin{aligned}
\inv{G}{d\pi} & : &\Omega_{Y}\map{}\inv{G}{\Omega_{X,G}}\end{aligned}$$ of coherent modules on $Y$. Under the additional assumption that $X$ is smooth, $\inv{G}{\Omega_{X,G}}$ is a torsion-free module and by ([@MR98k:14067 Theorem 1]) the morphism $\inv{G}{d\pi}$ is generically an isomorphism. Consequently, the kernel of $\inv{G}{d\pi}$ is exactly the torsion of $\Omega_{Y}$ and we have an inclusion : $\bar\Omega_{Y}\subseteq\inv{G}{\Omega_{X,G}}$.
We now give some elementary properties of this construction :
\[pback\] Let $f : X\map{}Y$ be an equivariant map of $G$-schemes. Assume that the adjoint morphism $\Omega_{Y}\map{}f_{*}\Omega_{X}$ is injective. Then the diagram : $$\xymatrix{
\Omega_{Y,G}\ar[r]\ar[d] & \Omega_{Y}\ar[d]\\
f_{*}\Omega_{X,G}\ar[r] & f_{*}\Omega_{X}
}$$ is a fiber product diagram where all the morphisms are injective.
In other words, under the assumption, a differential form is horizontal if and only if its pull-back is.
[\[pback\]]{} The statement is an easy consequence of the commutative diagram $$\xymatrix@+1.3em{
0\ar[r] & \Omega_{Y,G}\ar[r]\ar[d] & \Omega_{Y}\ar[r]^{d\mu_{Y,G}}\ar[d] &
\Omega_{Y}\otimes\lieg^{\vee}\ar[d] \\
0\ar[r] & f_{*}\Omega_{X,G}\ar[r] & f_{*}\Omega_{X}\ar[r]^{f_{*}d\mu_{X,G}} &
f_{*}\Omega_{X}\otimes\lieg^{\vee}
}$$ where the two vertical morphisms on the left are injective by assumption.
\[fibration\] Let $G$ be an algebraic group and $f :
X\map{}Y$ be a principal $G$-fibration. Then the natural morphism $df : f^{*}\Omega_{Y}\map{}\Omega_{X,G}$ is an isomorphism.
One is reduced to proving the statement in the case of a trivial $G$-fibration, where it is obvious.
\[twogroups\] Let $G$ and $H$ be algebraic groups acting on a scheme $X$. The natural commutative diagram $$\xymatrix @+1.3em{
0\ar[r] & \Omega_{X,G}\otimes\lieh^{\vee}\ar[r] &
\Omega_{X}\otimes\lieh^{\vee}\ar[r]^{d\mu_{X,G}\otimes\lieh^{\vee}} &
\Omega_{X}\otimes\lieg^{\vee}\otimes\lieh^{\vee} \\
0\ar[r] & \Omega_{X,G}\ar[r]\ar[u] &
\Omega_{X}\ar[r]^{d\mu_{X,G}}\ar[u]^{d\mu_{X,H}} &
\Omega_{X}\otimes\lieg^{\vee}\ar[u]^{-d\mu_{X,H}\otimes\lieg^{\vee}} \\
0\ar[r] & \Omega_{X,G}\cap\Omega_{X,H}\ar[r]\ar[u] &
\Omega_{X,H}\ar[r]\ar[u] &
\Omega_{X,H}\otimes\lieg^{\vee}\ar[u] \\
& 0\ar[u] & 0\ar[u] & 0\ar[u]
}$$ has exact rows and columns. Moreover, it induces an exact sequence : $$\xymatrix @+1.3em{
0\ar[r] & \Omega_{X,G}\cap\Omega_{X,H}\ar[r] & \Omega_{X}\ar[r] & \Omega_{X}\otimes(\lieg^{\vee}\oplus\lieh^{\vee}).
}$$
Observe that we did not assume that the actions of $G$ and $H$ on $X$ commute; therefore this diagram is only separately $G$- and $H$-equivariant, but, in general, not $G\times H$-equivariant.
The Euler derivation {#euler}
====================
We go on using the notations of section \[hordiff\]. Let $T=\Gm=\Spec(\field[\lambda,\lambda^{-1}])$ be a one-dimensional torus with Lie algebra $\liet$ and $X$ an affine $T$-scheme. We recall that since $T$ is abelian, the adjoint representation is trivial, i.e. $\liet$ is a trivial $T$-module. We fix once for all an isomorphism $\field\simeq\liet$ via the left-invariant derivation $\lambda\frac{\partial}{\partial\lambda}$. Composing the dual of this last isomorphism with $d\mu^{1}_{X,T}$ we obtain a derivation on $X$ : $$\begin{aligned}
\e_{X,T} & : & \Omega^{1}_{X}\map{}\O_{X}\end{aligned}$$ called the Euler derivation. Since $X$ is affine, we have $X=\Spec(A)$ with $A$ a graded ring. The grading of $A$ corresponds to the weight for the $T$-action : A section $f$ of $\O_{X}$ is said to be homogeneous of weight $w$ if $\mu^{*}f=\lambda^{w}q^{*}f$. If $f$ is homogeneous of weight $w$, we set $|f|=w$.
\[euler1\] Let $f$ be a homogeneous section of $\O_{X}$. Then : $$\e(df)=|f|f.$$
[\[euler1\]]{} Let $w=|f|$. We have : $$\begin{aligned}
\e(df) & = & \lambda\frac{\partial}{\partial\lambda}d\mu^{1}_{X,T}(df)\\
& = &
\lambda\frac{\partial}{\partial\lambda}s^{*}(w\lambda^{w-1}f.d\lambda)\\
& = &
\lambda\frac{\partial}{\partial\lambda}s^{*}(w\lambda^{w}f.\frac{d\lambda}{\lambda})\\
& = & \lambda\frac{\partial}{\partial\lambda}(wf.\frac{d\lambda}{\lambda})\\
& = & wf\end{aligned}$$ as expected.
\[eulerdef\] The Euler derivation constructed above can be extended to a degree $-1$ endomorphism of the graded module $\Omega_{X}$ by setting : $$\begin{aligned}
\e & : & \Omega^{k}_{X}\map{}\Omega^{k-1}_{X} \\
\e(df_{1}\wedge\ldots\wedge df_{k}) & = &
\sum_{i=1}^{k}(-1)^{k-i}\e(df_{i})df_{1}\wedge\ldots\wedge\widehat{df_{i}}\wedge\ldots\wedge df_{k}. \end{aligned}$$ It satisfies the following two properties :
\(i\) $\e^{2}=0$.

\(ii\) For any two forms $\alpha,\beta$ of respective degrees $k$ and $l$, we have $$\e(\alpha\wedge\beta)=(-1)^{l}\e(\alpha)\wedge\beta+\alpha\wedge\e(\beta).$$
[\[eulerdef\]]{} By direct computation.
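As a quick illustration of property (i) (our own check, not in the original), take $X=\Aff^{2}$ with the standard weight-one action of $T$, so that $\e(dx)=x$ and $\e(dy)=y$ by (\[euler1\]):

```latex
% Verification of \e^2 = 0 on a 2-form, for X = \Aff^2 with |x| = |y| = 1,
% using that \e : \Omega^1_X \to \O_X is \O_X-linear:
\begin{align*}
  \e(dx\wedge dy) &= (-1)^{2-1}\,\e(dx)\,dy + (-1)^{2-2}\,\e(dy)\,dx
                   \;=\; y\,dx - x\,dy,\\
  \e^{2}(dx\wedge dy) &= \e(y\,dx - x\,dy) \;=\; yx - xy \;=\; 0.
\end{align*}
```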
We thus have constructed a complex that we will denote by $(\Omega_{X},\e)$.
The exterior differential algebra $(\Omega_{X},d)$ is also graded by weight : A section $\alpha$ of $(\Omega_{X},d)$ is homogeneous of weight $w$ if $\mu^{*}\alpha=\lambda^{w}q^{*}\alpha$. The following properties are then easy to check :
\[euler2\] Let $\alpha$ and $\beta$ be homogeneous sections of $\Omega_{X}$.
\(i\) The forms $d\alpha$ and $\e(\alpha)$ are homogeneous and $|d\alpha|=|\e(\alpha)|=|\alpha|$.

\(ii\) The form $\alpha\wedge\beta$ is homogeneous and $|\alpha\wedge\beta|=|\alpha|+|\beta|$.

\(iii\) The algebra $\Omega_{X}$ is generated by the differentials of homogeneous sections of $\O_{X}$.

\(iv\) $\Ker(\e)=\Omega_{X,T}$.
\[bracket\] For any homogeneous $k$-forms $\alpha$, we have : $$[\e,d]\alpha=(-1)^{k}|\alpha|\alpha.$$
[\[bracket\]]{} This is a direct computation again.
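To see how the sign $(-1)^{k}$ arises, one may run a small example (our check, again on $\Aff^{2}$ with $|x|=|y|=1$, reading $[\e,d]=\e d-d\e$): take $\alpha=x\,dy$, so $k=1$ and $|\alpha|=|x|+|dy|=2$.

```latex
% Verification of the bracket formula on \alpha = x\,dy (k = 1, |\alpha| = 2):
\begin{align*}
  \e d\alpha &= \e(dx\wedge dy) = y\,dx - x\,dy,\\
  d\e\alpha  &= d(xy) = y\,dx + x\,dy,\\
  [\e,d]\alpha &= \e d\alpha - d\e\alpha = -2\,x\,dy
               = (-1)^{1}\,|\alpha|\,\alpha.
\end{align*}
```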
Let $c\geq 0$. The operators $\e$ and $d$ preserve the filtration by codimension of the support and therefore they induce operators on $\Gamma_{c}\Omega_{X}$ and $\Gamma_{(c)}\Omega_{X}$ that we again denote by $\e$ and $d$. Moreover, since $\Gamma_{c}\Omega_{X}$ and $\Gamma_{(c)}\Omega_{X}$ are also $T$-equivariant, the statement above remains true for these modules.
\[euler3\] The submodule $\inv{T}{\Omega_{X,T}}\subseteq
\inv{T}{\Omega_{X}}$ is stable by the exterior derivative of $\Omega_{X}$.
[\[euler3\]]{} Keeping in mind that $T$-invariants are precisely homogeneous sections of null weight, the result is a direct consequence of (\[bracket\]) and (\[euler2\](iv)).
Horizontal differentials : Poincaré lemmas
==========================================
\[dstable\] Let $G$ be a reductive algebraic group and $X$ an affine $G$-scheme. Then the submodule $\inv{G}{\Omega_{X,G}}\subset
\inv{G}{\Omega_{X}}$ is stable by the exterior derivative of $\Omega_{X}$.
This statement holds more generally for $G$ a linear algebraic group, but its proof would require an algebraic construction of the Lie derivative that we have not developed here. The proof would run as follows : For $v\in\lieg$, denote by $\lie_{v}$ the Lie derivative and by $<v,\cdot>$ the interior product. Then, for any section $\alpha$ of $\Omega_{X}$ we have the relation : $$\lie_{v}\alpha=d<v,\alpha>+<v,d\alpha>.$$ The statement therefore follows from the observation that $\lie_{v}$ vanishes on $\inv{G}{\Omega_{X}}$.
[\[dstable\]]{} We recall that $\inv{G}{\Omega_{X}}$ is obviously stable by exterior differentiation. Since $G$ is reductive, one can find one-dimensional subtori $T_{1},\ldots,T_{d}$ of $G$ such that $\lieg=\liet_{1}\oplus\ldots\oplus\liet_{d}$. Then, by (\[twogroups\]), we have :
$$\begin{aligned}
\Omega_{X,G} & = &
\Omega_{X,T_{1}}\cap\ldots\cap\Omega_{X,T_{d}}.\end{aligned}$$
And therefore
$$\begin{aligned}
\inv{G}{\Omega_{X,G}} & = &
\inv{G}{\Omega_{X}}\cap\Omega_{X,T_{1}}\cap\ldots\cap\Omega_{X,T_{d}}\\
& = & \inv{G}{\Omega_{X}}\cap\inv{T_{1}}{\Omega_{X,T_{1}}}\cap\ldots\cap\inv{T_{d}}{\Omega_{X,T_{d}}}.\end{aligned}$$
By (\[euler3\]), all the terms in the intersection above are stable by $d$, so we can conclude that $\inv{G}{\Omega_{X,G}}$ is stable by $d$ too.
\[estable\] Let $G$ be a reductive algebraic group and let $X$ be an affine $G\times T$-scheme. Then $\Omega_{X,G}$ is stable by $\e=\e_{X,T}$. We write $(\Omega_{X,G},\e)$ for this subcomplex of $(\Omega_{X},\e)$.
[\[estable\]]{} By a direct calculation, using the explicit definitions of $d\mu_{X,G}$ and $\e$.
Therefore, if $c\geq 0$ is an integer, $\Gamma_{c}\Omega_{X,G}$ is also stable by $\e$ and therefore there is an induced endomorphism on $\Gamma_{(c)}\Omega_{X,G}$.
\[bracket2\] Let $G$ be a reductive algebraic group and let $X$ be an affine $G\times T$-scheme. Let $\alpha$ be a homogeneous section (with respect to the $T$-action) of $\inv{G}{\Omega^{k}_{X,G}}$. Then $$[\e,d]\alpha=(-1)^{k}|\alpha|\alpha.$$
Clearly, we again have a similar statement for $\Gamma_{c}\inv{G}{\Omega_{X,G}}$, $\Gamma_{(c)}\inv{G}{\Omega_{X,G}}$, $\inv{G}{\Gamma_{c}\Omega_{X,G}}$ or $\inv{G}{\Gamma_{(c)}\Omega_{X,G}}$.
\[euler5\] Let $G$ be a reductive algebraic group and let $X$ be an affine $G\times T$-scheme. Then $$\H{}{\inv{G}{\Omega_{X,G}},\e}=\H{}{\inv{G}{\Omega_{X,G}},\e}^{T}.$$ Let $c\geq 0$. Then the same relation holds for $\Gamma_{c}\inv{G}{\Omega_{X,G}}$, $\Gamma_{(c)}\inv{G}{\Omega_{X,G}}$, $\inv{G}{\Gamma_{c}\Omega_{X,G}}$ and $\inv{G}{\Gamma_{(c)}\Omega_{X,G}}$.
[\[euler5\]]{} Let $\alpha$ be a homogeneous section of $\inv{G}{\Omega^{k}_{X,G}}\cap \Ker(\e)$. Then by (\[bracket2\]) we have $\e d\alpha=(-1)^{k}|\alpha|\alpha$. Therefore if $|\alpha|\neq 0$ the class of $\alpha$ in $\H{k}{\inv{G}{\Omega_{X,G}},\e}$ vanishes. Since $\H{}{\inv{G}{\Omega_{X,G}},\e}^{T}$ is a direct factor of $\H{}{\inv{G}{\Omega_{X,G}},\e}$, the equality is proved.
\[diffpur\] Let $X$ be a quasi-conical affine $T$-scheme. Then the pull-back morphism for the quotient map $X\map{}X\quot T$ induces isomorphisms : $$\Omega_{X\quot T}\map{\sim}\inv{T}{\Omega_{X,T}}
\map{\sim}\inv{T}{\Omega_{X}}\subset\Omega_{X}.$$
[\[diffpur\]]{} Easy, by arguments on weights.
\[euler6\] Let $G$ be a reductive algebraic group and let $X$ be an affine $G\times T$-scheme, quasi-conical with respect to the $T$-action. Then the natural morphism $$\Omega_{X\quot T,G}\map{}\inv{T}{\Omega_{X,G}}$$ induced by the $G$-equivariant map $X\map{}X\quot T$, is an isomorphism.
[\[euler6\]]{} By (\[diffpur\]) the hypotheses of (\[pback\]) are satisfied for the map $X\map{}X\quot T$. Taking $T$-invariants in the diagram of (\[pback\]) together with the isomorphism $\Omega_{X\quot T}\map{\sim}\inv{T}{\Omega_{X}}$ gives the result.
\[euler8\] Let $G$ be a reductive algebraic group and let $X$ be an affine $G\times T$-scheme, quasi-conical with respect to the $T$-action. Let $d\geq c\geq 0$. There are isomorphisms of exact sequences $$\xymatrix{
0\ar[r] & \inv{T}{\Gamma_{d}\Omega_{X,G}}\ar[r]\ar@{=}[d] &
\inv{T}{\Gamma_{c}\Omega_{X,G}}\ar[r]\ar@{=}[d] &
\inv{T}{\Gamma_{c}/\Gamma_{d}\,\Omega_{X,G}}\ar[r]\ar@{=}[d] & 0\\
0\ar[r] & \H{}{\inv{T}{\Gamma_{d}\Omega_{X,G}},\e}\ar[r] &
\H{}{\inv{T}{\Gamma_{c}\Omega_{X,G}},\e}\ar[r] &
\H{}{\inv{T}{\Gamma_{c}/\Gamma_{d}\,\Omega_{X,G}},\e}\ar[r] & 0\\
0\ar[r] & \inv{G\times T}{\Gamma_{d}\Omega_{X,G}}\ar[r]\ar@{=}[d] &
\inv{G\times T}{\Gamma_{c}\Omega_{X,G}}\ar[r]\ar@{=}[d] &
\inv{G\times T}{\Gamma_{c}/\Gamma_{d}\,\Omega_{X,G}} \ar[r]\ar@{=}[d] & 0\\
0\ar[r] & \H{}{\inv{G}{\Gamma_{d}\Omega_{X,G}},\e}\ar[r] &
\H{}{\inv{G}{\Gamma_{c}\Omega_{X,G}},\e}\ar[r] &
\H{}{\inv{G}{\Gamma_{c}/\Gamma_{d}\,\Omega_{X,G}},\e}\ar[r] & 0
}$$
[\[euler8\]]{} By (\[euler6\]) we have $\inv{T}{\Gamma_{c}\Omega_{X,G}}\subset\Omega_{X\quot T}$. Therefore $\e$ vanishes for all the complexes involved in the first isomorphism and this proves the first statement. For the second one, take $G$-invariants in the first diagram and use (\[euler5\]).
One might understand the next two statements as a natural generalisation, with $\e$ and $d$ exchanged, of the Poincaré Lemma to singular varieties with reductive group action :
\[euler9\] Let $G$ be a reductive algebraic group and let $X$ be an affine $G\times T$-scheme, quasi-conical with respect to the $T$-action. Then the $G$-equivariant map $X\map{}X\quot T$ induces an isomorphism $$\inv{G}{\Omega_{X\quot T,G}}\map{\sim}\H{}{\inv{G}{\Omega_{X,G}},\e}.$$
\[euler10\] Let $G$ be a reductive algebraic group and let $X$ be an affine $G\times T$-scheme, quasi-conical with respect to the $T$-action and such that $X\quot T=\Spec(\field)$. Then $$\H{}{\inv{G}{\Omega_{X,G}},\e}=\H{}{\inv{G}{\bar\Omega_{X,G}},\e}=\field.$$
In particular, in the case of a trivial action of $G$ on a variety and under the preceding hypotheses we have exact complexes $$\xymatrix{
\ldots\ar[r] &\Omega_{X}^{n}\ar[r] &\ldots\ar[r] &\Omega_{X}^{1}\ar[r]
&\O_{X}\ar[r] & \field\ar[r] & 0 \\
0\ar[r] &\bar{\Omega}_{X}^{n}\ar[r] &\ldots\ar[r] &\bar{\Omega}_{X}^{1}\ar[r]
&\O_{X}\ar[r] & \field\ar[r] & 0
}$$
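For instance (our illustration), for the trivial group acting on $X=\Aff^{n}$, conical via the standard scaling action, $\e$ is contraction with the Euler field $\sum_{i}x_{i}\partial_{i}$ and the first complex above is the Koszul complex of the regular sequence $(x_{1},\ldots,x_{n})$:

```latex
% Koszul complex of (x_1,...,x_n) realized as (\Omega_{\Aff^n},\e):
\[
  0\map{}\Omega^{n}_{\Aff^{n}}\map{\e}\Omega^{n-1}_{\Aff^{n}}
  \map{\e}\cdots\map{\e}\Omega^{1}_{\Aff^{n}}\map{\e}\O_{\Aff^{n}}
  \map{}\field\map{}0.
\]
```

Its exactness, classical since $(x_{1},\ldots,x_{n})$ is a regular sequence, agrees with (\[euler10\]) because $\Aff^{n}\quot T=\Spec(\field)$.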
Absolutely regular horizontal differentials {#arhdiff}
===========================================
In this section, we merge the construction of horizontal differentials and the content of appendix \[aregdiff\].
Let $X$ be a $G$-scheme and $f : \tilde X\map{} X$ a $G$-equivariant desingularisation. We denote by $\tilde\Omega_{X,G}$ the sheaf $f_{*}\Omega_{\tilde X,G}$. This definition is independent of the choice of $f$, as in the non-equivariant case, since two equivariant resolutions of singularities can be covered by a third one.
By construction, we have natural equivariant morphisms $$\Omega_{X,G}\map{}\tilde\Omega_{X,G}\map{}i_{*}\Omega_{X_{\smooth},G}$$ where $i$ is the inclusion $X_{\smooth}\subset X$. Therefore, when $X$ is reduced, we have : $$\Omega_{X,G}\map{}\bar\Omega_{X,G}\subset\tilde\Omega_{X,G}\subset i_{*}\Omega_{X_{\smooth},G}.$$
\[arhdiff2\] Let $f : X\map{}Y$ be an equivariant dominant morphism. Then we have a commutative diagram $$\xymatrix{
\Omega_{X,G}\ar[r] & \tilde\Omega_{X,G} \\
f^{*}\Omega_{Y,G}\ar[r]\ar[u] & f^{*}\tilde\Omega_{Y,G}\ar[u]
}$$
\[arhdiff3\] Let $f : X\map{}Y$ be an invariant dominant morphism. Then we have a commutative diagram $$\xymatrix{
\Omega_{X,G}\ar[r] & \tilde\Omega_{X,G} \\
f^{*}\Omega_{Y}\ar[r]\ar[u] & f^{*}\tilde\Omega_{Y}\ar[u]
}$$
\[arhdiff1\] Let $f : X\map{}Y$ be a proper equivariant birational morphism. Then the morphism $\tilde\Omega_{Y,G}\map{}f_{*}\tilde\Omega_{X,G}$ is an isomorphism.
With this at hand, we can give a partial answer to the question raised by M. Brion ([@MR98k:14067 after Theorem 2]) :
\[arhcomp\] Let $G$ be a reductive algebraic group, $X$ be a smooth affine $G$-scheme and $\pi : X\map{}Y$ the quotient. There is a sequence of inclusions : $$\bar\Omega_{Y}\subseteq\tilde\Omega_{Y}\subseteq\inv{G}{\Omega_{X,G}}
\subseteq\omega_{Y}$$ which are equalities on the smooth locus of $Y$.
\[arhdual\] Let $G$ be a reductive algebraic group, $X$ be a smooth affine $G$-scheme with quotient map $\pi : X\map{}Y$ and let $n=\dim Y$. Then the dualizing sheaf on $Y$, $\omega^{n}_{Y}$, is isomorphic to $\inv{G}{\Omega^{n}_{X,G}}$.
[\[arhcomp\]]{} Since $\Omega_{X,G}=\tilde\Omega_{X,G}$, by (\[arhdiff3\]) we have inclusions $\bar\Omega_{Y}\subseteq\tilde\Omega_{Y}\subseteq\inv{G}{\Omega_{X,G}}$ of torsion-free modules. Moreover, by the theorem of Brion ([@MR98k:14067 Theorem 1]), these are isomorphisms outside the closed subset $Y_{\sing}$, therefore outside a closed subset of codimension $\geq
2$. Thus the modules involved have isomorphic biduals and we obtain : $$\bar\Omega_{Y}\subseteq\tilde\Omega_{Y}\subseteq\inv{G}{\Omega_{X,G}}
\subseteq\dd{\Omega_{Y}}=\omega_{Y}.$$
[\[arhdual\]]{} It is then a direct consequence of the fact that $Y$ has rational singularities ([@MR88a:14005]). Indeed, this implies that $\tilde\Omega^{n}_{Y}\map{\sim}\omega^{n}_{Y}$.
\[arhdiff4\] If one assumes that all the points of $X$ are strongly stable for the action of $G$, i.e., that for all closed points $x\in X$, the orbit $Gx$ is closed and the stabilizer $G_{x}$ is finite, then there are isomorphisms $$\tilde{\Omega}_{Y}\map{\sim}\inv{G}{\Omega_{X,G}}
\map{\sim}\omega_{Y}.$$
To prove this, one can assume that the group $G$ is already finite (use the Etale Slice Theorem as in the last reduction step in (\[ihdiffsmth\]) below). With this assumption made, it is easily seen that $\Omega_{X,G}=\Omega_{X}$ (here $\lieg=(0)$) and that consequently $\inv{G}{\Omega_{X}}=\omega_{Y}$. It therefore remains to see that $\tilde{\Omega}_{Y}=\inv{G}{\Omega_{X}}$. This can be done as follows.
We have a commutative diagram $$\xymatrix{
\tilde X \ar[r]^{\tilde\pi}\ar[d]^{g} & \tilde Y\ar[d]^{f} \\
X\ar[r]^{\pi} & Y}$$ where $f$ is a resolution of singularities for $Y$ and $\tilde X$ is the normalization of the component birational to $X$ in $X\times_{Y}\tilde
Y$. The group $G$ acts naturally on $\tilde X$ and the map $\tilde\pi$ is the quotient morphism. We thus have a morphism $$\Omega_{\tilde Y}\map{}\inv{G}{\tilde\pi_{*}\tilde{\Omega}_{\tilde X}}$$ induced by $\tilde\pi$. Since $\tilde X$ is normal it is an isomorphism in codimension 1 and since $\Omega_{\tilde Y}$ is locally free it is in fact an isomorphism (recalling that $\tilde\Omega_{\tilde X}$ is torsion-free). Consequently, we have $$\tilde\Omega_{Y} = f_{*}\Omega_{\tilde Y}
= f_{*}\inv{G}{\tilde\pi_{*}\tilde{\Omega}_{\tilde X}}
= \inv{G}{\pi_{*}g_{*}\tilde{\Omega}_{\tilde X}}
= \inv{G}{\pi_{*}\Omega_{X}}.$$ This proves our claim.
Invariant horizontal differentials and smoothness {#ihdiffsmth}
=================================================
In this section we give proofs for the results stated in the introduction :
\[smcrit1\] Let $G$ be a reductive algebraic group acting on a smooth affine variety $X$, with quotient map $\pi :X\map{} Y$ and let $k$ be an integer with $1\leq k\leq\dim Y$. The morphism $\inv{G}{d\pi^{k}}$ is surjective in codimension $k+1$ if and only if $Y$ is smooth in codimension $k+1$.
\[smcrit2\] Let $Y$ be the quotient of a smooth affine variety by a reductive algebraic group and let $k$ be an integer with $1\leq k\leq\dim Y$. The fundamental class morphism $c_{Y}^{k}$ is surjective in codimension $k+1$ if and only if $Y$ is smooth in codimension $k+1$.
[\[smcrit1\]]{} After deleting a closed subset of codimension $>k+1$ we may assume that the morphism $\inv{G}{d\pi} : \Omega_{Y}\map{}\inv{G}{\Omega_{X,G}}$ is surjective in degree $k$, i.e. that we have a surjection $\Omega^{k}_{Y}\map{}\inv{G}{\Omega^{k}_{X,G}}$ and we want to prove that under this hypothesis the singular locus of $Y$ has codimension $>k+1$.
The proof now divides into five steps.
### Etale slices {#etale-slices .unnumbered}
Quite generally, let $H\map{}G$ be a map of reductive algebraic groups and $W$ an affine $H$-scheme together with an $H$-equivariant map $ j : W\map{} X$. We let $G\times H$ act on $G\times W$ in the following way : $(g,h)(g',w)=(gg'h^{-1},hw)$ and denote by $f : G\times W\map{}
G\times_{H}W$ the quotient by $1\times H$. Observe that since $1\times
H$ acts freely on $G\times W$, the map $f$ is a principal fibration and therefore is smooth. We obtain a commutative diagram of $G\times H$-schemes :
$$\begin{aligned}
\label{slicediag1}\xymatrix @+1.3em{
G\times W\ar[d]^{}\ar[r]^{f} &
G\times_{H}W\ar[d]^{}\ar[r]^(.6){\bar{\mu}(G\times_{H} j)} &
X\ar[d]^{\pi}\\
W\ar[r]^{} & W\quot H\ar[r]^{} & X\quot G
}\end{aligned}$$
where the vertical maps are quotients by $G$, the horizontal maps in the left-square are quotients by $1\times H$ and $\bar{\mu}$ is the factorization of the $1\times H$-invariant map $\mu$ ($1\times H$ acts trivially on $X$).
For $y\in Y$ a closed point, we denote by $T_{y}\subset X_{y}$ the unique closed orbit over $y$. Let $x\in T_{y}$ be a closed point with (necessarily) reductive stabilizer $H=G_{x}$. The Etale Slice theorem of Luna ([@MR49:7269 pp 96–99]), asserts the following : There exists a smooth locally closed, $H$-stable subvariety $W$ of $X$ such that $x\in W$, $G.W$ is an open set and such that in the natural commutative diagram (\[slicediag1\]) the right-square is cartesian with etale horizontal maps (i.e. an etale base change diagram). Moreover, letting $N=N_{T_{y}/X}(x)$ be the normal space at $x$ of the orbit $T_{y}$, understood geometrically as a rational representation of $H$, there is a natural map of $H$-schemes $\rho : W\map{}N$, etale at $0$, which induces a commutative diagram :
$$\begin{aligned}
\label{slicediag2}\xymatrix @+1.3em{
G\times_{H} N\ar[d]^{\phi} &
G\times_{H}W\ar[d]^{}\ar[r]^(.6){\bar{\mu}(G\times_{H}
j)}\ar[l]_{G\times_{H}\rho} &
X\ar[d]^{\pi}\\
N\quot H & W\quot H\ar[r]^{}\ar[l] & X\quot G
}\end{aligned}$$
where the two squares are cartesian and the horizontal maps are etale neighbourhoods.
### Stratification by slice type {#stratification-by-slice-type .unnumbered}
We again refer to ([@MR49:7269 pp 100–102]). Let $H\subseteq G$ be a reductive subgroup and $N$ an $H$-module. We have a commutative diagram : $$\begin{aligned}
\label{slicediag4}
\xymatrix @+1.3em{
G\times N\ar[r]^{f}\ar[rd] & G\times_{H} N\ar[d] \\
& G/H
}\end{aligned}$$ which realizes $G\times_{H}N$ as the total space of a $G$-equivariant vector bundle over the affine homogeneous space $G/H$ with fiber at $1$ equals to $N$. Conversely let $N$ be a $G$-equivariant vector bundle over an affine $G$-homogeneous base $T$. Let $t\in T$ be a closed point then $N(t)$ is a $G_{t}$-module and $G_{t}$ is reductive. Thus we have an equivalence between the set $\{(H,N)\}$ up to conjugacy and the isomorphism classes of $G$-equivariant vector bundles over affine homogeneous bases. We denote by ${\mathcal M}(G)$ any of those sets and classes by brackets $[\,]$.
By the preceding, we thus have a map $\mu : Y(\bar\field)\map{} {\mathcal M}(G)$ which sends $y$ to the isomorphism class $[N_{T_{y}/X}\map{} T_{y}]$ or equivalently to the “conjugacy class” $[H, N]$ with the notations of the preceding section. Let $\nu\in{\mathcal M}(G)$; then the set $\mu^{-1}(\nu)$ is a locally closed subset of $Y$, smooth with its reduced scheme structure. We will denote by $Y_{\nu}$ this smooth locally closed subscheme of $Y$. Moreover the collection $\{Y_{\nu}\}_{\nu\in{\mathcal M}(G)}$ is a finite stratification of $Y$ (in particular $\mu$ has finite image). Therefore, the map $\mu$ can be extended to all the points of $Y$ : Let $Z\subset Y$ be an irreducible closed subset; then there exists a unique $\nu\in{\mathcal M}(G)$ such that $Z\cap Y_{\nu}$ is dense in $Z$ and one can set $\mu(Z)=\nu$. Observe that $\mu(Z)$ is the slice type of a general point of $Z$.
Another important fact about $\mu$ is that it is compatible with strongly etale (also called excellent) morphisms : Given such a map $\varphi$ between smooth affine $G$-schemes, we have $\mu\circ(\varphi\quot G)=\mu$.
We now take a closer look at $G$-schemes of the kind $G\times_{H}N$ and their quotients by $G$. Write $N_{H}$ for the canonical complementary submodule to $N^{H}$ in $N$ : $N=N^{H}\times N_{H}$. Then in the construction of $G\times_{H}N$, $N^{H}$ is a trivial $H$-module and therefore the diagram obtained when $W$ is replaced by $N$ in the left square of (\[slicediag1\]) reads : $$\begin{aligned}
\label{slicediag3}\xymatrix @+1.3em{
N^{H}\times(G\times N_{H}) \ar[d]^{p}\ar[r]^{f} &
N^{H}\times(G\times_{H}N_{H})\ar[d]^{\phi} \\
N^{H}\times N_{H}\ar[r]^{\psi} & N^{H}\times(N_{H}\quot H)
}\end{aligned}$$ Let $\nu\in{\mathcal M}(G)$ be the class of $(H,N)$, then $((G\times_{H}N)\quot G)_{\nu}=N^{H}\times 0\subseteq N\quot H$. One can convince oneself of this fact through the description of $G\times_{H}N$ as an equivariant vector bundle over $G/H$.
### Reduction to an isolated singularity {#reduction-to-an-isolated-singularity .unnumbered}
First, it is harmless to assume that the singular locus $Y_{\sing}$ of $Y$ is irreducible. Let $\mu(Y_{\sing})=\nu=[H,N]$ and let $y\in Y_{\sing}\cap Y_{\nu}$ be a general closed point. By standard etale base change arguments in the diagram (\[slicediag2\]), our hypothesis and our conclusion hold for $\pi$ at $y$ if and only if they respectively hold for $\phi$ at $0$. We can therefore assume that $X=G\times_{H}N$, $\pi=\phi$ and $Y=N\quot H$.
Now, with the notations of (\[slicediag3\]), it is clear that $Y_{\sing}=N^{H}\times(N_{H}\quot H)_{\sing}$. On the other hand $Y_{\nu}=N^{H}\times 0$ and, since $\mu(Y_{\sing})=\nu$, the subset $Y_{\nu}$ must cut out a dense open subset of $Y_{\sing}$. Consequently, we must have $Y_{\nu}=Y_{\sing}$ and thus $(N_{H}\quot H)_{\sing}=0$.
Let $\pi_{H} : X_{H}=G\times_{H}N_{H}\map{}Y_{H}=N_{H}\quot H$ be the quotient map by $G$; then clearly $\pi=N^{H}\times \pi_{H}$. Let $k$ be an integer; then the map $\inv{G}{d\pi}$ is diagonal with respect to the decompositions :
$$\begin{aligned}
\inv{G}{\Omega^{k}_{X,G}} & = &
\bigoplus_{i=0}^{k}\Omega^{i}_{N^{H}}\etimes\inv{G}{\Omega^{k-i}_{X_{H},G}}\\
\Omega^{k}_{Y} & = &
\bigoplus_{i=0}^{k}\Omega^{i}_{N^{H}}\etimes\,\Omega^{k-i}_{Y_{H}}\end{aligned}$$
Therefore $\inv{G}{d\pi}$ is surjective in degree $k$ if and only if $\inv{G}{d\pi_{H}}$ is surjective in all degrees $k-\dim{N^{H}},\dots, k$.
To conclude, we can therefore make the extra assumption that $Y=X\quot G=N\quot H$ has only an isolated singularity at $0$. Note also that the theorem in fact remains to be proved only when $k=\dim Y-1$ or $k=\dim Y$, since otherwise ($k<\dim Y-1$) the statement is obviously true.
### Reduction to the case of a representation {#reduction-to-the-case-of-a-representation .unnumbered}
We keep in mind all the identifications and assumptions made previously. Recalling diagram (\[slicediag3\]) and applying lemmas (\[fibration\]) and (\[twogroups\]) to the fibration $f$, we have an exact sequence
$$\begin{aligned}
\xymatrix @+1.3em{
0\ar[r] & f^{*}\Omega_{G\times_{H} N, G}\ar[r] & \Omega_{G\times N,
G}\ar[r] & \Omega_{G\times N, G}\otimes\lieh^{\vee}.
}\end{aligned}$$
Taking $G$-invariants together with lemma (\[fibration\]) for $p$ leads to the exact sequence :
$$\begin{aligned}
\xymatrix @+1.3em{
0\ar[r] & \inv{G}{f^{*}\Omega_{G\times_{H} N, G}}\ar[r] &
\Omega_{N}\ar[r] & \Omega_{N}\otimes\lieh^{\vee}
}\end{aligned}$$
Therefore, we have proved that $\inv{G}{f^{*}\Omega_{G\times_{H} N, G}}=\Omega_{N, H}$. Taking $H$-invariants, we obtain $$\inv{H}{\Omega_{N, H}}=\inv{G\times H}{f^{*}\Omega_{G\times_{H} N, G}}=
\inv{G}{\Omega_{G\times_{H} N, G}}.$$ One can then conclude that the hypothesis and the conclusion of the theorem hold for $\phi$ if and only if they respectively hold for $\psi$. Thus we are reduced to proving the theorem in the case where $X$ is a rational representation of $G$ with $X\quot G$ having only an isolated singularity at the origin.
### Conclusion {#conclusion .unnumbered}
Carrying on, $X$ is now a rational $G$-module with quotient $\pi :
X\map{}Y$, such that $Y$ has only an isolated singularity at the origin. We recall the hypothesis in the theorem : The morphism $\inv{G}{d\pi}$ is surjective in degree $k\leq\dim Y$. We must prove that $Y$ is smooth in codimension $k+1$. Thus we have to prove that if $k=\dim Y$ or $\dim Y-1$ then $Y$ is smooth.
The one dimensional torus $T=\Gm$ acts on $X$ by homothety and this action commutes with the action of $G$. Thus $X$ is a $G\times T$ scheme and $Y$ is a $T$-scheme. Both $X$ and $Y$ are quasi-conical and $X\quot T=Y\quot T=\Spec(\field)$.
Let $n=\dim Y$. Applying (\[euler10\]) to $X$ and $Y$ we obtain an injective morphism of exact complexes (the kernel of $\inv{G}{d\pi}$ is exactly the torsion of $\Omega_{Y}$, cf. remark \[torsionkernel\]) :
$$\xymatrix@-1em{
\inv{G}{\Omega_{X,G}} & 0\ar[r] &\inv{G}{\Omega_{X,G}^{n}}\ar[r]
&\inv{G}{\Omega_{X,G}^{n-1}}\ar[r] &\ldots\ar[r]
&\inv{G}{\Omega_{X,G}^{1}}\ar[r]
&\O_{Y}\ar[r] & \field\ar[r] & 0 \\
\bar{\Omega}_{Y}\ar[u]^{\inv{G}{d\pi}} &0\ar[r]
&\bar{\Omega}_{Y}^{n}\ar[r]\ar[u] &\bar{\Omega}_{Y}^{n-1}\ar[r]\ar[u]
&\ldots\ar[r]
&\bar{\Omega}_{Y}^{1}\ar[r]\ar[u]
&\O_{Y}\ar[r]\ar@{=}[u] & \field\ar[r]\ar@{=}[u] & 0
}$$ From this diagram, we deduce that if $\inv{G}{d\pi}$ is surjective in degree $n-1$, then it is also surjective in degree $n$. Therefore we have an isomorphism $\bar\Omega^{n}_{Y}\map{\sim}\inv{G}{\Omega_{X,G}^{n}}$. Moreover, by proposition (\[arhcomp\]) we know that $\inv{G}{\Omega_{X,G}^{n}}=\omega^{n}_{Y}=\dd{\Omega^{n}_{Y}}$. Thus $\bar\Omega^{n}_{Y}$ is a reflexive module.
Recall that by the theorem of Boutot ([@MR88a:14005]), $Y$ has rational singularities and in particular is normal and Cohen-Macaulay and that $\omega^{n}_{Y}$ is then the dualizing module of $Y$. The fundamental class map $c$ ([@MR90a:14021 5.2 p 91, 5.15 p 99], [@MR80h:14009] and appendix \[rardiff\]), in degree $n$, factors through :
$$\xymatrix{
\Omega^{n}_{Y}\ar[r]^{c}\ar@{>>}[d] & \omega^{n}_{Y} \\
\bar\Omega^{n}_{Y}\ar[ru] &
}$$ But $\bar\Omega^{n}_{Y}$ is reflexive and, since $Y$ is normal, $c$ is an isomorphism in codimension $1$. Therefore $c$ is necessarily surjective. We now invoke a theorem of Kunz and Waldi ([@MR90a:14021 5.22 p 107]) to conclude that $Y$ is smooth.
The proof of theorem \[smcrit1\] is complete.
[\[smcrit2\]]{} Using (\[smcrit1\]), we can give a straightforward proof of the result : By (\[arhcomp\]) the hypotheses of (\[smcrit1\]) are satisfied for the same integer $k$.
The case of abelian finite groups {#fgroup}
=================================
Let $G$ be a finite group acting on a quasi-projective scheme $X$ and let $\pi : X\map{}Y$ be the quotient.
For an element $g\in G$, we denote the closed subscheme of $g$-fixed points by $X^{g}$ and for a point $x\in X$, we denote its stabilizer (also called isotropy subgroup) by $G_{x}$. We then define an increasing filtration of $G$ by normal subgroups in the following way : For $k\geq 0$ an integer we set $G^{k}=<g\in G, \forall x\in X^{g}, \codim{X^{g},x}\leq k>$. In particular $G^{1}$ is the subgroup generated by the [*pseudo-reflections*]{} in $G$. For a point $x\in X^{g}$, if $\codim{X^{g},x}\leq 1$ then $g$ is said to be a [*pseudo-reflection at $x$*]{}. When $X$ is smooth, this condition is satisfied if and only if locally at $x$, the diagonal form of $g$ is of the kind $(\zeta,1,\ldots,1)$ for some root of unity $\zeta$. Clearly $g$ is a pseudo-reflection if and only if it is a pseudo-reflection at all the points of $X^{g}$.
When $G^{1}=(1)$ one says that $G$ is a [*small*]{} group of automorphisms of $X$. In this case, by standard ramification theory, the quotient map is unramified in codimension one. When $G=G^{1}$ one says that $G$ is generated by pseudo-reflections. We now recall the classical
[Theorem (Shephard-Todd, Chevalley, Serre [@MR15:600b; @MR38:3267])]{} With the preceding notations, the following conditions are equivalent :
(i) The quotient $Y$ is smooth.
(ii) For all $x\in X$, the group $G_{x}$ is generated by the pseudo-reflections at $x$.
(iii) The $\O_{Y}$-module $\pi_{*}\O_{X}$ is locally free.
Thus, the local study of quotients of smooth varieties by finite groups reduces to the study of quotients of smooth varieties by small finite groups of automorphisms : Indeed, the theorem above implies that, locally around $x$, the group $G_{x}/G_{x}^{1}$ is a small group of automorphisms of the smooth variety $X/G_{x}^{1}$. It is also clear that, for local questions, by the Etale Slice Theorem (see (\[ihdiffsmth\])) one is reduced to study the case where $X$ is a rational representation of $G$.
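As a standard illustration of the theorem (this classical example is not taken from the text above), take $X=\field^{2}$ and $G=\{\pm\,\mathrm{id}\}$. The only nontrivial element $g=-\mathrm{id}$ has $X^{g}=\{0\}$, of codimension 2, so $G$ is small and contains no pseudo-reflections. The invariant ring is $$\field[x,y]^{G}=\field[x^{2},xy,y^{2}]\simeq\field[u,v,w]/(uw-v^{2}),$$ so the quotient $Y$ is the quadric cone, singular at the origin, in accordance with the equivalence above : the stabilizer of $0$, namely $G$ itself, is not generated by pseudo-reflections.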
\[fgroup3\] Let $G$ be a finite abelian group acting on a smooth affine scheme $X$ with quotient $\pi : X\map{} Y$ and let $k$ be an integer with $1\leq k\leq\dim X$. The morphism $\inv{G}{d\pi^{k}}$ is surjective if and only if $Y$ is smooth.
[\[fgroup3\]]{} By the preceding remarks, we are reduced to the case where $X$ is a rational representation of $G$ acting as a small group of automorphisms, so that the map $\pi$ is unramified in codimension one.
We recall that, $G$ being finite, we have $\Omega_{X,G}=\Omega_{X}$. Moreover by (\[smcrit1\]) we deduce that $Y$ is smooth in codimension 2 and we can assume that $1\leq k <\dim{X}-1$. Thus we can assume that $\dim X>2$ and purity of the branch locus implies that $\pi$ is unramified in codimension 2.
From now on we proceed by induction on $\dim X$. Since $G$ is abelian, $X$ decomposes as a product of representations : $X=X'\times L$ with $2\leq\dim{X'}=\dim{X}-1$. We have a diagram $$\xymatrix{
X'\ar@{^{(}->}[r] \ar[d]^{\pi'} & X\ar[d]^{\pi} \\
Y'\ar@{^{(}->}[r] & Y
}$$ where the vertical maps are quotient by $G$ and the horizontal ones are embeddings. This induces a commutative diagram : $$\xymatrix{
\inv{G}{\Omega^{k}_{X}}\ar[r] & \inv{G}{\Omega^{k}_{X'}}\\
\Omega^{k}_{Y}\ar[r]\ar[u] & \Omega^{k}_{Y'}\ar[u]
}$$ where all the morphisms are surjective. Thus, by the induction hypothesis, $Y'$ is smooth. Now, if $G$ were not trivial, the origin being a fixed point, the map $\pi'$ would have to be ramified and, by purity of the branch locus again, its ramification locus would have codimension one. But then $\pi$ would be ramified in codimension 2, a contradiction. Thus, $G$ is trivial and therefore $Y$ is smooth.
Regular and absolutely regular differentials {#rardiff}
============================================
Regular differentials {#regdiff}
---------------------
Regular differentials together with duality theory have been studied by many authors but from different viewpoints. The main results that we need are found in the book of Kunz and Waldi ([@MR90a:14021]), but we feel that the very general and explicit construction of regular differentials in this book (where the construction is local and relative from the beginning) asks a lot of the (lazy) reader, and therefore does not “specialize” easily to a convenient tool in the common case of schemes of finite type over a field.
Thus we choose the following path : We take the theory of the residual complex and fundamental class as exposed in the work of El Zein ([@MR80h:14009]) as a “black box” and rephrase, with a view toward Kunz and Waldi’s theory of regular differentials, the results and constructions of El Zein. We do not intend to say anything new here and all the subsequent claims are implicitly proved in El Zein’s article ([@MR80h:14009]). In fact, this approach was inspired by the work of Kersken ([@MR85h:32014; @MR86a:14015; @MR85j:14032]).
### Construction {#construction .unnumbered}
Let $\field$ be a field of characteristic 0. For any scheme $X$ of finite type over $\field$, there exists a [*residual complex*]{} $\K_{X}$ ([@MR36:5145]). This is a complex of injective $\O_{X}$-modules concentrated in degree $[-\dim X, 0]$, the image of which in the derived category is the [*dualizing complex*]{}.
Let $n=\dim X$. We denote by $\omega^{n}_{X}$ the module $\cH{0}{\K_{X}[-n]}$. If $X$ is smooth, $\K_{X}$ is the Cousin resolution of $\Omega^{n}_{X}[n]$. If $i : X\map{}Y$ is an embedding of $X$ into a smooth $Y$ then $\K_{X}=i^{!}\K_{Y}=\sHom_{\O_{Y}}(\O_{X},\K_{Y})$. If $\pi : X\map{}Y$ is a finite surjective morphism then the complexes $\K_{X}$ and $\pi^{!}\K_{Y}$ are quasi-isomorphic and therefore $\omega^{n}_{X}\simeq\pi^{!}\omega^{n}_{Y}$. Moreover, the formation of the residual complex commutes with restriction to an open set. Thus, for a general $X$, $\omega^{n}_{X}$ has the ${\mathrm S}_{2}$ property and coincides with $\Omega^{n}_{X}$ at the smooth points of $X$. Consequently, if $X$ is normal then there is a natural isomorphism $\dd{\Omega^{n}_{X}}\map{\sim}\omega^{n}_{X}$.
The complex $\K_{X}$ is exact in degrees $\neq\dim X$ if and only if $X$ is equidimensional and Cohen-Macaulay. In this case, the module $\omega^{n}_{X}$ is the [*dualizing module*]{} (usually denoted $\omega_{X}$).
Now, following El Zein, let $\K_{X}^{*,\cdot}=\sHom(\Omega_{X},\K_{X})$. It is a bigraded object, where the $*$ (resp. the $\cdot$) corresponds to degrees in $\Omega_{X}$ (resp. in $\K_{X}$), concentrated in degrees $[-\infty, 0]\times [-\dim X, 0]$. We now explain how one can put on $\K_{X}^{*,\cdot}$ a structure of complex of right differential graded $\Omega_{X}$-modules concentrated in degree $[-\dim X, 0]$.
The left $\Omega_{X}$-module structure of $\Omega_{X}$ given by exterior product induces an obvious right $\Omega_{X}$-module structure on $\K_{X}^{*,p}=\sHom(\Omega_{X},\K^{p}_{X})$ and the differential $\delta$ of $\K_{X}$ induces an $\Omega_{X}$-linear differential : $\delta'=\sHom(\Omega_{X},\delta)$.
The non-trivial point is the existence for all $p$ of a differential endo-operator $d'$ of order $\leq 1$ and $*$-degree 1 on $\K_{X}^{*,p}$ satisfying the conditions
(i) $\delta'.d'=d'.\delta'$.
(ii) $d'(\phi.\alpha)=\phi.(d\alpha)+(-1)^{q}(d'\phi).\alpha$, for $\alpha\in\Omega^{q}_{X}$ and $\phi\in\K_{X}^{*,p}$.
The construction of $d'$ is explained in ([@MR80h:14009 2.1.2]), the proof of (ii) follows from the lemma ([@MR80h:14009 2.1.2, Lemme], be aware that there is a misprint in this paper : The logical section 2.1.2 is labelled 3.1.2) and the remarks following the proof of this lemma. Finally, (i) is a direct consequence of ([@MR80h:14009 2.1, Proposition]) and ([@MR80h:14009 2.1.2, Proposition]). We want to insist on the fact that, even in the smooth case, the operator $d'$ is not the naive (and above all, meaningless) “$\sHom(d,\K_{X})$”. We can now define the module of [*regular differential forms*]{} : $\omega_{X}=\cH{*,0}{\K_{X}^{*,\cdot}[-n,-n]}$. Thus, $\omega_{X}$ is a right differential graded $\Omega_{X}$-module and one has $\omega^{k}_{X}=\sHom(\Omega^{n-k}_{X},\omega^{n}_{X})$.
When $X$ is normal and equidimensional, the isomorphism $\dd{\Omega^{n}_{X}}\map{\sim}\omega^{n}_{X}$ therefore induces an isomorphism $\dd{\Omega_{X}}\map{\sim}\omega_{X}$. Thus, in this case, it is easily seen that this construction coincides with that of Kunz and Waldi ([@MR90a:14021 3.17, Theorem]). Note also that, when $X$ is normal, $\omega_{X}$ is a reflexive module.
### The fundamental class {#the-fundamental-class .unnumbered}
The [*fundamental class*]{} is constructed and studied by El Zein in ([@MR80h:14009 3.1, Théorème]). The fundamental class is defined as a global section ${\mathrm C}_{X}$ of $\K^{*,\cdot}_{X}$ (as a bigraded object) satisfying $d'{\mathrm C}_{X}=\delta'{\mathrm
C}_{X}=0$. When $X$ is equidimensional of dimension $n$, the fundamental class is homogeneous of degree $(-n, -n)$. In general, the contribution to ${\mathrm C}_{X}$ of an $m$-dimensional irreducible component of $X$ is homogeneous of degree $(-m, -m)$ (cf. the next section). Let $X$ be an $n$-dimensional scheme. By this observation, since $\delta'{\mathrm C}_{X}=0$, we have an induced cohomology class ${\mathrm c}_{X}\in\omega^{0}_{X}$. Then, right multiplication defines a morphism $$\begin{aligned}
\Omega_{X} & \map{} & \omega_{X} \\
\alpha & \longmapsto & {\mathrm c}_{X}.\alpha \end{aligned}$$ of differential graded $\Omega_{X}$-modules, thanks to the relation $d'{\mathrm c}_{X}=0$. We again denote by ${\mathrm c}_{X}$ this morphism and also call it the fundamental class morphism.
To be a little more explicit, ${\mathrm c}_{X}\in\cH{0}{X, \K^{*,\cdot}_{X}[-n,-n]}=
\Hom(\Omega^{n}_{X}, \omega^{n}_{X})$ and the fundamental class morphism in degree $k$ is the composition $$\Omega^{k}_{X}\map{}\Hom(\Omega^{n-k}_{X},\Omega^{n}_{X})\map{}
\Hom(\Omega^{n-k}_{X},\omega^{n}_{X})\simeq\omega^{k}_{X}.$$
When $X$ is normal and equidimensional, the morphism ${\mathrm c}_{X}$ can be identified with the natural morphism $\Omega_{X}\map{}\dd{\Omega_{X}}\simeq\omega_{X}$.
We can now state the following fundamental theorem of Kunz and Waldi :
[Theorem ([@MR90a:14021 5.22, p107])]{} Let $X$ be an equidimensional Cohen-Macaulay reduced scheme of finite type over $\field$ and let $n=\dim X$. Then the support of $\Coker ({\mathrm c}_{X})^{n}$ is precisely the singular locus of $X$.
### The trace map for regular differentials {#the-trace-map-for-regular-differentials .unnumbered}
Let $f : X\map{}Y$ be a proper morphism, then the trace morphism $\Tr f : f_{*}\K^{*,\cdot}_{X}\map{}\K^{*,\cdot}_{Y}$ is obtained by the composition of the natural morphism $\Omega_{Y}\map{}f_{*}\Omega_{X}$ with the trace morphism for residual complexes $f_{*}\K_{X}\map{}\K_{Y}$. We thus have a well defined trace morphism $\Tr f : f_{*}\omega_{X}\map{}\omega_{Y}$ vanishing if $\dim X \neq\dim Y$.
Assume that $f$ is birational, i.e., that there exists a dense open subset $V\subset Y$ such that the induced morphism $f^{-1}(V)\map{}V$ is an isomorphism. Then, by ([@MR80h:14009 3.1, Théorème]) the trace morphism $\Tr f : f_{*}\K^{*,\cdot}_{X}\map{}\K^{*,\cdot}_{Y}$ sends ${\mathrm C}_{X}$ to ${\mathrm C}_{Y}$. Consequently, under these hypotheses we have a commutative diagram : $$\xymatrix{
f_{*}\Omega_{X}\ar[r]^{{\mathrm c}_{X}} & f_{*}\omega_{X}\ar[d]^{\Tr f} \\
\Omega_{Y}\ar[r]^{{\mathrm c}_{Y}}\ar[u] & \omega_{Y}
}$$
Let $X$ be a scheme and $X_{1},\ldots,X_{k}$ its irreducible components with their reduced structure and inclusions $j_{i} : X_{i}\subset X$. Then by construction ([@MR80h:14009 p37]) we have that ${\mathrm C}_{X}=
\sum_{i} e_{X_{i}}(X)\Tr j_{i}({\mathrm C}_{X_{i}})$, where $e_{X_{i}}(X)=\mathrm{length}(\O_{X,X_{i}})$, the multiplicity of $X$ along $X_{i}$. Thus, we have ${\mathrm c}_{X}=
\sum_{i} e_{X_{i}}(X)\Tr j_{i}({\mathrm c}_{X_{i}})$.
Assume now that $f : X\map{}Y$ is a finite dominant morphism between integral schemes then by ([@MR80h:14009 3.1, Proposition 2]) we have that $\Tr f({\mathrm C}_{X})=\deg(f){\mathrm C}_{Y}$. We therefore have a commutative diagram : $$\xymatrix{
f_{*}\Omega_{X}\ar[r]^{{\mathrm c}_{X}} & f_{*}\omega_{X}\ar[d]^{\Tr f} \\
\Omega_{Y}\ar[r]^{\deg(f){\mathrm c}_{Y}}\ar[u] & \omega_{Y}
}$$
Absolutely regular differentials {#aregdiff}
--------------------------------
Let $X$ be a scheme and $f : \tilde X\map{}X$ a desingularisation (if $X$ is not reduced, by this, we mean a desingularisation of $X_{\red}$). We recall that the $\O_{X}$-module $f_{*}\Omega_{\tilde X}$ is independent of the choice of $f$, we denote it by $\tilde\Omega_{X}$. It is usually called the module of [*absolutely regular differentials*]{}, or sometimes, when $X$ is a normal variety, the module of [*Zariski differentials*]{}. By construction, we have natural morphisms $$\Omega_{X}\map{}\tilde\Omega_{X}\map{}i_{*}\Omega_{X_{\smooth}}$$ where $i$ is the inclusion $X_{\smooth}\subset X$. Therefore, when $X$ is reduced, we have : $$\Omega_{X}\map{}\bar\Omega_{X}\subset\tilde\Omega_{X}\subset i_{*}\Omega_{X_{\smooth}}.$$ In general, we also have a commutative diagram : $$\xymatrix{
f_{*}\Omega_{\tilde X}\ar@{=}[r] & f_{*}\omega_{\tilde X}\ar[d]^{\Tr f} \\
\Omega_{X}\ar[u]\ar[r]^{{\mathrm c}_{X}} & \omega_{X}
}$$ and consequently, a sequence of morphisms $$\Omega_{X}\map{}\tilde\Omega_{X}\map{}\omega_{X}.$$
Let $f : X\map{}Y$ be a dominant morphism. Then we have a commutative diagram $$\xymatrix{
\Omega_{X}\ar[r] & \tilde\Omega_{X} \\
f^{*}\Omega_{Y}\ar[r]\ar[u] & f^{*}\tilde\Omega_{Y}\ar[u]
}$$ Assume moreover that the morphism $f$ is proper and birational. Then we have a commutative diagram $$\xymatrix{
f_{*}\Omega_{X}\ar[r] & f_{*}\tilde\Omega_{X}\ar[r] &
f_{*}\omega_{X}\ar[d]^{\Tr f} \\
\Omega_{Y}\ar[r]\ar[u] & \tilde\Omega_{Y}\ar[u]\ar[r] & \omega_{Y}
}$$ where the rows are factorisations of the respective fundamental class morphisms. Note that the middle vertical arrow is obviously an isomorphism.
---
abstract: 'Building conversational systems in new domains and with added functionality requires resource-efficient models that work under low-data regimes (i.e., in few-shot setups). Motivated by these requirements, we introduce intent detection methods backed by pretrained dual sentence encoders such as USE and ConveRT. We demonstrate the usefulness and wide applicability of the proposed intent detectors, showing that: **1)** they outperform intent detectors based on fine-tuning the full BERT-Large model or using BERT as a fixed black-box encoder on three diverse intent detection data sets; **2)** the gains are especially pronounced in few-shot setups (i.e., with only 10 or 30 annotated examples per intent); **3)** our intent detectors can be trained in a matter of minutes on a single CPU; and **4)** they are stable across different hyperparameter settings. In hope of facilitating and democratizing research focused on intention detection, we release our code, as well as a new challenging single-domain intent detection dataset comprising 13,083 annotated examples over 77 intents.'
author:
- |
Iñigo Casanueva[^1], Tadas Temčinas,$^{*}$ Daniela Gerz, Matthew Henderson, Ivan Vulić\
PolyAI Limited\
London, United Kingdom\
`{inigo,dan,matt,ivan}@poly-ai.com`
bibliography:
- 'acl2020.bib'
title: |
Efficient Intent Detection with Dual Sentence Encoders\
[github.com/PolyAI-LDN/polyai-models](github.com/PolyAI-LDN/polyai-models)
---
Introduction {#s:intro}
============
Task-oriented conversational systems allow users to interact with computer applications through conversation in order to solve a particular task with well-defined semantics, such as booking restaurants, hotels and flights [@Hemphill:1990; @Williams:2012b; @ElAsri:2017sigdial], providing tourist information [@Budzianowski:2018emnlp], or automating customer support [@Xu:2017chi].
*Intent detection* is a vital component of any task-oriented conversational system [@Hemphill:1990; @Coucke:18]. In order to [understand the user’s current goal]{}, the system must leverage its intent detector to classify the user’s utterance (provided in varied natural language) into one of several predefined classes, that is, *intents*.[^2] Scaling intent detectors (as well as conversational systems in general) to support new target domains and tasks is a very challenging and resource-intensive process [@Wen:17; @rastogi2019towards]. The need for expert domain knowledge and domain-specific labeled data still impedes quick and wide deployment of intent detectors. In other words, one crucial challenge is enabling effective intent detection in *low-data scenarios* typically met in commercial systems, with only several examples available per intent (i.e., the so-called *few-shot learning setups*).
Transfer learning on top of pretrained sentence encoders [@Devlin:2018arxiv; @Liu:2019roberta *inter alia*] has now been established as the mainstay paradigm aiming to mitigate the bottleneck with scarce in-domain data. However, directly applying the omnipresent sentence encoders such as BERT to intent detection may be sub-optimal. **1)** As shown by , pretraining on a general language-modeling (LM) objective for conversational tasks is less effective than *conversational pretraining* based on the response selection task [@Henderson:2019acl]. **2)** Fine-tuning BERT and its variants is very resource-intensive as it assumes the adaptation of the full large model. Moreover, in few-shot setups fine-tuning may result in overfitting. From a commercial perspective, these properties lead to extremely slow, cumbersome, and expensive development cycles.
Therefore, in this work we propose to use efficient *dual sentence encoders* such as Universal Sentence Encoder (USE) [@Cer:2018arxiv] and ConveRT [@henderson2019convert] to support intent detection. These models are in fact neural architectures tailored for modeling sentence pairs [@Henderson:2019acl; @Humeau:2019arxiv], and are trained on a conversational response selection task. As such, they inherently encapsulate conversational knowledge needed for (few-shot) intent detection. We discuss their advantage over LM-based encoders, and empirically validate the usefulness of conversational pretraining for intent detection. We show that intent detectors based on fixed USE and ConveRT encodings outperform BERT-backed intent detectors across the board on three diverse intent detection datasets, with prominent gains especially in few-shot scenarios. Another advantage of dual models is their compactness:[^3] we demonstrate that our state-of-the-art USE+ConveRT intent detectors can be trained even on a regular laptop’s CPU in several minutes.
We also show that intent detectors based on dual sentence encoders are largely invariant to hyperparameter changes. This finding is extremely important for real-life low-data regimes: due to the invariance, the expensive hyperparameter tuning step can be bypassed, and a limited number of annotated examples can be used directly as additional training data, instead of held-out validation data.
Another contribution of this work is a new and challenging intent detection dataset in the banking domain, dubbed <span style="font-variant:small-caps;">banking77</span>. It follows the very recent endeavor of procuring high-quality intent detection data [@Liu:2019iwsds; @larson-etal-2019-evaluation], but is very different in nature than the other datasets. Unlike prior work which scatters a set of coarse-grained intents across a multitude of domains (i.e., 10+ domains, see Table \[tab:data\] later), we present a challenging single-domain dataset comprising 13,083 examples over 77 fine-grained intents. We release the code and the data online at:\
[[github.com/PolyAI-LDN/polyai-models](github.com/PolyAI-LDN/polyai-models)]{}.
Methodology: Intent Detection with Dual Sentence Encoders {#s:methodology}
=========================================================
**Pretrained Sentence Encoders.** Large-scale pretrained models have benefited a wide spectrum of NLP applications immensely [@Devlin:2018arxiv; @Liu:2019roberta; @radford2019language]. Their core strength lies in the fact that, through consuming large general-purpose corpora during pretraining, they require smaller amounts of domain-specific training data to adapt to a particular task and/or domain [@Ruder:2019transfer]. The adaptation is typically achieved by adding a task-specific output layer to a large pretrained sentence encoder, and then fine-tuning the entire model [@Devlin:2018arxiv]. However, the fine-tuning process is computationally intensive [@Zafrir:2019arxiv; @henderson2019convert], and still requires sufficient task-specific data [@Arase:2019emnlp; @Sanh:2019arxiv]. As such, the standard approach is both unsustainable in terms of resource consumption [@Strubell:2019acl], as well as sub-optimal for few-shot scenarios.
**Dual Sentence Encoders and Conversational Pretraining.** A recent branch of sentence encoders moves beyond the standard LM-based pretraining objective, and proposes an alternative objective: *conversational response selection*, typically on Reddit data [@AlRfou:2016arxiv; @Henderson:2019arxiv]. As empirically validated by , conversational (instead of LM-based) pretraining aligns better with conversational tasks such as dialog act prediction or next utterance generation.
Pretraining on response selection also allows for the use of efficient *dual* models: the neural response selection architectures are instantiated as dual-encoder networks that learn the interaction between inputs/contexts and their relevant (follow-up) responses. Through such response selection pretraining regimes they organically encode useful conversational cues in their representations.
In this work, we propose to use such efficient conversational dual models as the main source of (general-purpose) conversational knowledge to inform domain-specific intent detectors. We empirically demonstrate their benefits over other standard sentence encoders such as BERT in terms of **1)** performance, **2)** efficiency, and **3)** applicability in few-shot scenarios. We focus on two prominent dual models trained on the response selection task: Universal Sentence Encoder (USE) [@Cer:2018arxiv], and Conversational Representations from Transformers (ConveRT) [@henderson2019convert]. For further technical details regarding the two models, we refer the interested reader to the original work.
**Intent Detection with dual Encoders.** We implement a simple yet effective model (see §\[s:results\] later) for intent detection which is based on the two dual models. Unlike with BERT, we do not fine-tune the entire model, but use fixed sentence representations encoded by USE and ConveRT. We simply stack a Multi-Layer Perceptron (MLP) with a single hidden layer with ReLU non-linear activations [@Maas:2014icml] on top of the fixed representations, followed by a softmax layer for multi-class classification. This simple formulation also allows us to experiment with the combination of USE and ConveRT representations: we can feed the concatenated vectors to the same classification architecture without any further adjustment.
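To make the architecture concrete, here is a minimal pure-Python sketch of the classifier's forward pass (illustrative only: toy dimensions, random placeholder weights, and our own function names; the actual model uses a 512-dimensional hidden layer on top of the real USE/ConveRT encodings):

```python
import math
import random

def mlp_intent_scores(use_vec, convert_vec, w_hidden, w_out):
    """Concatenate the two fixed sentence encodings, apply one hidden
    layer with ReLU activations, then a softmax over intent classes."""
    x = use_vec + convert_vec  # list concatenation = feature concatenation
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)))
              for row in w_hidden]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Toy setup: 2-dim "USE" and "ConveRT" vectors, 3 hidden units, 2 intents.
random.seed(0)
w_h = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w_o = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
probs = mlp_intent_scores([0.1, 0.2], [0.3, 0.4], w_h, w_o)
```

The predicted intent is the argmax of the returned distribution; since training only updates this small MLP head while the encoders stay fixed, the full model can be trained on a CPU in minutes.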
New Dataset: <span style="font-variant:small-caps;">banking77</span> {#s:banking77}
====================================================================
In spite of the crucial role of intent detection in any task-oriented conversational system, publicly available intent detection datasets are still few and far between. The previous standard datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus [@Braun:17] or SNIPS [@Coucke:18] are limited to only a small number of classes ($<10$), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Therefore, more recent work has recognized the need for improved intent detection datasets. **1)** The dataset of , dubbed <span style="font-variant:small-caps;">hwu64</span>, contains 25,716 examples for 64 intents in 21 domains. **2)** The dataset of , dubbed <span style="font-variant:small-caps;">clinc150</span>, spans 150 intents and 23,700 examples across 10 domains.
However, the two recent datasets are *multi-domain*, and the examples per each domain may not sufficiently capture the full complexity of each domain as encountered “in the wild”. Therefore, to complement the recent effort on data collection for intent detection, we propose a new *single-domain* dataset: it provides a very fine-grained set of intents in a banking domain, not present in <span style="font-variant:small-caps;">hwu64</span> and <span style="font-variant:small-caps;">clinc150</span>. The new <span style="font-variant:small-caps;">banking77</span> dataset comprises 13,083 customer service queries labeled with 77 intents. Its focus on fine-grained single-domain intent detection makes it complementary to the two other datasets: we believe that any comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets such as <span style="font-variant:small-caps;">hwu64</span> and <span style="font-variant:small-caps;">clinc150</span>, and a fine-grained single-domain dataset such as <span style="font-variant:small-caps;">banking77</span>. The data stats are summarized in Table \[tab:data\].
The single-domain focus of <span style="font-variant:small-caps;">banking77</span> with a large number of intents makes it more challenging. Some intent categories partially overlap with others, which requires fine-grained decisions, see Table \[tab:banking-examples\] (e.g., *reverted top-up* vs. *failed top-up*). Furthermore, as other examples from Table \[tab:banking-examples\] suggest, it is not always possible to rely on the semantics of individual words to capture the correct intent.[^4]
  **Dataset**                                                     **Intents**   **Examples**   **Domains**
  --------------------------------------------------------------- ------------- -------------- -------------
  <span style="font-variant:small-caps;">hwu64</span>              64            25,716         21
  <span style="font-variant:small-caps;">clinc150</span>           150           23,700         10
  <span style="font-variant:small-caps;">banking77</span> (ours)   77            13,083         1

  : Comparison of the three intent detection datasets used in evaluation.[]{data-label="tab:data"}
  **Intent Class**          **Example Utterance**
  ------------------------- -----------------------
  Link to Existing Card     
  Reverted Top-up           
  Failed Top-up             

  : Examples of similar intent classes from <span style="font-variant:small-caps;">banking77</span>.[]{data-label="tab:banking-examples"}
Experimental Setup {#s:exp}
==================
**Few-Shot Setups.** We conduct all experiments on the three intent detection datasets described in §\[s:banking77\]. We are interested in wide-scale few-shot intent classification in particular: we argue that this setup most closely resembles the development process of a commercial conversational system, which typically starts with only a small number of data points when expanding to a new domain or task. We simulate such low-data settings by sampling smaller subsets from the full data. We experiment with setups where only 10 or 30 examples are available for each intent, while we use the same standard test sets for each experimental run.[^5]
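The few-shot subsampling just described can be sketched as follows (an illustrative stdlib-only helper; the function name and signature are ours, not the authors'):

```python
import random
from collections import defaultdict

def sample_few_shot(examples, k, seed=0):
    """Keep at most k randomly chosen examples per intent, simulating
    the 10- and 30-example low-data regimes described in the text."""
    by_intent = defaultdict(list)
    for utterance, intent in examples:
        by_intent[intent].append(utterance)
    rng = random.Random(seed)
    subset = []
    for intent in sorted(by_intent):
        pool = by_intent[intent]
        for utt in rng.sample(pool, min(k, len(pool))):
            subset.append((utt, intent))
    return subset

# Toy data: 60 utterances evenly spread over 3 intents.
data = [("utt%d" % i, "intent%d" % (i % 3)) for i in range(60)]
few = sample_few_shot(data, k=10)  # 10 examples for each of the 3 intents
```

Note that the resulting training sets are balanced by construction, while evaluation always uses the full standard test sets.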
Encoder CPU GPU
------------ ------ -------
Bert Large 2.4 235.9
USE 53.5 785.4
ConveRT 58.3 866.7
: Number of sentences encoded per second by the 3 sentence encoders benchmarked.[]{data-label="profiling-table"}
model CPU GPU TPU
---------------- ----- ----- ------
Bert finetuned N/A N/A 567s
USE 65s 57s N/A
ConveRT 73s 53s N/A
: Time to train and evaluate an intent detection model with the USE, ConveRT, and BERT-finetuned variants on <span style="font-variant:small-caps;">banking77</span> in the 10-examples-per-intent regime. The CPU is a 2.3 GHz Dual-Core Intel Core i5. The GPU is a GeForce RTX 2080 Ti, 11GiB. The TPU is a v2-8, 8 cores, 64 GiB.[]{data-label="profiling-table2"}
**MLP Design.** Unless stated otherwise (e.g., in experiments where we explicitly vary hyperparameters), for the MLP classifier, we use a single 512-dimensional hidden layer. We train with stochastic gradient descent (SGD), with the learning rate of $0.7$ and linear decay. We rely on very aggressive dropout ($0.75$) and train for $500$ iterations to reach convergence. We show how this training regime can improve the model’s generalization capability, and we also probe its (in)susceptibility to diverse hyperparameter setups later in §\[s:results\]. Low-data settings are balanced, which is especially easy to guarantee in few-shot scenarios.
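The classifier head described above can be sketched as follows, assuming PyTorch; the encoder outputs are treated as precomputed fixed vectors, and details of the released implementation may differ:

```python
# Minimal sketch of the MLP head: one 512-dim hidden layer, dropout 0.75,
# SGD with lr 0.7 and linear decay over 500 iterations, as in the text.
import torch
import torch.nn as nn

enc_dim, n_intents = 512, 77              # e.g. fixed ConveRT encodings, banking77
mlp = nn.Sequential(
    nn.Linear(enc_dim, 512),              # single 512-dimensional hidden layer
    nn.ReLU(),
    nn.Dropout(p=0.75),                   # aggressive dropout
    nn.Linear(512, n_intents),
)
opt = torch.optim.SGD(mlp.parameters(), lr=0.7)
sched = torch.optim.lr_scheduler.LambdaLR(opt, lambda step: 1.0 - step / 500)  # linear decay
loss_fn = nn.CrossEntropyLoss()

# one SGD step on a dummy batch of precomputed sentence encodings
enc = torch.randn(16, enc_dim)
labels = torch.randint(0, n_intents, (16,))
opt.zero_grad()
loss = loss_fn(mlp(enc), labels)
loss.backward()
opt.step()
sched.step()
print(tuple(mlp(enc).shape))  # (16, 77)
```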
**Models in Comparison.** We compare intent detectors supported by the following pretrained sentence encoders. First, in the <span style="font-variant:small-caps;">bert-fixed</span> model we use pretrained BERT in the same way as dual encoders, in the so-called *feature mode*: we treat BERT as a black-box fixed encoder and use it to compute encodings/features for training the classifier.[^6] We use the mean-pooled “sequence output” (i.e., the pooled mean of the sub-word embeddings) as the sentence representation.[^7] In the <span style="font-variant:small-caps;">bert-tuned</span> model, we rely on the standard BERT-based fine-tuning regime for classification tasks [@Devlin:2018arxiv] which adapts the full model. We train a softmax layer on top of the <span style="font-variant:small-caps;">\[cls\]</span> token output. We use the Adam optimizer with weight decay and a learning rate of $4 \times 10^{-4}$. For low-data (10 examples per intent), mid-data (30 examples) and full-data settings we train for 50, 18, and 5 epochs, respectively, which is sufficient for the model to converge, while avoiding overfitting or catastrophic forgetting.
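The mean pooling used for <span style="font-variant:small-caps;">bert-fixed</span> amounts to averaging the sub-word embeddings of real (non-padding) tokens; a minimal sketch, independent of any particular BERT implementation:

```python
# Sketch of mean-pooling sub-word embeddings into a sentence vector.
import numpy as np

def mean_pool(subword_embs, mask):
    """subword_embs: (seq_len, dim); mask: (seq_len,) with 1 = real token."""
    mask = mask[:, None].astype(float)
    return (subword_embs * mask).sum(axis=0) / mask.sum()

embs = np.array([[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]])  # last row is padding
sent_vec = mean_pool(embs, np.array([1, 1, 0]))
print(sent_vec)  # [2. 3.]
```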
We use the two publicly available pretrained dual encoders: **1)** the multilingual large variant of <span style="font-variant:small-caps;">use</span> [@Yang:2019multiuse],[^8] and **2)** the single-context <span style="font-variant:small-caps;">ConveRT</span> model trained on the full 2015-2019 Reddit data comprising 654M *(context, response)* training pairs [@henderson2019convert].[^9] In all experimental runs, we rely on the pretrained cased BERT-large model: 24 Transformer layers, embedding dimensionality 1024, and a total of 340M parameters. Note that e.g. ConveRT is much lighter in its design and is also pretrained more quickly than BERT [@henderson2019convert]: it relies on 6 Transfomer layers with embedding dimensionality of 512. We report accuracy as the main evaluation measure for all experimental runs.
Results and Discussion {#s:results}
======================
[l XX XX XX]{} & <span style="font-variant:small-caps;">banking77</span> & <span style="font-variant:small-caps;">clinc150</span> & <span style="font-variant:small-caps;">hwu64</span>\
**Model** & **10** & **Full** & **10** & **Full** & **10** & **Full**\
<span style="font-variant:small-caps;">bert-fixed</span> & [64.9 (67.8) \[57.0\]]{} & [86.2 (88.4) \[74.9\]]{} & [78.1 (80.6) \[70.2\]]{} & [91.2 (92.6) \[84.7\]]{} & [71.5 (72.8) \[68.0\]]{} & [85.9 (86.8) \[81.5\]]{}\
<span style="font-variant:small-caps;">USE</span> & [83.9 (84.4) \[83.0\]]{} & [92.6 (92.9) \[91.4\]]{} & [90.6 (91.0) \[89.9\]]{} & [95.0 (95.3) \[93.9\]]{} & [83.6 (83.9) \[83.0\]]{} & [91.6 (92.1) \[90.7\]]{}\
<span style="font-variant:small-caps;">ConveRT</span> & [83.1 (83.4) \[82.4\]]{} & [92.6 (93.0) \[91.6\]]{} & [92.4 (92.8) \[92.0\]]{} & [97.1 (97.2) \[96.3\]]{} & [82.5 (83.1) \[82.0\]]{} & [91.3 (91.6) \[90.8\]]{}\
<span style="font-variant:small-caps;">USE+ConveRT</span> & [85.2 (85.5) \[84.8\]]{} & [93.3 (93.5) \[92.8\]]{} & [93.2 (93.5) \[92.8\]]{} & [97.0 (97.2) \[96.5\]]{} & [85.9 (86.2) \[85.7\]]{} & [92.5 (92.8) \[91.6\]]{}\
Table \[results-table\] summarizes the main results; we show the accuracy scores of all models on all three datasets, and for different training data setups. As one crucial finding, we report competitive performance of intent detectors based on the two dual models, and their relative performance seems to also depend on the dataset at hand: <span style="font-variant:small-caps;">USE</span> has a slight edge over <span style="font-variant:small-caps;">ConveRT</span> on <span style="font-variant:small-caps;">hwu64</span>, but the opposite holds on <span style="font-variant:small-caps;">clinc150</span>. The design based on fixed sentence representations, however, allows for the straightforward combination of <span style="font-variant:small-caps;">USE</span> and <span style="font-variant:small-caps;">ConveRT</span>. The results suggest that the two dual models in fact capture complementary information, as the combined <span style="font-variant:small-caps;">USE+ConveRT</span>-based intent detectors result in peak performance across the board. As discussed later, due to its pretraining objective, BERT is competitive only in its fine-tuning mode of usage, and cannot match other two sentence encoders in the feature-based (i.e., fixed) usage mode.
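The straightforward combination of the two fixed encoders can be realized by concatenating their sentence encodings before the classifier; a sketch with placeholder encoder calls standing in for the actual USE and ConveRT models (concatenation is our reading of the combination, consistent with the fixed-feature design):

```python
# Sketch: combine two fixed sentence encoders by feature concatenation.
import numpy as np

def encode_use(sentences):          # placeholder for multilingual USE (512-dim)
    return np.random.rand(len(sentences), 512)

def encode_convert(sentences):      # placeholder for ConveRT (512-dim)
    return np.random.rand(len(sentences), 512)

def encode_combined(sentences):
    """Concatenate both encodings; the MLP is then trained on 1024-dim inputs."""
    return np.concatenate([encode_use(sentences), encode_convert(sentences)], axis=1)

feats = encode_combined(["is there a fee for topping up?"])
print(feats.shape)  # (1, 1024)
```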
**Few-Shot Scenarios.** The focus of this work is on low-data few-shot scenarios often met in production, where only a handful of annotated examples per intent are available. The usefulness of dual sentence encoders comes to the fore especially in this setup: 1) the results indicate gains over the fine-tuned BERT model especially for few-shot scenarios, and the gains are more pronounced in our “fewest-shot” setup (with only 10 annotated examples per intent). The respective improvements of <span style="font-variant:small-caps;">USE+ConveRT</span> over <span style="font-variant:small-caps;">bert-tuned</span> are +1.77, +1.33, and +0.97 for <span style="font-variant:small-caps;">banking77</span>, <span style="font-variant:small-caps;">clinc150</span>, and <span style="font-variant:small-caps;">hwu64</span> (10 examples per intent), and we also see better results with the combined model when 30 examples per intent are available on all three datasets. Overall, this proves the suitability of dual sentence encoders for the few-shot intent classification task.
**Invariance to Hyperparameters.** A prominent risk in few-shot setups concerns overfitting to small data sets [@Srivastava:2014jmlr; @Olson:2018nips]. Another issue concerns the sheer lack of training data, which gets even more pronounced if a subset of the (already scarce) data must be reserved for validation and hyper-parameter tuning. Therefore, a desirable property of any few-shot intent detector is its invariance to hyperparameters and, consequently, its off-the-shelf usage without further tuning on the validation set. This effectively means that one could use all available annotated examples directly for training. In order to increase the reliability of the intent detectors and prevent overfitting in few-shot scenarios, we suggest to use the aggressive dropout regularization (i.e., the dropout rate is 0.75), and a very large number of iterations (500), see §\[s:exp\].
We now demonstrate that the intent detectors based on dual encoders are very robust with respect to different hyper-parameter choices, starting from this basic assumption that a high number of iterations and high dropout rates $r$ are needed. For each classifier, we fix the *base/pivot* configuration from §\[s:exp\]: the number of hidden layers is $H=1$, its dimensionality is $h=512$, the SGD optimizer is used with the learning rate of $0.7$. Starting from the pivot configuration, we create other configurations by altering one hyper-parameter at the time from the pivot. We probe the following values: $r=\{0.75, 0.5, 0.25\}$, $H=\{0, 1, 2\}$, $h=\{128, 256, 512, 1024\}$, and we also try out all the configurations with another optimizer: Adam with the linearly decaying learning rate of $4 \times 10^{-4}$.
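The one-at-a-time sweep around the pivot, crossed with the two optimizers, can be enumerated as follows (a sketch; the config field names are ours):

```python
# Enumerate configurations: vary one hyperparameter at a time from the
# pivot (r=0.75, H=1, h=512), then run every configuration with both
# SGD and Adam, as described in the text.
pivot = {"r": 0.75, "H": 1, "h": 512}
grid = {"r": [0.75, 0.5, 0.25],
        "H": [0, 1, 2],
        "h": [128, 256, 512, 1024]}

variants = [dict(pivot)]
for key, values in grid.items():
    for val in values:
        if val != pivot[key]:
            variants.append({**pivot, key: val})

# every configuration is also run with the Adam optimizer
configs = [{**c, "opt": opt} for c in variants for opt in ("sgd", "adam")]
print(len(configs))  # (1 + 2 + 2 + 3) * 2 = 16
```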
The results with all hyperparameter configs are summarized in Table \[results-variation\]. They suggest that intent detectors based on dual models are indeed very robust. Importantly, we do not observe any experimental run which results in substantially lower performance with these models. In general, the peak scores with dual-based models are reported with higher $r$ rates (0.75), and with larger hidden layer sizes $h$ (1,024). On the other side of the spectrum are variants with lower $r$ rates (0.25) and smaller $h$-s (128). However, the fluctuation in scores is not large, as illustrated by the results in Table \[results-variation\]. This finding does not hold for <span style="font-variant:small-caps;">bert-fixed</span> where in Table \[results-variation\] we do observe “outlier” runs with substantially lower performance compared to its peak and average scores. Finally, it is also important to note <span style="font-variant:small-caps;">bert-tuned</span> does not converge to a good solution for 2% of the runs with different seeds, and such runs are not included in the final reported numbers with that baseline in Table \[results-table\].
**Resource Efficiency.** Besides superior performance established in Table \[results-table\] and increased stability (see Table \[results-variation\]), another advantage of the two dual models is their *encoding efficiency*. In Table \[profiling-table\] we report the average times needed by each fixed encoder to encode sentences fed in the batches of size 15 on both CPU (2.3 GHz Dual-Core Intel Core i5) and GPU (GeForce RTX 2080 Ti, 11 GB). The encoding times reveal that <span style="font-variant:small-caps;">bert</span>, when used as a sentence encoder, is around 20 times slower on the CPU and roughly 3 times slower on the GPU.[^10]
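The throughput numbers in Table \[profiling-table\] correspond to a measurement of this shape (the timing loop itself is our own sketch; batch size 15 follows the text, and `encode_fn` stands in for any of the encoders):

```python
# Sketch: measure sentences encoded per second for a batch-wise encoder.
import time

def sentences_per_second(encode_fn, sentences, batch_size=15, repeats=3):
    start = time.perf_counter()
    for _ in range(repeats):
        for i in range(0, len(sentences), batch_size):
            encode_fn(sentences[i:i + batch_size])
    elapsed = time.perf_counter() - start
    return repeats * len(sentences) / elapsed

# dummy "encoder" for illustration only
rate = sentences_per_second(lambda batch: [len(s) for s in batch],
                            ["example sentence"] * 30)
print(rate > 0)  # True
```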
Furthermore, in Table \[profiling-table2\] we present the time required to train and evaluate an intent classification model for <span style="font-variant:small-caps;">banking77</span> in the lowest-data regime (10 instances per intent).[^11] Note that the time reduction on GPU over CPU for the few-shot scenario is mostly due to the reduced encoding time on GPU (see Table \[profiling-table\] again). However, when operating in the *Full* data regime, the benefits of GPU training vanish: using a neural net with a single hidden layer the overhead of the GPU usage is higher than the speed-up achieved due to faster encoding and network computations. Crucially, the reported training and execution times clearly indicate that effective intent detectors based on pretrained dual models can be constructed even without large resource demands and can run even on CPUs, without huge models that require GPUs or TPUs. In sum, we hope that our findings related to improved resource efficiency of dual models, as well as the shared code will facilitate further and wider research focused on intent detection.
**Further Discussion.** The results from Tables \[results-table\] and \[results-variation\] show that transferring representations from conversational pretraining based on the response selection task is useful for conversational tasks such as intent detection. This corroborates the main findings from prior work [@Humeau:2019arxiv; @henderson2019convert]. The results also suggest that using BERT as an off-the-shelf sentence encoder is sub-optimal: BERT is much more powerful when used in the fine-tuning mode instead of the less expensive “feature-based” mode [@Peters:2019repl]. This is mostly due to its pretraining LM objective: while both USE and ConveRT are forced to reason at the level of full sentences during the response selection pretraining, BERT is primarily a (local) language model. It seems that the next sentence prediction objective is not sufficient to learn a universal sentence encoder which can be applied off-the-shelf to unseen sentences in conversational tasks [@Mehri:2019acl]. However, BERT’s competitive performance in the fine-tuning mode, at least in the *Full* data scenarios, suggests that it still captures knowledge which is useful for intent detection. Given strong performance of both fine-tuned BERT and dual models in the intent detection task, in future work we plan to investigate hybrid strategies that combine dual sentence encoders and LM-based encoders. Note that it is also possible to combine <span style="font-variant:small-caps;">bert-fixed</span> with the two dual encoders, but such ensembles, besides yielding reduced performance, also substantially increase training times (Table \[profiling-table\]).
We also believe that further gains can be achieved by increasing the overall size and depth of dual models such as ConveRT, but this comes at the expense of its efficiency and training speed: note that the current architecture of ConveRT relies on only 6 Transformer layers and embedding dimensionality of 512 (cf., BERT-Large with 24 layers and 1024-dim embeddings).
Conclusion {#s:conclusion}
==========
We have presented intent classification models that rely on sentence encoders which were pretrained on a conversational response selection task. We have demonstrated that using dual encoder models such as USE and ConveRT yield state-of-the-art intent classification results on three diverse intent classification data sets in English. One of these data sets is another contribution of this work: we have proposed a fine-grained single-domain data set spanning 13,083 annotated examples across 77 intents in the banking domain.
The gains with the proposed models over fully fine-tuned BERT-based classifiers are especially pronounced in few-shot scenarios, typically encountered in commercial systems, where only a small set of annotated examples per intent can be guaranteed. Crucially, we have shown that the proposed intent classifiers are extremely lightweight in terms of resources, which makes them widely usable: they can be trained on a standard laptop’s CPU in several minutes. This property holds promise to facilitate the development of intent classifiers even without access to large computational resources, which in turn also increases equality and fairness in research [@Strubell:2019acl].
In future work we will port the efficient intent detectors based on dual encoders to other languages, leveraging multilingual pretrained representations [@Chidambaram:2019repl]. This work has also empirically validated that there is still ample room for improvement in the intent detection task especially in low-data regimes. Thus, similar to recent work [@Upadhyay:2018icassp; @Khalil:2019emnlp; @Liu:2019emnlp], we will also investigate how to transfer intent detectors to low-resource target languages in few-shot and zero-shot scenarios. We will also extend the models to handle out-of-scope prediction [@larson-etal-2019-evaluation].
We have released the code and the data sets online at:\
[[github.com/PolyAI-LDN/polyai-models](github.com/PolyAI-LDN/polyai-models)]{}.
[^1]: [ ]{} Equal contribution. TT is now at Oxford University.
[^2]: For instance, in the e-banking domain intents can be *lost card* or *failed top-up* (see Table \[tab:banking-examples\]). The importance of intent detection is also illustrated by the fact that getting the intent wrong is the first point of failure of any conversational agent.
[^3]: For instance, ConveRT is only 59MB in size, pretrained in less than a day on 12 GPUs [@henderson2019convert].
[^4]: The examples in <span style="font-variant:small-caps;">banking77</span> are also longer on average (12 words) than in <span style="font-variant:small-caps;">hwu64</span> (7 words) or <span style="font-variant:small-caps;">clinc150</span> (8).
[^5]: For reproducibility, we release all training subsets.
[^6]: We have also experimented with ELMo embeddings [@Peters:2018naacl] in the same feature mode, but they are consistently outperformed by all other models in comparison.
[^7]: This performed slightly better than using the \[CLS\] token embedding as sentence representation.
[^8]: https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/1
[^9]: https://github.com/PolyAI-LDN/polyai-models
[^10]: We provide a *colab* script to reproduce these experiments.
[^11]: Note that we cannot evaluate <span style="font-variant:small-caps;">bert-tuned</span> on GPU as it runs out of memory. Similar problems were reported in prior work [@Devlin:2018arxiv]. <span style="font-variant:small-caps;">USE</span> and <span style="font-variant:small-caps;">ConveRT</span> cannot be evaluated on TPUs as they currently lack TPU-specific code.
---
abstract: 'It is of interest to study supergravity solutions preserving a non-minimal fraction of supersymmetries. A necessary condition for supersymmetry to be preserved is that the spacetime admits a Killing spinor and hence a null or timelike Killing vector. Spacetimes admitting a covariantly constant null vector ($CCNV$), and hence a null Killing vector, belong to the Kundt class. We investigate the existence of additional isometries in the class of higher-dimensional $CCNV$ Kundt metrics.'
author:
- |
[**A. Coley$^{1}$, D. McNutt$^{1}$, N. Pelavas$^{1}$**]{}\
$^{1}$Department of Mathematics and Statistics,\
Dalhousie University, Halifax, Nova Scotia,\
Canada B3H 3J5\
\
`aac, ddmcnutt, [email protected]`
title: '$CCNV$ Spacetimes and (Super)symmetries'
---
[Introduction]{}
Supersymmetric supergravity solutions are of interest in the context of the AdS/CFT conjecture, the microscopic properties of black hole entropy, and in a search for a deeper understanding of string theory dualities. For example, in five dimensions solutions preserving various fractions of supersymmetry of $N=2$ gauged supergravity have been studied. The Killing spinor equations imply that supersymmetric solutions preserve $2$, $4$, $6$ or $8$ of the supersymmetries. The $AdS_{5}$ solution with vanishing gauge field strengths and constant scalars preserves all of the supersymmetries. Half supersymmetric solutions in gauged five dimensional supergravity with vector multiplets possess two Dirac Killing spinors and hence two time-like or null Killing vectors. These solutions have been fully classified, using the spinorial geometry method, in [@gaunt]. Indeed, in a number of supergravity theories [@hommth], in order to preserve some supersymmetry it is necessary that the spacetime admits a Killing spinor which then yields a null or timelike Killing vector from its Dirac current. Therefore, a necessary (but not sufficient) condition for supersymmetry to be preserved is that the spacetime admits a null or timelike Killing vector (KV).
In this short communication we study supergravity solutions preserving a non-minimal fraction of supersymmetries, by discussing the existence of additional KVs in the class of higher-dimensional Kundt spacetimes admitting a covariantly constant null vector ($CCNV$) [@MCP]. $CCNV$ spacetimes belong to the Kundt class because they contain a null KV which is geodesic, non-expanding, shear-free and non-twisting. The existence of an additional KV puts constraints on the metric functions and the KV components. KVs that are null or timelike locally or globally (for all values of the coordinate $v$) are of particular importance. As an illustration we present two explicit examples.
A constant scalar invariant ($CSI$) spacetime is a spacetime such that all of the polynomial scalar invariants constructed from the Riemann tensor and its covariant derivatives are constant [@CSI]. The $VSI$ spacetimes are $CSI$ spacetimes for which all of these polynomial scalar invariants vanish. The subset of $CCNV$ spacetimes which are also $CSI$ or $VSI$ are of particular interest. Indeed, it has been shown previously that the higher-dimensional $VSI$ spacetimes with fluxes and dilaton are solutions of type IIB supergravity [@VSISUG]. A subset of Ricci type N $VSI$ spacetimes, the higher-dimensional Weyl type N pp-wave spacetimes, are known to be solutions in type IIB supergravity with an R-R five-form or with NS-NS form fields [@hortseyt; @tseytlin]. In fact, all Ricci type N $VSI$ spacetimes are solutions to supergravity and, moreover, there are $VSI$ spacetime solutions of type IIB supergravity which are of Ricci type III, including the string gyratons, assuming appropriate source fields are provided [@VSISUG]. It has been argued that the $VSI$ supergravity spacetimes are exact string solutions to all orders in the string tension. Those $VSI$ spacetimes in which supersymmetry is preserved admit a $CCNV$. Higher-dimensional $VSI$ spacetime solutions to type IIB supergravity preserving some supersymmetry are of Ricci type N, Weyl type III(a) or N [@CFHP]. It is also known that $AdS_d \times S^{(D-d)}$ spacetimes are supersymmetric $CSI$ solutions of IIB supergravity. There are a number of other $CSI$ spacetimes known to be solutions of supergravity and admit supersymmetries [@CSI], including generalizations of $AdS \times S$ [@Gauntlett], of the chiral null models [@hortseyt], and the string gyratons [@FZ]. Some explicit examples of $CSI$ $CCNV$ Ricci type N supergravity spacetimes have been constructed [@CFH].
[Kundt metrics and $CCNV$ spacetimes]{} \[KundtCCNVsect\]
A spacetime possessing a CCNV, $\ell$, is necessarily of higher-dimensional Kundt form. Local coordinates $(u,v,x^e)$ can be chosen, where $\ell = \partial_v$, so that the metric can be written [@coley] $$ds^2=2 du [d v+H(u,x^e)d u+ \hat W_{ e}(u,x^f)d x^e]+ {g}_{ef}(u,x^g) dx^e dx^f, \label{CCNVKundt}$$ where the metric functions are independent of the light-cone coordinate $v$.
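The defining $CCNV$ property can be checked directly for a four-dimensional instance of (\[CCNVKundt\]) with flat transverse part (our own illustration): since $\ell_a = (du)_a$, covariant constancy reduces to $\Gamma^{u}_{~ab} = 0$ for all $a,b$. A sketch using symbolic computation:

```python
# Sketch: verify that ell = d_v is covariantly constant for a 4D CCNV Kundt
# metric ds^2 = 2 du (dv + H du + Wx dx + Wy dy) + dx^2 + dy^2,
# by checking that every Christoffel symbol Gamma^u_{ab} vanishes.
import sympy as sp

u, v, x, y = sp.symbols('u v x y')
crd = [u, v, x, y]
H  = sp.Function('H')(u, x, y)            # metric functions independent of v
Wx = sp.Function('Wx')(u, x, y)
Wy = sp.Function('Wy')(u, x, y)

g = sp.Matrix([[2*H, 1, Wx, Wy],
               [1,   0, 0,  0 ],
               [Wx,  0, 1,  0 ],
               [Wy,  0, 0,  1 ]])
ginv = g.inv()

def christoffel(c, a, b):
    """Gamma^c_{ab} for the metric g in coordinates crd."""
    return sp.simplify(sum(ginv[c, d]*(sp.diff(g[d, b], crd[a])
                                       + sp.diff(g[d, a], crd[b])
                                       - sp.diff(g[a, b], crd[d]))
                           for d in range(4))/2)

# nabla_a ell_b = -Gamma^u_{ab}: all sixteen components must vanish
ccnv = all(christoffel(0, a, b) == 0 for a in range(4) for b in range(4))
print(ccnv)  # True: d_v is covariantly constant
```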
A Kundt metric admitting a $CCNV$ is $CSI$ if and only if the transverse metric $g_{ef}$ is locally homogeneous [@CSI]. (Due to the local homogeneity of $g_{ef}$ a coordinate transformation can be performed so that the $m_{ie}$ in eqn. (\[ccnvframe\]) below are independent of $u$.) This implies that the Riemann tensor is of type II or less [@coley]. If a $CSI$-$CCNV$ metric satisfies $R_{ab}R^{ab}=0$ then the metric is $VSI$, and the Riemann tensor will be of type III, N or O and the transverse metric is flat (i.e., ${g}_{ef}={\delta}_{ef}$). The constraints on a $CSI$ $CCNV$ spacetime to admit an additional KV are obtained as subcases of the cases analyzed below where the transverse metric is a locally homogeneous.
[Additional isometries]{} \[GeneralCCNV\]
Let us choose the coframe $\{ m^a \}$ $$m^1 = \mathbf{n} = dv + H\,du + \hat W_{e}\, dx^e, \qquad m^2 = \boldsymbol{\ell} = du, \qquad m^i = m^i_{~ e}\, dx^e, \label{ccnvframe}$$ where $m^i_{~e} m_{if} = g_{ef}$ and $m_{ie}m_j^{~e} = \delta_{ij}$. The frame derivatives are given by
$$\boldsymbol{\ell} = D_1 = \partial_v, \qquad \mathbf{n} = D_2 = \partial_u - H\,\partial_v, \qquad \mathbf{m}_i = D_i = m_i^{~ e}\left(\partial_{e} - \hat W_e\, \partial_v\right).$$
The KV can be written as $X = X_1 n + X_2 \ell + X_i m^i$. A coordinate transformation can be made to eliminate $ \hat W_3$ in and we may rotate the frame in order to set $X_3 \neq 0$ and $X_m = 0$ [@MCP]. $X$ is now given by
$$X = X_1\, \mathbf{n} + X_2\, \boldsymbol{\ell} + X_3\, \mathbf{m}^3.$$ Henceforth it will also be assumed that the matrix $m_{ie}$ is upper-triangular.
The Killing equations can then be written as:
$$X_{1,v} = 0, \quad X_{1,u}+X_{2,v} = 0, \quad m_3^{~e}X_{1,e} + X_{3,v} = 0, \quad m_{n}^{~e}X_{1,e} = 0, \label{letsgo}$$ which imply $$X_1 = F_1(u,x^e), \quad X_2 = -D_2(X_1)\,v+F_2(u,x^e), \quad X_3 = -D_3(X_1)\,v +F_3(u,x^e), \label{kvcomps}$$ and
$$D_2 X_2 + \sum_i J_i X_i=0, \label{killeqn40}$$ $$D_i X_2 + D_2 X_i - J_i X_1- \sum_j (A_{ji}+B_{ij})X_j = 0, \label{killeqn50}$$ $$D_j X_i+D_i X_j +2B_{(ij)}X_1 - 2\sum_k \Gamma_{k(ij)} X_k = 0, \label{killeqn60}$$

where $ B_{ij} = m_{ie,u}m_{j}^{~e}$, $W_i = m_i^{~e} \hat W_e$, and $J_i \equiv \Gamma_{2i2} = D_i H - D_2 W_i - B_{ji}W^j$, $A_{ij} \equiv D_{[j} W_{i]} + D_{k[ij]}W^k$, $D_{ijk}\equiv 2m_{i e,f}m_{[j}^{~e}m_{k]}^{~f}$. Further information can be found by taking the Killing equations and applying the commutation relations, which leads to two cases: (1) $D_3 X_1 = 0$, or (2) $\Gamma_{3n2} = \Gamma_{3n3} = \Gamma_{3nm} = 0$.
[Case 1: $D_{3}X_{1}=0$]{}
Using equation and the definition of $F_2$ from , we have that $X_{1}=c_1u+c_2$. If $c_1 \neq 0$ we may always choose coordinates to set $X_1 = u$, while if $c_1 = 0$ we may choose $c_2 = 1$.
[**Subcase 1.1: $F_{3}=0$.**]{} (i) $c_1 \neq 0$, $X_1 = u$; $F_2$ must be of the form
F\_2 = + . \[case11f2\] $H$ and $W_m$ are given in terms of these two functions (where $g' \equiv \frac{dg}{du}$)
H = - + , W\_m = . \[C11Wn\]
\(ii) $c_1 = 0$, $X_1 = 1$; ${F_2}_{,u}=0$, and $H$ and $W_n$ are
$$H = F_2(x^e) + A_0(u, x^r), \qquad W_n = \int D_n(A_0)\, du + C_n(x^e). \label{C11Wn0}$$
In either case, the only requirement on the transverse metric is that it be independent of $u$. The arbitrary functions in this case are $F_{2}$ and the functions arising from integration.
[**Subcase 1.2: $F_{3} \neq 0$.**]{} The transverse metric is now determined by
$$m_{33} = -\int F_{3,3}\, du + A_1(x^3, x^r). \label{m33,u}$$
$$m_{nr,u}= -m_{nr,3} \frac{F_3}{m_{33} X_{1}}, ~~
m_{3r,u}= - \frac{ F_{3,r}}{X_1} - \frac{ m_{3[r,3]}m_3^{~3}F_3}{X_1}. \label{m3r,u}$$
\(i) $c_1 \neq 0$, $X_1 = u$; $F_i(u,x^e)$ ($i=1,2$) are arbitrary functions, $H$ is given by
H = - D\_2F\_2 - - - , \[C12H\] and $W_n$ is determined by
$$D_{2}( u W_n ) + F_3 D_3 W_n + D_n(F_2 - u H ) = 0 . \label{C12Wn}$$
\(ii) $c_1=0$, ($c_2 \neq 0$) $X_1 = 1$; $F_2$ and $F_3$ satisfy
$$D_2F_2 + F_3\, D_3F_2 + \tfrac{1}{2}\, D_2(F_3^{~2}) + \tfrac{1}{2}\, F_3\, D_3(F_3^{~2}) = 0. \label{C12f20}$$
$H$ may be written as
$$H = \int m_{33}\, D_2F_3\, dx^3 + F_2 + \tfrac{1}{2}\, F_3^{~2} + A_2(u, x^r). \label{C12H0}$$
The only equation for $W_n$ is
$$F_3\, D_3W_n + D_2 W_n = D_n(H). \label{C12Wn0}$$
\(iii) $X_1= 0$:
$$F_{3,3} =0,~~
m_{nr,3} = 0,~~
D_2\log(m_{33}) = - \frac{ D_3F_2 }{ F_3 } - D_2\log(F_3). \label{mnr,u0}$$
W\_n = - dx\^3 + E\_[n]{}(u, x\^r), H = - dx\^3 + A\_3(u, x\^r). \[C12Wn00\] There are two further subcases depending upon whether $m_{33,r} = 0$ or not, whence we may further integrate to determine the transverse metric.
[Case 2: $\Gamma_{3ia} = 0$]{} \[GeneralCCNVc2\]
This implies the upper-triangular matrix $m_{ie}$ takes the form: $m_{33} = M_{,3}(u, x^3)$, $m_{3r} = 0$, $m_{nr} = m_{nr}(u,x^r)$, while the $W_n$ must satisfy $D_3 (W_n) = 0$. The remaining Killing equations then simplify. In particular, $B_{(mn)}X_1 = 0$, leading to two subcases: (1) $X_1 = 0$, or (2) $B_{(mn)} = 0$.
[**Case 2.1: $X_1 = 0$, $B_{(mn)} \neq 0$.**]{} $F_{2,r} = 0$, $F_{3,e} = 0$; $m_{ie}$, $H$, $W_n$ given by and .
[**Case 2.2: $B_{(mn)} = 0$, $X_1 \neq 0$.**]{} This case is similar to the subcases dealt with in Case 1.1 (see equations -, -). For $n<p$ the vanishing of $B_{(np)}$ implies $m_{n r , u} = 0$, the special form of $m_{ie}$ implies that $m_r^{~~3} = 0$, and the only non-zero component of the tensor $B$ is $B_{33}$.
If we assume that $F_{1,3} \neq 0$ and $F_1$ is independent of $x^r$: = , = . \[m33comma3\]
Thus $m_{33}(u,x^3)$ is entirely defined by $F_1$. We may solve for $H$ and the $W_n$: H = F\_3 - F\_1 - , W\_n = - .\[case22H\] $F_3$ is of the form:
F\_3 = dx\^3 + A\_6(u, x\^r) \[f3case22\] There are differential equations for $F_2$ in terms of the arbitrary functions $F_1(u, x^3)$ and $A_6(u, x^r)$. These solutions are summarized in Table 2 in [@MCP].
[*Killing Lie Algebra:*]{} There are three particular forms for the KV in those $CCNV$ spacetimes admitting an additional isometry:
$$\begin{aligned}
&(A)& \quad X_A = c\, \mathbf{n} + F_2(u,x^e)\, \boldsymbol{\ell} + F_3(u,x^e)\, \mathbf{m}^3 \\
&(B)& \quad X_B = u\, \mathbf{n} + [F_2(u,x^e)-v]\, \boldsymbol{\ell} + F_3(u,x^e)\, \mathbf{m}^3 \\
&(C)& \quad X_C = F_1(u,x^3)\, \mathbf{n} + [F_2(u,x^e) - D_2F_1\, v]\, \boldsymbol{\ell} + [F_3 - D_3F_1\, v]\, \mathbf{m}^3 .\end{aligned}$$ To determine if these spacetimes admit even more KVs we examine the commutator of $X$ with $\ell$ in each case. In case (A), $[X_A, \ell ] = 0$ and in case (B) $[X_B, \ell] = - \ell$, and thus there are no additional KVs. In the most general case $Y_C \equiv [X_C, \ell]$ can yield a new KV; $Y_C = D_2F_1\, \ell + D_3F_1\, m_3$. However, this will always be spacelike since $(D_3F_1)^2 > 0$. Note that $[Y_C, \ell ] = 0$, while, in general, $[Y_C, X_C] \neq 0$.
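The commutator claims can be checked in coordinates; the following sketch is our own illustration in four dimensions with generic $v$-independent metric functions, and the overall sign of the case (B) commutator depends on the ordering convention for the Lie bracket:

```python
# Sketch: Lie brackets of the case (A) and (B) Killing vectors with ell.
# Case (A) commutes with ell; case (B) gives a constant multiple of ell,
# so neither case yields a genuinely new Killing vector.
import sympy as sp

u, v, x, y = sp.symbols('u v x y')
crd = [u, v, x, y]
H, Wx, F2, F3 = [sp.Function(n)(u, x, y) for n in ('H', 'Wx', 'F2', 'F3')]

# frame vectors in coordinates: n = d_u - H d_v, ell = d_v, m3 = d_x - Wx d_v
n_vec = [1, -H, 0, 0]
ell   = [0, 1, 0, 0]
m3    = [0, -Wx, 1, 0]

def lie(Xc, Yc):
    """Lie bracket of vector fields given by coordinate components."""
    return [sum(Xc[a]*sp.diff(Yc[b], crd[a]) - Yc[a]*sp.diff(Xc[b], crd[a])
                for a in range(4)) for b in range(4)]

def combine(a1, a2, a3):   # X = a1 n + a2 ell + a3 m3
    return [sp.expand(a1*n_vec[b] + a2*ell[b] + a3*m3[b]) for b in range(4)]

XA = combine(sp.Symbol('c'), F2, F3)        # case (A)
XB = combine(u, F2 - v, F3)                 # case (B)

print([sp.simplify(c) for c in lie(XA, ell)])  # [0, 0, 0, 0]
bB = [sp.simplify(c) for c in lie(XB, ell)]
print(bB)  # [0, 1, 0, 0], a constant multiple of ell
```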
[*Non-spacelike isometries:*]{} Let us consider the set of $CCNV$ spacetimes admitting an additional non-spacelike KV, so that
$$D_3(X_1)^2\, v^2 + 2\left(D_2(X_1)\, X_1 - D_3(X_1)\, F_3\right) v + F_3^{~ 2} - 2 X_1 F_2 \leq 0 .$$ If the KV field is non-spacelike for all values of $v$, then $D_3(X_1)$ must vanish and $X_1$ is constant. Therefore, various subcases discussed above are excluded. In the remaining cases
$$F_3^{~ 2} - 2 X_1 F_2 \leq 0. \label{ntl}$$ In the timelike case, the subcases with $X_1=0$ are no longer valid since $F_3^{~2} < 0$. In the case that $X$ is null and $c_2 \neq 0$ we can rescale $n$ so that $2F_2 = F_3^{~2}$. We can then integrate out the various cases: If $F_3=0$, $F_2$ must vanish as well and $X = n$. The remaining metric functions are now $H = A_0(u,x^r)$ and $W_n = \int D_n(A_0) du + C_n(x^e)$. The transverse metric is unaffected. If $F_3 \neq 0$, $H = A_2(u,x^r)$, $D_2(W_n) + D_3(W_n)F_3 = D_n(A_2)$, and ${(\log m_{33})_{,u}}= {D_2(\log F_3)}$. If $c_2 = 0$, $F_2$ must be constant, and the KV is a scalar multiple of $\ell$ and can be disregarded. The remaining cases are just a repetition of the above with added constraints. The $CSI$ $CCNV$ spacetimes admitting KVs which are non-spacelike for all values of $v$ are the subcases of the above cases where the transverse space is locally homogeneous.
[Explicit examples]{}
[**I:**]{} We first present an explicit example for the case where $X_{1}=u$ and $F_{3}\neq 0$. Assuming that $F_{3}(u,x^i)=\epsilon u m_{33}$ and $\epsilon$ is a nonzero constant, we obtain $$m_{is,u}+\epsilon m_{is,3}=0 \label{miseqs}$$ and the transverse metric is thus given by $$m_{is}=m_{is}(x^3-\epsilon u,x^n)\, . \label{missols}$$ We have the algebraic solution $$\hat{W}_{3}=-\frac{1}{\epsilon}(H+F_{2,u})-F_{2,3}-\epsilon m_{33}^{\ \ 2}, \label{w3sol}$$ where $F_{2}(u,x^i)$ is an arbitrary function and $H$ is given by $$H(u,x^i)=\frac{1}{u}\left[-\int^{u}S(z,x^3-\epsilon u+\epsilon z,x^n) dz
+ A(x^3-\epsilon u,x^n) \right], \label{hsol}$$ where $A$ is an arbitrary function and $S$ is given by $$S(u,x^3,x^n)=(uF_{2,u})_{u}+\epsilon u F_{2,3u}+\epsilon^2 u(m_{33}^{\ \ 2})_{u}\, .$$ Furthermore, the solution for $\hat{W}_{n}$, $n=4,\ldots,N$ is $$\hat{W}_{n}(u,x^i)=\frac{1}{u}\left[-\int^{u}T_{n}(z,x^3-\epsilon u+\epsilon z,x^m) dz + B_{n}(x^3-\epsilon u,x^m) \right] \label{wnsol}$$ where $B_{n}$ are arbitrary functions and $T_{n}$ is given by $$T_{n}(u,x^3,x^m)=\left[(uF_{2})_{u}+\epsilon u F_{2,3}+\epsilon^2 um_{33}^{\ \ 2}\right]_{,n} + \epsilon m_{3n}m_{33}\, .$$
In this example, the KV and its magnitude are given by $$X=u\mathbf{n}+(-v+F_{2})\boldsymbol{\ell}+\epsilon u m_{33}\mathbf{m}^3,~~
X_{a}X^{a}=-2uv+2uF_{2}+(\epsilon u m_{33})^2 \, . \label{Xex1}$$ Clearly, the causal character of $X$ will depend on the choice of $F_{2}(u,x^i)$, and for any fixed $(u,x^i)$ $X$ is timelike or null for appropriately chosen values of $v$. Moreover, (\[Xex1\]) is an example of case (B); therefore the commutator of $X$ and $\boldsymbol{\ell}$ gives rise to a constant rescaling of $\boldsymbol{\ell}$ and, in general, there are no more KVs. The additional KV is only timelike or null locally (for a restricted range of coordinate values). However, the solutions can be extended smoothly so that the KV is timelike or null on a physically interesting part of spacetime. For example, a solution valid on $u>0$, $v>0$ (with $F_{2}<0$), can be smoothly matched across $u=v=0$ to a solution valid on $u<0$, $v<0$ (with $F_{2}>0$), so that the KV is timelike on the resulting coordinate patch.
As an illustration, suppose the $m_{3s}$ are separable as follows $$m_{3s}=(x^3-\epsilon u)^{p_{s}}h_{s}(x^n)$$ and $F_{2}$ has the form $$F_{2}=-\frac{\epsilon}{2p_{3}+1}(x^3-\epsilon u)^{2p_{3}+1}h_{3}^{\ 2}+g(u,x^n),$$ where the $p_{s}$ are constants and $h_{s}$, $g$ arbitrary functions. Thus, from (\[hsol\]) $$H=-\epsilon^2(x^3-\epsilon u)^{2p_{3}-1}[x^3-\epsilon(p_{3}+1)u]h_{3}^{\ 2}-g_{,u} + u^{-1}A(x^3-\epsilon u,x^n),$$ and hence from (\[w3sol\]) $$\hat{W}_{3}=-\epsilon^2p_{3}u(x^3-\epsilon u)^{2p_{3}-1}h_{3}^{\ 2}-(\epsilon u)^{-1}A(x^3-\epsilon u,x^n).$$ Last, equation (\[wnsol\]) gives $$\begin{aligned}
\hat{W}_{n}=\epsilon(x^3-\epsilon u)^{p_{3}}h_{3}\left\{\frac{2(x^3-\epsilon u)^{p_{3}}}{2p_{3}+1}\left[x^3-\epsilon\left(p_{3}+\frac{3}{2}\right)u\right]h_{3,n} \right. & & \nonumber \\
\biggl.\mbox{}-(x^3-\epsilon u)^{p_n}h_{n} \biggr\}-g_{,n}+u^{-1}B_{n}(x^3-\epsilon u,x^m)\,. & &\end{aligned}$$
[**II:**]{} A second example corresponding to the distinct subcase where $X_{1}=1$ and assuming $F_{3}(u,x^i)=\epsilon m_{33}$ gives the same solutions (\[missols\]) for the transverse metric (although, in this case, the additional KV is globally timelike or null). In addition, we have $$\hat{W}_{3}=\int H_{,3}du + \epsilon^{-1}(F_{2}+f) \label{w3sol2}$$ where $H(u,x^i)$, $F_{2}(x^3-\epsilon u,x^n)$ and $f(x^{i})$ are arbitrary functions. Last, the metric functions $\hat{W}_{n}$ are $$\hat{W}_{n}(u,x^i)=\int^{u}L_{n}(z,x^3-\epsilon u+\epsilon z,x^m) dz + E_{n}(x^3-\epsilon u,x^m), \label{wnsol2}$$ with $E_{n}$ arbitrary and $L_{n}$ given by $$L_{n}(u,x^3,x^m)=H_{,n}+\epsilon\int H_{,3n}du + f_{,n}\, . \label{Lnsol2}$$ The KV and its magnitude are $$\begin{aligned}
X=\mathbf{n}+F_{2}{\bl}+\epsilon m_{33}\mathbf{m}^3,& & X_{a}X^{a}=2F_{2}+(\epsilon m_{33})^2 \, . \label{Xex2}\end{aligned}$$ Since $F_{2}$ and $m_{33}$ have the same functional dependence there always exists $F_2$ such that $X$ is everywhere timelike or null. The KV (\[Xex2\]) is an example of case (A) and thus $X$ and ${\bl}$ commute and hence no additional KVs arise. For instance, suppose $H=H(x^3-\epsilon u,x^n)$ and $f$ is analytic at $x^3=0$ (say) then (\[w3sol2\]) and (\[wnsol2\]) simplify to give $$\begin{aligned}
\hat{W}_{3} & = & -\epsilon^{-1}(H-F_2 - f), \\
\hat{W}_{n} & = & \epsilon^{-1}\sum^{\infty}_{p=0}\partial_{n}\partial_{3}^{\ p}f(0,x^m)\frac{(x^3)^{p+1}}{(p+1)!} + E_{n}(x^3-\epsilon u,x^m) \, .\end{aligned}$$
This explicit solution is an example of a spacetime admitting 2 global null or timelike KVs, and is of importance in the study of supergravity solutions preserving a non-minimal fraction of supersymmetries.
---
address:
- 'Department of Mathematics, MIT and Northwestern University'
- 'Department of Mathematics, Northwestern University'
author:
- András Vasy
- Jared Wunsch
date: 'October 21, 2004'
title: 'Absence of super-exponentially decaying eigenfunctions on Riemannian manifolds with pinched negative curvature'
---
[^1] [^2]
Introduction and statement of results
=====================================
Let $(X,g)$ be a metrically complete, simply connected Riemannian manifold with bounded geometry and pinched negative curvature, i.e. there are constants $a>b>0$ such that $-a^2<K<-b^2$ for all sectional curvatures $K$. Here bounded geometry is used in the sense of Shubin, [@Shubin:Spectral Appendix 1], namely that all covariant derivatives of the Riemannian curvature tensor are bounded and the injectivity radius is uniformly bounded below by a positive constant. We show that there are no superexponentially decaying eigenfunctions of $\Delta$ on $X$; here $\Delta$ is the positive Laplacian of $g$. That is, fix some $o\in X$, and let $r(p)=d(p,o)$, $p\in X$, be the distance function. Then:
\[thm:Lap\] Suppose that $(X,g)$ is as above. If $(\Delta-\lambda)\psi=0$ and $\psi\in e^{-\alpha r}L^2(X)$ for all $\alpha$, then $\psi$ is identically $0$.
Since the curvature assumptions imply exponential volume growth, and due to elliptic regularity, the $L^2$ norm may be replaced by any $L^p$, or indeed Sobolev, norm. This result strengthens Mazzeo’s unique continuation theorem at infinity [@Mazzeo:Unique] by eliminating the asymptotic curvature assumption (4) there.
As shown below, the negative curvature assumption enters via the strict uniform convexity of the geodesic spheres centered at $o$, much as in the work of Mazzeo [@Mazzeo:Unique]. Thus, as observed by Rafe Mazzeo, the arguments go through equally well if $X$ is replaced by a manifold $M$ which is the union of a ‘core’ $M_0$ (not necessarily compact) and a product manifold $M_1=(1,\infty)_r\times N$, with a Riemannian metric $g=dr^2+k(r,.)$, $k$ a metric on $N(r)=\{r\}\times N$, $M_1$ having bounded geometry, provided that the second fundamental form of $N(r)$ is strictly positive, uniformly in $r$. Indeed, the assumptions on $\psi$ only need to be imposed on $M_1$, see Remark \[rem:loc\].
Following [@Shubin:Spectral Appendix 1] we remark that an equivalent formulation of the definition of a manifold of bounded geometry is the requirement that the injectivity radius is bounded below by a positive constant $r_{\mathrm{inj}}$, and that the transition functions between intersecting geodesic normal coordinate charts (called canonical coordinates in [@Shubin:Spectral]) of radius $<r_{\mathrm{inj}}/2$, say, are ${{\mathcal C}^{\infty}}$ with uniformly bounded derivatives (with bound independent of the base points). We let ${\operatorname{Diff}}(X)$ denote the algebra of differential operators corresponding to the bounded geometry, called the algebra of ${{\mathcal C}^{\infty}}$-bounded differential operators in [@Shubin:Spectral Appendix 1]. Thus, in any canonical coordinates, $A\in{\operatorname{Diff}}^m(X)$ has the form $\sum_{|\alpha|\leq m} a_\alpha(x)D_x^\alpha$, with ${\partial}^\beta a_\alpha$ uniformly bounded, with bound $C_{|\beta|}$ independent of the canonical coordinate chart, for all multiindices $\beta$. We also write $H^k(X)$ for the $L^2$-based Sobolev spaces below.
If $E$ is a vector bundle of bounded geometry, in the sense of [@Shubin:Spectral Appendix 1], then Theorem \[thm:Lap\] is also valid for $\Delta$ replaced by any second order differential operator $P\in {\operatorname{Diff}}^2(X,E)$ acting on sections of $E$ with scalar principal symbol equal to that of $\Delta$, i.e. the metric function on $T^*X$. Indeed, we can even localize at infinity, i.e. assume $Pu=0$ only near infinity, and obtain the conclusion that $u$ is $0$ near infinity.
\[thm:perturb\] Let $(X,g)$ be as above. Suppose $P\in{\operatorname{Diff}}^2(X,E)$, $\sigma_2(P)=g\,{\operatorname{Id}}$, where $g$ denotes the metric function on $T^*X$. If $P\psi\in{{\mathcal C}^{\infty}}_c(X,E)$ and $\psi\in e^{-\alpha r}L^2(X,E)$ for all $\alpha$ then $\psi\in{{\mathcal C}^{\infty}}_c(X,E)$.
This theorem, together with the standard unique continuation result, [@Hor Theorem 17.2.1], implies that if $P\psi=0$ on $X$ then $\psi=0$ on $X$, just as in Theorem \[thm:Lap\].
In addition, the Sobolev order of the assumptions on $\psi$ and $P\psi$ is immaterial. In fact, as discussed in Remark \[rem:loc\], the argument localizes near infinity, hence we may assume $P\psi\in{{\mathcal C}^{-\infty}}_c(X,E)$, and then elliptic regularity allows us to conclude that if $\psi$ is in an exponentially weighted Sobolev space near infinity (with possibly a negative exponent) then it is in the corresponding weighted $L^2$-space.
We also remark on the bounded geometry hypotheses, more precisely on the assumptions on covariant derivatives. The results of Anderson and Schoen [@Anderson-Schoen:Positive] on harmonic functions on negatively curved spaces are results below the continuous spectrum. Thus, these are elliptic problems even at infinity, in a rather strong sense – stronger than just the uniform ellipticity on manifolds with bounded geometry discussed below. In particular, the notion of positivity and the maximum principle are available, and can be used to eliminate conditions on covariant derivatives. However, for eigenfunctions embedded in the continuous spectrum such tools are unavailable. Indeed, for $\lambda$ large, $\Delta-\lambda$ can be seen to lose ‘strong’ ellipticity (so e.g. it is not Fredholm on $L^2(X)$), and is in many ways (micro)hyperbolic, at least in settings with an additional structure (cf. the discussion in [@RBMSpec] in asymptotically flat spaces). In such a setting commutator estimates are very natural, and have a long tradition in PDEs; this explains the role of the assumption on the covariant derivatives.
We are very grateful to Rafe Mazzeo and Richard Melrose for numerous very helpful conversations, and for their interest in the present work.
The proofs
==========
To make the argument more transparent, we write up the proof of Theorem \[thm:Lap\], at each step pointing out any significant changes that are needed to prove Theorem \[thm:perturb\]. The proofs are a version of Carleman estimates, see below, at least for self-adjoint operators, but we phrase these somewhat differently, in the spirit of operators with complex symbol and codimension 2 characteristic variety (which in this case is in the semiclassical limit) on which the Poisson bracket of the real and imaginary part of the (in this case, semiclassical) principal symbol is positive. This corresponds to non-solvability of the inhomogeneous PDE in the sense of [@Hor Section 26.4]; see also [@Zworski:Numerical] for a recent discussion.
We consider eigenfunctions of $\Delta$ that are superexponentially decaying: $(\Delta-\lambda)\psi=0$, and $\psi\in e^{-\alpha r} L^2(X)$ for all $\alpha$, $\|\psi\|_{L^2(X)}=1$ (for convenience). Note that $\lambda$ is real by the self-adjointness of $\Delta$. Moreover, $\psi\in{{\mathcal C}^{\infty}}(X)$ by standard elliptic regularity, and indeed $\psi\in e^{-\alpha r'}H^m(X)$ for all $m$, where $r'$ is a smoothed version of $r$, changed only near the origin. It is convenient to assume that $r'>0$, so $\inf_X r'>0$.
For $\alpha$ real, we consider $$P_\alpha=e^{\alpha r'}(\Delta-\lambda)e^{-\alpha r'}.$$ Here we need to use $r'$ since $r$ is not smooth at $o$. However, for notational simplicity, to avoid an additional compactly supported error term on almost every line, we ignore this, and simply add back a compactly supported error term in (\[eq:h-comm-15\]).
Let $${\operatorname{Re}}P_\alpha=\frac{1}{2}(P_\alpha+P_\alpha^*),
\ {\operatorname{Im}}P_\alpha=\frac{1}{2i}(P_\alpha-P_\alpha^*),$$ be the symmetric and skew-symmetric parts of $P_\alpha$. Thus, $P_\alpha
={\operatorname{Re}}P_\alpha+i{\operatorname{Im}}P_\alpha$, and ${\operatorname{Re}}P_\alpha$, ${\operatorname{Im}}P_\alpha$ are symmetric. Note also that $P_\alpha \psi_\alpha=0$ where $\psi_\alpha
=e^{\alpha r}\psi$. Thus, $$\label{eq:h-comm-8}
0=\|P_\alpha \psi_\alpha\|^2=\|{\operatorname{Re}}P_\alpha \psi_\alpha\|^2+\|{\operatorname{Im}}P_\alpha
\psi_\alpha\|^2
+\langle i[{\operatorname{Re}}P_\alpha,{\operatorname{Im}}P_\alpha]\psi_\alpha,\psi_\alpha\rangle.$$ Roughly speaking, this will give a contradiction provided the commutator is positive – although in the presence of error terms one needs to be a little more careful.
We remark that this argument parallels the last part of the $N$-body argument of [@Vasy:Exponential], showing exponential decay and unique continuation results for $N$-particle Hamiltonians with second order interactions, which in turn placed the work of Froese and Herbst [@FroExp] in potential scattering into this framework. However, in [@Vasy:Exponential] (as in [@FroExp]) this is the simplest part of the argument; it is much more work to show that $L^2$-eigenfunctions decay at a rate given by the next threshold above the eigenvalue $\lambda$ – hence superexponentially in the absence of such thresholds.
We now relate our arguments to the usual Carleman-type arguments, at least if $P_0$ is symmetric (as is the case for self-adjoint operators in ${\operatorname{Diff}}^2(X)$ with the same principal symbol as $\Delta$). In those, one considers $P_\alpha$ and $P_{-\alpha}$, with the same notation as above, and computes $\|P_\alpha \psi_\alpha\|^2\pm
\|P_{-\alpha}\psi_\alpha\|^2$. Since $P_0$ is symmetric, indeed self-adjoint, $P_{-\alpha}=P_\alpha^*$, so $$\begin{split}\label{eq:Carleman}
&\|P_\alpha \psi_\alpha\|^2+
\|P_{-\alpha}\psi_\alpha\|^2=2\|{\operatorname{Re}}P_\alpha\psi_\alpha\|^2
+2\|{\operatorname{Im}}P_\alpha\psi_\alpha\|^2,\\
&\|P_\alpha \psi_\alpha\|^2-
\|P_{-\alpha}\psi_\alpha\|^2=2\langle i[{\operatorname{Re}}P_\alpha,{\operatorname{Im}}P_\alpha]\psi_\alpha,
\psi_\alpha\rangle.
\end{split}$$ Thus, the usual Carleman argument breaks up into two pieces, and is completely equivalent to . However, dividing up $P_\alpha$ into its symmetric and skew-symmetric parts makes the calculations below more systematic, which is particularly apparent in how the double commutator appears in ${\operatorname{Re}}P_\alpha$ below. This double commutator, in turn, makes it clear why various terms, which one might expect by expanding out the squares $\|P_{\pm\alpha} \psi_\alpha\|^2$, do not appear in the evaluation of $\|P_\alpha \psi_\alpha\|^2\pm
\|P_{-\alpha}\psi_\alpha\|^2$.
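As a purely illustrative finite-dimensional sanity check (not part of the proof), one can verify the identity $\|P\psi\|^2=\|{\operatorname{Re}}P\,\psi\|^2+\|{\operatorname{Im}}P\,\psi\|^2+\langle i[{\operatorname{Re}}P,{\operatorname{Im}}P]\psi,\psi\rangle$ numerically, replacing the operators by random real symmetric matrices (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random real symmetric matrices playing the roles of Re P and Im P
A = rng.standard_normal((n, n)); A = A + A.T
B = rng.standard_normal((n, n)); B = B + B.T
P = A + 1j * B                      # P = Re P + i Im P
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lhs = np.linalg.norm(P @ psi) ** 2
comm = 1j * (A @ B - B @ A)         # i[A, B], Hermitian since A, B are symmetric
rhs = (np.linalg.norm(A @ psi) ** 2 + np.linalg.norm(B @ psi) ** 2
       + np.vdot(psi, comm @ psi).real)
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```

The cross terms in the expansion of the square produce exactly the commutator term, which is the mechanism exploited throughout the argument.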
Due to the prominent role played by $r$, we work in Riemannian normal coordinates. So let $g=dr^2+k(r,.)$ be the metric on $X$, where $k$ is the metric on the geodesic sphere of radius $r$, denoted by $S(r)$, and let $A(r,.)\,dr\wedge \omega$ denote the volume element, $\omega$ being the standard volume form on the unit sphere. By the bounded geometry assumptions, ${\partial}_r\log A
=-\Delta r\in{{\mathcal C}^{\infty}}_{\mathrm{b}}(X)={\operatorname{Diff}}^0(X)$ (see e.g. [@Zhu:Comparison Lemma 2.3] for the identity), i.e. is uniformly bounded with analogous conditions on the covariant derivatives. Then $$\label{eq:Lap-normal}
-\Delta={\partial}_r^2+({\partial}_r\log A){\partial}_r-\Delta_{S(r)}.$$
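As a quick check of the sign conventions (valid away from $o$, where $r$ is smooth): applying the positive Laplacian to a function of $r$ alone gives $$\Delta f(r)=-f''(r)-({\partial}_r\log A)\,f'(r),$$ so taking $f(r)=r$ recovers the identity ${\partial}_r\log A=-\Delta r$ quoted above.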
Now, $$\begin{split}
&P_\alpha=\Delta-\lambda+e^{\alpha r}[\Delta,e^{-\alpha r}],\\
&{\operatorname{Re}}P_\alpha=\Delta-\lambda+\frac{1}{2}[e^{\alpha r},[\Delta,e^{-\alpha r}]]\\
&{\operatorname{Im}}P_\alpha=\frac{1}{2i}(e^{\alpha r}[\Delta,e^{-\alpha r}]
+[\Delta,e^{-\alpha r}]e^{\alpha r}).
\end{split}$$ Here the expressions for ${\operatorname{Re}}P_\alpha$ and ${\operatorname{Im}}P_\alpha$ follow directly from the definition of the symmetric and skew-symmetric parts, using that $\Delta$ and $e^{\pm\alpha r}$ are symmetric.
In the double commutator in the expression for ${\operatorname{Re}}P_\alpha$ above, changing $\Delta$ by a first order operator would not alter the result, as commutation with a scalar reduces the order by $1$. Thus, in view of (\[eq:Lap-normal\]), in the double commutator in ${\operatorname{Re}}P_\alpha$ all terms but ${\partial}_r^2$ give vanishing contribution, so we immediately see that $${\operatorname{Re}}P_\alpha=\Delta-\lambda-\alpha^2.$$ We next compute the skew-symmetric part. This is $${\operatorname{Im}}P_\alpha=\frac{1}{i}(2\alpha{\partial}_r+\alpha({\partial}_r\log A)).$$ Thus, $$i[{\operatorname{Re}}P_\alpha,{\operatorname{Im}}P_\alpha]=\alpha[\Delta,2{\partial}_r+({\partial}_r\log A)].$$
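To verify these formulas one may also conjugate term by term: since $e^{\alpha r}{\partial}_r e^{-\alpha r}={\partial}_r-\alpha$, the expression for $-\Delta$ in normal coordinates gives $$e^{\alpha r}\Delta e^{-\alpha r}=\Delta+2\alpha{\partial}_r-\alpha^2+\alpha({\partial}_r\log A),
\qquad\text{so}\qquad
P_\alpha=(\Delta-\lambda-\alpha^2)+\bigl(2\alpha{\partial}_r+\alpha({\partial}_r\log A)\bigr).$$ The first bracket is symmetric, and the second is skew-symmetric since ${\partial}_r^*=-{\partial}_r-({\partial}_r\log A)$ with respect to the volume form $A\,dr\wedge \omega$, which yields the stated ${\operatorname{Re}}P_\alpha$ and ${\operatorname{Im}}P_\alpha$.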
The crucial estimate for this commutator that we need below is that there is $c>0$ such that $$\label{eq:comm-8}
[\Delta,2{\partial}_r+({\partial}_r\log A)]\geq c\Delta_{S(r)}+R,\ R\in{\operatorname{Diff}}^1(X);$$ here $R$ is symmetric and the inequality is understood in the sense of quadratic forms, e.g. with domain $H^2(X)$. Since the commutator is a priori in ${\operatorname{Diff}}^2(X)$, this means that we merely need to calculate its principal symbol, which in turn only depends on the principal symbols of the commutants. Thus, with $H_g$ denoting the Hamilton vector field of $g$, and $\sigma$ the canonical dual variable of $r$, with respect to the product decomposition $(0,\infty)\times S$ of $X\setminus o$, the principal symbol of ${\partial}_r$ is $\sigma_1({\partial}_r)=i\sigma$, and $$\sigma_2([\Delta,2{\partial}_r])=2H_g \sigma.$$ It is convenient to rephrase this by noting that $2{\partial}_r=-[\Delta,r]+R'$, $R'\in{\operatorname{Diff}}^0(X)$, so $2i\sigma=\sigma_1(2{\partial}_r)=iH_g r$, and hence $\sigma_2([\Delta,2{\partial}_r])=H_g^2 r$. The estimate we need then is that there is $c>0$ such that $$\label{eq:Hessian-est}
H_g^2 r\geq ck.$$ Indeed, (\[eq:Hessian-est\]) implies (\[eq:comm-8\]), since for each $x\in X$, both sides of (\[eq:Hessian-est\]) are quadratic forms on $T^*X$, depending smoothly on $x$, so their difference can be written as $\sum a_{ij}(x)\xi_i\xi_j$ ($\xi_i$ are canonical dual variables of local coordinates $x_i$), with $a_{ij}$ a non-negative matrix. This in turn is the principal symbol of $\sum_{ij}D_{x_i}^*a_{ij}(x)D_{x_j}$, and $$\langle \sum_{ij}D_{x_i}^*a_{ij}(x)D_{x_j} v,v\rangle=
\int_X \sum_{ij}a_{ij}(x) D_{x_i}v\,\overline{D_{x_j}v}\,dg\geq 0.$$
To analyze (\[eq:Hessian-est\]), recall that arclength parameterized geodesics of $g$ are projections to $X$ of the integral curves of $\frac{1}{2}H_g$ inside $S^*X$, the unit cosphere bundle of $X$. Thus, (\[eq:Hessian-est\]) tells us that $r$ is strictly convex along geodesics tangent to $S(r_0)$ at the point of contact. Equivalently, the Hessian $\nabla dr$, which is the form on the fibers of $T X$ dual to $H_g^2 r$, is strictly positive on $TS(r_0)$, uniformly as $r_0\to\infty$. As $r=r_0$ defines $S(r_0)$, with $|\nabla r|=1$, this Hessian equals the second fundamental form of $S(r_0)$; hence (\[eq:Hessian-est\]) is also equivalent to the uniform convexity of the hypersurfaces $S(r_0)$.
Now, (\[eq:Hessian-est\]) follows immediately when the sectional curvatures of $X$ are bounded above by a negative constant $-b^2$, since by the Hessian comparison theorem (see e.g. [@Schoen-Yau:DG Theorem 1.1]), $H_g^2 r|_{T_{S(r_0)}X}\geq H_{g_0}^2 r|_{T_{S(r_0)}X}$, where $g_0$ is the metric with constant negative sectional curvature $-b^2$, and the right hand side is $b\coth br\geq b$ (cf. [@Schoen-Yau:DG Equation (1.7)]).
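For orientation, in the constant-curvature model everything is explicit: for $g_0=dr^2+b^{-2}\sinh^2(br)\,d\omega^2$ the transverse metric is $k=b^{-2}\sinh^2(br)\,d\omega^2$, and the second fundamental form of $S(r_0)$ is $$\nabla dr\big|_{TS(r_0)}=\tfrac{1}{2}{\partial}_r k\big|_{r=r_0}=b\coth(br_0)\,k\geq b\,k,$$ which exhibits the claimed uniform convexity, with constant comparable to $b$.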
We will consider $\alpha\to\infty$, but for notational reasons it is convenient to work in the semiclassical setting. Thus, let $h=\alpha^{-1}$, $h\in(0,1]$, $\Delta_h=h^2\Delta$, $\Delta_{S(r),h}=h^2\Delta_{S(r)}$, and slightly abuse notation by writing $P_h=h^2e^{r/h}(\Delta-\lambda)e^{-r/h}$, so $$\begin{split}
&{\operatorname{Re}}P_h=\Delta_h-1-h^2\lambda,\ {\operatorname{Im}}P_h=\frac{1}{i}(2h{\partial}_r+h({\partial}_r\log A)),\\
&i[{\operatorname{Re}}P_h,{\operatorname{Im}}P_h]\geq ch\Delta_{S(r),h}+h^3 R,\ R\in{\operatorname{Diff}}^1(X).
\end{split}$$ We denote the space of semiclassical differential operators of order $m$ by ${\operatorname{Diff}}^m_h(X)$. We recall that $A\in{\operatorname{Diff}}^m_h(X)$ means that, in the usual multiindex notation, $A=\sum_{|\alpha|\leq m}
a_\alpha(x) (hD_x)^\alpha$ locally; in our bounded geometry setting we still impose, as for standard differential operators, that for all multiindices $\beta$, ${\partial}^\beta a_\alpha$ is bounded uniformly in all Riemannian normal coordinate charts of radius $R$ ($R$ less than half the injectivity radius, say), with bound only dependent on $|\beta|$. Then, weakening the above statements somewhat, in a way that still suffices below, $$\label{eq:h-comm-5}
{\operatorname{Re}}P_h=\Delta_h-1+hR_1',\ {\operatorname{Im}}P_h=\frac{1}{i}(2h{\partial}_r+hR_2'),
\ i[{\operatorname{Re}}P_h,{\operatorname{Im}}P_h]\geq ch\Delta_{S(r),h}+h^2 R_3'.$$ with $R_1',R_2'\in{\operatorname{Diff}}^1_h(X)$, $R_3'\in{\operatorname{Diff}}^2_h(X)$.
We stated (\[eq:h-comm-5\]) in a weakened form to make it only depend on the principal symbol of $\Delta$. Namely, if $\Delta$ is replaced by any operator $\Delta+Q$, $Q\in{\operatorname{Diff}}^1(X,E)$ (not necessarily symmetric), and $P'_h=h^2 e^{r/h}(\Delta+Q-\lambda)e^{-r/h}$, then $P'_h-P_h=h^2 e^{r/h}Qe^{-r/h}\in h{\operatorname{Diff}}^1_h(X,E)$, so ${\operatorname{Re}}P'_h-{\operatorname{Re}}P_h,{\operatorname{Im}}P'_h-{\operatorname{Im}}P_h\in h{\operatorname{Diff}}^1_h(X,E)$, and thus $$\begin{split}
&i[{\operatorname{Re}}P'_h,{\operatorname{Im}}P'_h]-i[{\operatorname{Re}}P_h,{\operatorname{Im}}P_h]\\
&=i[{\operatorname{Re}}P'_h-{\operatorname{Re}}P_h,{\operatorname{Im}}P'_h]+i[{\operatorname{Re}}P_h,{\operatorname{Im}}P'_h-{\operatorname{Im}}P_h]\in h^2{\operatorname{Diff}}^2_h(X,E),
\end{split}$$ where we used that ${\operatorname{Re}}P_h,{\operatorname{Im}}P_h$ have scalar principal symbols, hence so do ${\operatorname{Re}}P'_h$ and ${\operatorname{Im}}P'_h$, giving the extra $h$ (compared to the order of the product) and the lower order in the commutators. In other words, (\[eq:h-comm-5\]) still holds for $P_h$ replaced by $P'_h$.
In the above calculations we ignored a compact subset of $X$, so we need to add a compactly supported error. To avoid overburdening the notation, we write $r$ for the smoothed out distance function, denoted by $r'$ above, so for $r$ sufficiently large, $r(p)=d(p,o)$. Thus, we have shown that for some $c>0$, $$\label{eq:h-comm-15}
i[{\operatorname{Re}}P_h,{\operatorname{Im}}P_h]\geq ch\Delta_{S(r),h}+h^2R_0+h R_0',\ R_0,R_0'
\in{\operatorname{Diff}}^2_h(X),$$ $R_0'$ supported in $r\leq r_1$ for some $r_1>0$, with the inequality holding in the sense of operators. Since $\Delta_{S,h}=\Delta_h-(h D_r)^2-h(D_r \log A) hD_r$, this estimate implies $$\label{eq:h-comm-16}
\langle i[{\operatorname{Re}}P_h,{\operatorname{Im}}P_h]\psi_h,\psi_h\rangle
\geq\langle (ch+h R_1 {\operatorname{Re}}P_h+h R_2{\operatorname{Im}}P_h+h^2R_3+hR_4)
\psi_h,\psi_h\rangle$$ with $R_1\in{\operatorname{Diff}}^0_h(X)$, $R_2\in{\operatorname{Diff}}^1_h(X)$ and $R_3,R_4\in{\operatorname{Diff}}^2_h(X)$, $R_4$ having compact support in $r\leq r_1$. (In fact, for our purposes the compact support assumption is equivalent to assuming that $R_4$ is $o(1)$ as $r\to\infty$, as it can be absorbed in the first term for $r$ sufficiently large.)
We now show how to use (\[eq:h-comm-16\]) to prove unique continuation at infinity. To be systematic, we set this part up somewhat abstractly. Recall that $P_h\in{\operatorname{Diff}}^2_h(X,E)$ is elliptic (or more precisely uniformly elliptic, both in $X$ and in $h$) if there is $C>0$ such that for all $(x,\xi)\in T^*X\setminus o$, and for all $h\in(0,1]$, $|\sigma_{2,h}(P_h)(x,\xi)^{-1}|\leq C |\xi|_x^{-2}$, with $|\xi|^2_x=g_x(\xi,\xi)$ the length of $\xi\in T^*_x X$ with respect to $g$ and $|.|$ the operator norm of the matrix of $\sigma_{2,h}(P_h)(x,\xi)^{-1}$ in any (bounded geometry) trivialization of $E$.
\[lemma:pos-comm\] Suppose $P_h\in{\operatorname{Diff}}^2_h(X,E)$ is elliptic and satisfies (\[eq:h-comm-16\]) for some $c>0$. Suppose also that $\psi\in e^{-\alpha r}L^2(X,E)$ for all $\alpha$. If $$\label{eq:h-eigenfn}
P_h\psi_h=0,\ \psi_h=e^{r/h}\psi,$$ then there exists $R>0$ such that $\psi$ vanishes when $r>R$.
To simplify notation, we drop the bundle $E$ below. Its presence would not require any changes, except in the notation.
Let $\Psi(X)$ denote the algebra of pseudodifferential operators corresponding to the bounded geometry, with uniform support, see [@Shubin:Spectral Appendix 1, Definition 3.1-3.2], denoted by $U\Psi(X)$ there. The elements of $\Psi^0(X)$ are bounded on $L^2(X)$, and if $A\in\Psi^m(X)$ is elliptic, there is $B\in\Psi^{-m}(X)$ such that $AB-{\operatorname{Id}},BA-{\operatorname{Id}}\in\Psi^{-\infty}(X)$, so elliptic regularity statements and estimates work as usual.
We also need the corresponding semiclassical space of operators $\Psi_h(X)$. These can be defined by modifying the definition of $\Psi(X)$ exactly as if $X$ were compact, i.e. defining $\Psi_h^m(X)$ near the diagonal using the semiclassical quantization of symbols $a$, and globally as the sums of such operators and elements of $\Psi_h^{-\infty}(X)$. The latter space consists of operators with smooth Schwartz kernel that decays rapidly off the diagonal as $h\to 0$. More precisely, for $R\in \Psi^{-\infty}_h(X)$ we require that its Schwartz kernel $K$ satisfy $K\in{{\mathcal C}^{\infty}}((0,1]\times
X\times X)$, that there is $C_R>0$ such that $K(x,y)=0$ if $d(x,y)>C_R$, and for all $N$ there is $C_N>0$ such that for all $\alpha,\beta$ with $|\alpha|\leq N$, $|\beta|\leq N$, and for all $h\in(0,1]$, $$|{\partial}_x^\alpha {\partial}_y^\beta K(x,y,h)|\leq C_{N}h^{-n}(1+d(x,y)/h)^{-N},$$ in canonical coordinates, with $n=\dim X$. All standard properties of semiclassical ps.d.o’s remain valid – indeed here we only require basic elliptic regularity. The use of ps.d.o.’s can be eliminated, if desired, by proving the elliptic regularity estimates directly.
Since $P_h$ is an elliptic family, (\[eq:h-eigenfn\]) and elliptic regularity give $$\label{eq:h-ell-reg}
\|\psi_h\|_{H^2_h(X)}\leq C_1\|\psi_h\|_{L^2(X)},$$ $C_1$ independent of $h\in(0,1]$. Correspondingly, we do not specify below which Sobolev norms we are taking. In general, the letters $C$, $C'$ will be used to denote constants independent of $h\in(0,1]$, which may vary from line to line.
We first remark that by the Cauchy-Schwarz inequality, and as $\|R_j^*\psi_h\|\leq C\|\psi_h\|$, $j=1,2,3,4$, $$\begin{split}\label{eq:R1-R2}
&|\langle h R_1{\operatorname{Re}}P_h\psi_h, \psi_h\rangle|\leq C h\|\psi_h\|\|{\operatorname{Re}}P_h\psi_h\|
\leq Ch\|\psi_h\|^2+C h\|{\operatorname{Re}}P_h\psi_h\|^2,\\
&|\langle h R_2{\operatorname{Im}}P_h \psi_h, \psi_h\rangle|\leq Ch\|\psi_h\|\|{\operatorname{Im}}P_h\psi_h\|
\leq Ch\|\psi_h\|^2+Ch\|{\operatorname{Im}}P_h\psi_h\|^2.
\end{split}$$ Next, $$\begin{split}\label{eq:R3}
&|\langle \psi_h,h^2R_3\psi_h\rangle|\leq Ch^2\|\psi_h\|^2.
\end{split}$$ Since $R_4$ is supported in $r\leq r_1$, we can take some $\chi\in{{\mathcal C}^{\infty}}({\mathbb{R}})$ identically $1$ on $(-\infty,3r_1/2)$, supported in $(-\infty,2r_1)$, and deduce that $$|\langle\psi_h,h R_4\psi_h\rangle|=|\langle\chi(r)\psi_h,h R_4\chi(r)
\psi_h\rangle|\leq h\|\chi(r)\psi_h\|^2_{H^1_h(X)}.$$ Now, for $r\leq 2r_1$, $|\psi_h|=e^{r/h}|\psi|\leq
e^{2r_1/h}|\psi|$, with a similar estimate for the semiclassical derivatives, so $$\begin{split}\label{eq:R4}
|\langle\psi_h,h R_4\psi_h\rangle|&\leq h\|\chi(r)\psi_h\|^2_{H^1_h(X)}\\
&\leq Ch e^{4r_1/h}\|\psi\|^2_{H^1_h(X)}\leq C
h e^{4r_1/h}\|\psi\|^2_{H^1(X)}\leq C' he^{4r_1/h}\|\psi\|^2.
\end{split}$$ Hence, we deduce from (\[eq:h-comm-8\]) (with $P_\alpha$ replaced by $P_h$), (\[eq:h-comm-16\]) and (\[eq:R1-R2\])–(\[eq:R4\]) that $$\begin{split}\label{eq:h-comm-32}
0\geq (1-Ch)\|{\operatorname{Re}}P_h\psi_h\|^2+(1-Ch)\|{\operatorname{Im}}P_h\psi_h\|^2
&+h(c-Ch)\|\psi_h\|^2\\
&-Che^{4r_1/h}\|\psi\|^2.
\end{split}$$ Dropping the first two (positive) terms on the right hand side, we conclude that there exists $h_0>0$ such that for $h\in(0,h_0)$, $$\label{eq:h-comm-64}
Ch e^{4r_1/h}\|\psi\|^2
\geq h\frac{c}{2}\|\psi_h\|^2.$$
Now suppose that $R>2r_1$ and ${\operatorname{supp}}\psi\cap\{
r\geq R\}$ is non-empty. Since $e^{2r/h}\geq e^{2R/h}$ for $r\geq R$, we deduce that $$\|\psi_h\|^2\geq C' e^{2R/h},\ C'
=\|\psi\|^2_{r\geq R}>0.$$ Thus, we conclude from (\[eq:h-comm-64\]) that $$\label{eq:h-lim}
C\|\psi\|^2\geq \frac{c}{2}\,C'
e^{2(R-2r_1)/h}.$$ But letting $h\to 0$, the right hand side goes to $+\infty$, providing a contradiction.
Thus, $\psi$ vanishes for $r\geq R$.
The proof of Theorem \[thm:Lap\] is finished since if $\psi$ vanishes on an open set, it vanishes everywhere on $X$ by the usual Carleman-type unique continuation theorem [@Hor Theorem 17.2.1].
In fact, it is straightforward to strengthen Lemma \[lemma:pos-comm\] and allow $P\psi$ to be compactly supported. The following lemma thus completes the proof of Theorem \[thm:perturb\]:
Suppose $P\in{\operatorname{Diff}}^2(X;E)$ is elliptic, $\psi\in e^{-\alpha r}L^2(X,E)$ for all $\alpha$, and there is $r_0>0$ such that $P\psi=0$ for $r>r_0$. Let $P_h=e^{r/h}h^2P e^{-r/h}$, and suppose that $P_h$ satisfies for some $c>0$. Then there exists $R>0$ such that $\psi$ vanishes when $r>R$.
\[rem:loc\] Note that in this formulation, if $X$ is replaced by a manifold with several ends, one of which is of the product form alluded to in the introduction, our theorem holds locally on this end. That is, if $P\psi$ vanishes on this end and $\psi$ has superexponential decay there, then $\psi$ vanishes on the end — hence globally by the standard unique continuation theorem if $P\psi$ is identically zero. To prove this, we merely multiply by a cutoff function supported on this end, and apply the lemma to the resulting inhomogeneous problem.
The elliptic regularity estimate now becomes $$\label{eq:h-ell-reg-p}
\|\psi_h\|_{H^2_h(X)}\leq C_1(\|\psi_h\|_{L^2(X)}+\|P_h\psi_h\|_{L^2(X)}),$$ $C_1$ independent of $h\in(0,1]$, and we need to keep track of the second term on the right hand side.
Correspondingly, $\|R_j^*\psi_h\|\leq C(\|\psi_h\|+\|P_h\psi_h\|)$, $j=1,2,3,4$. Thus, on the right hand side of (\[eq:R1-R2\]), we need to add $Ch \|P_h\psi_h\|^2$, resp. $Ch \|P_h\psi_h\|^2$, while on the right hand side of (\[eq:R3\]) we need to add $Ch^2\|P_h\psi_h\|^2$. Similarly, we need to add $C'he^{4r_1/h}\|P\psi\|^2$ to the right hand side of (\[eq:R4\]). Thus, (\[eq:h-comm-32\]) becomes $$\begin{split}
(1+Ch)
\|P_h\psi_h\|^2
\geq (1-Ch)&\|{\operatorname{Re}}P_h\psi_h\|^2+(1-Ch)\|{\operatorname{Im}}P_h\psi_h\|^2\\
&+h(c-Ch)\|\psi_h\|^2
-Che^{4r_1/h}(\|\psi\|^2+\|P\psi\|^2).
\end{split}$$ Since $P_h\psi_h=e^{r/h}h^2P\psi$, we have $\|P_h\psi_h\|\leq e^{r_0/h}h^2
\|P\psi\|$. Let $r_2=\max(r_0,r_1)$. Thus, there exists $h_0>0$ such that for $h\in(0,h_0)$, $$\label{eq:h-comm-64-p}
2e^{4r_2/h}\|P\psi\|^2+Ch e^{4r_1/h}\|\psi\|^2
\geq h\frac{c}{2}\|\psi_h\|^2.$$ Taking $R>2r_2$, the proof is now finished as in Lemma \[lemma:pos-comm\], for (\[eq:h-comm-64-p\]) becomes $$2\|P\psi\|^2+Ch\|\psi\|^2\geq \frac{c}{2}\,C'
he^{2(R-2r_2)/h},$$ and the right hand side still goes to $+\infty$, while the left hand side is bounded as $h\to 0$.
[10]{}
M. Anderson and R. Schoen. Positive harmonic functions on complete manifolds of negative curvature. , 121:429–461, 1985.
R. G. Froese and I. Herbst. Exponential bounds and absence of positive eigenvalues of [N]{}-body [S]{}chrödinger operators. , 87:429–447, 1982.
L. Hörmander. . Springer-Verlag, 1983.
R. Mazzeo. Unique continuation at infinity and embedded eigenvalues for asymptotically hyperbolic manifolds. , 113:25–45, 1991.
R. B. Melrose. . Marcel Dekker, 1994.
R. Schoen and S.-T. Yau. . International Press, Cambridge, MA, 1994.
M. A. Shubin. Spectral theory of elliptic operators on noncompact manifolds. , (207):5, 35–108, 1992. Méthodes semi-classiques, Vol. 1 (Nantes, 1991).
A. Vasy. Exponential decay of eigenfunctions in many-body type scattering with second order perturbations. , 209:468–492, 2004.
Shunhui Zhu. The comparison geometry of [R]{}icci curvature. In [*Comparison geometry (Berkeley, CA, 1993–94)*]{}, volume 30 of [*Math. Sci. Res. Inst. Publ.*]{}, pages 221–262. Cambridge Univ. Press, Cambridge, 1997.
Maciej Zworski. Numerical linear algebra and solvability of partial differential equations. , 229(2):293–307, 2002.
[^1]: A.V. was partially supported by NSF grant DMS-0201092, a Clay Research Fellowship and a Fellowship from the Alfred P.Sloan Foundation.
[^2]: J.W. was partially supported by NSF grants DMS-0323021 and DMS-0401323.
---
author:
- Peter Schaffer
- Djamila Aouada
- Shishir Nagaraja
bibliography:
- 'cv.bib'
title: |
Who clicks there!:\
Anonymizing the photographer in a camera saturated society
---
---
abstract: 'We apply the light-cone hamiltonian approach to D2-brane, and derive the equivalent gauge-invariant Lagrangian. The later appears to be that of three-dimensional Yang-Mills theory, interacting with matter fields, in the special external induced metric, depending on matter fields. The duality between this theory and 11d membrane is shown.'
author:
- 'R. Manvelyan [^1], A. Melikyan, R. Mkrtchyan[^2]'
- '[*Theoretical Physics Department,*]{}'
- '[*Yerevan Physics Institute*]{}'
- '[*Alikhanyan Br. st.2, Yerevan, 375036 Armenia* ]{}'
title: 'Light-Cone Formulation of D2-Brane'
---
Introduction
============
The recent developments in the field of higher-dimensional extended objects have led to a deep understanding of the non-perturbative aspects of superstring and supergravity theories. These developments resulted in the unification, through the notion of duality, of all five different superstring theories and, moreover, in the appearance of the new, so-called M-theory [@MTH]. M-theory is intrinsically eleven-dimensional and contains, in its spectrum of excitations, the eleven-dimensional supermembrane theory. In addition to the well-known p-branes (extended objects with p space dimensions), a new class of extended objects appeared, the so-called D-branes [@Pol], which contain, in their spectrum, vector fields (or, in the case of the eleven-dimensional 5-brane, a second-rank self-dual tensor field). The main goal of the present paper is to investigate the light-cone formalism for the bosonic part of the action of D-membranes. The presence of the vector field makes a crucial difference from the known case of membranes [@deWitt] and leads to interesting results. The supersymmetrization will be discussed elsewhere.
The light-cone formulation of the supermembrane obtained by [@deWitt] is closely connected to the Matrix-model representation of M-theory [@Banks]. The corresponding bosonic part of the area-preserving action, from which one can obtain the Matrix model by replacing Lie brackets with commutators, reads [@deWitt]:
$$S_{m}=\int d\tau d^{2}\sigma \left[ \frac{1}{2}(D_{0}X^{M})^{2}-\frac{1}{4}%
\left\{ X^{M},X^{N}\right\} \left\{ X^{M},X^{N}\right\} \right] , \label{MT}$$
where $M=1,2,...9$ and $D_{0}=\partial _{0}+\{\omega ,...\}$ is a covariant area-preserving derivative with gauge field $\omega (\tau ,\sigma
_{1},\sigma _{2})$ and Lie bracket
$$\{X,Y\}=\varepsilon ^{ij}\partial _{i}X\partial
_{j}Y\,,\,\,\,\,\,\,\,\,\,\,\,\,i,j,..=1,2. \label{Leebra}$$
That Lagrangian can be interpreted as a 10-dimensional Yang-Mills theory (if we start from 11-dimensional target space for membrane) reduced to one dimension.
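As a quick numerical illustration (ours, not part of the original derivation), the Lie bracket $\left( \ref{Leebra}\right) $ can be evaluated on a discretized membrane surface by central finite differences; the grid and test functions below are arbitrary choices:

```python
import numpy as np

def lie_bracket(X, Y, d1, d2):
    """Discrete Lie bracket {X, Y} = eps^{ij} d_i X d_j Y, with the
    derivatives approximated by finite differences on a 2-D grid with
    spacings d1, d2 along sigma_1 (axis 0) and sigma_2 (axis 1)."""
    dX1, dX2 = np.gradient(X, d1, d2)
    dY1, dY2 = np.gradient(Y, d1, d2)
    return dX1 * dY2 - dX2 * dY1
```

For $X=\sigma _{1}$, $Y=\sigma _{2}$ the bracket is identically $1$, and for $X=\sigma _{1}^{2}$ it reproduces $\{\sigma _{1}^{2},\sigma _{2}\}=2\sigma _{1}$ away from the grid boundary, where one-sided differences lose accuracy.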
The next new class of extended objects is, as mentioned above, the so-called D-branes [@Pol]. The main feature of D-branes is that 10-dimensional superstrings can end on them. The incorporation of D-branes into the superstring picture yields many insights into the theory of solitonic states in non-perturbative string theory and permits one to reveal different aspects of string/M-theory dualities.
In this article we shall investigate the light-cone formulation of the 10-dimensional D-membrane described by Dirac-Born-Infeld (DBI) Lagrangian:
$$L_{DBI}=-\sqrt{-G};\qquad G=\det\{g_{\mu \nu }+F_{\mu \nu }\} \label{DBI}$$
$$\,g_{\mu \nu }=\partial _{\mu }X^{M}\partial _{\nu
}X^{M}\,\,\,\,,\,\,\,\,\,\,\,F=\partial _{\mu }A_{\nu }-\partial _{\nu
}A_{\mu },\,$$
$$\,M=+,-,1,2,...8;\,\,\,\,\,\,\,\mu ,\nu =0,1,2,$$ which is the bosonic part of the D2-brane action. We shall construct the analog of the area-preserving membrane action (\[MT\]) for the DBI case.
The main result of our paper is that the corresponding gauged light-cone action for the D2-brane can be rewritten as a three-dimensional Maxwell theory with matter fields, in a specific curved induced metric:
$$\tilde{G}_{\mu \nu }=\left(
\begin{array}{ll}
-g+\xi ^{i}\left( \omega \right) g_{ij}\xi ^{j}\left( \omega \right) & \xi
^{k}\left( \omega \right) g_{kj} \\
\xi ^{k}\left( \omega \right) g_{ki} & g_{ij}
\end{array}
\right) \label{Met}$$
where $g_{ij}=$ $\partial _{i}X^{M}\partial _{j}X^{M},\xi ^{i}\left( \omega
\right) =\varepsilon ^{ki}\partial _{k}\omega $ and $g=\det g_{ij}$
The second result is that the duality transformation defined with this metric tensor connects our D2-brane light-cone action to the eleven-dimensional membrane light-cone action (the connection between the D2-brane in 10d and the membrane in 11d was first observed by M. J. Duff and J. X. Lu [@Duff]; Schmidhuber [@Sch] and Townsend [@THS] established this connection in both directions). In our formulation we start from the DBI action and finally obtain a duality transformation in the light-cone formulation that directly connects it with the membrane light-cone action $\left( \ref{MT}\right) $ obtained from the Nambu-Goto membrane action in 11 dimensions.
Hamiltonian formulation
=======================
Let us start with the Hamiltonian formulation for the action $\left( \ref{DBI}\right) $ after a preliminary light-cone gauge fixing:
$$X^{+}(\tau ,\sigma _{i})=X^{+}(0)+\tau
,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,X^{\pm }=\sqrt{\frac{1}{2}}\left(
X^{10}\pm X^{0}\right) \label{Gaug}$$
In this gauge we have the following components of induced metric:
$$\begin{aligned}
\,G_{rs} &=&g_{rs}+F_{rs}=\partial _{r}X^{M}\partial _{s}X^{M}+\partial
_{r}A_{s}-\partial _{s}A_{r}, \nonumber \\
G_{0r} &=&g_{0r}+F_{0r}=\partial _{0}X^{M}\partial _{r}X^{M}+\partial
_{r}X^{-}+\partial _{0}A_{r}-\partial _{r}A_{0}, \nonumber \\
G_{r0} &=&g_{0r}-F_{0r}=\partial _{0}X^{M}\partial _{r}X^{M}+\partial
_{r}X^{-}+\partial _{r}A_{0}-\partial _{0}A_{r}, \label{MG} \\
G_{00} &=&g_{00}=\partial _{0}X^{M}\partial _{0}X^{M}+2\partial _{0}X^{-},
\nonumber \\
M &=&1,2,...8\,;\,\,\,\,\,r,s=1,2 \nonumber\end{aligned}$$
The determinant of the induced metric and Lagrangian can be written in the form:
$$\begin{aligned}
G &=&-\Delta \bar{G}\,;\,\,\,\,\,\,\Delta =-G_{00}+G_{0r}G^{rs}G_{s0}
\nonumber \\
L &=&-\sqrt{\Delta \bar{G}}\,,\,\bar{G}=\det G_{rs} \label{GL}\end{aligned}$$
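The factorization $\left( \ref{GL}\right) $ is the standard Schur-complement identity for the determinant of a block matrix; the following short numerical check (the random entries are purely illustrative) confirms $G=-\Delta \bar{G}$ for a generic, not necessarily symmetric, $3\times 3$ matrix:

```python
import numpy as np

# A generic 3x3 "metric" G_{mu nu}; no symmetry is assumed, since the
# F_{mu nu} part of the induced metric is antisymmetric.
rng = np.random.default_rng(0)
G = rng.normal(size=(3, 3))

Gbar = np.linalg.det(G[1:, 1:])  # det G_{rs}, the spatial 2x2 block
# Delta = -G_00 + G_{0r} G^{rs} G_{s0}, with G^{rs} the inverse of G_{rs}
Delta = -G[0, 0] + G[0, 1:] @ np.linalg.inv(G[1:, 1:]) @ G[1:, 0]

# det G = -Delta * Gbar, i.e. G = -Delta * Gbar in the paper's notation
assert np.isclose(np.linalg.det(G), -Delta * Gbar)
```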
Then we can write down the canonical momenta:
$$\begin{aligned}
P^{M} &=&\frac{\partial L}{\partial \dot{X}^{M}}=\sqrt{\frac{\,\bar{G}}{%
\Delta }}\left[ \partial _{0}X^{M}-\frac{1}{2}\left( \partial
_{r}X^{M}G^{rs}G_{s0}+G_{0r}G^{rs}\partial _{s}X^{M}\right) \right] ,
\nonumber \\
P^{+} &=&\frac{\partial L}{\partial \dot{X}^{-}}=\sqrt{\frac{\,\bar{G}}{%
\Delta }},\,\,\,\,\,\,\,\,\,\,\,P_{A}^{r}=\frac{\partial L}{\partial \dot{A}%
_{r}}=\sqrt{\frac{\,\bar{G}}{\Delta }}\left[ G_{0s}G^{sr}+G^{rs}G_{s0}\right]
\label{Mom}\end{aligned}$$
and primary Hamiltonian density and primary constraints:
$$\begin{aligned}
H &=&\frac{P^{M}P^{M}+P_{A}^{r}P_{A}^{s}g_{rs}+\bar{G}}{2P^{+}}%
+P_{A}^{r}\partial _{r}A_{0}\,, \label{Ham} \\
P_{A}^{0} &=&0, \label{Gm} \\
\phi _{r} &=&P^{M}\partial _{r}X^{M}+P^{+}\partial
_{r}X^{-}+P_{A}^{s}F_{rs}=0. \label{Con}\end{aligned}$$
Then, from the requirement of conservation of the primary constraints, we obtain the secondary (Gauss-law) constraint:
$$\chi =\partial _{r}P_{A}^{r}=0\,, \label{GLow}$$
corresponding to the $U\left( 1\right) \,$ gauge invariance.

We can therefore first fix the gauge $A_{0}=0$, dropping the second term in $\left( \ref{Ham}\right) $, and correctly resolve the primary constraint $\left( \ref{Gm}\right) $. After that we have to add the remaining first-class constraints $\left( \ref{GLow}\right) $ and $\left( \ref{Con}\right) $ to the Hamiltonian with arbitrary Lagrange multipliers $c^{r}$ and $\lambda $:
$$H=\frac{P^{M}P^{M}+P_{A}^{r}P_{A}^{s}g_{rs}+\bar{G}}{2P^{+}}+c^{r}\phi
_{r}+\,\lambda \chi \label{Ham1}$$
After that we can use $\tau $-dependent reparametrizations of $\sigma ^{r}:$
$$\sigma ^{r}\longrightarrow \sigma ^{r}+\xi ^{r}\left( \tau ,\sigma
^{s}\right)$$ corresponding to the constraints $\left( \ref{Con}\right) $, to fix the following gauge:
$$\pi _{r}=g_{0r}+\partial _{0}A_{k}g^{km}F_{rm}=0, \label{Gaug1}$$
where the velocities $\dot{A_{k}}$ and $\dot{X}^{M}$ have to be expressed through the corresponding momenta and coordinates. In this gauge, and only in this gauge, one can prove after a simple but tedious algebraic calculation that
$$\begin{aligned}
c^{r} &=&0 \\
\partial _{0}P^{+} &=&0\end{aligned}$$
according to the Hamiltonian equations of motion. This means that, in analogy with an ordinary membrane [@deWitt], we can put $P^{+}=1\footnote{%
Strictly speaking we have to put $P^{+}=const.\times w(\sigma _{i})$, but
this leads only to a density factor in the definition of the Lie bracket \cite
{deWitt}.}$ and express the $X^{-}$ coordinate through the transversal ones:
$$\partial _{r}X^{-}=-\left( P^{M}\partial _{r}X^{M}+P_{A}^{s}F_{rs}\right)
\label{Res}$$
It is easy to see that after using this expression we obtain the residual constraint:
$$\partial _{s}\varepsilon ^{sr}\left( P^{M}\partial
_{r}X^{M}+P_{A}^{t}F_{rt}\right) =0\, \label{Rot}$$
Moreover, in that gauge $\left( \ref{Gaug1}\right) $ the expressions for the momenta look very simple:
$$\begin{aligned}
P^{M} &=&\partial _{0}X^{M} \label{Mom1} \\
P_{A}^{r} &=&g^{rs}\partial _{0}A_{s} \label{Mom2}\end{aligned}$$
where
$$g^{rs}=\frac{\varepsilon ^{rt}\varepsilon ^{sp}g_{tp}}{g} \label{met11}$$
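Relation $\left( \ref{met11}\right) $ is simply the cofactor formula for the inverse of the two-dimensional metric $g_{rs}$; a short numerical sanity check (the random symmetric positive definite $g_{rs}$ below is an arbitrary choice):

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])  # epsilon^{rt}, with eps^{12} = 1
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))
g_rs = A @ A.T + np.eye(2)                  # symmetric, positive definite

# g^{rs} = eps^{rt} eps^{sp} g_{tp} / det(g)
g_up = np.einsum('rt,sp,tp->rs', eps, eps, g_rs) / np.linalg.det(g_rs)
assert np.allclose(g_up @ g_rs, np.eye(2))
```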
So, finally, we obtain the following expressions for the momenta, the Hamiltonian and the residual constraints in the light-cone gauge:
$$\begin{aligned}
P^{M} &=&\partial
_{0}X^{M},\,\,\,\,\,\,\,\,\,\,\,\,\,\,P_{A}^{r}=g^{rs}\partial _{0}A_{s}
\label{momf} \\
H &=&\frac{P^{M}P^{M}+P_{A}^{r}P_{A}^{s}g_{rs}+\bar{G}}{2}, \label{hamf} \\
\phi &=&\partial _{s}\varepsilon ^{sr}\left( P^{M}\partial
_{r}X^{M}+P_{A}^{t}F_{rt}\right) =0, \label{conf} \\
\chi &=&\partial _{r}P_{A}^{r}=0\, \label{gaugf}\end{aligned}$$
Gauged Lagrangian
=================
The main idea of this section is to find a Lagrangian containing the fields $X^{M}$, $A_{r}$ and two gauge fields $\omega \left( \tau ,\sigma
_{i}\right) $ and $Q(\tau ,\sigma _{i})$ with the following properties:
1. The expressions $\left( \ref{momf}\right) $ and $\left( \ref{hamf}\right) $ have to be derived as the standard expressions for the canonical momenta and Hamiltonian of that Lagrangian in the gauge
$$\begin{aligned}
\omega \left( \tau ,\sigma _{i}\right) &=&0 \nonumber \\
Q(\tau ,\sigma _{i}) &=&0 \label{gaug2}\end{aligned}$$
2. The equations of motion for the gauge fields $\omega \left( \tau ,\sigma _{i}\right) $ and $Q(\tau ,\sigma _{i})$ have to coincide (in the gauge $\left( \ref{gaug2}\right) $) with the corresponding constraints $\left( \ref{conf}\right) $ and $\left( \ref{gaugf}\right) $.
3. This Lagrangian has to be gauge invariant under the following gauge groups:
a) the group of area-preserving diffeomorphisms corresponding to the constraint $\left( \ref{conf}\right) $:
$$\phi =\left\{ \partial _{0}X^{M},X^{N}\right\} \,+\varepsilon ^{sr}\partial
_{s}\left( \partial _{0}A_{k}g^{kt}F_{rt}\right) =0 \label{constl}$$
b) the group of $U\left( 1\right) $ gauge transformations connected to the constraint $\left( \ref{gaugf}\right) $:
$$\chi =\partial _{r}\left( g^{rs}\partial _{0 }A_{s}\right) =0 \label{gaugl}$$
The desired Lagrangian has the following form:
$$L=\frac{\left( D_{0}X^{M}\right) ^{2}}{2}+\frac{1}{2}g^{rs}\left(
D_{0}A_{r}-\partial _{r}Q\right) \left( D_{0}A_{s}-\partial _{s}Q\right) -%
\frac{\bar{G}}{2}\,, \label{Lag1}$$
where $\bar{G}=g+F_{12}^{2}$ and
$$\begin{aligned}
&&D_{0}X^{M}=\partial _{0}X^{M}-\varepsilon ^{ij}\partial _{i}\omega
\partial _{j}X^{M}=\partial _{0}X^{M}-\left\{ \omega ,X^{M}\right\}
=\partial _{0}X^{M}-\pounds _{\xi \left( \omega \right) }X^{M} \nonumber \\
&&D_{0}A_{r}=\partial _{0}A_{r}-\varepsilon ^{ij}\partial
_{r}\partial _{i}\omega A_{j}-\varepsilon ^{ij}\partial _{i}\omega \partial
_{j}A_{r}=\partial _{0}A_{r}-\pounds _{\xi \left( \omega \right) }A_{r}.
\label{CovD}\end{aligned}$$
Here $\pounds _{\xi \left( \omega \right) }$ is the Lie derivative in direction of divergenceless vector field $\xi ^{i}\left( \omega \right)
=\varepsilon ^{ki}\partial _{k}\omega .$
The Lagrangian $\left( \ref{Lag1}\right) $ satisfies all three conditions; the gauge transformations of the given fields are the following:
$$\begin{aligned}
\delta _{\varepsilon }X^{M} &=&\left\{ \varepsilon ,X^{M}\right\} ,\delta
_{\varepsilon }A_{r}=\pounds _{\xi \left( \omega \right) }A_{r}, \nonumber
\\
\,\delta _{\varepsilon }Q &=&\left\{ \varepsilon \,,Q\right\} ,\delta
_{\varepsilon }\omega =\partial _{0}\varepsilon +\left\{ \varepsilon
\,,\omega \right\} , \label{dtr1} \\
\delta _{\alpha }X^{M} &=&0,\delta _{\alpha }A_{r}=\partial _{r}\alpha
,\delta _{\alpha }Q=\partial _{0}\alpha +\left\{ \alpha \,,\omega \right\}
,\delta _{\alpha }\omega =0. \label{gtr1}\end{aligned}$$
It is easy to see that the $U\left( 1\right) $ gauge transformations of $Q$ do not commute with the area-preserving ones.
This can be cured by a redefinition of the field $Q$:
$$A_{0}=Q+\varepsilon ^{ij}\partial _{i}\omega A_{j} \label{azero}$$
Here we introduce the new component $A_{0}$, which, unlike $Q$, transforms under area-preserving diffeomorphisms not as a scalar but as the zero component of a three-dimensional vector field:
$$\begin{aligned}
\delta _{\alpha }A_{0} &=&\partial _{0}\alpha \nonumber \\
\delta _{\varepsilon }A_{0} &=&\left\{ \varepsilon \,,A_{0}\right\}
+\partial _{0}\xi ^{i}\left( \varepsilon \right) A_{i} \label{gtr2}\end{aligned}$$
After that the Lagrangian $\left( \ref{Lag1}\right) $ can be rewritten in the following form:
$$\begin{aligned}
L &=&\frac{\left( D_{0}X^{M}\right) ^{2}}{2}-\frac{1}{4}\left\{
X^{M},X^{N}\right\} \left\{ X^{M},X^{N}\right\} +\frac{1}{2}%
g^{ij}F_{0i}F_{0j} \label{Lag2} \\
&+&\frac{1}{2}g^{ij}\xi ^{m}\left( \omega \right) \xi ^{n}\left( \omega
\right) F_{im}F_{jn}-\frac{1}{2}F_{12}^{2}+\frac{1}{2}g^{ij}F_{0i}F_{jn}\xi
^{n}\left( \omega \right) +g^{ij}\xi ^{m}\left( \omega \right) F_{im}F_{0j}
\nonumber\end{aligned}$$
Here $F_{0r}=\partial _{0}A_{r}-\partial _{r}A_{0}$.
Therefore, after introducing the three-dimensional metric $\tilde{G}_{\mu \nu }$ $\left( \ref{Met}\right) $ with the following properties:
$$\begin{aligned}
\tilde{G}_{\mu \nu } &=&\left(
\begin{array}{ll}
-g+\xi ^{i}\left( \omega \right) g_{ij}\xi ^{j}\left( \omega \right) & \xi
^{k}\left( \omega \right) g_{kj} \\
\xi ^{k}\left( \omega \right) g_{ki} & g_{ij}
\end{array}
\right) , \nonumber \\
\tilde{G}^{\mu \nu } &=&\left(
\begin{array}{ll}
-1/g & \xi ^{j}\left( \omega \right) /g \\
\xi ^{i}\left( \omega \right) /g & g^{ij}-\xi ^{i}\left( \omega \right) \xi
^{j}\left( \omega \right) /g
\end{array}
\right) , \label{Met2} \\
\,g_{ij} &=&\partial _{i}X^{M}\partial _{j}X^{M},\,g=\det g_{ij}\,,\,\xi
^{i}\left( \omega \right) =\varepsilon ^{ki}\partial _{k}\omega \nonumber \\
\det \tilde{G}_{\mu \nu } &=&\tilde{G},\,\,\sqrt{-\tilde{G}}%
=g,\,\,\,\,\,\,\,\,\sqrt{-\tilde{G}}G^{00}=-1 \nonumber\end{aligned}$$
and using $\left( \ref{met11}\right) $ we can obtain from $\left( \ref{Lag2}%
\right) $ the final expression for our effective light-cone action:
$$\begin{aligned}
L &=&-\frac{1}{2}\sqrt{-\tilde{G}}\tilde{G}^{\mu \nu }\partial _{\mu
}X^{M}\partial _{\nu }X^{M}+\frac{1}{2}\sqrt{-\tilde{G}} \nonumber \\
&&-\frac{1}{4}\sqrt{-\tilde{G}}\tilde{G}^{\mu \nu }\tilde{G}^{\sigma \lambda
}F_{\mu \sigma }F_{\nu \lambda }, \label{Lagf} \\
\partial _{\mu } &=&\left( \partial _{0},\partial _{i}\right) ,\,\,F_{\mu
\sigma }=\left( F_{0r},F_{ij}=F_{12}\varepsilon _{ij}\right) , \nonumber\end{aligned}$$
Here we used the relation:
$$\left\{ X^{M},X^{N}\right\} \left\{ X^{M},X^{N}\right\} =\sqrt{-\tilde{G}}%
\left( \tilde{G}^{ij}+\xi ^{i}\left( \omega \right) \xi ^{j}\left( \omega
\right) /g\right) \partial _{i}X^{M}\partial _{j}X^{M}$$
So, we have proved that the effective action for the light-cone 10d D2-brane can be expressed in the form of an ordinary three-dimensional abelian gauge field coupled to eight scalar matter fields $X^{M}$, in the induced metric $\left( \ref{Met2}\right) $ defined by the same matter fields, the target-space coordinates $X^{M}$.
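The displayed pair $\tilde{G}_{\mu \nu }$, $\tilde{G}^{\mu \nu }$ in $\left( \ref{Met2}\right) $ can be verified to be mutually inverse, with $\det \tilde{G}_{\mu \nu }=-g^{2}$; the following numerical spot check (random $g_{ij}$ and $\xi ^{i}$ are illustrative choices of ours) confirms these properties:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2))
g_ij = A @ A.T + np.eye(2)      # induced 2-metric, positive definite
g = np.linalg.det(g_ij)
xi = rng.normal(size=2)          # xi^i(omega)

gxi = g_ij @ xi                  # xi^k g_{ki}
G_lo = np.block([[np.atleast_2d(-g + xi @ gxi), gxi[None, :]],
                 [gxi[:, None], g_ij]])
G_up = np.block([[np.atleast_2d(-1.0 / g), xi[None, :] / g],
                 [xi[:, None] / g, np.linalg.inv(g_ij) - np.outer(xi, xi) / g]])

assert np.allclose(G_lo @ G_up, np.eye(3))             # mutually inverse
assert np.isclose(np.linalg.det(G_lo), -g**2)          # sqrt(-det) = g
assert np.isclose(np.sqrt(-np.linalg.det(G_lo)) * G_up[0, 0], -1.0)
```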
Duality transformation
======================
Let us introduce a new coordinate $X^{9}$ by the standard Abelian duality transformation with our metric $\left( \ref{Met2}\right) $.
To this end, let us add to $\left( \ref{Lagf}\right) $ a metric-independent topological term:
$$\int L\left( X,\omega ,F\right) d\tau d^{2}\sigma +\frac{1}{2}\int X^{9}dF
\label{GenLag}$$
Here $F$ is an independent antisymmetric second-rank tensor field.
Integration over $X^{9}$ leads back to $\left( \ref{Lagf}\right) $, while integration over $F$ gives the following equation of motion:
$$\partial _{\mu }X^{9}=\sqrt{-\tilde{G}}\varepsilon _{\mu \nu \lambda }\tilde{%
G}^{\nu \rho }\tilde{G}^{\lambda \sigma }F_{\rho \sigma } \label{dual1}$$
or in components:
$$\begin{aligned}
\partial _{0}X^{9} &=&F_{12} \nonumber \\
\partial _{i}X^{9} &=&\varepsilon _{ij}\left( g^{jk}-\xi ^{j}\left( \omega
\right) \xi ^{k}\left( \omega \right) /g\right) F_{0k}, \label{dual2}\end{aligned}$$
We see that the substitution of $\left( \ref{dual2}\right) $ into $\left( \ref
{GenLag}\right) $ leads to:
$$L=\frac{1}{2}(D_{0}X^{M})^{2}-\frac{1}{4}\left\{ X^{M},X^{N}\right\} \left\{
X^{M},X^{N}\right\}$$
where $M,N,.=1,2,...9.$
This is the light-cone effective Lagrangian for the eleven-dimensional membrane. As mentioned above, the connection between the 10d D2-brane and the 11d membrane was established and exploited in [@Duff], [@THS], [@Sch].
Conclusion
==========
In the present paper the light-cone formalism is developed for the 10-dimensional D2-brane, and it is shown that all the corresponding equations of motion and constraints can be derived from the Lagrangian of ordinary 3d Maxwell theory, interacting with matter fields, in a curved space-time with a special induced metric. This theory is invariant under the usual Abelian gauge transformations of the gauge fields and under area-preserving diffeomorphisms. We have thus shown that the complicated non-linear DBI Lagrangian can be replaced, at least at the classical level and in the light-cone gauge, by one quadratic in the gauge fields, although the dependence on the matter fields (the coordinates of the membrane) remains highly non-linear. There is no direct connection to a small (gauge) field expansion of the initial DBI Lagrangian, although the two seem similar (with, of course, many differences). The exact integration over the gauge fields can now be carried out, at least formally; the corresponding determinant has to be considered as an effective action for the D-brane and can be expanded in the Riemann tensor (and its derivatives) of the metric $\left( \ref{Met2}\right) $. The properties of that tensor, for this special metric, may be very peculiar. Another line of thought (which was the initial motivation of the present study) is a possible connection with Matrix models. Unfortunately, there is no evident way of interpreting the fields in the Lagrangian $\left( \ref{Lagf}\right) $ as matrices, with a corresponding embedding of the gauge groups. Nevertheless, there are some indications that literally the same Lagrangian can be derived for the D3-brane. That problem, together with the supersymmetrization of these results, will be considered in a separate paper [@MMM].
[**Acknowledgments**]{}
This work was supported in part by the U.S. Civilian Research and Development Foundation under Award \# 96-RP1-253 and by INTAS grants \# 96-538 and \# 93-1038 (ext).
[99]{} E. Witten, Nucl. Phys. [**B443**]{} (1996) 85;
J. Schwarz, ”Lectures on superstring and M-theory dualities”,
hep-th/9607201.
J. Polchinski, ”TASI Lectures on D-Branes”, hep-th/9611050;
J. Polchinski, Phys. Rev. Lett. [**75**]{} (1995) 4724.
B. de Wit, J. Hoppe and H. Nicolai, Nucl. Phys. [**B305**]{} \[FS23\] (1988) 545.
T. Banks, W. Fischler, S. Shenker and L. Susskind, Phys. Rev. [**D55**]{} (1997) 112.
M. J. Duff and J. X. Lu, Nucl. Phys. [**B390**]{} (1993) 276.
C. Schmidhuber, Nucl. Phys. [**B467**]{} (1996) 146.
P. K. Townsend, Phys. Lett. [**B373**]{} (1996) 68.
R. Manvelyan, A. Melikyan and R. Mkrtchyan,
”D2 and D3 Branes in light-cone gauge” (in preparation).
[^1]: E-mail: [email protected]
[^2]: E-mail: [email protected]
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Topological data analysis offers a rich source of valuable information to study vision problems. Yet, so far we lack a theoretically sound connection to popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In this work, we establish such a connection by designing a multi-scale kernel for persistence diagrams, a stable summary representation of topological features in data. We show that this kernel is positive definite and prove its stability with respect to the 1-Wasserstein distance. Experiments on two benchmark datasets for 3D shape classification/retrieval and texture recognition show considerable performance gains of the proposed method compared to an alternative approach that is based on the recently introduced persistence landscapes.'
author:
- |
Jan Reininghaus, Stefan Huber\
IST Austria
- |
Ulrich Bauer\
IST Austria, TU München\
- |
Roland Kwitt\
University of Salzburg, Austria\
title: 'A Stable Multi-Scale Kernel for Topological Machine Learning'
---
Introduction {#section:introduction}
============
In many computer vision problems, data (e.g., images, meshes, point clouds, etc.) is piped through complex processing chains in order to extract information that can be used to address high-level inference tasks, such as recognition, detection or segmentation. The extracted information might be in the form of low-level appearance descriptors, e.g., SIFT [@Lowe04a], or of higher-level nature, e.g., activations at specific layers of deep convolutional networks [@Krizhevsky12a]. In recognition problems, for instance, it is then customary to feed the consolidated data to a discriminant classifier such as the popular support vector machine (SVM), a kernel-based learning technique.
While there has been substantial progress on extracting and encoding discriminative information, only recently have people started looking into the *topological structure* of the data as an additional source of information. With the emergence of *topological data analysis (TDA)* [@Carlsson09a], computational tools for efficiently identifying topological structure have become readily available. Since then, several authors have demonstrated that TDA can capture characteristics of the data that other methods often fail to provide, e.g., [@Skraba10a; @Li14a].
Along these lines, studying persistent homology [@Edelsbrunner2010Computational] is a particularly popular method for TDA, since it captures the birth and death times of topological features, e.g., connected components, holes, etc., at multiple scales. This information is summarized by the *persistence diagram*, a multiset of points in the plane. The key feature of persistent homology is its stability: small changes in the input data lead to small changes in the Wasserstein distance of the associated persistence diagrams [@Cohen2010Lipschitz]. Considering the discrete nature of topological information, the existence of such a well-behaved summary is perhaps surprising.
Note that persistence diagrams together with the Wasserstein distance only form a metric space. Thus it is not possible to directly employ persistent homology in the large class of machine learning techniques that require a Hilbert space structure, like SVM or PCA. This obstacle is typically circumvented by defining a kernel function on the domain containing the data, which in turn defines a Hilbert space structure implicitly. While the Wasserstein distance itself does not naturally lead to a valid kernel (see Appendix \[section:definiteness\]), we show that it is possible to define a kernel for persistence diagrams that is stable with respect to the 1-Wasserstein distance. This is the main contribution of this paper.
**Contribution.** We propose a (positive definite) multi-scale kernel for persistence diagrams (see Fig. \[fig:motivation\]). This kernel is defined via an $L_2$-valued feature map, based on ideas from scale space theory [@Iijima62]. We show that our feature map is Lipschitz continuous with respect to the 1-Wasserstein distance, thereby maintaining the stability property of persistent homology. The scale parameter of our kernel controls its robustness to noise and can be tuned to the data. We investigate, in detail, the theoretical properties of the kernel, and demonstrate its applicability on shape classification/retrieval and texture recognition benchmarks.
Related work {#section:relatedwork}
============
Methods that leverage topological information for computer vision or medical imaging methods can roughly be grouped into two categories. In the first category, we identify previous work that *directly* utilizes topological information to address a specific problem, such as topology-guided segmentation. In the second category, we identify approaches that *indirectly* use topological information. That is, information about topological features is used as input to some machine-learning algorithm.
As a representative of the first category, Skraba et al. [@Skraba10a] adapt the idea of persistence-based clustering [@Chazal11a] in a segmentation method for surface meshes of 3D shapes, driven by the topological information in the persistence diagram. Gao et al. [@Gao13a] use persistence information to restore so-called *handles*, i.e., topological cycles, in already existing segmentations of the left ventricle, extracted from computed tomography images. In a different segmentation setup, Chen et al. [@Chen11a] propose to directly incorporate topological constraints into random-field based segmentation models.
In the second category of approaches, Chung et al. [@Chung09a] and Pachauri et al. [@Pachauri11a] investigate the problem of analyzing cortical thickness measurements on 3D surface meshes of the human cortex in order to study developmental and neurological disorders. In contrast to [@Skraba10a], persistence information is not used directly, but rather as a *descriptor* that is fed to a discriminant classifier in order to distinguish between normal control patients and patients with Alzheimer’s disease/autism. Yet, the step of training the classifier with topological information is typically done in a rather ad hoc manner. In [@Pachauri11a] for instance, the persistence diagram is first rasterized on a regular grid, then a kernel-density estimate is computed, and eventually the vectorized discrete probability density function is used as a feature vector to train a SVM using standard kernels for $\mathbb{R}^n$. It is however unclear how the resulting kernel-induced distance behaves with respect to existing metrics (e.g., bottleneck or Wasserstein distance) and how properties such as stability are affected. An approach that directly uses well-established distances between persistence diagrams for recognition was recently proposed by Li et al. [@Li14a]. Besides bottleneck and Wasserstein distance, the authors employ persistence landscapes [@Bubenik13a] and the corresponding distance in their experiments. Their results expose the complementary nature of persistence information when combined with traditional bag-of-feature approaches. While our empirical study in Sec. \[subsection:empirical\_results\] is inspired by [@Li14a], we primarily focus on the development of the kernel; the combination with other methods is straightforward.
In order to enable the use of persistence information in machine learning setups, Adcock et al. [@Adcock13] propose to compare persistence diagrams using a feature vector motivated by algebraic geometry and invariant theory. The features are defined using algebraic functions of the birth and death values in the persistence diagram. From a conceptual point of view, Bubenik’s concept of *persistence landscapes* [@Bubenik13a] is probably the closest to ours, being another kind of feature map for persistence diagrams. While persistence landscapes were not explicitly designed for use in machine learning algorithms, we will draw the connection to our work in Sec. \[subsection:landscape\_comparison\] and show that they in fact admit the definition of a valid positive definite kernel. Moreover, both persistence landscapes as well as our approach represent computationally attractive alternatives to the bottleneck or Wasserstein distance, which both require the solution of a matching problem.
Background {#section:background}
==========
First, we review some fundamental notions and results from persistent homology that will be relevant for our work.
#### Persistence diagrams.
*Persistence diagrams* are a concise description of the topological changes occurring in a growing sequence of shapes, called a *filtration*. In particular, during the growth of a shape, holes of different dimension (e.g., gaps between components, tunnels, voids, etc.) may appear and disappear. Intuitively, a $k$-dimensional hole, born at time $b$ and filled at time $d$, gives rise to a point $(b,d)$ in the $k$^th^ persistence diagram. A persistence diagram is thus a multiset of points in $\mathbb{R}^2$. Formally, the persistence diagram is defined using a standard concept from algebraic topology called *homology*; see [@Edelsbrunner2010Computational] for details.
Note that not every hole has to disappear in a filtration. Such holes give rise to *essential* features and are naturally represented by points of the form $(b,\infty)$ in the diagram. Essential features therefore capture the topology of the final shape in the filtration. In the present work, we do not consider these features as part of the persistence diagram. Moreover, all persistence diagrams will be assumed to be finite, as is usually the case for persistence diagrams coming from data.
![A function $\mathbb{R}\to\mathbb{R}$ (left) and its 0^th^ persistence diagram (right). Local minima create a connected component in the corresponding sublevel set, while local maxima merge connected components. The pairing of birth and death is shown in the persistence diagram.\[fig:persistence-dgm\]](pdd){width="0.75\columnwidth"}
#### Filtrations from functions.
A standard way of obtaining a filtration is to consider the *sublevel sets* $f^{-1}(-\infty,t]$ of a function $f\colon\Omega\to\mathbb R$ defined on some domain $\Omega$, for $t\in\mathbb R$. It is easy to see that the sublevel sets indeed form a filtration parametrized by $t$. We denote the resulting persistence diagram by $D_f$; see Fig. \[fig:persistence-dgm\] for an illustration.
As an example, consider a grayscale image, where $\Omega$ is the rectangular domain of the image and $f$ is the grayscale value at any point of the domain (e.g., at a particular pixel). A sublevel set would thus consist of all pixels of $\Omega$ with value up to a certain threshold $t$. Another example would be a piecewise linear function on a triangular mesh $\Omega$, such as the popular heat kernel signature [@Sun09a]. Yet another commonly used filtration arises from point clouds $P$ embedded in $\mathbb R^n$, by considering the distance function $d_P(x)=\min_{p\in P}\|x-p\|$ on $\Omega=\mathbb R^n$. The sublevel sets of this function are unions of balls around $P$. Computationally, they are usually replaced by equivalent constructions called *alpha shapes*.
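To make the sublevel-set filtration fully concrete, the following sketch (our own minimal illustration, not part of the method proposed later) computes the 0^th^ persistence pairs of a one-dimensional function, cf. Fig. \[fig:persistence-dgm\]: samples are processed in increasing order of value, connected components are merged with a union-find structure, and by the elder rule the younger component dies when two components merge. The essential (never-dying) component is omitted, matching the convention above.

```python
def persistence_0d(f):
    """0-dimensional persistence pairs (birth, death) of the sublevel-set
    filtration of a 1-D function given by its sample values f[0..n-1].
    The essential component (global minimum) never dies and is omitted."""
    order = sorted(range(len(f)), key=lambda i: f[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in order:                          # add samples in increasing value
        parent[i], birth[i] = i, f[i]
        for j in (i - 1, i + 1):             # merge with active neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the younger component dies at value f[i]
                young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                if birth[young] < f[i]:      # drop zero-persistence pairs
                    pairs.append((birth[young], f[i]))
                parent[young] = old
    return sorted(pairs)
```

For `f = [2, 0, 3, 1, 4]`, the component born at the local minimum with value 1 dies at the local maximum with value 3, yielding the single pair `(1, 3)`.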
#### Stability.
A crucial aspect of the persistence diagram $D_f$ of a function $f$ is its stability with respect to perturbations of $f$. In fact, only stability guarantees that one can infer information about the function $f$ from its persistence diagram $D_f$ in the presence of noise.
Formally, we consider $f \mapsto D_f$ as a map of metric spaces and define *stability* as Lipschitz continuity of this map. This requires choices of metrics both on the set of functions and the set of persistence diagrams. For the functions, the $L_\infty$ metric is commonly used.
There is a natural metric associated to persistence diagrams, called the *bottleneck distance.* Loosely speaking, the distance of two diagrams is expressed by minimizing the largest distance of any two corresponding points, over all bijections between the two diagrams. Formally, let $F$ and $G$ be two persistence diagrams, each augmented by adding each point $(t,t)$ on the diagonal with countably infinite multiplicity. The *bottleneck distance* is $$d_B(F,G)=\inf_\gamma\sup_{x\in F}\|x-\gamma(x)\|_\infty ,
\label{eqn:bottleneck_distance}$$ where $\gamma$ ranges over all bijections from the individual points of $F$ to the individual points of $G$. As shown by Cohen-Steiner et al. [@Steiner07a], persistence diagrams are stable with respect to the bottleneck distance.
The bottleneck distance embeds into a more general class of distances, called *Wasserstein distances*. For any positive real number $p$, the *$p$-Wasserstein distance* is $$d_{W,p}(F, G)=\left(\inf_\gamma\sum_{x\in F}\|x-\gamma(x)\|_\infty^p\right)^{1\over p},
\label{eqn:wasserstein_distance}$$ where again $\gamma$ ranges over all bijections from the individual elements of $F$ to the individual elements of $G$. Note that taking the limit $p\to\infty$ yields the bottleneck distance, and we therefore define $d_{W,\infty} = d_B$. We have the following result bounding the $p$-Wasserstein distance in terms of the $L_\infty$ distance:
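For very small diagrams, both distances can be evaluated by brute force (our sketch, not from the text): augment each diagram with the diagonal projections of the other diagram's points, so that both augmented diagrams have the same cardinality, and minimize over all bijections. The function names are ours, and the exponential search over permutations is only viable for a handful of points.

```python
import itertools
import math

def diag(p):
    # L-infinity-optimal projection of a point onto the diagonal
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def cost(u, v):
    # ground distance; matching two diagonal copies with each other is free
    if u[2] and v[2]:
        return 0.0
    return max(abs(u[0] - v[0]), abs(u[1] - v[1]))

def wasserstein(F, G, p=1.0):
    """Brute-force d_{W,p} between two small diagrams (lists of (b, d) pairs).
    Use p=math.inf for the bottleneck distance d_B. Exponential in |F| + |G|,
    so this is for illustration only."""
    A = [(b, d, False) for b, d in F] + [diag(q) + (True,) for q in G]
    B = [(b, d, False) for b, d in G] + [diag(q) + (True,) for q in F]
    best = math.inf
    for perm in itertools.permutations(range(len(B))):
        costs = [cost(A[i], B[j]) for i, j in enumerate(perm)]
        if p == math.inf:
            total = max(costs, default=0.0)
        else:
            total = sum(c ** p for c in costs) ** (1.0 / p)
        best = min(best, total)
    return best
```

For a single off-diagonal point and the empty diagram this returns half the persistence, e.g. $d_B(\{(0,2)\},\emptyset)=1$, since the point can only be matched to its diagonal projection.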
Assume that $X$ is a compact triangulable metric space such that for every 1-Lipschitz function $f$ on $X$ and for $k\geq 1$, the *degree $k$ total persistence* $\sum_{(b,d)\in D_f}(d-b)^k$ is bounded above by some constant $C$. Let $f,g$ be two $L$-Lipschitz piecewise linear functions on $X$. Then for all $p\geq k$, $$d_{W,p}(D_f,D_g) \leq (LC)^{1\over p} \|f-g\|_\infty^{1-\frac k p}.
\label{eqn:ch_wasserstein_l_inf_bound}$$
We note that, strictly speaking, this is not a stability result in the sense of Lipschitz continuity, since it only establishes Hölder continuity. Moreover, it only gives a constant upper bound for the Wasserstein distance when $p=k$.
#### Kernels.
Given a set $\mathcal{X}$, a function $k \colon \mathcal{X} \times \mathcal{X}
\to \mathbb{R}$ is a *kernel* if there exists a Hilbert space $\mathcal{H}$, called *feature space*, and a map $\Phi \colon \mathcal{X}
\to \mathcal{H}$, called *feature map*, such that $k(x,y) = \langle
\Phi(x), \Phi(y) \rangle_{\mathcal{H}}$ for all $x, y \in \mathcal{X}$. Equivalently, $k$ is a kernel if it is symmetric and positive definite [@Scholkopf01]. Kernels allow machine learning algorithms that operate on a Hilbert space to be applied to more general settings, such as strings, graphs, or, in our case, persistence diagrams.
A kernel induces a pseudometric $d_k(x,y) = (k(x,x) + k(y,y) - 2\,
k(x,y))^{\nicefrac{1}{2}}$ on $\mathcal{X}$, which is the distance $\|\Phi(x) -
\Phi(y)\|_\mathcal{H}$ in the feature space. We call the kernel $k$ *stable* w.r.t. a metric $d$ on $\mathcal{X}$ if there is a constant $C > 0$ such that $d_k(x,y) \le C \, d(x,y)$ for all $x, y
\in \mathcal{X}$. Note that this is equivalent to Lipschitz continuity of the feature map.
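As a concrete illustration of the induced pseudometric and of kernel stability (our example, using the standard Gaussian kernel on $\mathbb{R}^n$ rather than a kernel on persistence diagrams): for $k(x,y) = \exp(-\|x-y\|^2/(2\sigma^2))$, one gets $d_k(x,y)^2 = 2(1 - e^{-\|x-y\|^2/(2\sigma^2)}) \le \|x-y\|^2/\sigma^2$, i.e., a stable kernel with constant $C = 1/\sigma$.

```python
import math
import random

def rbf(x, y, sigma=1.0):
    # Gaussian kernel on R^n: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2.0 * sigma ** 2))

def kernel_distance(k, x, y):
    # distance ||Phi(x) - Phi(y)|| in the feature space of k
    return math.sqrt(max(k(x, x) + k(y, y) - 2.0 * k(x, y), 0.0))

random.seed(0)
for _ in range(100):
    x = (random.uniform(-2, 2), random.uniform(-2, 2))
    y = (random.uniform(-2, 2), random.uniform(-2, 2))
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    # stability: d_k(x, y) <= (1 / sigma) * ||x - y||  (here sigma = 1)
    assert kernel_distance(rbf, x, y) <= d + 1e-12
```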
The stability of a kernel is particularly useful for classification problems: assume that there exists a separating hyperplane $H$ for two classes of data points with margin $m$. If the data points are perturbed by some $\epsilon < m/2$, then $H$ still separates the two classes with a margin $m - 2\epsilon$.
The persistence scale-space kernel {#section:kernel}
==================================
We propose a stable *multi-scale* kernel $k_\sigma$ for the set of persistence diagrams $\mathcal{D}$. This kernel will be defined via a feature map $\Phi_\sigma: \mathcal{D} \rightarrow L_2(\Omega)$, with $\Omega \subset
{\mathbb{R}}^2$ denoting the closed half plane above the diagonal.
To motivate the definition of $\Phi_\sigma$, we point out that the set of persistence diagrams, i.e., multisets of points in $\mathbb{R}^2$, does not possess a Hilbert space structure per se. However, a persistence diagram $D$ can be uniquely represented as a sum of Dirac delta distributions[^1], one for each point in $D$. Since Dirac deltas are functionals in the Hilbert space $H^{-2}(\mathbb{R}^2)$ [@Iorio01 Chapter 7], we can embed the set of persistence diagrams into a Hilbert space by adopting this point of view.
Unfortunately, the induced metric on $\mathcal{D}$ does *not* take into account the distance of the points to the diagonal, and therefore cannot be robust against perturbations of the diagrams. Motivated by scale-space theory [@Iijima62], we address this issue by using the sum of Dirac deltas as an initial condition for a heat diffusion problem with a Dirichlet boundary condition on the diagonal. The solution of this partial differential equation is an $L_2(\Omega)$ function for any chosen scale parameter $\sigma>0$. In the following paragraphs, we will (i) define the persistence scale-space kernel $k_\sigma$, (ii) derive a simple formula for evaluating $k_\sigma$, and (iii) prove stability of $k_\sigma$ w.r.t. the $1$-Wasserstein distance.
Let $\Omega = \{ x = (x_1, x_2) \in \mathbb{R}^2\colon x_2 \geq x_1 \}$ denote the space above the diagonal, and let $\delta_p$ denote a Dirac delta centered at the point $p$. For a given persistence diagram $D$, we now consider the solution $u\colon \Omega \times \mathbb{R}_{\geq 0} \rightarrow \mathbb{R},
(x,t) \mapsto u(x,t)$ of the partial differential equation[^2] $$\begin{aligned}
\Delta_x u &= \partial_t u &&\text{in $\Omega \times \mathbb{R}_{> 0}$}, \\
u &= 0 &&\text{on $\partial\Omega \times \mathbb{R}_{\geq 0}$}, \\
u &= \sum_{p \in D} \delta_p &&\text{on $\Omega \times \{0\}$} \label{eq:initial_condition}.\end{aligned}$$ The feature map $\Phi_\sigma \colon \mathcal{D} \to L_2(\Omega)$ at scale $\sigma > 0$ of a persistence diagram $D$ is now defined as $\Phi_\sigma(D) =
\left.u\right|_{t=\sigma}$. This map yields the persistence scale-space kernel $k_\sigma$ on $\mathcal{D}$ as $$k_\sigma(F,G) = \langle \Phi_\sigma(F),\Phi_\sigma(G) \rangle_{L_2(\Omega)}.
\label{eq:kernel_definition}$$
Note that $\Phi_\sigma(D)=0$ for some $\sigma>0$ implies that $u=0$ on $\Omega \times \{0\}$, which means that $D$ has to be the empty diagram. From linearity of the solution operator it now follows that $\Phi_\sigma$ is an injective map.
The solution of the partial differential equation can be obtained by extending the domain from $\Omega$ to $\mathbb{R}^2$ and replacing \[eq:initial\_condition\] with $$\begin{aligned}
u &= \sum_{p \in D} \delta_p - \delta_{\overline{p}} &&\text{on $\mathbb{R}^2 \times \{0\}$,}
\label{eq:mod_initial_condition}\end{aligned}$$ where $\overline{p}=(b,a)$ is $p=(a,b)$ mirrored at the diagonal. It can be shown that restricting the solution of this extended problem to $\Omega$ yields a solution for the original equation. It is given by convolving the initial condition with a Gaussian kernel: $$\begin{aligned}
\label{eq:solutionpde}
u(x, t) = \frac{1}{4\pi t} \sum_{p \in D} \left( e^{-\frac{\|x - p\|^2}{4t}} -
e^{-\frac{\|x - \overline{p}\|^2}{4t}} \right).\end{aligned}$$ Using this closed-form solution of $u$, we can derive a simple expression for evaluating the kernel explicitly: $$\begin{aligned}
k_\sigma(F,G)
&= \frac{1}{8 \pi \sigma} \sum_{\substack{p \in F\\q \in G}} \left( e^{-\frac{\|p-q\|^2}{8\sigma}} - e^{-\frac{\|p-\overline{q}\|^2}{8\sigma}} \right).
\label{eqn:l2ip}\end{aligned}$$ We refer to Appendix \[section:kclosedform\] for the elementary derivation of \[eqn:l2ip\], and to Appendix \[section:featuremapplots\] for a visualization of the solution \[eq:solutionpde\]. Note that the kernel can be computed in $\mathcal{O}(|F| \cdot |G|)$ time, where $|F|$ and $|G|$ denote the cardinality of the multisets $F$ and $G$, respectively.
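The closed form \[eqn:l2ip\] translates directly into code. The sketch below (ours) evaluates $k_\sigma$ for diagrams given as lists of $(b, d)$ pairs; the pairwise loop realizes the stated $\mathcal{O}(|F| \cdot |G|)$ complexity.

```python
import math

def pss_kernel(F, G, sigma):
    """Persistence scale-space kernel, evaluated via the closed form
    in O(|F| * |G|) time; F, G are lists of (birth, death) pairs."""
    s = 0.0
    for bp, dp in F:
        for bq, dq in G:
            d_pq = (bp - bq) ** 2 + (dp - dq) ** 2     # ||p - q||^2
            d_pqbar = (bp - dq) ** 2 + (dp - bq) ** 2  # ||p - mirror(q)||^2
            s += math.exp(-d_pq / (8 * sigma)) - math.exp(-d_pqbar / (8 * sigma))
    return s / (8 * math.pi * sigma)
```

Note that a point on the diagonal contributes exactly zero, since $\|p-q\| = \|p-\overline{q}\|$ whenever $p = \overline{p}$; this is why augmenting diagrams with diagonal points in the stability proof below is harmless.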
\[thm:robustness\] The kernel $k_\sigma$ is $1$-Wasserstein stable.
To prove $1$-Wasserstein stability of $k_\sigma$, we show Lipschitz continuity of the feature map $\Phi_\sigma$ as follows: $$\|\Phi_\sigma(F) - \Phi_\sigma(G)\|_{L_2(\Omega)} \leq
\frac{1}{2\sigma\sqrt{\pi}}\,d_{W,1}(F,G),
\label{eq:robustness}$$ where $F$ and $G$ denote persistence diagrams that have been augmented with points on the diagonal. Note that augmenting diagrams with points on the diagonal does not change the values of $\Phi_\sigma$, as can be seen from \[eq:solutionpde\]. Since the unaugmented persistence diagrams are assumed to be finite, some matching $\gamma$ between $F$ and $G$ achieves the infimum in the definition of the Wasserstein distance, $d_{W,1}(F,G) = \sum_{u \in F} \|u-\gamma(u)\|_\infty$. Writing $N_u(x) = \frac{1}{4\pi
\sigma} e^{-\frac{\|x - u\|^2_2}{4\sigma}}$, we have $\|N_u-N_v\|_{L_2(\mathbb{R}^2)} = \frac{1}{\sqrt{4 \pi \sigma}} \cdot
\sqrt{1- e^{-\frac{\|u-v\|_2^2}{8 \sigma}}}$. The Minkowski inequality and the inequality $e^{-\xi} \ge 1 - \xi$ finally yield $$\begin{aligned}
&\|\Phi_\sigma(F) - \Phi_\sigma(G)\|_{L_2(\Omega)} \\
&\le \left\| \sum_{u \in F} (N_u - N_{\overline{u}}) - (N_{\gamma(u)} -
N_{\overline{{\gamma(u)}}}) \right\|_{L_2({\mathbb{R}}^2)} \\
&\le 2 \sum_{u \in F} \| N_u - N_{\gamma(u)} \|_{L_2({\mathbb{R}}^2)} \\
&\le \frac{1}{\sqrt{\pi \sigma}} \sum_{u \in F} \sqrt{1-
e^{-\frac{\|u-{\gamma(u)}\|_2^2}{8 \sigma}}} \\
&\le \frac{1}{\sigma \sqrt{8 \pi}} \sum_{u \in F} \|u-{\gamma(u)}\|_2 \quad
\le \quad \frac{1}{2\sigma \, \sqrt{\pi}} d_{W,1}(F,G) . \qedhere\end{aligned}$$
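The inequality can be checked numerically. The following sketch (ours; it restates the closed form of $k_\sigma$ so as to be self-contained) compares $d_{k_\sigma}$ with the bound $\|u-v\|_\infty/(2\sigma\sqrt{\pi})$ obtained at the end of the proof, for pairs of nearby single-point diagrams far from the diagonal, where the direct matching realizes $d_{W,1}$.

```python
import math
import random

def k_sigma(F, G, sigma):
    # closed-form persistence scale-space kernel (restated for self-containment)
    s = 0.0
    for a, b in F:
        for c, d in G:
            s += math.exp(-((a - c) ** 2 + (b - d) ** 2) / (8 * sigma)) \
               - math.exp(-((a - d) ** 2 + (b - c) ** 2) / (8 * sigma))
    return s / (8 * math.pi * sigma)

def d_k(F, G, sigma):
    # kernel-induced distance ||Phi(F) - Phi(G)||
    gram = k_sigma(F, F, sigma) + k_sigma(G, G, sigma) - 2 * k_sigma(F, G, sigma)
    return math.sqrt(max(gram, 0.0))

random.seed(0)
sigma = 0.5
for _ in range(200):
    b = random.uniform(0.0, 1.0)
    u = (b, b + 2.0)                                  # far from the diagonal
    v = (b + random.uniform(0, 0.1), b + 2.0 + random.uniform(0, 0.1))
    w1 = max(abs(u[0] - v[0]), abs(u[1] - v[1]))      # = d_{W,1}({u}, {v}) here
    assert d_k([u], [v], sigma) <= w1 / (2 * sigma * math.sqrt(math.pi)) + 1e-12
```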
We refer to the left-hand side of \[eq:robustness\] as the *persistence scale-space distance* $d_{k_{\sigma}}$ between $F$ and $G$. Note that the right-hand side of \[eq:robustness\] decreases as $\sigma$ increases. Adjusting $\sigma$ accordingly allows one to counteract the influence of noise in the input data, which causes an increase in $d_{W,1}(F,G)$. We will see in Sec. \[subsection:texture\_recognition\] that tuning $\sigma$ to the data can be beneficial for the overall performance of machine learning methods. A natural question arising from Theorem \[thm:robustness\] is whether our stability result extends to $p>1$. To answer this question, we first note that our kernel is *additive*: we call a kernel $k$ on persistence diagrams additive if $k(E \cup F, G) = k(E, G) + k(F, G)$ for all $E, F, G \in \mathcal{D}$. By choosing $F = \emptyset$, we see that if $k$ is additive then $k(\emptyset, G)= 0$ for all $G \in \mathcal{D}$. We further say that a kernel $k$ is *trivial* if $k(F, G) = 0$ for all $F,G \in \mathcal{D}$. The next theorem establishes that Theorem \[thm:robustness\] is sharp in the sense that *no* non-trivial additive kernel can be stable w.r.t. the $p$-Wasserstein distance for any $p > 1$.
A non-trivial additive kernel $k$ on persistence diagrams is not stable w.r.t. $d_{W,p}$ for any $1 < p \leq \infty$.
By the non-triviality of $k$, it can be shown that there exists an $F \in \mathcal{D}$ such that $k(F, F) > 0$. We prove the claim by comparing the rates of growth of $d_{k}(\bigcup_{i=1}^n F, \emptyset)$ and $d_{W,p}(\bigcup_{i=1}^n F, \emptyset)$ as functions of $n$. We have $$d_{k}\left(\bigcup_{i=1}^n F, \emptyset\right) = n \, \sqrt{k(F,F)}.$$ On the other hand, $$d_{W,p}\left(\bigcup_{i=1}^n F, \emptyset\right) =
d_{W,p}(F, \emptyset) \cdot
\begin{cases}
\sqrt[p]{n} & \text{if $p < \infty$} ,\\
1 & \text{if $p = \infty$} .
\end{cases}$$ Hence, $d_{k}$ cannot be bounded by $C \cdot d_{W,p}$ with a constant $C > 0$ if $p > 1$.
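The linear versus $n^{1/p}$ growth used in this argument is easy to observe numerically with $k_\sigma$ itself, which is additive by \[eqn:l2ip\]. A small sketch (ours):

```python
import math

def k_sigma(F, G, sigma=1.0):
    # closed-form persistence scale-space kernel (restated for self-containment)
    s = 0.0
    for a, b in F:
        for c, d in G:
            s += math.exp(-((a - c) ** 2 + (b - d) ** 2) / (8 * sigma)) \
               - math.exp(-((a - d) ** 2 + (b - c) ** 2) / (8 * sigma))
    return s / (8 * math.pi * sigma)

F = [(0.0, 2.0)]
base = math.sqrt(k_sigma(F, F))        # d_k(F, emptyset), since k(F, []) = 0
for n in (1, 2, 5, 10):
    Fn = F * n                         # the same point taken with multiplicity n
    d = math.sqrt(k_sigma(Fn, Fn))     # d_k(Fn, emptyset) = n * sqrt(k(F, F))
    assert abs(d - n * base) < 1e-9
# ...whereas d_{W,p}(Fn, emptyset) only grows like n**(1/p), so no constant C
# with d_k <= C * d_{W,p} can exist for p > 1.
```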
Evaluation {#section:evaluation}
==========
To evaluate the kernel proposed in Sec. \[section:kernel\], we investigate conceptual differences to persistence landscapes in Sec. \[subsection:landscape\_comparison\], and then consider its performance in the context of shape classification/retrieval and texture recognition in Sec. \[subsection:empirical\_results\].
Comparison to persistence landscapes {#subsection:landscape_comparison}
------------------------------------
In [@Bubenik13a], Bubenik introduced *persistence landscapes*, a representation of persistence diagrams as functions in the Banach space $L_p(\mathbb{R}^2)$. This construction was mainly intended for statistical computations, enabled by the vector space structure of $L_p$. For $p=2$, we can use the Hilbert space structure of $L_2(\mathbb{R}^2)$ to construct a kernel analogously to \[eq:kernel\_definition\]. For the purpose of this work, we refer to this kernel as the *persistence landscape kernel* $k^L$ and denote by $\Phi^L\colon \mathcal{D} \to L_2(\mathbb{R}^2)$ the corresponding feature map. The kernel-induced distance is denoted by $d_{k^L}$. Bubenik shows stability w.r.t. a weighted version of the Wasserstein distance, which for $p=2$ can be summarized as:
\[thm:landscape\_stability\] For any two persistence diagrams $F$ and $G$ we have $$\begin{aligned}
\begin{split}
& \|\Phi^L(F) - \Phi^L(G)\|_{L_2(\mathbb{R}^2)} \leq \\
& \inf_{\gamma} \left(
\sum_{u \in F} \operatorname*{p}(u) \|u-\gamma(u)\|^2_\infty + \frac{2}{3} \|u-\gamma(u)\|^3_\infty
\right)^{\frac{1}{2}} ,
\end{split}
\label{eqn:landscape_stability}\end{aligned}$$ where $\operatorname*{p}(u)=d-b$ denotes the persistence of $u=(b,d)$, and $\gamma$ ranges over all bijections from $F$ to $G$.
For a better understanding of the stability results given in Theorems \[thm:robustness\] and \[thm:landscape\_stability\], we present and discuss two thought experiments.
For the first experiment, let $F_{\lambda} = \{(-\lambda,\lambda)\}$ and $G_{\lambda} = \{(-\lambda+1,\lambda+1)\}$ be two diagrams with one point each, where $\lambda \in \mathbb{R}_{\geq 0}$. The two points move away from the diagonal with increasing $\lambda$, while maintaining the same Euclidean distance to each other. Consequently, $d_{W,p}(F_\lambda,G_\lambda)$ and $d_{k_\sigma}(F_\lambda,G_\lambda)$ asymptotically approach a constant as $\lambda\to \infty$. In contrast, $d_{k^L}(F_\lambda,G_\lambda)$ grows in the order of $\sqrt{\lambda}$ and, in particular, is unbounded. This means that $d_{k^L}$ emphasizes points of high persistence in the diagrams, as reflected by the weighting term $\operatorname*{p}(u)$ in \[eqn:landscape\_stability\].
In the second experiment, we compare persistence diagrams from data samples of two fictive classes A (e.g., $F$, $F'$) and B (e.g., $G$), illustrated in Fig. \[fig:stability\_exp2\]. We first consider $d_{k^L}(F,F')$. As we have seen in the previous experiment, $d_{k^L}$ will be dominated by variations in the points of high persistence. Similarly, $d_{k^L}(F,G)$ will also be dominated by these points as long as $\lambda$ is sufficiently large. Hence, instances of classes A and B would be inseparable in a nearest neighbor setup. In contrast, $d_{B}$, $d_{W,p}$ and $d_{k_\sigma}$ do *not* over-emphasize points of high persistence and thus allow to distinguish classes A and B.
![Two persistence diagrams from class A and one diagram from class B. The classes only differ in their points of low persistence (i.e., points closer to the diagonal).\[fig:stability\_exp2\]](exp2){width="0.98\columnwidth"}
Empirical results {#subsection:empirical_results}
-----------------
We report results on two vision tasks where persistent homology has already been shown to provide valuable discriminative information [@Li14a]: *shape classification/retrieval* and *texture image classification*. The purpose of the experiments is *not* to outperform the state-of-the-art on these problems – which would be rather challenging by exclusively using topological information – but to demonstrate the advantages of $k_\sigma$ and $d_{k_\sigma}$ over $k^L$ and $d_{k^L}$.
#### Datasets.
For shape classification/retrieval, we use the <span style="font-variant:small-caps;">SHREC 2014</span> [@Pickup2014] benchmark, see Fig. \[fig:dataset\_visual\]. It consists of both *synthetic* and *real* shapes, given as 3D meshes. The synthetic part of the data contains $300$ meshes of humans (five males, five females, five children) in $20$ different poses; the real part contains $400$ meshes from $40$ humans (male, female) in $10$ different poses. We use the meshes in full resolution, i.e., without any mesh decimation. For classification, the objective is to distinguish between the different human models, i.e., a 15-class problem for SHREC 2014 (synthetic) and a 40-class problem for SHREC 2014 (real).
For texture recognition, we use the `Outex_TC_00000` benchmark [@Ojala02a], downsampled to $32\times 32$ pixel images. The benchmark provides 100 predefined training/testing splits and each of the 24 classes is equally represented by 10 images during training and testing.
#### Implementation.
For shape classification/retrieval, we compute the classic *Heat Kernel Signature (HKS)* [@Sun09a] over a range of ten time parameters $t_i$ of increasing value. For each specific choice of $t_i$, we obtain a piecewise linear function on the surface mesh of each object. As discussed in Sec. \[section:background\], we then compute the persistence diagrams of the induced filtrations in dimensions $0$ and $1$.
For texture classification, we compute CLBP [@Guo10a] descriptors (cf. [@Li14a]). Results are reported for the rotation-invariant versions of the CLBP-Single (`CLBP-S`) and the CLBP-Magnitude (`CLBP-M`) operator with $P=8$ neighbours and radius $R=1$. Both operators produce a scalar-valued response image which can be interpreted as a weighted cubical cell complex and its lower star filtration is used to compute persistence diagrams; see [@Wagner12a] for details.
For both types of input data, the persistence diagrams are obtained using <span style="font-variant:small-caps;">Dipha</span> [@Bauer14a], which can directly handle meshes and images. A standard soft margin $C$-SVM classifier [@Scholkopf01], as implemented in <span style="font-variant:small-caps;">Libsvm</span> [@Chang11a], is used for classification. The cost factor $C$ is tuned using ten-fold cross-validation on the training data. For the kernel $k_\sigma$, this cross-validation further includes the kernel scale $\sigma$.
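To illustrate how the kernel scale enters the model-selection loop, here is a toy sketch (ours; it uses a leave-one-out 1-NN rule in the kernel-induced metric instead of a SVM to stay dependency-free, and synthetic single-point diagrams instead of real data). All names and parameter values are illustrative.

```python
import math
import random

def k_sigma(F, G, sigma):
    # closed-form persistence scale-space kernel
    s = 0.0
    for a, b in F:
        for c, d in G:
            s += math.exp(-((a - c) ** 2 + (b - d) ** 2) / (8 * sigma)) \
               - math.exp(-((a - d) ** 2 + (b - c) ** 2) / (8 * sigma))
    return s / (8 * math.pi * sigma)

def d_k(F, G, sigma):
    return math.sqrt(max(k_sigma(F, F, sigma) + k_sigma(G, G, sigma)
                         - 2 * k_sigma(F, G, sigma), 0.0))

random.seed(42)
centers = {0: (0.0, 4.0), 1: (1.0, 5.0)}  # one noisy point per toy diagram
data = [([(centers[c][0] + random.gauss(0, 0.05),
           centers[c][1] + random.gauss(0, 0.05))], c)
        for c in [0, 1] * 10]

def loo_accuracy(sigma):
    # leave-one-out 1-NN accuracy in the kernel-induced metric
    hits = 0
    for i, (Fi, yi) in enumerate(data):
        j = min((j for j in range(len(data)) if j != i),
                key=lambda j: d_k(Fi, data[j][0], sigma))
        hits += (data[j][1] == yi)
    return hits / len(data)

grid = (0.01, 0.1, 1.0)
best_sigma = max(grid, key=loo_accuracy)  # sigma enters model selection here
```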
### Shape classification {#subsubsection:shape_classification}
Tables \[table:shrec14\_clf\_syn\_inner\_product\] and \[table:shrec14\_clf\_real\_inner\_product\] list the classification results for $k_\sigma$ and $k^L$ on <span style="font-variant:small-caps;">SHREC 2014</span>. All results are averaged over ten cross-validation runs using random 70/30 training/testing splits with a roughly equal class distribution. We report results for $1$-dimensional features only; $0$-dimensional features lead to comparable performance.
On both real and synthetic data, we observe that $k_\sigma$ leads to consistent improvements over $k^L$. For some choices of $t_i$, the gains even range up to $30\%$, while in other cases, the improvements are relatively small. This can be explained by the fact that varying the HKS time $t_i$ essentially varies the smoothness of the input data. The scale $\sigma$ in $k_\sigma$ allows us to compensate, at the classification stage, for unfavorable smoothness settings to a certain extent, see Sec. \[section:kernel\]. In contrast, $k^L$ does not have this capability and essentially relies on suitably preprocessed input data. For some choices of $t_i$, $k^L$ does in fact lead to classification accuracies close to $k_\sigma$. However, when using $k^L$, we have to carefully adjust the HKS time parameter, corresponding to changes in the input data. This is undesirable in most situations, since HKS computation for meshes with a large number of vertices can be quite time-consuming and sometimes we might not even have access to the meshes directly. The improved classification rates for $k_\sigma$ indicate that using the additional degree of freedom is in fact beneficial for performance.
### Shape retrieval {#subsection:shape_retrieval}
In addition to the classification experiments, we report on shape retrieval performance using standard evaluation measures (see [@Shilane04a; @Pickup2014]). This allows us to assess the behavior of the kernel-induced distances $d_{k_\sigma}$ and $d_{k^L}$.
For brevity, only the nearest-neighbor performance is listed in Table \[table:shrec14\_retrieval\] (for a listing of all measures, see Appendix \[section:additionalresults\]). Using each shape as a query shape once, nearest-neighbor performance measures how often the top-ranked shape in the retrieval result belongs to the same class as the query. To study the effect of tuning the scale $\sigma$, the column $d_{k_\sigma}$ lists the *maximum* nearest-neighbor performance that can be achieved over a range of scales.
As we can see, the results are similar to the classification experiment. However, at a few specific settings of the HKS time $t_i$, $d_{k^L}$ performs on par with, or better than, $d_{k_\sigma}$. As noted in Sec. \[subsubsection:shape\_classification\], this can be explained by the changes in the smoothness of the input data, induced by different HKS times $t_i$. Another observation is that the nearest-neighbor performance of $d_{k^L}$ is quite unstable around the top result with respect to $t_i$. For example, it drops at $t_2$ from 91% to 53.3% and 76.7% on <span style="font-variant:small-caps;">SHREC 2014</span> (synthetic) and at $t_8$ from 70% to 45.2% and 43.5% on <span style="font-variant:small-caps;">SHREC 2014</span> (real). In contrast, $d_{k_\sigma}$ exhibits stable performance around the optimal $t_i$.
To put these results into context with existing works in shape retrieval, Table \[table:shrec14\_retrieval\] also lists the top three entries (out of 22) of [@Pickup2014] on the same benchmark. On both real and synthetic data, $d_{k_\sigma}$ ranks among the top five entries. This indicates that topological persistence alone is a rich source of discriminative information for this particular problem. In addition, since we only assess one HKS time parameter at a time, performance could potentially be improved by more elaborate fusion strategies.
Texture recognition {#subsection:texture_recognition}
-------------------
For texture recognition, all results are averaged over the $100$ training/testing splits of the `Outex_TC_00000` benchmark. Table \[table:outex\] lists the performance of a SVM classifier using $k_\sigma$ and $k^L$ for $0$-dimensional features (i.e., connected components). Higher-dimensional features were not informative for this problem. For comparison, Table \[table:outex\] also lists the performance of a SVM, trained on normalized histograms of `CLBP-S/M` responses, using a $\chi^2$ kernel.
First, from Table \[table:outex\], it is evident that $k_\sigma$ performs better than $k^L$ by a large margin, with gains up to $\approx$11% in accuracy. Second, it is also apparent that, for this problem, topological information alone is not competitive with SVMs using simple orderless operator response histograms. However, the results of [@Li14a] show that a *combination* of persistence information (using persistence landscapes) with conventional bag-of-feature representations leads to state-of-the-art performance. While this indicates the complementary nature of topological features, it also suggests that kernel combinations (e.g., via multiple-kernel learning [@Gonen11a]) could lead to even greater gains by including the proposed kernel $k_\sigma$.
To assess the stability of the (customary) cross-validation strategy to select a specific $\sigma$, Fig. \[fig:acc\_vs\_scale\] illustrates classification performance as a function of the latter. Given the smoothness of the performance curve, it seems unlikely that parameter selection via cross-validation will be sensitive to a specific discretization of the search range $[\sigma_{\min},\sigma_{\max}]$.
Finally, we remark that tuning $k^L$ has the same drawbacks in this case as in the shape classification experiments. While, in principle, we could smooth the textures, the CLBP response images, or even tweak the radius of the CLBP operators, all those strategies would require changes at the beginning of the processing pipeline. In contrast, adjusting the scale $\sigma$ in $k_\sigma$ is done at the *end* of the pipeline during classifier training.
CLBP Operator $k^L$ $k_\sigma$ $\Delta$
------------------------- --------------- ------------------------ ----------
  `CLBP-S`                  $58.0\pm 2.3$   $\mathbf{69.2\pm 2.7}$   $+11.2$
  `CLBP-M`                  $45.2\pm 2.5$   $\mathbf{55.1\pm 2.5}$   $+9.9$
`CLBP-S` (SVM-$\chi^2$)
`CLBP-M` (SVM-$\chi^2$)
: Classification performance on `Outex_TC_00000`.\[table:outex\]
Conclusion {#section:conclusion}
==========
We have shown, both theoretically and empirically, that the proposed kernel exhibits good behavior for tasks like shape classification or texture recognition using a SVM. Moreover, the ability to tune a scale parameter has proven beneficial in practice.
One possible direction for future work would be to address computational bottlenecks in order to enable application in large-scale scenarios. This could include leveraging additivity and stability in order to approximate the value of the kernel within given error bounds, in particular, by reducing the number of distinct points in the summation of \[eqn:l2ip\].
While the 1-Wasserstein distance is well established and has proven useful in applications, we hope to improve the understanding of stability for persistence diagrams w.r.t. the Wasserstein distance beyond the previous estimates. Such a result would extend the stability of our kernel from persistence diagrams to the underlying data, leading to a full stability proof for topological machine learning.
In summary, our method enables the use of topological information in all kernel-based machine learning methods. It will therefore be interesting to see which other application areas will profit from topological machine learning.
[10]{} A. Adcock, E. Carlsson, and G. Carlsson. The ring of algebraic functions on persistence bar codes. arXiv, available at <http://arxiv.org/abs/1304.0530>, 2013.
R. Bapat and T. Raghavan. [*Nonnegative Matrices and Applications*]{}. Cambridge University Press, 1997.
U. Bauer, M. Kerber, and J. Reininghaus. Distributed computation of persistent homology. In [*ALENEX*]{}, 2014.
C. Berg, J. P. R. Christensen, and P. Ressel. [*Harmonic Analysis on Semigroups*]{}. Springer, 1984.
P. Bubenik. Statistical topological data analysis using persistence landscapes. arXiv, available at <http://arxiv.org/abs/1207.6437>, 2012.
G. Carlsson. Topology and data. [*Bull. Amer. Math. Soc.*]{}, 46:255–308, 2009.
C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. [*ACM Trans. Intell. Syst. Technol.*]{}, 2(3):1–27, 2011.
F. Chazal, L. Guibas, S. Oudot, and P. Skraba. Persistence-based clustering in [Riemannian]{} manifolds. In [*[SoSG]{}*]{}, 2011.
C. Chen, D. Freedman, and C. Lampert. Enforcing topological constraints in random field image segmentation. In [*CVPR*]{}, 2013.
M. Chung, P. Bubenik, and P. Kim. Persistence diagrams of cortical surface data. In [*IPMI*]{}, 2009.
D. Cohen-Steiner, H. Edelsbrunner, and J. Harer. Stability of persistence diagrams. [*Discrete Comput. Geom.*]{}, 37(1):103–120, 2007.
D. Cohen-Steiner, H. Edelsbrunner, J. Harer, and Y. Mileyko. Lipschitz functions have [$L_p$]{}-stable persistence. [*Found. Comput. Math.*]{}, 10(2):127–139, 2010.
H. Edelsbrunner and J. Harer. [*Computational Topology: An Introduction*]{}. AMS, 2010.
M. Gao, C. Chen, S. Zhang, Z. Qian, D. Metaxas, and L. Axel. Segmenting the papillary muscles and the trabeculae from high resolution cardiac [CT]{} through restoration of topological handles. In [*IPMI*]{}, 2013.
M. Gönen and E. Alpaydin. Multiple kernel learning algorithms. [*J. Mach. Learn. Res.*]{}, 12:2211–2268, 2011.
Z. Guo, L. Zhang, and D. Zhang. A completed modeling of local binary pattern operator for texture classification. [*IEEE Trans. Image Process.*]{}, 19(6):1657–1663, 2010.
T. Iijima. Basic theory on normalization of a pattern (in case of typical one-dimensional pattern). , 26:368–388, 1962.
R. J. Iorio, Jr. and V. de Magalhães Iorio. [*Fourier Analysis and Partial Differential Equations*]{}. Cambridge Stud. Adv. Math., 2001.
A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In [*NIPS*]{}, 2012.
C. Li, M. Ovsjanikov, and F. Chazal. Persistence-based structural recognition. In [*CVPR*]{}, 2014.
D. Lowe. Distinctive image features from scale-invariant keypoints. [*Int. J. Comput. Vis.*]{}, 60(2):91–110, 2004.
T. Ojala, T. Mäenpää, M. Pietikäinen, J. Viertola, J. Kyllonen, and S. Huovinen. Outex – new framework for empirical evaluation of texture analysis algorithms. In [*ICPR*]{}, 2002.
D. Pachauri, C. Hinrichs, M. Chung, S. Johnson, and V. Singh. Topology-based kernels with application to inference problems in [Alzheimer’s]{} disease. [*IEEE Trans. Med. Imaging*]{}, 30(10):1760–1770, 2011.
D. Pickup et al. SHREC’14 track: Shape retrieval of non-rigid 3D human models. In [*Proceedings of the 7th Eurographics workshop on 3D Object Retrieval*]{}, EG 3DOR’14. Eurographics Association, 2014.
B. Schölkopf. The kernel-trick for distances. In [*NIPS*]{}, 2001.
B. Schölkopf and A. J. Smola. [*Learning with Kernels*]{}. MIT Press, Cambridge, MA, USA, 2001.
P. Shilane, P. Min, M. Kazhdan, and T. Funkhouser. The [Princeton]{} shape benchmark. In [*Shape Modeling International*]{}, 2004.
P. Skraba, M. Ovsjanikov, F. Chazal, and L. Guibas. Persistence-based segmentation of deformable shapes. In [*CVPR Workshop on Non-Rigid Shape Analysis and Deformable Image Alignment*]{}, 2010.
J. Sun, M. Ovsjanikov, and L. Guibas. A concise and provably informative multi-scale signature based on heat diffusion. In [*SGP*]{}, 2009.
H. Wagner, C. Chen, and E. Vuçini. Efficient computation of persistent homology for cubical data. In [*Topological Methods in Data Analysis and Visualization II*]{}, Mathematics and Visualization, pages 91–106. Springer Berlin Heidelberg, 2012.
Appendix {#appendix .unnumbered}
========
Indefiniteness of $d_{W,p}$ {#section:definiteness}
===========================
It is tempting to try to employ the Wasserstein distance for constructing a kernel on persistence diagrams. For instance, in Euclidean space, $k(x,y) = -\|x - y\|^2, x,y
\in \mathbb{R}^n$ is conditionally positive definite and can be used within SVMs. Hence, the question arises if $k(x,y) = -d_{W,p}(x,y), x,y \in \mathcal{D}$ can be used as well.
In the following, we demonstrate (via counterexamples) that neither $-d_{W,p}$ nor $\exp(-\xi d_{W,p}(\cdot,\cdot))$ – for different choices of $p$ – are (conditionally) positive definite. Thus, they cannot be employed in kernel-based learning techniques.
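Both standard facts invoked here, namely that a Gaussian Gram matrix is p.d. and that $-\|x-y\|^2$ is c.p.d. on $\mathbb{R}^n$, are easy to probe numerically. A small sketch (ours) checks random quadratic forms:

```python
import math
import random

random.seed(0)
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(6)]  # points in R^2

def sqdist(x, y):
    return (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2

K = [[math.exp(-sqdist(x, y)) for y in X] for x in X]  # Gaussian Gram matrix
D = [[-sqdist(x, y) for y in X] for x in X]            # k(x, y) = -||x - y||^2

def quad(A, c):
    # quadratic form c^T A c
    return sum(ci * cj * A[i][j] for i, ci in enumerate(c) for j, cj in enumerate(c))

for _ in range(200):
    c = [random.gauss(0, 1) for _ in X]
    assert quad(K, c) >= -1e-10              # p.d.: holds for every c
    c0 = [ci - sum(c) / len(c) for ci in c]  # project onto sum(c) = 0
    assert quad(D, c0) >= -1e-10             # c.p.d.: holds when sum(c) = 0
```

The second assertion relies on the identity $\mathbf{c}^\top(-\mathbf{D})\mathbf{c} = 2\|\sum_i c_i x_i\|^2$ when $\sum_i c_i = 0$, which is exactly why $-\|x-y\|^2$ is only *conditionally* positive definite.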
First, we briefly repeat some definitions to establish the terminology; this is done to avoid potential confusion (cf. references [@Berg84a; @Bapat97a; @Scholkopf01]) about what is referred to as (conditionally) positive/negative definiteness in the context of kernel functions.
A symmetric matrix $\mathbf{A} \in {\mathbb{R}}^{n \times n}$ is called positive definite (p.d.) if $\mathbf{c}^\top\mathbf{A} \mathbf{c} \ge 0$ for all $\mathbf{c} \in {\mathbb{R}}^n$. A symmetric matrix $\mathbf{A} \in {\mathbb{R}}^{n \times n}$ is called negative definite (n.d.) if $\mathbf{c}^\top \mathbf{A} \mathbf{c} \le 0$ for all $\mathbf{c} \in {\mathbb{R}}^n$.
Note that in literature on linear algebra the notion of definiteness as introduced above is typically known as semidefiniteness. For the sake of brevity, in the kernel literature the prefix “semi” is typically dropped.
A symmetric matrix $\mathbf{A} \in {\mathbb{R}}^{n \times n}$ is called conditionally positive definite (c.p.d.) if $\mathbf{c}^\top \mathbf{A} \mathbf{c} \ge 0$ for all $\mathbf{c} = (c_1, \dots, c_n) \in {\mathbb{R}}^n$ s.t. $\sum_i c_i = 0$. A symmetric matrix $\mathbf{A} \in {\mathbb{R}}^{n \times n}$ is called conditionally negative definite (c.n.d.) if $\mathbf{c}^\top \mathbf{A} \mathbf{c} \le 0$ for all $\mathbf{c} = (c_1, \dots, c_n) \in {\mathbb{R}}^n$ s.t. $\sum_i c_i = 0$.
Given a set $\mathcal{X}$, a function $k \colon \mathcal{X} \times \mathcal{X} \to {\mathbb{R}}$ is a *positive definite kernel* if there exists a Hilbert space $\mathcal{H}$ and a map $\Phi \colon \mathcal{X} \to \mathcal{H}$ such that $k(x,y) = \langle \Phi(x), \Phi(y) \rangle_{\mathcal{H}}$.
Typically a positive definite kernel is simply called *kernel*. Roughly speaking, the utility of p.d. kernels comes from the fact that they enable the “kernel trick”, i.e., the use of algorithms that can be formulated in terms of dot products in an implicit feature space [@Scholkopf01]. However, as shown by Schölkopf in [@Schoelkopf01b], this “kernel trick” also works for distances, leading to the larger class of c.p.d. kernels (see Definition \[def:cpdkernel\]), which can be used in kernel-based algorithms that are translation-invariant (e.g., SVMs or kernel PCA).
\[def:cpdkernel\] A function $k \colon \mathcal{X} \times \mathcal{X} \to {\mathbb{R}}$ is a (conditionally) positive (negative, resp.) definite kernel if and only if $k$ is symmetric and for every finite subset $\{x_1, \dots, x_m\} \subseteq \mathcal{X}$ the Gram matrix $(k(x_i, x_j))_{i,j = 1}^{m}$ is (conditionally) positive (negative, resp.) definite.
To demonstrate that a function is not c.p.d. or c.n.d., resp., we can look at the eigenvalues of the corresponding Gram matrices. In fact, it is known that a matrix $\mathbf{A}$ is p.d. if and only if all its eigenvalues are nonnegative. The following lemmas from [@Bapat97a] give similar, but weaker results for (nonnegative) c.n.d. matrices, which will be useful to us.
If $\mathbf{A}$ is a c.n.d. matrix, then $\mathbf{A}$ has at most one positive eigenvalue.
Let $\mathbf{A}$ be a nonnegative, nonzero matrix that is c.n.d. Then $\mathbf{A}$ has exactly one positive eigenvalue. \[cor:nonnegativecnd\]
The following theorem establishes a relation between c.n.d. and p.d. kernels.
Let $\mathcal{X}$ be a nonempty set and let $k: \mathcal{X} \times \mathcal{X}
\to \mathbb{R}$ be symmetric. Then $k$ is a conditionally negative definite kernel if and only if $\exp(-\xi k(\cdot,\cdot))$ is a positive definite kernel for all $\xi >0$. \[thm:1\]
In the code (`test_negative_type_simple.m`)[^3], we generate simple examples for which the Gram matrix $\mathbf{A} = (d_{W,p}(x_i,x_j))_{i,j=1,1}^{m,m}$ – for various choices of $p$ – has at least two positive and two negative eigenvalues. Thus, it is neither (c.)n.d. nor (c.)p.d. according to Corollary \[cor:nonnegativecnd\]. Consequently, the function $\exp(-d_{W,p})$ is not p.d. either, by virtue of Theorem \[thm:1\]. To run the <span style="font-variant:small-caps;">Matlab</span> code, simply execute:
load options_cvpr15.mat;
test_negative_type_simple(options);
This will generate a short summary of the eigenvalue computations for a selection of values for $p$, including $p=\infty$ (bottleneck distance).
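The same eigenvalue check can be sketched outside <span style="font-variant:small-caps;">Matlab</span>. The following Python snippet (an illustrative analogue, not part of the released code) counts the positive and negative eigenvalues of a small symmetric, nonnegative, zero-diagonal “distance-like” matrix; by Corollary \[cor:nonnegativecnd\], two positive eigenvalues already rule out negative type:

```python
import numpy as np

def definiteness_report(A, tol=1e-10):
    """Count the positive and negative eigenvalues of a symmetric matrix A."""
    w = np.linalg.eigvalsh(A)
    return int((w > tol).sum()), int((w < -tol).sum())

# Toy symmetric, nonnegative matrix with zero diagonal (circulant, so its
# eigenvalues are {6, 2, -4, -4}): two positive eigenvalues, hence not c.n.d.
A = np.array([[0., 1., 4., 1.],
              [1., 0., 1., 4.],
              [4., 1., 0., 1.],
              [1., 4., 1., 0.]])
pos, neg = definiteness_report(A)
```

With two positive and two negative eigenvalues, this matrix is neither c.n.d. nor p.d., exactly the situation produced by the Wasserstein Gram matrices in the Matlab test above.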
**Remark.** While our simple counterexamples suggest that typical kernel constructions using $d_{W,p}$ for different $p$ (including $p=\infty$) do not lead to (c.)p.d. kernels, a formal assessment of this question remains open.
Plots of the feature map $\Phi_\sigma$ {#section:featuremapplots}
======================================
Given a persistence diagram $D$, we consider the solution $u\colon \Omega \times
\mathbb{R}_{\geq 0} \rightarrow \mathbb{R}, (x,t) \mapsto u(x,t)$ of the following partial differential equation $$\begin{aligned}
\Delta_x u &= \partial_t u &&\text{in $\Omega \times \mathbb{R}_{> 0}$}, \\
u &= 0 &&\text{on $\partial\Omega \times \mathbb{R}_{\geq 0}$}, \\
u &= \sum_{p \in D} \delta_p &&\text{on $\Omega \times \{0\}$}.\end{aligned}$$ To solve the partial differential equation, we extend the domain from $\Omega$ to ${\mathbb{R}}^2$ and consider for each $p \in D$ a Dirac delta $\delta_p$ and a Dirac delta $-\delta_{\overline{p}}$, as illustrated in Fig. \[fig:plot3d-pre\] (left). By convolving $\sum_{p
\in D} \delta_p - \delta_{\overline{p}}$ with a Gaussian kernel, see Fig. \[fig:plot3d-pre\] (right), we obtain a solution $u\colon {\mathbb{R}}^2 \times \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}, (x,t)
\mapsto u(x,t)$ for the following partial differential equation: $$\begin{aligned}
\Delta_x u &= \partial_t u &&\text{in ${\mathbb{R}}^2 \times \mathbb{R}_{> 0}$}, \\
u &= \sum_{p \in D} \delta_p - \delta_{\overline{p}} &&\text{on ${\mathbb{R}}^2
\times \{0\}$}.\end{aligned}$$ Restricting the solution $u$ to $\Omega \times {\mathbb{R}}_{\ge 0}$, we then obtain the following solution $u \colon \Omega \times {\mathbb{R}}_{\ge 0} \to {\mathbb{R}}$, $$\begin{aligned}
u(x, t) = \frac{1}{4\pi t} \sum_{p \in D} e^{-\frac{\|x - p\|^2}{4t}} -
e^{-\frac{\|x - \overline{p}\|^2}{4t}}\end{aligned}$$ for the original partial differential equation and $t > 0$. This yields the feature map $\Phi_\sigma \colon \mathcal{D}
\to L_2(\Omega)$: $$\begin{aligned}
\Phi_\sigma(D) \colon \Omega \to {\mathbb{R}}, \quad x \mapsto \frac{1}{4\pi \sigma}
\sum_{p \in D} e^{-\frac{\|x - p\|^2}{4 \sigma}} - e^{-\frac{\|x -
\overline{p}\|^2}{4 \sigma}} .\end{aligned}$$
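For concreteness, the feature map can be evaluated pointwise directly from this closed form; below is a minimal Python sketch (illustrative naming, with diagram points given as (birth, death) pairs):

```python
import numpy as np

def phi_sigma(D, x, sigma):
    """Evaluate Phi_sigma(D) at a point x, with mirrored points (death, birth)."""
    D = np.asarray(D, dtype=float)
    x = np.asarray(x, dtype=float)
    D_mirror = D[:, ::-1]                          # p-bar: mirror across the diagonal
    sq = lambda P: ((x - P) ** 2).sum(axis=1)      # squared distances ||x - p||^2
    terms = np.exp(-sq(D) / (4 * sigma)) - np.exp(-sq(D_mirror) / (4 * sigma))
    return terms.sum() / (4 * np.pi * sigma)
```

On the diagonal the two exponentials cancel, so $\Phi_\sigma(D)$ vanishes there, consistent with the Dirichlet boundary condition.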
In Fig. \[fig:featuremapsigma\], we illustrate the effect of an increasing scale $\sigma$ on the feature map $\Phi_\sigma(D)$. Note that in the right plot the influence of the low-persistence point close to the diagonal basically vanishes. This effect is essentially due to the Dirichlet boundary condition and is responsible for gaining stability for our persistence scale-space kernel $k_\sigma$.
Closed-form solution for $k_\sigma$ {#section:kclosedform}
===================================
For two persistence diagrams $F$ and $G$, the persistence scale-space kernel $k_\sigma(F, G)$ is defined as $\langle \Phi_\sigma(F),\Phi_\sigma(G)
\rangle_{L_2(\Omega)}$, which is $$\begin{aligned}
k_\sigma(F,G)
&= \int_{\Omega} \Phi_\sigma(F) \, \Phi_\sigma(G) \,dx.\end{aligned}$$ By extending its domain from $\Omega$ to ${\mathbb{R}}^2$, we see that $\Phi_\sigma(D)(x) =
- \Phi_\sigma(D)(\overline{x})$ for all $x \in {\mathbb{R}}^2$. Hence, $\Phi_\sigma(F)(x)
\cdot \Phi_\sigma(G)(x) = \Phi_\sigma(F)(\overline{x}) \cdot
\Phi_\sigma(G)(\overline{x})$ for all $x \in {\mathbb{R}}^2$, and we obtain $$\begin{aligned}
k_\sigma(F,G)
&= \frac{1}{2} \int_{{\mathbb{R}}^2} \Phi_\sigma(F) \, \Phi_\sigma(G) \,dx \\
&= \frac{1}{2} \frac{1}{(4 \pi \sigma)^2} \int_{{\mathbb{R}}^2}
\left( \sum_{p \in F} e^{-\frac{\|x - p\|^2}{4 \sigma}} -
e^{-\frac{\|x - \overline{p}\|^2}{4 \sigma}} \right)
\cdot\\
&\quad\left( \sum_{q \in G} e^{-\frac{\|x - q\|^2}{4 \sigma}} -
e^{-\frac{\|x - \overline{q}\|^2}{4 \sigma}} \right)
\,dx \\
&= \frac{1}{2} \frac{1}{(4 \pi \sigma)^2} \sum_{\substack{p \in F\\ q \in G}} \int_{{\mathbb{R}}^2}
\left( e^{-\frac{\|x - p\|^2}{4 \sigma}} -
e^{-\frac{\|x - \overline{p}\|^2}{4 \sigma}} \right)
\cdot\\
&\quad \left( e^{-\frac{\|x - q\|^2}{4 \sigma}} -
e^{-\frac{\|x - \overline{q}\|^2}{4 \sigma}} \right)
\,dx \\
&= \frac{1}{(4 \pi \sigma)^2} \sum_{\substack{p \in F\\ q \in G}} \int_{{\mathbb{R}}^2}
e^{-\frac{\|x - p\|^2 + \|x - q\|^2}{4 \sigma}} - e^{-\frac{\|x - p\|^2 + \|x -
\overline{q}\|^2}{4 \sigma}} \,dx.\end{aligned}$$ We calculate the integrals as follows: $$\begin{aligned}
\int_{{\mathbb{R}}^2} e^{-\frac{\|x - p\|^2 + \|x - q\|^2}{4 \sigma}} \, dx
&= \int_{{\mathbb{R}}^2} e^{-\frac{\|x - (p-q)\|^2 + \|x \|^2}{4 \sigma}} \, dx \\
&= \int_{{\mathbb{R}}} \int_{{\mathbb{R}}} e^{-\frac{(x_1 - \|p-q\|)^2 + x_2^2 \;+\; x_1^2 +
x_2^2}{4 \sigma}} \, dx_1\, dx_2 \\
&= \int_{{\mathbb{R}}} e^{-\frac{x_2^2}{2 \sigma}} \, dx_2 \cdot
\int_{{\mathbb{R}}} e^{-\frac{(x_1 - \|p-q\|)^2 + x_1^2}{4 \sigma}} \, dx_1 \\
&= \sqrt{2 \pi \sigma} \cdot
\int_{{\mathbb{R}}} e^{-\frac{(x_1 - \|p-q\|)^2 + x_1^2}{4 \sigma}} \, dx_1 \\
&= \sqrt{2 \pi \sigma} \cdot
\int_{{\mathbb{R}}} e^{-\frac{(2 x_1 - \|p-q\|)^2 + \|p-q\|^2}{8 \sigma}} \, dx_1 \\
&= \sqrt{2 \pi \sigma} \; e^{-\frac{\|p-q\|^2}{8 \sigma}} \cdot
\int_{{\mathbb{R}}} e^{-\frac{(2 x_1 - \|p-q\|)^2 }{8 \sigma}} \, dx_1 \\
&= \sqrt{2 \pi \sigma} \; e^{-\frac{\|p-q\|^2}{8 \sigma}} \cdot
\int_{{\mathbb{R}}} e^{-\frac{x_1^2 }{2\sigma}} \, dx_1 \\
&= 2 \pi \sigma \; e^{-\frac{\|p-q\|^2}{8\sigma}}.\end{aligned}$$ In the first step, we applied a coordinate transform that moves $x-q$ to $x$. In the second step, we performed a rotation such that $p-q$ lands on the positive $x_1$-axis at distance $\|p-q\|$ to the origin and we applied Fubini’s theorem. We finally obtain the closed-form expression for the kernel $k_\sigma$ as: $$\begin{aligned}
k_\sigma(F,G)
&= \frac{1}{(4 \pi \sigma)^2} \, 2 \pi \sigma \sum_{\substack{p \in F\\ q \in G}}
e^{-\frac{\|p-q\|^2}{8\sigma}} - e^{-\frac{\|p-\overline{q}\|^2}{8\sigma}}\\
&= \frac{1}{8 \pi \sigma} \sum_{\substack{p \in F\\ q \in G}}
e^{-\frac{\|p-q\|^2}{8\sigma}} - e^{-\frac{\|p-\overline{q}\|^2}{8\sigma}} .\end{aligned}$$
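This closed form lends itself to a direct vectorized implementation; a minimal Python sketch (illustrative naming) follows:

```python
import numpy as np

def k_sigma(F, G, sigma):
    """Persistence scale-space kernel k_sigma(F, G) from the closed form above."""
    F = np.asarray(F, dtype=float)
    G = np.asarray(G, dtype=float)
    G_mirror = G[:, ::-1]                          # q-bar: (death, birth) mirrors
    # pairwise squared distances between all points of F and G (resp. G_mirror)
    d2 = lambda P, Q: ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    terms = (np.exp(-d2(F, G) / (8 * sigma))
             - np.exp(-d2(F, G_mirror) / (8 * sigma)))
    return terms.sum() / (8 * np.pi * sigma)
```

Symmetry $k_\sigma(F,G)=k_\sigma(G,F)$ follows since $\|p-\overline{q}\| = \|\overline{p}-q\|$, and the evaluation cost is $O(|F|\cdot|G|)$.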
Additional retrieval results on SHREC 2014 {#section:additionalresults}
==========================================
[|c|cc||r| c |cc||r|]{} HKS $t_i$ & & & & & & &\
$t_1$ & $59.9$ & $71.3$ & $\cellcolor{green!10}{+11.4}$ & & $26.0$ & $21.4$ & $\cellcolor{red!10}{-4.6}$\
$t_2$ & $\mathbf{75.1}$ & $76.0$ & $\cellcolor{green!10}{+0.9}$ & & $23.8$ & $22.7$ & $\cellcolor{red!10}{-1.1}$\
$t_3$ & $49.6$ & $64.8$ & $\cellcolor{green!10}{+15.2}$ & & $19.1$ & $20.7$ & $\cellcolor{green!10}{+1.6}$\
$t_4$ & $59.4$ & $\mathbf{77.5}$ & $\cellcolor{green!10}{+18.1}$ & & $23.5$ & $26.1$ & $\cellcolor{green!10}{+2.6}$\
$t_5$ & $68.1$ & $75.2$ & $\cellcolor{green!10}{+7.1}$ & & $22.7$ & $27.4$ & $\cellcolor{green!10}{+4.7}$\
$t_6$ & $50.0$ & $55.2$ & $\cellcolor{green!10}{+5.2}$ & & $18.9$ & $26.2$ & $\cellcolor{green!10}{+7.3}$\
$t_7$ & $47.6$ & $53.6$ & $\cellcolor{green!10}{+6.0}$ & & $27.4$ & $31.8$ & $\cellcolor{green!10}{+4.4}$\
$t_8$ & $53.1$ & $62.4$ & $\cellcolor{green!10}{+9.3}$ & & $\mathbf{45.3}$ & $\mathbf{39.8}$ & $\cellcolor{red!10}{-5.5}$\
$t_9$ & $51.2$ & $56.3$ & $\cellcolor{green!10}{+5.1}$ & & $24.4$ & $30.3$ & $\cellcolor{green!10}{+5.9}$\
$t_{10}$ & $39.6$ & $49.7$ & $\cellcolor{green!10}{+10.1}$ & & $2.5$ & $21.8$ & $\cellcolor{green!10}{+19.3}$\
Top-$3$ [@Pickup2014] & & &\
[|c|cc||r| c |cc||r|]{} HKS $t_i$ & & & & & & &\
$t_1$ & $87.7$ & $91.4$ & $\cellcolor{green!10}{+3.7}$ & & $41.5$ & $34.6$ & $\cellcolor{red!10}{-6.9}$\
$t_2$ & $\mathbf{91.1}$ & $\mathbf{95.1}$ & $\cellcolor{green!10}{+4.0}$ & & $40.8$ & $37.1$ & $\cellcolor{red!10}{-3.7}$\
$t_3$ & $70.4$ & $83.4$ & $\cellcolor{green!10}{+13.0}$ & & $36.5$ & $36.8$ & $\cellcolor{green!10}{+0.3}$\
$t_4$ & $77.7$ & $93.6$ & $\cellcolor{green!10}{+15.9}$ & & $39.8$ & $43.4$ & $\cellcolor{green!10}{+3.6}$\
$t_5$ & $90.8$ & $92.3$ & $\cellcolor{green!10}{+1.5}$ & & $35.1$ & $41.8$ & $\cellcolor{green!10}{+6.7}$\
$t_6$ & $73.9$ & $75.4$ & $\cellcolor{green!10}{+1.5}$ & & $31.6$ & $40.2$ & $\cellcolor{green!10}{+8.6}$\
$t_7$ & $70.6$ & $74.4$ & $\cellcolor{green!10}{+3.8}$ & & $38.6$ & $47.6$ & $\cellcolor{green!10}{+9.0}$\
$t_8$ & $73.3$ & $79.3$ & $\cellcolor{green!10}{+6.0}$ & & $\mathbf{56.5}$ & $\mathbf{57.6}$ & $\cellcolor{green!10}{+1.1}$\
$t_9$ & $72.7$ & $76.2$ & $\cellcolor{green!10}{+3.5}$ & & $31.8$ & $42.5$ & $\cellcolor{green!10}{+10.7}$\
$t_{10}$ & $57.8$ & $66.6$ & $\cellcolor{green!10}{+8.8}$ & & $4.8$ & $31.0$ & $\cellcolor{green!10}{+26.2}$\
Top-$3$ [@Pickup2014] & & &\
[|c|cc||r| c |cc||r|]{} HKS $t_i$ & & & & & & &\
$t_1$ & $60.6$ & $65.3$ & $\cellcolor{green!10}{+4.7}$ & & $25.4$ & $22.8$ & $\cellcolor{red!10}{-2.6}$\
$t_2$ & $\mathbf{65.0}$ & $67.4$ & $\cellcolor{green!10}{+2.4}$& & $25.0$ & $23.4$ & $\cellcolor{red!10}{-1.6}$\
$t_3$ & $48.4$ & $58.8$ & $\cellcolor{green!10}{+10.4}$ & & $24.0$ & $24.0$ & $\cellcolor{green!10}{+0.0}$\
$t_4$ & $55.2$ & $\mathbf{67.6}$ & $\cellcolor{green!10}{+12.4}$& & $25.3$ & $27.4$ & $\cellcolor{green!10}{+2.1}$\
$t_5$ & $63.7$ & $66.2$ & $\cellcolor{green!10}{+2.5}$ & & $21.6$ & $25.2$ & $\cellcolor{green!10}{+3.6}$\
$t_6$ & $51.0$ & $52.7$ & $\cellcolor{green!10}{+1.7}$ & & $20.7$ & $23.7$ & $\cellcolor{green!10}{+3.0}$\
$t_7$ & $48.4$ & $51.7$ & $\cellcolor{green!10}{+3.3}$ & & $22.5$ & $27.5$ & $\cellcolor{green!10}{+5.0}$\
$t_8$ & $51.1$ & $56.5$ & $\cellcolor{green!10}{+5.4}$ & & $\mathbf{30.2}$ & $\mathbf{33.2}$ & $\cellcolor{green!10}{+3.0}$\
$t_9$ & $50.4$ & $53.2$ & $\cellcolor{green!10}{+2.8}$ & & $15.8$ & $25.3$ & $\cellcolor{green!10}{+9.5}$\
$t_{10}$ & $39.8$ & $46.7$ & $\cellcolor{green!10}{+6.9}$ & & $3.6$ & $19.0$ & $\cellcolor{green!10}{+15.4}$\
Top-$3$ [@Pickup2014] & & &\
[|c|cc||r| c |cc||r|]{} HKS $t_i$ & & & & & & &\
$t_1$ & $81.3$ & $91.5$ & $\cellcolor{green!10}{+10.2}$ & & $53.0$ & $49.6$ & $\cellcolor{red!10}{-3.4}$\
$t_2$ & $\mathbf{92.1}$ & $93.4$ & $\cellcolor{green!10}{+1.3}$ & & $51.1$ & $51.3$ & $\cellcolor{green!10}{+0.2}$\
$t_3$ & $80.3$ & $89.3$ & $\cellcolor{green!10}{+9.0}$ & & $47.7$ & $48.4$ & $\cellcolor{green!10}{+0.7}$\
$t_4$ & $85.0$ & $\mathbf{93.8}$ & $\cellcolor{green!10}{+8.8}$ & & $52.7$ & $55.5$ & $\cellcolor{green!10}{+2.8}$\
$t_5$ & $89.0$ & $93.2$ & $\cellcolor{green!10}{+4.2}$ & & $51.2$ & $55.5$ & $\cellcolor{green!10}{+4.3}$\
$t_6$ & $78.6$ & $82.5$ & $\cellcolor{green!10}{+3.9}$ & & $48.1$ & $54.2$ & $\cellcolor{green!10}{+6.1}$\
$t_7$ & $77.2$ & $81.6$ & $\cellcolor{green!10}{+4.4}$ & & $55.7$ & $60.5$ & $\cellcolor{green!10}{+4.8}$\
$t_8$ & $80.4$ & $86.3$ & $\cellcolor{green!10}{+5.9}$ & & $\mathbf{72.8}$ & $\mathbf{68.3}$ & $\cellcolor{red!10}{-4.5}$\
$t_9$ & $79.7$ & $83.9$ & $\cellcolor{green!10}{+4.2}$ & & $50.4$ & $61.0$ & $\cellcolor{green!10}{+10.6}$\
$t_{10}$ & $70.8$ & $78.9$ & $\cellcolor{green!10}{+8.1}$ & & $27.7$ & $51.3$ & $\cellcolor{green!10}{+23.6}$\
Top-$3$ [@Pickup2014] & & &\
[^1]: A Dirac delta distribution is a functional that evaluates a given smooth function at a point.
[^2]: Since the initial condition is not an $L_2(\Omega)$ function, this equation is to be understood in the sense of distributions. For a rigorous treatment of existence and uniqueness of the solution, see [@Iorio01 Chapter 7].
[^3]: <https://gist.github.com/rkwitt/4c1e235d702718a492d3>; the file `options_cvpr15.mat` can be found at: <http://www.rkwitt.org/media/files/options_cvpr15.mat>
---
author:
- |
Horace P. Yuen\
\
Department of Electrical Engineering and Computer Science\
Department of Physics and Astronomy\
Northwestern University, Evanston Il. 60208\
[email protected]
title: '**Essential lack of security proof in quantum key distribution[^1]**'
---
ABSTRACT {#abstract .unnumbered}
========
All the currently available unconditional security proofs on quantum key distribution, in particular for the BB84 protocol and its variants including continuous-variable ones, are invalid or incomplete at many points. In this paper we discuss some of the main known problems, particularly those concerning operational security guarantees and error correction. Most basic are the points that there is no security parameter in such protocols and that it is not the case that the generated key is perfect with probability $\geq
1-\epsilon$ under the trace distance criterion $d\leq\epsilon$, which is widely claimed in the technical and popular literature. The many serious security consequences of this error about the QKD generated key will be explained, including the practical ramifications for achievable security levels. It will be shown how the error correction problem alone may already defy rigorous quantitative analysis. Various other problems will be touched upon. It is pointed out that rigorous security guarantees for much more efficient quantum cryptosystems may be obtained by abandoning the disturbance-information tradeoff principle and utilizing instead the known KCQ (keyed communication in quantum noise) principle in conjunction with a new DBM (decoy bits method) principle that will be detailed elsewhere.
INTRODUCTION
============
QKD (quantum key distribution) \[1\] protocols of the BB84 varieties, which involve a disturbance-information tradeoff for security, have been widely claimed and perceived to provide “perfect security” at a reasonable key generation rate. This is the case in numerous popular expositions and technical cryptography books which use exactly the words “perfect security” \[2\], in major technical QKD review articles that claim perfect security except with a very small probability \[3\], and in numerous technical papers on security theory and experimental implementations with the words “unconditional security” \[4\]. This has continued despite criticisms of the invalidity of these claims, both fundamentally in theory and empirically in practice. See \[5\] for some references. It is the purpose of this paper to present an accurate and readily understandable account of some fundamental points in this connection, for proper appreciation of the scope and limits of QKD.
Cryptographic security occupies a very unusual status compared to most other issues in science and engineering. It cannot be established experimentally in sufficient generality, if only because there are unlimited classes of specific attacks, in addition to numerous other issues. One is justified in claiming security of a cryptographic scheme only by proving it rigorously for a specific well-defined mathematical model. Rigor is important: many cryptosystems once thought secure turned out not to be, while many others such as AES appear to be secure so far. If only a seed key is needed, plenty of key material can be stored compactly in many applications. The issue of why a mathematical model is applicable to a concrete QKD system gives rise to major problems not found in ordinary cryptography, where the security mechanism is based on purely mathematical relations, in contrast to QKD, which involves quantum effects of very small signals. Here we would just focus on the security claims about specific given models. There are numerous invalid inferences in every major step of the offered security proofs; see \[5\] for brief descriptions of some of them. Many will be mentioned but only a few will be discussed in the following.
In this paper we will concentrate on one most fundamental issue, the security criterion and its adequacy for *operational security guarantee*. We will identify the serious ramifications of a major error in interpreting the trace distance criterion that is still widely perpetuated today. Cryptographic security is a serious business and has to be validated by theory, which cannot be assured without scrutiny of the arguments offered for a security proof. As a concluding implication of our presentation, it will be indicated that, given the apparently insurmountable security proof problems facing BB84 type protocols, major modification of existing QKD protocols appears necessary, in which the disturbance-information tradeoff principle is abandoned in favor of other more powerful principles for valid security proof. One such possibility will be indicated.
COMPARISON OF QKD WITH MATHEMATICAL CRYPTOGRAPHY
================================================
In QKD two users Alice and Bob try to establish a new fresh key between them by a protocol that involves five major steps outlined in section IV of \[5\]. For general security the protocol should be secure against an adversary Eve who could launch any attack consistent with the laws of physics, active as well as passive, both during protocol execution and during actual use of the generated key $K$ in a cryptographic application. Protocol execution is interactive and requires message authentication between the users. Some sort of information-disturbance tradeoff is utilized by the users, which requires checking a portion of the quantum signals received by Bob to try to make sure they have not been disturbed much, in order that the information the attacker Eve can extract from her attack on the other signals can be bounded below a tolerable threshold. In this paper we can just take the quantum signals to be qubits modulated by the digital data bit sequence $X$ chosen randomly by Alice (except for CV-QKD in section 6).
QKD has always been compared to RSA or public key cryptography to this day, emphasizing its information theoretic security (ITS) as compared to the complexity based security of RSA. I pointed out in \[6\] that QKD should be compared to symmetric-key expansion, which *also* offers ITS for the key security before it is used, and the ITS level of such symmetric-key ciphers is quite *good* compared to QKD systems. A prior shared secret key is also needed in QKD as in symmetric-key expansion, at least for message authentication during protocol execution, needed to thwart man-in-the-middle attacks. What is unsatisfactory about ordinary symmetric-key expansion is that it has no ITS under known-plaintext attack (KPA) when the expanded key is used.
How does a known-plaintext attack work? Consider the case where $K$ is broken into two segments $K=K_1||K_2$ and used to OTP (one-time pad), or more accurately in an additive stream cipher, to xor into the data $X'=X_1'||X_2'$ in an encryption application of $K$. We use sequence concatenation $||$ here for simplicity instead of subset disjoint union. In a KPA, let Eve know $X_1'$ exactly for simplicity. The ciphertext $K\oplus X'$ is always open. Then Eve knows $K_1$ exactly and can use it to derive knowledge of $K_2$ to help identify $X_2'$ from the known $K_2\oplus X_2'$. If $K$ is not uniformly distributed to Eve, there may be correlation between $K_1$ and $K_2$ that strongly compromises $X_2'$. The two significant points are that the precise quantitative ITS level of $K$ is important, and that for encryption it is KPA security that the advantage of QKD consists in.
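The mechanics of this attack can be made concrete with a toy sketch (illustrative Python, with 8-byte key segments standing in for a general $K_1||K_2$):

```python
import os

def xor(a, b):
    """Bytewise xor of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Additive stream cipher (OTP) with K = K1 || K2
K1, K2 = os.urandom(8), os.urandom(8)
X1, X2 = b"known!!!", b"secret!!"       # X1 is the known plaintext (KPA)
C = xor(K1 + K2, X1 + X2)               # the ciphertext is always open

# Eve recovers K1 exactly from the known plaintext segment:
K1_eve = xor(C[:8], X1)
assert K1_eve == K1
# With a perfectly uniform K independent across segments this reveals nothing
# about X2; any correlation between K1 and K2 lets Eve attack X2 via C[8:].
```

The sketch shows why the quantitative non-uniformity of $K$ matters: only the correlation structure of $K$ stands between Eve's recovered $K_1$ and the protected $X_2'$.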
It is evident that there can be no KPA on $X$ chosen by Alice. However, when $K$ is used to OTP $X'$, KPA becomes possible for commercial applications at least. If KPA is not possible in some military applications, so that $X'$ is uniform to Eve, $K$ is *totally protected* and one can just use the ciphertext-only ITS from a running key generated by standard pseudo-random number generators. Some indication of how much more secure that is as compared to any realistic QKD key has been provided in \[6\], and a complete demonstration will be given elsewhere.
MUTUAL INFORMATION, SECURITY CRITERION AND SECURITY PARAMETER
=============================================================
The security criterion commonly employed in QKD for a long time, including in all the well known QKD asymptotic security proofs and many experimental claims, is Eve’s mutual information on $K$ that she could obtain from measurement on her quantum probe together with the side information she collects during protocol execution and during use of $K$ in an application. In this quantum case it is often called her “accessible information,” the maximum mutual information she could obtain by any quantum measurement. There is no need to consider specific quantum features here; we can take her quantum measurement to be an appropriately optimal one. Thus, in obvious notation, $$I_E\equiv H(K)-H(K|E)$$ Let $n$ be the bit length of the generated key $K$, $|K|=n$, before accounting for any key cost during protocol execution. The asymptotic security proofs assert that, below a fixed nonzero threshold key rate $r$, when the protocol is not aborted from qubit checking due to too much disturbance one may obtain $$I_E\to 0 \quad\:as\:\quad n\to\infty$$ With (2), $n$ is taken to be a cryptographic *security parameter*, which means security can be made arbitrarily close to perfect by making the security parameter arbitrarily large. This is what the “perfect security” claim was then based upon. More precisely, a protocol having *unconditional security* \[4\] means there is a security parameter for arbitrary attacks consistent with the laws of physics, and in fact the key length in a QKD round of key generation is taken to be such a security parameter.
It has been pointed out repeatedly since 2004 \[7\], through several talks including \[8\] and a full description in \[9\], that it is the rate at which $I_E$ goes to zero that determines the security of $K$, not $I_E$ itself. This will be explained and illustrated below. But first the proper description of Eve’s *information* about $K$ from her attack should be given, as follows \[9\].
From her probe measurement result $y_E$ together with side information, Eve derives a whole probability distribution $p(k|y_E)$, obtained from $p(y_E|k)$ via Bayes rule, and $p(y_E|k)$ in turn through $\rho^k_E$, her probe state \[9\]. We suppress $y_E$ and write $p(k|y_E)$ as $P=\{P_i\}$ on the possible values of $K$. Let $N=2^n$, and order $P_1\geq ... \geq P_N$. One should compare $P$ to $U$, the uniform distribution on $N$ values. This is evidently appropriate if one views the quantitative security problem as a detection theory problem of correctly identifying various subsets of $K$ given whatever information is available, such as that of a KPA in identifying $K_2$ (of the previous section) from knowing $K_1=k_1$ (lower case denotes a specific value of the upper case random variable). $M$-ary detection, not Shannon information, is the appropriate framework for security analysis, for several reasons. Mutual information is a theoretical quantity; one has to provide *operational guarantees* on Eve’s probabilities of success, and on her bit error rate (BER) when she fails to identify a subsequence of $K$ but nevertheless gets many more bits correct than under the perfect BER of 1/2 from a uniform key, similar to the case of a non-uniform a priori distribution of $K$ known to Eve. It is clear that the quality of $K$ and its quantitative level must be compared to that of the uniform $U$, and there is no reason to conclude a priori that any theoretical quantity is an adequate criterion when it is not zero (with zero corresponding to perfect security). “Information” is just a technical term in this connection. One cannot read too much into the statement “Eve’s information is small.” How quantitatively small is small enough for what purpose?
Thus, generally one needs the whole $P$ Eve possesses to be able to assess various security details, which appears to be an impossible task to estimate usefully in a QKD protocol. One may use a single-number criterion such as $I_E$ and try to assess what operational guarantee may be derived from it. Note that a single-number guarantee *merely* expresses a constraint on $P$. For security guarantee one must show that none of the $P$ not ruled out by the constraint would allow undesirable information for Eve. In this connection, Eve’s maximum probability $P_1$ of identifying the whole $K$ is especially crucial. Its average over the a priori distribution of $K$ is simply related by $-\log$ to the so-called minimum entropy, $H_{min}$. Note also that there is no useful sense in talking further about the probability of a particular $P$. Eve chooses for whatever reason a specific attack unknown to the users, with a resulting $P$ that is only constrained by the single-number criterion. It was shown \[7-9\] that under an $I_E$ guarantee, $P_1$ can be relatively very large for some possible $P$. From Lemma 2 in \[9\], for $l<n$ it is possible that $$\frac{I_E}{n}\leq 2^{-l}\:,\: P_1\sim 2^{-l}$$
Since $l$ is typically very much smaller than $n$, with or without privacy amplification the $P_1$ of (3) is very much larger than that of a uniform $K$. Note that $P_1$ typically increases, and surely cannot be decreased, by any privacy amplification code (PAC), which is a many-to-one deterministic transformation. It is clear that $P_1$ is a main determining factor for the security of $K$. PAC may increase overall security for the same or larger $P_1$ because the key after PAC is shorter than before, and hence the new $P_1$ is effectively less damaging or can even be close to ideal. From (3) it follows that $\frac{I_E}{n}\leq 2^{-l}$ strictly limits the amount of (near) uniform key bits that can be obtained from $K$ by further PAC, and strictly limits the security level of such $K$ by itself. This $P_1$ consideration applies to the trace distance criterion similarly, as does the operational meaning problem.
It follows from (3) that a very poor insecure $K$ can have $I_E\to 0$ for $n\to\infty$. Let $$I_E=2^{-(\lambda n-\log{n})}$$ for a constant $\lambda$. Then $I_E$ goes to zero exponentially in $n$, but it is possible that $P_1$ is given by $2^{-\lambda n}$, compared to $2^{-n}$ for a uniform key. It is clear that however long $n$ is, if $\lambda\ll 1$ the key $K$ is always very poor compared to $U$ and it never improves relatively for any $n$. As a consequence, $n$ is *not* a security parameter at all in QKD. There is in fact *no* security parameter in any known QKD protocol. Thus there can be *no* unconditional security proof for QKD in the original sense of the term \[4\].
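The gap between a small $I_E$ and a large $P_1$ can be illustrated numerically with a simple “spiked” distribution: one key value at probability $2^{-l}$ and the remaining mass spread uniformly (an illustrative construction in the spirit of, but not identical to, the extremal $P$ of \[9\]):

```python
import numpy as np

def spiked_key_stats(n, l):
    """I_E / n and P_1 (in bits) for a distribution over 2^n key values
    with one outcome at probability 2^-l and the rest uniform."""
    N, P1 = 2.0 ** n, 2.0 ** -l
    rest_each = (1.0 - P1) / (N - 1.0)            # probability of each other key
    # Shannon entropy H(P) in bits, computed in closed form
    H = -P1 * np.log2(P1) - (1.0 - P1) * np.log2(rest_each)
    return (n - H) / n, P1                        # (I_E / n, P_1)
```

With $n=30$, $l=10$ this gives $I_E/n \approx 0.6\cdot 2^{-10} \leq 2^{-l}$, while $P_1 = 2^{-10}$ is vastly larger than the $2^{-30}$ of a uniform key, in agreement with (3).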
In this connection, it may be observed that asymptotic results are not sufficient for quantifying the performance of a realistic system in any event. Everything in the real world is finite with no parameter value approaching infinity. It is the quantitative security behavior of real systems that one must be concerned with. In particular, the so-called *secrecy capacity* has no real security significance for concrete cryptosystems for two major reasons beyond finite versus asymptotic. The secrecy capacity is defined as the difference between the information capacities of the users and Eve. Since Eve cannot do coding on the data her capacity overestimates her information gain in general. Since the users don’t know what active attack Eve has launched, there is no guarantee they can achieve their information capacity apart from the finite $n$ issue due to the lack of attacker channel characterization. Thus, the difference in capacities or mutual information between them has *no* definite meaning. Note also that such capacity is derived from a constant channel among different uses, which simply does not obtain under active attacks and especially under entanglement attack in QKD. As indicated above, mutual information is not the proper framework for analyzing QKD security, $M$-ary detection is more appropriate.
It turns out the phenomenon of quantum information locking makes $I_E$ not a good security criterion \[10,11\]. In \[12\] it is claimed that the system is secure if $I_E$ is exponentially small. Such a claim is vague, as can be seen from (4) above unless $\lambda$ is specified. It is falsified by \[11\] generally, since it is not ruled out that knowing $\log{n}$ data bits in a KPA on $K$ would reveal the entire $n$-bit $K$. Thus, the trace distance criterion $d$ is suggested instead \[10, 13-14\], while $d$ was first proposed in \[12\] for composition security. The latter is a vague concept because no operational quantitative meaning is ever given for $d$, nor is any precise definition provided for “universal composition”. These problems are automatically solved, however, under the prevalent incorrect interpretation of $d$.
THE TRACE DISTANCE CRITERION
============================
The criterion $d$ is the trace distance between the real and the ideal quantum states for the users \[10-14\] and can be written as a $K$-average distance \[15\], with $p_0(k)$ being the prior probability of $K$, $\rho^k_E$ Eve’s probe state for each $k$, and $\rho_E$ the $k$-average $\rho^k_E$, $$d=\frac{1}{2}\sum_k||p_0(k)\rho^k_E-\frac{1}{N}\rho_E||_1$$
Generally $0\leq d\leq 1$, the smaller $d$ is the more secure $K$ is with $d=0$ for the perfect case. Upon measurement on Eve’s probe, trace distance becomes or bounds a classical statistical distance $\delta(P,Q)$ between two classical distributions $P$ and $Q$. The interpretation of an “$\epsilon$-secure” key, namely $K$ with $d\leq\epsilon$, amounts to saying perfect security is obtained with a probability $\geq 1-\epsilon$ or equivalently the “maximum failure probability” is $\epsilon$. For some specific quotes see note \[25\] in \[6\]. We will call such interpretation of a $d\leq\epsilon$ key an “$\epsilon$-uniform” key.
This error is maintained in the review \[3\] and has never been retracted in the literature, contributing to the widespread misconception mentioned in the Introduction. It has enormous consequences on the quality of a QKD generated $K$, which becomes far inferior to a uniform key. Before discussing such consequences we summarize three possible reasons for holding such a wrong interpretation and why they cannot be valid. Indeed, it can be said that for $d>0$ the key $K$ is in general *not uniform* with probability 1, rather than uniform with probability $\geq 1-\epsilon$. Actually, the mere talk of such a probability is already misleading because there is no general meaning one can give to it.
The statistical distance $\delta(P,Q)$ defined by $$\delta(P,Q)\equiv\frac{1}{2}\sum_i|P_i-Q_i|$$ is interpreted in \[14, 15\] as meaning that $P$ and $Q$ are the same except with a probability at most $\delta(P,Q)$. This is obtained through a joint probability that gives $P$ and $Q$ as marginals, from a mathematical lemma that does not guarantee its existence. Such a joint probability does not make sense since there is no random source giving rise to it, and it does not imply the interpretation even if it is in force. See \[6, 9, 15\] for further discussion of this first reason. The second possible reason is the “distinguishability” probability interpretation of $d$ or $\delta$, which neglects that there is an additive factor of 1/2 for binary decision \[16\]. The third possible reason is the following decomposition under $\delta(P,U)=\delta_E$, $$P_i=(1-\lambda)U_i+\lambda P_i'$$ for another distribution $P'$ on $i\in\{1,\dots,N\}$.
We would not go into why (7) does not fully imply the wrong interpretation when $\lambda=\delta_E$. It can be readily shown that (7) holds if and only if $$\frac{1-\lambda}{N}\leq P_i\leq\lambda + \frac{1-\lambda}{N}$$ which implies all $P_i$ are essentially uniform. In contrast, under $d\leq\epsilon$ it is possible to have \[9,15\] $$P_1=\frac{1}{N}+\epsilon$$ Note that $\epsilon=10^{-20}$ \[20\] is very small for a binary decision problem but very big for an $N$-ary decision problem with $N=2^{1,000}$, and $N$ is much larger in QKD protocols. In addition, there is the problem of many repeated uses.
The failure and consequence of the wrong interpretation of $d$ can be easily seen from the following result. Under $\delta(P,Q)\leq\epsilon$, it is well known \[17\] that for an unconditional event $A$, $$|P(A)-Q(A)|\leq\epsilon$$ For $Q=U$, it is the case \[18\] that a conditional event $B$ given $A$ may achieve the following bound for some $P$ under the given constraint, $$|P(B|A)-U(B|A)|\leq\frac{\epsilon}{U(A)}$$ The rhs of (11) can be much larger than $\epsilon$ and in fact exceed 1, which means $P(B|A)$ is not constrained and can reach 1. Under the wrong interpretation, on the other hand, $P(B|A)=U(B|A)=\frac{|B|}{|A|}$ with probability $1-\epsilon$. The enormous difference in KPA security implication is obvious.
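The amplification in (11) can be exhibited concretely. A sketch with illustrative numbers (our choice, not from the text): pile the excess probability $\epsilon$ onto a single outcome inside a small conditioning event $A$; the statistical distance from uniform stays $\epsilon$, yet the conditional probability of that outcome given $A$ approaches 1.

```python
import math

N, m = 10**6, 10            # A = the first m outcomes, U(A) = 1e-5
eps = 1e-3
P = [1.0 / N] * N
P[0] += eps                 # pile the excess onto one outcome inside A
for i in range(m, N):       # remove it evenly from outcomes outside A
    P[i] -= eps / (N - m)

delta = 0.5 * math.fsum(abs(p - 1.0 / N) for p in P)   # = eps

P_A = math.fsum(P[i] for i in range(m))
P_B_given_A = P[0] / P_A    # B = {0}, a single outcome inside A
U_B_given_A = 1.0 / m       # = 0.1 under the uniform distribution
```

Here the conditional deviation is hundreds of times larger than $\epsilon$, while remaining within the bound $\epsilon/U(A)$ of (11).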
Such possible drastic breach of security may be expected to average out over different possible conditioning. Indeed it is shown in \[15\] that one recovers, for the $K_1$ and $K_2$ averaged $\overline{P_1}(K_2^*|K_1)$ with $K=K_1||K_2$ and $K_2^*$ any subset of $K_2$, that for $\delta_E\leq\epsilon$, $$\overline{P_1}(K_2^*|K_1)\leq 2^{-|K_2^*|}+\epsilon$$ Note that not just the whole $K$ but any subset $K_2^*$ has no better protection than $\epsilon$ from (12). Contrast such a $K$-average with the case of $U$, which holds for any $k$ and any $k$-subset. Due to such additional averaging, instead of just averaging over PAC, the Markov inequality needs to be applied twice to convert the average guarantee to an individual guarantee, which greatly increases the $d$ level for individual guarantee to $d^{\frac{1}{3}}$ \[15\].
Generally, the wrong interpretation of a $d\leq\epsilon$ key as an $\epsilon$-uniform key would solve the following security problems handily; without it they have to be dealt with anew. As we shall see, either the solution is not known, or it appears extraordinarily difficult, or the resulting situation is very unfavorable for such a QKD key.
(i) A primary point is what numerical value of $d$ is adequate for security. The value $d=10^{-20}$ suggested in ref. \[19\] is good under the wrong interpretation if only a one-shot trial is involved, and thus a perfect key is for all practical purposes guaranteed except for the PAC and $K$ averaging involved. However, the resulting level of effective $d'\sim 10^{-7}$ for individual guarantee from the Markov inequality is far from adequate. If we take $10^{-15}$ to be an effective one-shot impossibility, a $d$-level less than $10^{-40}$ is required. Even such a relatively large $d$ level (for $n$ exceeding 1,000) cannot be remotely approached in a concrete QKD protocol. To ensure a “near-uniform” key of $d\sim2^{-n}$, it follows from (9) that one needs $d\sim 10^{-300}$ for $n=1,000$. The most up-to-date single-photon BB84 protocol analysis (with many invalid steps) already gives zero net key rate for $K$ at $d=10^{-15}$ \[20\]. If a near-uniform key is desired, $d=10^{-20}$ would give no more than 22 bits in principle, not enough to cover the message authentication key bits not yet accounted for. The situation is dire for repeated uses of the QKD system. This issue of actual numerical values will be further elaborated in section 8.
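The orders of magnitude quoted above can be verified by direct arithmetic (a quick sketch; reading the 22-bit figure as the cube-root level expressed in bits is our inference):

```python
import math

# A near-uniform n-bit key needs d ~ 2**-n: for n = 1,000 this is ~1e-301.
log10_d_needed = -1000 * math.log10(2)      # about -301

# Individual guarantee from an average d = 1e-20 via the cube root
# (two applications of the Markov inequality, cf. section 8):
d_individual = (1e-20) ** (1.0 / 3.0)       # about 2.2e-7
n_bits = -math.log2(d_individual)           # about 22 bits of near-uniformity
```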
(ii) KPA security becomes now a serious issue due to (11), but is resolved in principle satisfactorily from our (12) for the case of sequence (subset of $K$) estimation by Eve.
(iii) As mentioned in section 3, Eve’s BER needs to be bounded even when she fails to identify a subset of $K$ correctly. There is no such problem for a perfect key, of course. Apart from the whole $K$ before it is used \[15\], there is *no* such bound known in more general situations including KPA.
(iv) Universal composition which follows immediately from the wrong interpretation of $d$ \[21\] is lost. Each application of $K$ has to be examined by itself to see what operational guarantee would result, in particular under possible quantum information locking leak.
(v) Security proofs involving the error correcting code and the subsequent privacy amplification code are seriously affected when $K$ is not perfect. We will discuss this separately in the following section 5. This appears to be an issue for which *no* satisfactory solution can be found without the introduction of major new cryptographic technique.
(vi) An imperfect key can have a very serious detrimental security effect on its use in an information theoretically secure message authentication code (MAC) \[18, 22-24\], especially for the relatively large $d$-level that can be obtained in concrete QKD protocols as discussed in (i) above. A typical MAC with ITS consists of a keyed hash family with key $K_h$ and often another key $K_t$ for OTP-ing the authentication tag. The security against impersonation and substitution attacks is guaranteed through an $\epsilon$-ASU hash family, in which $\epsilon$, $0<\epsilon<1$, upper bounds Eve’s success probability and $\epsilon$ itself is lower bounded by $\frac{1}{|t|}$. When $K_h$ has a statistical distance $\epsilon_h$ from uniform, or equivalently has a $d$-level $\epsilon_h$ in the quantum case, Eve’s success probability may reach 1 for some tag sequences, as in the privacy case (11) above \[18\]. Upon tag average similar to (12), an $\epsilon$-ASU family becomes an $(\epsilon +\epsilon_h)$-ASU family \[25\]. When $K_t$ with $d=\epsilon_t$ is used, it becomes an $(\epsilon+m\epsilon_t)$-ASU family when the hash function is used $m$ times \[24\]. Thus, a lower limit is now set by $\epsilon_h$ or $\epsilon_t$, however long the authentication tag is! There is *no longer* a security parameter for MAC, since the QKD $\epsilon$-key itself has none. A typical tag length of 64 bits already requires $\epsilon_h$ or $\epsilon_t$ to be at the level of $d\sim 10^{-20}$ for individual guarantee, tens of orders of magnitude from that derived for just theoretical single-photon BB84 \[20\] and fifty orders of magnitude from the current experimental level. When the tag average is taken into account it could reach one hundred orders of magnitude. Thus, a QKD key so generated does not and *cannot* measure up to the key needed in most common MACs.
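The loss of the security parameter can be made vivid with toy numbers (all values hypothetical):

```python
# Degradation of an epsilon-ASU hash family by imperfect keys.
tag_bits = 64
eps_asu = 2.0 ** -tag_bits        # ideal forgery probability, about 5.4e-20
eps_t = 1e-10                     # d-level of the imperfect OTP tag key
m = 100                           # number of tag generations
eps_total = eps_asu + m * eps_t   # the key imperfection sets the floor
```

However long the tag, the forgery bound cannot drop below $m\epsilon_t$, which here exceeds the ideal $2^{-64}$ level by more than ten orders of magnitude.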
ERROR CORRECTION AND PRIVACY AMPLIFICATION PROBLEMS
===================================================
In this section we will show explicitly, for the first time since the ECC and PAC problems were originally indicated in \[7\] ten years ago and elaborated somewhat in \[25\], that these problems appear quantitatively insurmountable. The main culprit is the ECC problem; the PAC problem can by itself be handled as in \[14\], given a proper EC treatment, which appears impossible to carry out. The ad hoc and invalid ECC treatment in the literature thus serves two purposes: to quantify the information leak (or key cost) from ECC and to allow the application of PAC theory, without which there would be no quantification of the security and key rate of a QKD protocol.
In the QKD literature the Cascade reconciliation protocol was popular, but it has numerous invalid steps in estimating the information leak to Eve, which cannot be usefully bounded \[26\]. This information leak from error correction is simply neglected in the earlier general security papers. More recently it is given by the ad hoc formula, for $1\leq f\leq 2$, $$leak_{EC}=f\cdot n\cdot h(Q)$$ Here $Q$ is the quantum bit error rate the users measure and $h$ is the binary entropy function. It is ad hoc because the factor $f$ is arbitrarily taken to represent the effect of a finite protocol, with $n\cdot h(Q)$ itself taken to be the asymptotic leak. What is the derivation of (13) under a general attack, even just asymptotically? None is offered in the literature; this crucial difficulty is not mentioned at all! In \[20\] the whole book \[17\] is referred to for $n\cdot h(Q)$, but \[17\] does not treat such a problem. In particular, the memoryless channel treated in \[17\] simply does not apply to a joint attack. In \[3\] there is no formula given for $leak_{EC}$ and thus the results are true by definition, except that it is then not shown that the final key rate is positive.
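For reference, evaluating (13) is straightforward once the binary entropy function is written out; the values of $f$, $n$ and $Q$ below are illustrative only:

```python
import math

def h(q):
    """Binary entropy function in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

f, n, Q = 1.2, 10**4, 0.03        # illustrative values, not from a protocol
leak_EC = f * n * h(Q)            # eq. (13): roughly 2,300 bits here
```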
The discussion in \[25\] would not be repeated here on exactly what $leak_{EC}$ may be under a general attack. Under collective attack only, it may appear that (13) for $f=1$ may be derived asymptotically by one-time padding the parity check digits of a linear ECC with uniform key bits. When the key bits are not uniform to Eve, as those from a QKD key are not, there is *no* quantification at all of the resulting $leak_{EC}$, as point (v) of the last section indicates. However, even when uniform bits are available for padding, the problem is far more complicated even just for collective attacks, for the following reason.
Let $\rho_x$ be the density operator of Eve’s probe which depends on the data $x$ chosen by Alice on the sifted key. With a specific ECC, say the $i$th one among a given set $\mathscr{I}$ of possible ECCs, the state becomes $\rho^i_x$, $i\in\mathscr{I}$. Padding the parity digits merely means that only $p_i$ but not $i$ is known to Eve. Thus Eve’s state $\rho_x$ is transformed to $\rho'_x$ $$\rho'_x=\sum_ip_i\rho^i_x$$ There may still be information leak from the change of $\rho_x$ to $\rho'_x$ in addition to the padding bit cost, because $\rho_x^i$ with an ECC may leak a lot more information to Eve than $\rho_x$ itself for at least some or possibly all $i\in\mathscr{I}$. This happens because the ECC allows Eve to correct her errors too. At any rate $\rho'_x$, not $\rho_x$, needs to be dealt with for security analysis after ECC. It appears nothing general can be derived without attention to the specific family $\mathscr{I}$ of the chosen ECC for dealing with $\rho'_x$.
This problem spills into the PAC one as follows. From \[14, 27\] one bounds the input state minimum entropy (equivalently $\overline{P_1}$) or its $\epsilon$-smooth generalization, and an output $K$ can be guaranteed with a certain $d$ level from universal hashing. Padding the ECC parity digits, however, does not imply one can use the $\rho_x$ bound for the PAC input. One has to use that on $\rho'_x$ from (14). Incidentally, it is also clear that the ECC output state, which is the PAC input state, has to be bounded appropriately no matter what $leak_{EC}$, whether correct or not, is being used. Since the incorrect input state $\rho_x$ is used for the PAC input, the PAC guarantee itself becomes *invalid*.
There are only two approaches to finite protocols that offer a more or less complete treatment of a QKD protocol apart from message authentication. We have discussed the Renner approach above. Hayashi completes and generalizes the Shor-Preskill \[28\] approach to directly treat BB84 protocols, with different ways of accounting for the ECC \[29\] and PAC leaks \[30\]. However, the ECC leak bound cannot be quantitatively carried out and (13) is used instead \[31\]. It should be clear that (14) or its equivalent in any QKD approach would constitute a major obstacle for quantifying security with ECC, and hence PAC also. They do *not* appear to be amenable to quantitative treatment for the long ECC needed in QKD.
PROBLEMS OF CONTINUOUS VARIABLE QKD
===================================
There has been much recent work on continuous variable QKD (CV-QKD) \[3\] due to its immunity, from homodyne detection, to detector blinding attacks \[32\]. There is a special $leak_{EC}$ problem in CV-QKD, but already the issue from (14) cannot be dealt with for whatever reconciliation procedure. There is a further *robustness* problem that has never been addressed but which would render CV-QKD impractical, and it apparently cannot be overcome, as follows.
Under an active heterodyne intercept-resend attack, the security analysis of CV-QKD with mutual information or whatever criterion in the literature becomes invalid because Eve has fundamentally altered the channel. She would get the data better than Alice or Bob in either the direct or the reverse reconciliation approach \[33\]. For security she must be caught during the checking phase of the protocol. However, it is practically impossible to run a protocol with such check for the following reason.
Let $T$ be the system transmittance between Alice and Bob, and $S$ the source photon number. Let $a$ be the source fluctuation or knowledge inaccuracy, $b$ that of $T$, so that the total inaccuracy in the output signal level is $(a+b-ab)ST$. Thus, for 1% uncertainty in both $S$ and $T$, the output uncertainty is $\sim 2\%$. Whenever $ST$ is significantly less than $\frac{1}{2}$, the users cannot tell Eve’s presence with her heterodyne-resend attack, thus establishing a loss limit on security. Whenever $(a+b-ab)ST$ is bigger than $\frac{1}{4}$, the users cannot distinguish Eve’s attack from uncertainty in $ST$. This sets a strict upper limit on $S$.
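The uncertainty arithmetic above is easily checked (values illustrative):

```python
a, b = 0.01, 0.01              # 1% uncertainty in S and 1% in T
combined = a + b - a * b       # total fractional uncertainty, about 0.0199 (~2%)

# Eve's heterodyne-resend attack is masked once (a+b-ab)*S*T exceeds 1/4,
# which caps the usable signal level S*T:
ST_limit = 0.25 / combined     # about 12.6 here
```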
In reality, the users do not know many system parameters to 1% even when they do not fluctuate or change from bit to bit. Line loss is especially widely uncertain, even in a fiber. We cannot assume it is constant during the execution of a QKD protocol. Such small uncertainty does not matter in ordinary optical communications but matters a lot for QKD. During checking for Eve’s presence it may well happen that no threshold can be found with both acceptable security and a false alarm rate (leading to the protocol being aborted) low enough that the protocol does not become far more inefficient than its already very low efficiency compared to ordinary optical communications. A similar false alarm issue is also present in BB84, though not as seriously. Note that key bits spent in false alarm rounds are a cost of the QKD protocol. It seems that single-photon or low signal level cryptography, QKD or otherwise, is a bad engineering idea.
A LIST OF SOME OTHER QKD SECURITY PROOF PROBLEMS
================================================
In this section we list some further major security proof problems with brief comments. More detailed discussions will be referenced or provided in the future.
(1) No proof other than occasional invalid brief remarks has ever been provided to show why channel loss has no security effect other than throughput reduction, even after reduction of the state space to include at least the vacuum state. See \[34\] for some specific discussion and more will be provided elsewhere.
(2) The use of decoy states for multi-photon sources has widely been assumed \[3\] to lead to general security for a 0.5 average photon number Poisson source. Some problems on such claim are described in \[35\] and more will be given in the future. Basically, lots of attacks from Eve have not been accounted for. Furthermore, there is no known concrete protocol that gives the claimed key rate and accessible information level even if that exists from \[28, 36\]. Privacy amplification with decoy states cannot be quantified for several reasons, in contrast to the claim in \[36\], including one similar to (14) that arises from multi-photon leak specifically and not just from ECC.
(3) The classical inference from checking qubits to the sifted key qubits underestimates the error significantly. A similar error is made in \[20\] in the bounding of $H_{min}$. See \[5\] for a brief discussion with further details to be given elsewhere.
(4) The symmetrization argument for general attack bounding is not applicable to the concrete BB84 protocols. Again see \[5\] and future treatment.
(5) Eve’s quantum probe needs to be considered for any application of $K$, it cannot be argued away from “universal composition” without the incorrect $d$ interpretation. The phenomenon of quantum information locking has to be covered. In particular, the situation has not been treated for the application of $K$ to conventional MAC with ITS. The treatments in \[18, 21-24\] are classical.
(6) Detector blinding attacks \[32\] show that there is no security proof without modeling the relevant detector behavior explicitly, which has not been carried out. There is a more general problem of model completeness and related issues \[37\] that is absent or much less serious in mathematical cryptography and when larger signals are used in physical cryptography. These issues cannot be avoided in the “device independent” type approaches. Indeed, such issues can be traced to the use of small single-photon signals which are in principle sensitive to minute disturbance in order that the disturbance-information tradeoff principle can function.
(7) We include here the problems of actual experimental implementations with quoted theoretical claims about security level and key rate that are not justified even according to the theory literature. Thus, the NEC system security claim \[38\] does not account for ECC and PAC leaks, and the Toshiba UK system \[39\] cannot derive from \[28\] and \[36\] its key rate and security level (and their decoy state generalization), because such results, in addition to being invalid as discussed above, are only claimed to have been established for CSS codes as ECC and PAC, which are not implemented in their system.
NUMERICAL VALUES AND INDIVIDUAL GUARANTEE
=========================================
The claimed theoretical single-photon BB84 security level of up to $d=10^{-14}$ \[20\], or even the often claimed practically achieved level of $d=10^{-9}$ \[40\] for higher key rates, may seem adequate under the wrong interpretation of $d$, that $K$ is $\epsilon$-uniform. Perhaps a probability of $10^{-14}$ is synonymous with practical impossibility, for a *single* trial. If such a probability level were sufficient, no one would need a 64-bit or much longer key. It is easily seen that mere repetitions would render such a probability unacceptable for security. If 100 rounds per second are carried out in a QKD system, one day of operation would yield $\sim10^7$ rounds. Thus, even under the wrong $\epsilon$-uniform interpretation the security level is not adequate for many applications.
The situation is worse because there are two separate averages over different random variables in the $d$ of a QKD guarantee $d\leq\epsilon$. One of them is from averaging over PAC, which is present in the wrong interpretation. The other is averaging over $K$, which is not present in the wrong interpretation, because $K$ is then perfect with high probability and one can just treat it as uniform for all its values $k$. Even in ordinary manufacturing, individual guarantee is used for quality control. Thus, one needs to convert an average guarantee to an individual one for a QKD key $K$ via the Markov inequality, that for a random variable $Z$ one has $$Pr[|Z|\geq\delta]\leq\frac{E[|Z|]}{\delta}$$ When (15) is applied to minimize the total “failure probability” \[15, 41\], the effective $d$ level becomes $d^{\frac{1}{2}}$ for the wrong interpretation and $d^{\frac{1}{3}}$ (but not $d^{\frac{1}{4}}$) under (12). Just from the square root of the wrong interpretation, the effective individual guarantee level as compared to the uniform case is unacceptable at $d=10^{-14}$, let alone from the actual cube root. In effect, there could be many more breaches under an average guarantee, and to prevent this the average level itself has to be further reduced. If one considers $10^{-9}$ as the current state of the art (though that is invalidly derived, as we discussed in this paper) and $10^{-15}$ for individual guarantee as the proper goal, there is a 36 orders of magnitude gap toward the goal. A new cryptographic technique or principle is clearly called for to bridge such a gap.
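The conversion from average to individual guarantee via (15), and the resulting gap, can be sketched numerically (the numbers follow the text; the optimization yielding the cube root is only summarized here):

```python
import math

def markov_bound(mean_abs_Z, threshold):
    # Pr[|Z| >= threshold] <= E[|Z|] / threshold, eq. (15)
    return mean_abs_Z / threshold

d_avg = 1e-9                           # an often-claimed practical average d level
eff_individual = d_avg ** (1.0 / 3.0)  # about 1e-3 effective individual level
d_needed = (1e-15) ** 3                # = 1e-45 for a 1e-15 individual goal
gap = math.log10(d_avg / d_needed)     # the 36 orders of magnitude quoted above
```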
In sum, not only is there a fundamental tradeoff between security level and key rate in QKD protocols of finite $n$, there is such a tradeoff asymptotically also, and there is no security parameter which can be independently varied to increase security without affecting key rate. The precise quantitative behavior is very unfavorable, as illustrated above and in point (i) of section 4.
REMEDY VIA KCQ AND DBM
======================
The different quantum cryptographic approach of KCQ (keyed communication in quantum noise) \[7, 9\] was originally developed to alleviate the inefficiency, sensitivity (lack of robustness), and infrastructure incompatibility (commercially) of BB84-type protocols, to make quantum cryptography practical. The original version of Alpha-Eta for direct encryption \[42\] has been extensively developed, in particular by the US company Nucrypt; it can be called PSK-Y00 to distinguish it from other signal set choices based on the same principle, such as ISK-Y00 \[43\], CPPM or PPM-Y00 \[9\], and the QAM-Y00 being investigated in Japan. I would like to reserve the term “QKD” for protocols that depend on the disturbance-information tradeoff principle for security, in contrast with the term “KCQ”, which utilizes quantum effects associated with the optimal quantum receiver principle for quantum detection \[44\] and does not require checking disturbance. KCQ is nevertheless fully quantum with no classical analog and is capable of delivering ITS as in QKD. It is fully compatible with ordinary optical communications through fibers or other media.
However, great difficulties are encountered in providing general security proofs to KCQ protocols, although under “collective attacks” all protocols, classical or quantum, can be readily proved secure apart from the ECC problem for finite protocols. The DBM (decoy bits method, which has nothing to do with decoy states in QKD) approach is introduced to make general rigorous proof possible. It is a widely applicable technique in both classical and quantum cryptography, and will be presented in detail elsewhere. Here we indicate how BB84 could be modified to become not just the qb-KCQ protocol in \[9\] but also by DBM that allows a different rigorous approach to security without disturbance-information tradeoff: simply use a pseudo-random number generator with a shared secret seed key to pick the qubits as sifted key. The channel characterization may be done separately or from the unused qubits. A final message authentication check on the generated key should (always) be employed. Its quantitative ITS will be given in future papers.
CONCLUSION
==========
Cryptography is a tricky subject, physical and quantum cryptography more so from the added essential physical features on top. The problems involved are far from merely mathematical or physical. It involves conceptual issues on the relation between mathematics and the real world in ways not encountered in other fields and not discussed in books and articles. The numerous erroneous claims in QKD arise from that in part, the prevalent wrong interpretation of an $\epsilon$-secure key as an $\epsilon$-uniform key being a good example. Since cryptography is a serious matter, we must scrutinize our security arguments and pay due attention to Eve’s viewpoint to ensure our security claim is validly established. In this paper we have presented some apparently very serious difficulties in providing valid quantitative security claims in QKD. The keys given in the analyzed theoretical protocols cannot be considered secure even if the derivation is valid, while many steps in the analysis are actually invalid. Claims that the corresponding experimental systems are secure in principle are not founded. Hopefully the reader would be motivated to look seriously into alternative approaches, not just for security but also for the very relevant issues of efficiency, robustness, and whether quantum cryptography does provide a sensible real world alternative in various applications.
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
I would like to thank Greg Kanter and Aysajan Abidin for useful discussions. This work was supported by AFOSR and DARPA.
GLOSSARY {#glossary .unnumbered}
========
------ --------------------------------------
DBM decoy bits method
ECC error correcting code
KCQ keyed communication in quantum noise
KPA known-plaintext attack
MAC message authentication code
OTP one time pad
PAC privacy amplification code
QKD quantum key distribution
------ --------------------------------------
[19]{} N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. 74, 145 (2002).
L. Chen and G. Gong, Communication System Security, CRC Press, 2012.
V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dusek, N. Lutkenhaus, and M. Peev, Rev. Mod. Phys. 81, 1301 (2009).
D. Mayers, J. ACM 48, 351 (2001).
H.P. Yuen, arXiv:1210.2804 (2012).
H.P. Yuen, Phys. Rev. A 82, 062304 (2010).
H.P. Yuen, arXiv:quant-ph/0311061 (2003).
H.P. Yuen, in Proceedings of the 8th QCMC, O. Hirota, J.H. Shapiro, and M. Sasaki, Eds, NICT Press, 163, 2007.
H.P. Yuen, IEEE J. Sel. Top. Quantum Electron. 15, 1630 (2009).
R. Konig, R. Renner, A. Bariska, and U. Maurer, Phys. Rev. Lett. 98, 140502 (2007).
F. Dupuis, J. Florjanczyk, P. Hayden, and D. Leung, arXiv:1011.1612 (2010).
M. Ben-Or, M. Horodecki, D. Leung, D. Mayers, and J. Oppenheim, Lecture Notes on Computer Science, vol. 3387, Springer, 380, 2005.
R. Renner and R. Konig, Lecture Notes on Computer Science, vol. 3378, Springer, 407, 2005.
R. Renner, arXiv:quant-ph/0512258; also in Int. J. Quant. Inf. 6, 1 (2008).
H.P. Yuen, arXiv:1205.5065 (2012).
C.F. Fung, X. Ma and H.F. Chau, Phys. Rev. A 81, 012318 (2010).
T.M. Cover and J.A. Thomas, Elements of Information Theory, Wiley, 1991.
A. Abidin and J-A Larsson, arXiv:1303.0210 (2013)
R. Renner, arXiv:1209.2423 (2012).
M. Tomamichel, C. Lin, N. Gisin, and R. Renner, Nat. Commun. 3, 634 (2012).
J. Muller-Quade and R. Renner, New J. Phys. 11, 085006 (2009).
J. Cederlof and J-A Larsson, IEEE Trans. Inform. Theory 54, 1735 (2008).
A. Abidin and J-A Larsson, arXiv:1109.5168 (2011).
C. Portmann, arXiv:1202.1229 (2012).
H.P. Yuen, arXiv:1205.3820 (2012).
Y. Yamazaki, R. Nair, and H.P. Yuen, in Proceedings of the 8th QCMC, ed. by Hirota, Shapiro and Sasaki, NICT Press, 201, 2007.

M. Tomamichel, C. Schaffner, A. Smith, and R. Renner, arXiv:1002.2436 (2010).
P.W. Shor and J. Preskill, Phys. Rev. Lett. 85, 441 (2000).
M. Hayashi, arXiv:1202.0322 (2012).
M. Hayashi, arXiv:1202.0601 (2012).
M. Hayashi and T. Tsurumaru, arXiv:1107.0589v2 (2012).
I. Gerhardt, Q. Liu, A. Lamas-Linares, J. Skaar, C. Kursiefer, and V. Makarov, Nat. Commun. 2, 349 (2011).
H.P. Yuen, arXiv:1208.5827 (2012).
H.P. Yuen, arXiv:1109.1049 (2011).
H.P. Yuen, arXiv:1207.6985 (2012).
D. Gottesman, H.K. Lo, N. Lutkenhaus, and J. Preskill, Quantum Inf. Comput. 4, 325 (2004).
J.M. Myers and F.H. Madjid, J. Opt. B: Quant. Semiclass. Opt. 4, S109 (2002).
J. Hasegawa, M. Hayashi, T. Hiroshima, and A. Tomita, arXiv:0705.3081 (2007).
A. Dixon, Z. Yuan, J. Dynes, A. Sharpe and A. Shields, Appl. Phys. Lett. 96, 161102 (2010).
N. Walenta et al., arXiv:1109.1051 (2011).
H.P. Yuen, arXiv:1109.1051 (2011).
G. Barbosa, E. Corndorf, P. Kumar, and H.P. Yuen, Phys. Rev. Lett. 90, 227901 (2003).
O. Hirota, M. Sohma, M. Fuse, and K. Kato, Phys. Rev. A 72, 022335 (2005).
H.P. Yuen, R.S. Kennedy, and M. Lax, IEEE Trans. Inform. Theory 21, 125 (1975).
[^1]: This paper with a similar title is to be published in the Proceedings of the SPIE Conference on Quantum-Physics-Based Information Security held in Dresden, Germany, Sep 23-24, 2013. This v2 corrects some typos in v1.
---
abstract: 'We introduce a topology-based nonlinear network model of protein dynamics with the aim of investigating the interplay of spatial disorder and nonlinearity. We show that spontaneous localization of energy occurs generically and is a site-dependent process. Localized modes of nonlinear origin form spontaneously in the stiffest parts of the structure and display site-dependent activation energies. Our results provide a straightforward way for understanding the recently discovered link between protein local stiffness and enzymatic activity. They strongly suggest that nonlinear phenomena may play an important role in enzyme function, allowing for energy storage during the catalytic process.'
author:
- 'B. Juanico'
- 'Y.–H. Sanejouand'
- 'F. Piazza'
- 'P. De Los Rios'
bibliography:
- 'protdyn.bib'
title: Discrete breathers in nonlinear network models of proteins
---
The predictions of elastic network models (ENMs) of proteins [@Tirion:96; @Bahar:97; @Hinsen:98; @NMA] have proven useful in quantitatively describing amino-acid fluctuations at room temperature [@Tirion:96], often in good agreement with isotropic [@Bahar:97], as well as anisotropic measurements [@Phillips:07; @Maritan:02]. Moreover, it has been shown that a few low-frequency normal modes can provide fair insight on the large amplitude motions of proteins upon ligand binding [@Tama:01; @Delarue:02; @Gerstein:02], as previously noticed when more detailed models were considered [@Brooks:85; @Marques:95; @Perahia:95], also by virtue of the robust character of the collective functional motions [@Nicolay:06].
However, low-frequency modes of proteins are known to be highly anharmonic [@Levy:82; @Go:95], a property which has to be taken into account in order to understand energy storage and transfer within their structure as a consequence of ligand binding, chemical reaction, [*etc*]{} [@Straub:00; @Leitner:01]. Indeed, there is growing experimental evidence that long-lived modes of nonlinear origin may exist in proteins [@Edler:2004uq; @Xie:2000fk]. Likewise, many theoretical studies have appeared suggesting that localized vibrations may play an active role in, e.g., enzyme catalysis [@Sitnitsky:06]. These include topological excitations such as solitons [@dOvidio:2005qy] as well as discrete breathers (DBs) [@Archilla:2002lr; @Aubry:01].
The latter are nonlinear modes that emerge in many contexts as a result of both nonlinearity and discreteness [@Flach:1998fj]. Although their existence and stability properties are well understood in systems with translational invariance, much less is known of the subtle effects arising from the interplay of spatial disorder and anharmonicity [@DB:04; @nonlin-disorder:01; @Rasmussen:1999vn]. For this purpose, in the present work we introduce the nonlinear network model (NNM). Our aim is to extend the simple scheme of ENMs, known to capture the topology-based features of protein dynamics [@Tirion:96; @Bahar:97; @Hinsen:98], by adding anharmonic terms. Within the NNM framework, we show that spontaneous localization of energy can occur in protein-like systems and that its properties may be intuitively rationalized in the context of specific biological functions. In our model, the potential energy of a protein, $E_p$, has the following form: $$\label{FPU}
E_p=\sum_{d_{ij}^0 < R_c} \left[
\frac{k_{2}}{2} (d_{ij}-d_{ij}^0)^2 +
\frac{k_{4}}{4} (d_{ij}-d_{ij}^0)^4
\right]$$ where $d_{ij}$ is the distance between atoms $i$ and $j$, $d_{ij}^0$ their distance in the structure under examination (as e.g. solved through X-ray crystallography) and $R_c$ is a cutoff that specifies the interacting pairs. As done in numerous studies, only C$_\a$ atoms are taken into account [@NMA] and $k_{2}$ is assumed to be the same for all interacting atom pairs [@Tirion:96]. As in previous ENM studies [@Delarue:02; @Helene:03], we take $R_c=$10 Å, and fix $k_{2}$ so that the low-frequency part of the linear spectrum matches actual protein frequencies, as calculated through realistic force fields [@Brooks:85; @Marques:95; @Perahia:95]. This gives $k_{2}= 5$ kcal/mol/Å$^2$, with the mass of each C$_\a$ fixed to 110 a.m.u., that is, the average mass of amino-acid residues. Note that the standard ENM corresponds to $k_{4} = 0$, while in the present work $k_{4} = 5$ kcal/mol/Å$^4$.
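A minimal sketch of evaluating the NNM potential above, with the parameter values quoted in the text (the function name and the toy coordinates are ours):

```python
import math

def nnm_energy(coords, coords0, k2=5.0, k4=5.0, Rc=10.0):
    """Nonlinear network model energy: quadratic plus quartic terms over
    all atom pairs closer than Rc in the reference structure coords0.
    Units follow the text: kcal/mol with distances in Angstrom."""
    E = 0.0
    n = len(coords0)
    for i in range(n):
        for j in range(i + 1, n):
            d0 = math.dist(coords0[i], coords0[j])
            if d0 < Rc:                      # pair interacts only if close
                s = math.dist(coords[i], coords[j]) - d0
                E += 0.5 * k2 * s**2 + 0.25 * k4 * s**4
    return E
```

For example, stretching a single contact by 1 Å costs $k_2/2 + k_4/4 = 3.75$ kcal/mol with these parameters.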
Proteins live and perform their functions immersed in water and exchange energy with the solvent through their sizable surface portion. In a previous paper we showed that complex energy relaxation patterns are observed as a result of the inhomogeneity of the coupling to the solvent of bulk and surface atoms [@Piazza:05]. In the presence of nonlinearity, boundary relaxation is known to drive a wide array of systems towards regions of phase space corresponding to localized modes that emerge spontaneously [@Aubry:96; @Piazza:03; @Reigada:2003; @Livi:2006vn]. Thus, in order to study [*typical*]{} excitations of nonlinear origin in protein structures, it appears natural to perform a boundary cooling experiment. Our protocol is the following. After 50 psec of microcanonical molecular dynamics (MD) simulation performed at a temperature $T_{eq}$, the protein is cooled down by adding a linear dissipation term to the force acting on surface atoms, that is, those belonging to amino-acids with more than 25 Å$^2$ of solvent accessible surface area. This represents nearly 40% of the amino-acid residues, for all proteins considered in the present study. The viscous friction coefficient $\g$ is set to 2 psec$^{-1}$, a typical value for protein atoms in a water environment [@Straub:00]. Hereafter, the equilibration energies considered are in the range $k_{{\scriptscriptstyle}B} T_{eq}=2-20$ kcal/mol, that is, of the order of, e.g., the energy release of ATP hydrolysis. With such initial conditions, energy in the system remains high for a period of time long enough so that localization can occur.
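The boundary-cooling protocol amounts to adding a linear friction force $-\gamma m v$ to the surface atoms only. A toy damped velocity-Verlet step illustrating this (our simplification, not the authors' code; the friction is treated explicitly, which is adequate for small $\gamma\,dt$):

```python
def cool_step(x, v, force, m, dt, gamma, surface):
    """One velocity-Verlet step; a linear friction acceleration -gamma*v
    is added only for particle indices in 'surface' (boundary cooling)."""
    f = force(x)
    for i in range(len(x)):
        a = f[i] / m - (gamma * v[i] if i in surface else 0.0)
        v[i] += 0.5 * dt * a      # first half-kick
        x[i] += dt * v[i]         # drift
    f = force(x)                  # forces at the new positions
    for i in range(len(x)):
        a = f[i] / m - (gamma * v[i] if i in surface else 0.0)
        v[i] += 0.5 * dt * a      # second half-kick
    return x, v
```

Particles flagged as surface lose energy monotonically, while the rest evolve conservatively, which is what lets the bulk retain energy long enough for localization to occur.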
![\[spectrum\] Locality of citrate synthase harmonic modes, as a function of their frequencies, together with the locality and frequency of a discrete breather (DB).](Fig_cithcl_fpu10_spectre.pdf "fig:"){width="12.0"}
In Fig. \[etot\], we show the energy of dimeric citrate synthase (PDB code 1IXE) as a function of time, as well as the energy of two amino-acids of monomer A, Thr 208 and Ala 209. After $t=20$ psec and a few large fluctuations, a DB centered at Thr 208 forms. At $t=200$ psec, more than 80% of the total energy is located there. Note the slow decay of the total energy after $t=100$ psec and the periodic energy exchanges of Thr 208 with Ala 209, another among the few amino-acids involved in the DB. Note also that at $t=20$ psec the energy of Thr 208 is higher than at $t=0$, that is, when the friction was turned on, a clear-cut demonstration of the known tendency of DBs to harvest energy from lower-energy excitations [@Flach:1998fj]. In order to check that the phenomenon shown in Fig. \[etot\] is indeed the spontaneous localization of a DB, we switched off the friction at $t=200$ psec and performed 100 more psec of microcanonical MD simulation. Then, we projected the latter trajectory on the first eigenvector of the corresponding velocity-covariance matrix, which gives the pattern of correlated atomic velocities involved in the DB. The Fourier transform of such a projection yields an accurate value for the DB frequency, while the spectral line-width provides information on the DB stability over the 100 psec analysis time-span.
![\[DB\] Stiffness of dimeric citrate synthase as a function of residue number (dashed line). The number of DBs found at a given site out of 500 instances is also reported (black diamonds, right y-axis).](Fig_cithcl_kfce_av10h_db.pdf "fig:"){width="9.5"}
In Fig. \[spectrum\] we report the harmonic spectrum of citrate synthase as well as the DB frequency as functions of a locality measure. The latter is defined as $L_{k}= \sum_{i,\alpha} [\xi_{i\alpha}^k]^4/
[\sum_{i,\alpha} [\xi_{i\alpha}^k]^2]^2$, where $\xi_{i\alpha}^k$ is the $\alpha$-component $(\alpha = x,y,z)$ of the $i$-th atom in the $k$-th displacement pattern (normalized eigenvector, DB). As expected, the DB frequency (130 cm$^{-1}$) lies above the highest frequency of the harmonic spectrum (101 cm$^{-1}$). Moreover, the corresponding spatial pattern is much more localized than any of the harmonic modes (note the logarithmic scale).
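In code, $L_k$ is simply the inverse participation ratio of the displacement pattern; the following sketch assumes an $(N,3)$ input shape.

```python
import numpy as np

def locality(xi):
    """Locality L_k of a displacement pattern xi of shape (N, 3):
    the sum of fourth powers divided by the squared sum of squares.
    Ranges from 1/(3N) (fully delocalized) to 1 (a single component)."""
    xi = np.asarray(xi, dtype=float)
    return float(np.sum(xi**4) / np.sum(xi**2)**2)
```

For a normalized eigenvector the denominator equals one, so $L_k$ reduces to the sum of the fourth powers of its components.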
Starting from random initial conditions, we obtained 500 stable DBs following the above-outlined protocol. Although in many cases several DBs emerged, we decided to retain only the runs where a single DB caught most of the system energy, and more energy than the average amount per site at $t=0$. In Fig. \[DB\] we report the number of DBs found at each site. The largest fraction (20 %) of these highly energetic DBs formed at Thr 208 in monomer A, but we also observed DBs at 27 other sites, notably at Thr 192 of monomer B (18%). Note also that, although the studied protein is a dimer, that is, with an approximate but clear two-fold structural symmetry, the probability of observing a DB at a given site varies from one monomer to the other, indicating that the localization dynamics is rather sensitive to small changes in the local environment. As shown in Fig. \[DB\], this probability is higher in the stiffest parts of the protein scaffold, as measured through an indicator of local stiffness $s_{i}$. For amino-acid $i$, the latter is defined as: $$\label{Stiffness}
s_i=\frac{1}{\mathcal{N}_i}
\sum_{j,\alpha} \sum_{k \in \mathcal{S}} [\xi_{j\alpha}^k]^2 \theta(R_{c}-d^0_{ij})$$ where $\mathcal{N}_i = \sum_{j} \theta(R_{c}-d^0_{ij})$ is the number of neighbors of the $i$-th residue and $\theta(x)$ is the Heaviside step function. The second sum is over the set $\mathcal{S}$ of the ten highest frequency harmonic modes. The averaging over the $\mathcal{N}_i$ neighbors slightly smoothes mode contributions and helps to underline the fact that in each monomer of citrate synthase there is a stretch of nearly forty consecutive amino-acids (residues 185-225) with a remarkably stiff environment, deeply buried at the interface between the two monomers. This is obviously where most DBs tend to emerge. Note, however, that the relationship between high-frequency harmonic modes and spontaneous energy localization is not a straightforward one: for instance, DBs were observed only a couple of times at the site most involved in the highest frequency normal mode, namely, Ser 213. As a matter of fact, as suggested by the large energy fluctuations observed at site Thr 208 before the DB shown in Fig. \[etot\] springs up, a competition between [*potential*]{} DBs is likely to occur, with possible weak-to-strong energy transfers, before a given site is occupied by a stable mode.
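A direct transcription of Eq. \[Stiffness\] is sketched below. The $(K,N,3)$ shape assumed for the modes and the inclusion of the $j=i$ self term (since $d^0_{ii}=0<R_c$) are conventions of this sketch.

```python
import numpy as np

RC = 10.0  # cutoff from the text, in Angstrom

def stiffness(ref_coords, modes, n_modes=10):
    """Local stiffness s_i: neighbor-averaged squared weight of the
    n_modes highest-frequency modes.  ref_coords has shape (N, 3);
    modes has shape (K, N, 3), sorted with highest frequencies first."""
    d0 = np.linalg.norm(ref_coords[:, None] - ref_coords[None, :], axis=-1)
    neigh = d0 < RC                     # theta(Rc - d0_ij); includes j = i
    # sum_k sum_alpha [xi^k_{j alpha}]^2, one number per residue j
    weight = np.sum(np.asarray(modes)[:n_modes] ** 2, axis=(0, 2))
    return neigh @ weight / neigh.sum(axis=1)
```

Residues whose neighborhoods carry a large fraction of the highest-frequency modes thus get a large $s_i$, which is the sense in which Fig. \[DB\] reports "stiffness".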
![\[enefreq\] DB frequencies in citrate synthase as a function of their energy (pluses). The cases of Threonine 208 (filled circles) and Alanine 209 (filled squares) are highlighted. Using our protocol, no DB with an energy lower than 37.8 kcal/mole was observed, out of a total of 500 cases.](Fig_cithcl_dbenefqcy.pdf "fig:"){width="9.0"}
In lattice systems sites are obviously equivalent. Here, as shown in Fig. \[enefreq\], the energy-frequency relationship is site-dependent. Furthermore, the probability for a DB to localize at a given site depends in a non-obvious fashion upon the energy it needs to reach a given frequency at that location. While most DBs emerge at Thr 208, i.e. the site where the least energy is required for a given frequency, many DBs are also observed at Ala 209 in monomer A, one of the sites that demands more energy. In more than one dimension one expects DBs to appear only above a characteristic energy [@Kastner:2004yq; @Flach:1998fj]. Hence, our results hint at a strong site-dependence of such energy threshold, non-trivially related to local structural properties. To shed light on this intriguing feature, a detailed characterization of the small-amplitude side of the DB energy-amplitude curves at different sites is currently under way.
In the following step, we looked for DBs in other proteins, both dimeric and monomeric. For small proteins like HIV-1 protease (PDB code 1A30), a dimeric $2 \times 99$ amino-acids enzyme, no DB could be obtained. This is likely to be due to the fact that in small proteins too many amino-acids are in direct interaction with a site where energy dissipation occurs. This means that small proteins may require more detailed models, like all-atom schemes, where cutoff values of the order of 5 Å are customary [@Elnemo1; @Nicolay:06; @Phillips:07].
![\[Stats\] Stiffness of the environment of amino-acid residues involved in enzymatic activity (black squares), compared to that of amino-acids of same chemical type (crosses) randomly chosen within the same set of enzyme structures. The broken line is only a guide for the eye.](Fig_enmav_csa_10h_stat.pdf "fig:"){width="9.5"}
In the case of aconitase (PDB code 1FGH), a monomeric 753 amino-acids enzyme, and alkaline phosphatase (PDB code 1ALK), a dimeric $2 \times 499$ amino-acids enzyme, DBs prove nearly as easy to generate as in the case of citrate synthase. However, for proteins of similar sizes, the probability of similar events turns out to vary significantly from one protein to another. For instance, in the cases of phospholipase D (PDB code 1V0Y), a monomeric 504 amino-acids enzyme, and isoamylase (PDB code 1BF2), a monomeric 750 amino-acids enzyme, out of 100 cooling MD simulations, only 8 and 5 DBs were obtained, respectively, in contrast to citrate synthase, where our success rate is over 50 %. This points to the intriguing conclusion that not only are DBs in proteins site-selective, but they also appear to be non-trivially fold-selective.
In all the analyzed structures, spontaneous localization of energy occurs in the stiffest parts of the structure. Thus, we turn now to examine the relationship between protein stiffness and function. Following the hypotheses that enzymatic activity may require some kind of energy storage and that DBs may play a role in the process, we computed high-frequency normal modes for a set of 833 enzymes from the 2.1.11 version of the Catalytic Site Atlas [@CSA]. Then, we determined the stiffness of each amino-acid known to be involved in enzymatic activity according to Eq. \[Stiffness\]. As a comparison, we also determined stiffnesses of amino-acids of the same chemical type, but picked at random among those not known to be involved in enzymatic activity. As shown in Fig. \[Stats\], catalytic amino-acids tend to be located in stiffer parts of enzyme structures, in agreement with our hypotheses. This is not an obvious result, since for the sake of catalytic activity amino-acids have to interact with enzyme substrates, that is, to be accessible to them. Such a trend has already been noticed in other studies. Notably, using the ease of displacing any given amino-acid residue with respect to the others as a stiffness measure, it was shown that roughly 80 % of the catalytic residues are located in stiff parts of enzyme structures [@Lavery:07]. In a more indirect way, it was also remarked that global hinge centers colocalize with catalytic sites in more than 70 % of enzymes [@Bahar:05]. So, stiff parts may play the role of a pivot, allowing for accurate large-amplitude conformational changes of enzymes upon substrate binding.
What our results further suggest is that stiff parts of enzyme structures may also play another major role in enzyme function, namely by allowing for an active role of nonlinear localized modes in energy storage during the catalytic process.
Y-.H.S. wishes to thank M. Peyrard and T. Dauxois for an invitation to talk at a training school held in Les Houches [@DB:04], where he was introduced to the fascinating world of discrete breathers.
---
author:
- |
Wenqi Wang\
Purdue University\
[[email protected]]{}
- |
Yifan Sun\
Technicolor Research\
[[email protected]]{}
- |
Brian Eriksson\
Adobe\
[[email protected]]{}
- |
Wenlin Wang\
Duke University\
[[email protected]]{}
- |
Vaneet Aggarwal\
Purdue University\
[[email protected]]{}
bibliography:
- 'Ref.bib'
title: 'Wide Compression: Tensor Ring Nets'
---
Introduction {#sec:intro}
============
Related Work {#sec:relWork}
============
Tensor Ring Nets (TRN) {#sec:model}
======================
\[sec:fully\]
\[sec:conv\]
\[sec:init\]
\[sec:exp\]
Conclusion
==========
\[sec:conclude\]
---
abstract: |
This article is concerned with automated complexity analysis of term rewrite systems. Since these systems underlie much of declarative programming, time complexity of functions defined by rewrite systems is of particular interest. Among other results, we present a variant of the dependency pair method for analysing runtime complexities of term rewrite systems automatically. The established results significantly extend previously known techniques: we give examples of rewrite systems subject to our methods that could not previously be analysed automatically. Furthermore, the techniques have been implemented in the Tyrolean Complexity Tool. We provide ample numerical data for assessing the viability of the method.
*Key words*: Term rewriting, Termination, Complexity Analysis, Automation, Dependency Pair Method
author:
- |
Nao Hirokawa\
School of Information Science,\
Japan Advanced Institute of Science and Technology, Japan,\
[[[email protected]]([email protected])]{}
- |
Georg Moser\
Institute of Computer Science,\
University of Innsbruck, Austria\
[[[email protected]]([email protected])]{}
date: June 2011
title: 'Automated Complexity Analysis Based on the Dependency Pair Method[^1]'
---
Introduction {#Introduction}
============
This article is concerned with automated complexity analysis of term rewrite systems (TRSs for short). Since these systems underlie much of declarative programming, time complexity of functions defined by TRSs is of particular interest.
Several notions to assess the complexity of a terminating TRS have been proposed in the literature, compare [@CKS:1989; @HofbauerLautemann:1989; @CL:1992; @HM:2008]. The conceptually simplest one was suggested by Hofbauer and Lautemann in [@HofbauerLautemann:1989]: the complexity of a given TRS is measured as the maximal length of derivation sequences. More precisely, the *derivational complexity function* with respect to a terminating TRS relates the maximal derivation height to the size of the initial term. However, when analysing the complexity of a function, it is natural to refine derivational complexity so that only terms whose arguments are constructor terms are employed. Consequently, the *runtime complexity function* with respect to a TRS relates the length of the longest derivation sequence to the size of the initial term, where the arguments are supposed to be in normal form. This terminology was suggested in [@HM:2008]. A related notion has been studied in [@CKS:1989], where it is augmented by an *average case* analysis. Finally [@CL:1992] studies the complexity of the functions *computed* by a given TRS. This latter notion is extensively studied within *implicit computational complexity theory* (*ICC* for short), see [@BMR:2009] for an overview. A conceptual difference from runtime complexity is that polynomial computability addresses the number of steps of (deterministic) Turing machines, while runtime complexity measures the number of rewrite steps, which is closely related to the operational semantics of programs. For instance, a statement such as "this sorting algorithm has quadratic complexity" is meant in the latter sense.
This article presents methods for (over-)estimating runtime complexity automatically. We establish the following results:
1. We extend the applicability of direct techniques for complexity results by showing how the monotonicity constraints can be significantly weakened through the employ of *usable replacement maps*.
2. We revisit the *dependency pair method* in the context of complexity analysis. The dependency pair method was originally developed for proving termination [@ArtsGiesl:2000], and is known as one of the most successful methods in automated termination analysis.
3. We introduce the *weight gap principle* which allows the estimation of the complexity of a TRS in a modular way.
4. We revisit the dependency graph analysis of the dependency pair method in the context of complexity analysis. For that we introduce a suitable notion of *path analysis* that allows to modularise complexity analysis further.
Note that while we have taken seminal ideas from termination analysis as starting points, often the underlying principles are crucially different from those used in termination analysis.
A preliminary version of this article appeared in [@HM:2008; @HM:2008b]. Apart from the correction of some shortcomings, we extend our earlier work in the following way: First, all results on usable replacement maps are new (see Section \[CSR\]). Second, the side condition for the weight gap principle [@HM:2008 Theorem 24] is corrected in Section \[semantical gap\]. Third, the weight gap principle is extended by exploiting the initial term conditions and is generalised by means of matrix interpretations (see Section \[semantical gap\]). Finally, the applicability of the path analysis is strengthened in comparison to the conference version [@HM:2008b] (see Section \[DG\]).
The remainder of this article is organised as follows. In the next section we recall basic notions. We define runtime complexity and a subclass of matrix interpretations for its analysis in Section \[Runtime Complexity\]. In Section \[CSR\] we relate context-sensitive rewriting to runtime complexity. In the next sections several ingredients in the dependency pair method are recapitulated for complexity analysis: dependency pairs and usable rules (Section \[dependency pairs\]), reduction pairs via the weight gap principle (Section \[semantical gap\]), and dependency graphs (Section \[DG\]). In order to assess the viability of the presented techniques, all of them have been implemented in the *Tyrolean Complexity Tool*[^2] (TcT for short), and empirical data is provided in Section \[Experiments\]. Finally we conclude the article by mentioning related works in Section \[Conclusion\].
Preliminaries {#Preliminaries}
=============
We assume familiarity with term rewriting [@BaaderNipkow:1998; @Terese] but briefly review basic concepts and notations from term rewriting, relative rewriting, and context-sensitive rewriting. Moreover, we recall matrix interpretations.
Rewriting
---------
Let $\VS$ denote a countably infinite set of variables and $\FS$ a signature, such that $\FS$ contains at least one constant. The set of terms over $\FS$ and $\VS$ is denoted by $\TERMS$. The *root symbol* of a term $t$, denoted as $\rt(t)$, is either $t$ itself, if $t \in \VS$, or the symbol $f$, if $t = f(\seq{t})$. The *set of position* $\Pos(t)$ of a term $t$ is defined as usual. We write $\Pos_{\GG}(t) \subseteq \Pos(t)$ for the set of positions of subterms, whose root symbol is contained in $\GG \subseteq \FS$. The subterm of $t$ at position $p$ is denoted as $\atpos{t}{p}$, and $t[u]_p$ denotes the term that is obtained from $t$ by replacing the subterm at $p$ by $u$. The subterm relation is denoted as $\subterm$. $\Var(t)$ denotes the set of variables occurring in a term $t$. The *size* $\size{t}$ of a term is defined as the number of symbols in $t$: $$\size{t} \defsym
\begin{cases}
1 & \text{if $t$ is a variable} \tkom\\
1+ \sum_{1 \leqslant i \leqslant n} \size{t_i} & \text{if $t=f(t_1,\dots,t_n)$} \tpkt
\end{cases}$$
A *term rewrite system* (*TRS*) $\RS$ over $\TERMS$ is a *finite* set of rewrite rules $l \to r$, such that $l \notin \VS$ and $\Var(l) \supseteq \Var(r)$. The smallest rewrite relation that contains $\RS$ is denoted by $\to_{\RS}$. The transitive closure of $\to_{\RS}$ is denoted by $\rstrew{\RS}$, and its transitive and reflexive closure by $\rssrew{\RS}$. We simply write $\to$ for $\to_{\RS}$ if $\RS$ is clear from context. Let $s$ and $t$ be terms. If exactly $n$ steps are performed to rewrite $s$ to $t$ we write $s \to^n t$. Sometimes a derivation $s = s_0 \to s_1 \to \cdots \to s_n = t$ is denoted as $A \colon s \rss t$ and its length $n$ is referred to as $\card{A}$. A term $s \in \TERMS$ is called a *normal form* if there is no $t \in \TERMS$ such that $s \to t$. With $\NF(\RS)$ we denote the set of all normal forms of a term rewrite system $\RS$. The *innermost rewrite relation* $\irew{\RS}$ of a TRS $\RS$ is defined on terms as follows: $s \irew{\RS} t$ if there exist a rewrite rule $l \to r \in \RS$, a context $C$, and a substitution $\sigma$ such that $s = C[l\sigma]$, $t = C[r\sigma]$, and all proper subterms of $l\sigma$ are normal forms of $\RS$. *Defined symbols* of $\RS$ are symbols appearing at root in left-hand sides of $\RS$. The set of defined function symbols is denoted as $\DS$, while the *constructor symbols* $\FF \setminus \DD$ are collected in $\CS$. We call a term $t = f(\seq{t})$ *basic* or *constructor based* if $f \in \DS$ and $t_i \in \TA(\CS,\VS)$ for all $1 \leqslant i \leqslant n$. The set of all basic terms are denoted by $\TB$. A TRS $\RS$ is called *duplicating* if there exists a rule $l \to r \in \RS$ such that a variable occurs more often in $r$ than in $l$. We call a TRS *(innermost) terminating* if no infinite (innermost) rewrite sequence exists.
We recall the notion of *relative rewriting*, cf. [@Geser:1990; @Terese]. Let $\RS$ and $\SS$ be TRSs. The relative TRS $\RS/\SS$ is the pair $(\RS, \SS)$. We define ${s \rsrew{\RS / \RSS} t} \defsym
{s \rssrew{\RSS} \cdot \rsrew{\RS} \cdot \rssrew{\RSS} t}$ and we call $\rsrew{\RS /\RSS}$ the *relative rewrite relation* of $\RS$ over $\RSS$. Note that ${\rsrew{\RS / \RSS}} = {\rsrew{\RS}}$, if $\SS = \varnothing$. $\RS / \RSS$ is called *terminating* if $\rsrew{\RS / \RSS}$ is well-founded. In order to generalise the innermost rewriting relation to relative rewriting, we introduce the slightly technical construction of the *restricted* rewrite relation, compare [@T07]. The *restricted rewrite relation $\toss{\QS}_{\RS}$* is the restriction of $\rsrew{\RS}$ where all arguments of the redex are in normal form with respect to the TRS $\QS$. We define the *innermost relative rewriting relation* (denoted as $\irew{\RS/\RSS}$) as follows: $${\irew{\RS/\RSS}} \defsym
{{\toss{\RS \cup \RSS}_{\RSS}^{\ast}} \cdot
{\toss{\RS \cup \RSS}_{\RS}} \cdot
{\toss{\RS \cup \RSS}_{\RSS}^{\ast}}} \tkom$$
We briefly recall context-sensitive rewriting. A replacement map $\mu$ is a function with $\mu(f) \subseteq \{1,\ldots, n\}$ for all $n$-ary functions with $n \geqslant 1$. The set $\Pos_\mu(t)$ of *$\mu$-replacing positions* in $t$ is defined as follows: $$\Pos_\mu(t) =
\begin{cases}
\{ \epsilon \} & \text{if $t$ is a variable} \tkom \\
\{ \epsilon \} \cup \{ ip \mid \text{$i \in \mu(f)$ and
$p \in \Pos_\mu(t_i)$} \}
& \text{if $t = f(\seq{t})$} \tpkt
\end{cases}$$ A *$\mu$-step* $s \muto{\mu} t$ is a rewrite step $s \to t$ whose rewrite position is in $\Pos_\mu(s)$. The set of all non-$\mu$-replacing positions in $t$ is denoted by $\NPos_\mu(t)$; namely, $\NPos_\mu(t) \defsym \Pos(t) \setminus \Pos_\mu(t)$.
Matrix Interpretations
----------------------
One of the most powerful and popular techniques for analysing derivational complexities is the use of orders induced by matrix interpretations [@EWZ08]. In order to define them, we first introduce (weakly) monotone algebras.
A *proper order* is a transitive and irreflexive relation and a *preorder* (or *quasi-order*) is a transitive and reflexive relation. A proper order $\succ$ is *well-founded* if there is no infinite decreasing sequence $t_1 \succ t_2 \succ t_3 \cdots$. We say a proper order $\succ$ and a TRS $\RS$ are *compatible* if $\RS \subseteq {\succ}$.
An $\FS$-*algebra* $\A$ consists of a carrier set $A$ and a collection of interpretations $f_\A$ for each function symbol in $\FS$. By $\eval{\alpha}{\A}(\cdot)$ we denote the usual evaluation function of $\A$ according to an assignment $\alpha$ which maps variables to values in $A$. A *monotone $\FS$-algebra* is a pair $(\A,\succ)$ where $\A$ is an $\FS$-algebra and $\succ$ is a proper order such that for every function symbol $f\in\FS$, $f_\A$ is strictly monotone in all coordinates with respect to $\succ$. A *weakly monotone $\FS$-algebra* $(\A,\succcurlyeq)$ is defined similarly, but for every function symbol $f\in\FS$, it suffices that $f_\A$ is weakly monotone in all coordinates (with respect to the quasi-order $\succcurlyeq$). A monotone $\FS$-algebra $(\A,\succ)$ is called *well-founded* if $\succ$ is well-founded. We write *WMA* instead of well-founded monotone algebra.
Any (weakly) monotone $\FS$-algebra $(\A,\R)$ induces a binary relation $\R_\A$ on terms: define $s \R_\A t$ if $\eval{\alpha}{\A}(s) \R \eval{\alpha}{\A}(t)$ for all assignments $\alpha$. Clearly if $\R$ is a proper order (quasi-order), then $\R_\A$ is a proper order (quasi-order) on terms, and if $\R$ is well-founded, then $\R_\A$ is well-founded on terms. We say $\A$ is *compatible* with a TRS $\RS$ if ${\RS} \subseteq {\R_\A}$. Let $\geqord{\A}$ denote the quasi-order induced by a weakly monotone algebra $(\A,\succcurlyeq)$, then $\eqord{\A}$ denotes the equivalence (on terms) induced by $\geqord{\A}$. Let $\mu$ denote a replacement map. Then we call a well-founded algebra $(\A,\succ)$ *$\mu$-monotone* if for every function symbol $f \in \FS$, $f_\A$ is strictly monotone *on* $\mu(f)$, i.e., $f_\A$ is strictly monotone with respect to every argument position in $\mu(f)$. Similarly a (strict) relation $\R$ is called $\mu$-monotone if it is (strictly) monotone on $\mu(f)$ for all $f \in \FS$. Let $\RS$ be a TRS compatible with a $\mu$-monotone relation $\R$. Then clearly any $\mu$-step $s \muto{\mu} t$ implies $s \R t$.
We recall the concept of *matrix interpretations* on natural numbers (see [@EWZ08] but compare also [@HW06]). Let $\FS$ denote a signature. We fix a dimension $d\in\N$ and use the set $\N^d$ as the carrier of an algebra $\A$, together with the following extension of the natural order $>$ on $\N$: $$(x_1,x_2,\ldots,x_d) > (y_1,y_2,\ldots,y_d) \defeqv
x_1>y_1 \wedge x_2 \geqslant y_2 \wedge \ldots \wedge x_d \geqslant y_d \tpkt$$ Let $\mu$ be a replacement map. For each $n$-ary function symbol $f$, we choose as an interpretation a linear function of the following shape: $$f_{\A} \colon (\vec{v}_1,\ldots,\vec{v}_n)
\mapsto F_1 \vec{v}_1 + \cdots + F_n \vec{v}_n + \vec{f}
\tkom$$ where $\vec{v}_1,\ldots,\vec{v}_n$ are (column) vectors of variables, $F_1,\ldots,F_n$ are matrices (each of size $d \times d$), and $\vec{f}$ is a vector over $\N$. Moreover, suppose for any $i \in \mu(f)$ the top left entry $(F_i)_{1,1}$ is positive. Then it is easy to see that the algebra $\A$ forms a $\mu$-monotone WMA. Let $\A$ be a matrix interpretation, let $\alpha_0$ denotes the assignment mapping any variable to $\vec{0}$, i.e., $\alpha_0(x) = \vec{0}$ for all $x \in \VS$, and let $t$ be a term. In the following we write $[t]$, $[t]_j$ as an abbreviation for $\eval{\alpha_0}{\A}(t)$, or $\left( \eval{\alpha_0}{\A}(t) \right)_j$ ($1 \leqslant j \leqslant d$), respectively, if the algebra $\A$ is clear from the context.
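For concreteness, the following sketch evaluates terms (encoded as nested tuples) under a dimension-2 interpretation of this linear shape. The dictionary `interp` and its matrices are hypothetical choices for illustration only; no compatibility claim with the TRSs of this article is intended.

```python
import numpy as np

# A hypothetical matrix interpretation of dimension d = 2 for symbols
# 0, s and minus; each symbol maps to (list of matrices F_i, vector f).
I = np.eye(2, dtype=int)
interp = {
    '0':     ([], np.array([0, 0])),
    's':     ([np.array([[1, 1], [0, 1]])], np.array([1, 0])),
    'minus': ([I, np.array([[1, 0], [0, 0]])], np.array([1, 1])),
}

def evaluate(t, alpha=None):
    """Value [t] of a term under the interpretation; alpha maps
    variables (plain strings) to vectors in N^2."""
    if isinstance(t, str):                 # variable
        return alpha[t]
    mats, const = interp[t[0]]
    v = const.copy()
    for F, arg in zip(mats, t[1:]):        # F_1 v_1 + ... + F_n v_n + f
        v = v + F @ evaluate(arg, alpha)
    return v
```

For the rule $x - \m{0} \to x$ one then checks that $[x - \m{0}]$ exceeds $[x]$ strictly in the first component and weakly in the second, i.e. the sides are ordered by the extension of $>$ defined above.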
Runtime Complexity {#Runtime Complexity}
==================
In this section we formalise runtime complexity and then define a subclass of matrix interpretations that give polynomial upper-bounds.
The *derivation height* of a term $s$ with respect to a well-founded, finitely branching relation $\to$ is defined as: $\dheight(s,\to) = \max\{ n \mid \exists t \; s \to^n t \}$. Let $\RS$ be a TRS and $T$ be a set of terms. The *complexity function with respect to a relation $\to$ on $T$* is defined as follows: $$\comp(n, T, \rew) = \max\{ \dheight(t, \rew) \mid
\text{$t \in T$ and $\size{t} \leqslant n$}\} \tpkt$$ In particular we are interested in the (innermost) complexity with respect to $\rsrew{\RS}$ ($\irew{\RS}$) on the set $\TB$ of all *basic* terms.
Let $\RS$ be a TRS. We define the *runtime complexity function* $\Rc{\RS}(n)$, the *innermost runtime complexity function* $\Rci{\RS}(n)$, and the *derivational complexity function* $\Dc{\RS}(n)$ as $\comp(n, {\TB}, \rsrew{\RS})$, $\comp(n, {\TB}, \irew{\RS})$, and $\comp(n, \TA(\FS,\VS), \rsrew{\RS})$, respectively.
Note that the above complexity functions need not be defined, as the rewrite relation $\rsrew{\RS}$ is not always well-founded *and* finitely branching. We sometimes say the (innermost) runtime complexity of $\RS$ is *linear*, *quadratic*, or *polynomial* if there exists a (linear, quadratic) polynomial $p(n)$ such that $\Rcpareni{\RS}(n) \leqslant p(n)$ for sufficiently large $n$. The (innermost) runtime complexity of $\RS$ is called *exponential* if there exist constants $c$, $d$ with $c,d \geqslant 2$ such that $c^n \leqslant \Rcpareni{\RS}(n) \leqslant d^n$ for sufficiently large $n$.
The next example illustrates a difference between derivational complexity and runtime complexity.
\[ex:1\] \[ex:div\] Consider the following TRS $\RSdiv$[^3] $$\begin{aligned}
{4}
1\colon &\; &
x - \m{0} & \to x
&
\qquad
3\colon &\; &
\m{0} \div \m{s}(y) & \to \m{0}
\\
2\colon &\; &
\m{s}(x) - \m{s}(y) & \to x - y
&
\qquad
4\colon &\; &
\m{s}(x) \div \m{s}(y) & \to \m{s}((x - y) \div \m{s}(y))
\tpkt\end{aligned}$$ Although the functions *computed* by $\RSdiv$ are obviously feasible, this is not reflected in the derivational complexity of $\RSdiv$. Consider rule 4, which we abbreviate as $C[x] \to D[x,x]$. Since the maximal derivation height starting with $C^n[x]$ equals $2^{n-1}$ for all $n > 0$, $\RSdiv$ admits (at least) exponential derivational complexity. In general any duplicating TRS admits (at least) exponential derivational complexity.
In general it is not possible to bound $\Dc{\RS}$ polynomially in $\Rc{\RS}$, as witnessed by Example \[ex:1\] and the observation that the runtime complexity of $\RS$ is linear (see Example \[ex:1:ua\], below). We will use Example \[ex:1\] as our running example.
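The linear runtime complexity of $\RSdiv$ can be checked experimentally with a small innermost rewriter. The tuple encoding of terms and the leftmost-innermost strategy below are implementation choices of this sketch, not part of the formal development.

```python
# Terms of R_div as nested tuples: ('0',), ('s', t), ('minus', t, u),
# ('div', t, u).  step() performs one leftmost-innermost rewrite step.

def step(t):
    """Return (reduct, True) after one rewrite step, or (t, False)
    if t is a normal form."""
    if len(t) > 1:
        for i in range(1, len(t)):             # reduce arguments first
            r, done = step(t[i])
            if done:
                return t[:i] + (r,) + t[i + 1:], True
    if t[0] == 'minus':
        if t[2] == ('0',):                     # rule 1: x - 0 -> x
            return t[1], True
        if t[1][0] == 's' and t[2][0] == 's':  # rule 2: s(x) - s(y) -> x - y
            return ('minus', t[1][1], t[2][1]), True
    if t[0] == 'div' and t[2][0] == 's':
        if t[1] == ('0',):                     # rule 3: 0 div s(y) -> 0
            return ('0',), True
        if t[1][0] == 's':                     # rule 4
            return ('s', ('div', ('minus', t[1][1], t[2][1]), t[2])), True
    return t, False

def dheight(t):
    """Length of the leftmost-innermost derivation from t to normal form."""
    n = 0
    while True:
        t, done = step(t)
        if not done:
            return n
        n += 1

def num(m):
    """The numeral s^m(0)."""
    t = ('0',)
    for _ in range(m):
        t = ('s', t)
    return t
```

For the basic term $\mathbf{m} \div \mathbf{2}$ with even $m$ this yields $\tfrac{3m}{2}+1$ steps, linear in the size of the starting term, in contrast to the exponential growth observed above for the nested terms $C^n[x]$.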
Below we define classes of orders whose compatibility with a TRS $\RS$ bounds its runtime complexity from above. Note that $\dheight(t, {\succ})$ is undefined if the relation $\succ$ is not well-founded or not finitely branching. In fact compatibility of a constructor TRS with the polynomial path order $>_{\m{pop*}}$ ([@AM:2009]) induces polynomial innermost runtime complexity, whereas $\m{f}(x) >_{\m{pop*}} \m{g}^n(x) >_{\m{pop*}} \cdots >_{\m{pop*}}
\m{g}^2(x) >_{\m{pop*}} \m{g}(x) >_{\m{pop*}} x$ holds for every $n$ when the precedence $\m{f} > \m{g}$ is used. Hence $\dheight(t, {>_{\m{pop*}}})$ is undefined, while the order $>_{\m{pop*}}$ can be employed in complexity analysis.
\[d:collapsible\] Let $\R$ be a binary relation over terms, let $\succ$ be a proper order on terms, and let $\Slow$ denote a mapping associating a term with a natural number. Then $\succ$ is *$\Slow$-collapsible on $\R$* if $\Slow(s) > \Slow(t)$, whenever ${s} \R {t}$ and ${s} \succ {t}$ holds. An order $\succ$ is *collapsible (on $\R$)*, if there is a mapping $\Slow$ such that $\succ$ is $\Slow$-collapsible (on $\R$).
Let $\R$ be a finitely branching and well-founded relation. Further, let $\succ$ be a $\Slow$-collapsible order with ${\R} \subseteq {\succ}$. Then $\dheight(t,{\R}) \leqslant \Slow(t)$ holds for all terms $t$.
The alert reader will have noticed that any proper order $\succ$ is collapsible on a finitely branching and well-founded relation $\R$: simply set $\Slow(t) \defsym \dheight(t,{\R})$. However, this observation is of limited use if we wish to bound the derivation height of $t$ independently of $\R$.
If a TRS $\RS$ and a $\mu$-monotone matrix interpretation $\A$ are compatible, $\Slow(t)$ can be given by $[t]_1$. In order to estimate derivational or runtime complexity, one needs to relate $[t]_1$ to $|t|$. To this end we define degrees of matrix interpretations.
A matrix interpretation is of *(basic) degree* $d$ if there is a constant $c$ such that $[t]_i \leqslant c \cdot |t|^d$ for all (basic) terms $t$ and $i$, respectively.
An *upper triangular complexity matrix* is a matrix $M$ in $\N^{d\times d}$ such that we have $M_{j,k}=0$ for all $1 \leqslant k < j \leqslant d$, and $M_{j,j}\leqslant 1$ for all $1 \leqslant j \leqslant d$. We say that a WMA $\A$ is a *triangular matrix interpretation* (*TMI* for short) if $\A$ is a matrix interpretation (over $\N$) and all matrices employed are of upper triangular complexity form. It is easy to define triangular matrix interpretations, such that an algebra $\A$ based on such an interpretation, forms a well-founded *weakly* monotone algebra. To simplify notation we will also refer to $\A$ as a TMI, if no confusion can arise from this. A TMI $\A$ of dimension 1, that is a linear polynomial, is called a *strongly linear interpretation* (*SLI* for short) if all interpretation functions $f_{\A}$ are strongly linear. Here a polynomial $P(x_1,\dots,x_n)$ is strongly linear if $P(x_1,\dots,x_n) = x_1 + \cdots + x_n + c$.
\[l:8\] Let $\A$ be a TMI and let $M$ denote the component-wise maximum of all matrices occurring in $\A$. Further, let $d$ denote the number of ones occurring along the diagonal of $M$. Then for all $1 \leqslant i,j \leqslant d$ we have $(M^n)_{i,j} = \bO(n^{d-1})$.
The lemma is a direct consequence of Lemma 4 in [@NZM:2010] together with the observation that for any triangular complexity matrix, the diagonal entries denote the multiset of eigenvalues.
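Lemma \[l:8\] is easy to verify numerically. The matrix below is a hypothetical upper triangular complexity matrix with $d = 3$ ones on the diagonal; its powers grow quadratically, i.e. as $n^{d-1}$.

```python
import numpy as np

# Upper triangular complexity matrix: zeros below the diagonal, diagonal
# entries at most 1, and d = 3 ones on the diagonal (hypothetical values).
M = np.array([[1, 2, 1],
              [0, 1, 3],
              [0, 0, 1]])

def top_right(n):
    """The entry (M^n)_{1,3}; for this M it equals 3*n**2 - 2*n."""
    return int(np.linalg.matrix_power(M, n)[0, 2])
```

Replacing one diagonal entry by 0 drops the growth to linear, matching the statement that the degree is governed by the number of ones on the diagonal.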
\[l:9\] Let $\A$ and $d$ be defined as in Lemma \[l:8\]. Then $\A$ is of degree $d$.
For any (triangular) matrix interpretation $\A$, there exist vectors $\vec{v}_i$ and a vector $\vec{w}$ such that the evaluation $[t]$ of $t$ can be written as follows: $$[t] = \sum_{i=1}^{\ell} \vec{v}_i + \vec{w} \tkom$$ where each vector $\vec{v}_i$ is the product of those matrices employed in the interpretation of function symbols in $\A$ and a vector representing the constant part of a function interpretation. It is not difficult to see that there is a one-to-one correspondence between the vectors $\vec{v}_1,\dots,\vec{v}_{\ell}$ and the subterms of $t$, and thus $\ell = \size{t}$. Moreover for each $\vec{v}_i$ the number of products is less than the depth of $t$ and thus bounded by $\size{t}$. In addition, due to Lemma \[l:8\] the entries of the vectors $\vec{v}_i$ and $\vec{w}$ are bounded by a polynomial of degree at most $d-1$. Thus for all $1 \leqslant j \leqslant d$, there exists $k \leqslant d$ such that $([t])_j = \bO(\size{t}^{k})$.
[[@NZM:2010 Theorem 9],[@W:2010]]{} \[t:TMI\] Let $\A$ and $d$ be defined as in Lemma \[l:8\]. Then, $\gord{\A}$ is $\bO(n^d)$-collapsible.
The theorem is a direct consequence of Lemmas \[l:8\] and \[l:9\].
In order to cope with runtime complexity, a similar idea to restricted polynomial interpretations (see [@BCMT:2001]) can be integrated into triangular matrix interpretations. We call $\A$ a *restricted matrix interpretation* (*RMI* for short) if $\A$ is a matrix interpretation, but for each constructor symbol $f \in \FS$, the interpretation $f_\A$ of $f$ employs only upper triangular complexity matrices. The next theorem is a direct consequence of the definitions in conjunction with Lemma \[l:9\].
\[t:rmi\] Let $\A$ be an RMI. Further, let $M$ denote the component-wise maximum of all matrices used for the interpretation of constructor symbols, and let $d$ denote the number of ones occurring along the diagonal of $M$. Then $\A$ is of basic degree $d$. Furthermore, if $M$ is the unit matrix then $\A$ is of basic degree $1$.
Usable Replacement Maps {#CSR}
=======================
Unfortunately, there is no RMI compatible with the TRS of our running example. The reason is that the monotonicity requirement of TMI is too severe for complexity analysis. Inspired by the idea of Fernández [@F:2005], we show how context-sensitive rewriting is used in complexity analysis. Here we briefly explain our idea. Let $\mathbf{n}$ denote the numeral $s^n(\m{0})$. Consider the derivation from $\mathbf{4} \div \mathbf{2}$: $$\underline{\mathbf{4} \div \mathbf{2}}
\to \m{s}(\underline{(\mathbf{3} - \mathbf{1})} \div \mathbf{2})
\to \m{s}((\underline{\mathbf{2} - \m{0}}) \div \mathbf{2})
\to \m{s}(\underline{\mathbf{2} \div \mathbf{2}})
\to \cdots$$ where redexes are underlined. Observe that, for instance, the second argument of $\div$ is never rewritten. More precisely, any derivation from a basic term consists only of $\mu$-steps with respect to the replacement map $\mu$ given by $\mu(\m{s}) = \mu({\div}) = \{1\}$ and $\mu({-}) = \varnothing$.
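The derivation above can be replayed concretely. The following Python sketch encodes the numeral $s^n(\m{0})$ as the integer $n$ and counts the rewrite steps of an innermost evaluation of $\mathbf{4} \div \mathbf{2}$; the encoding is ours and serves only as an illustration.

```python
# Innermost evaluation of R_div on numerals n ~ s^n(0), counting steps.

def minus(x, y, steps):
    while y > 0:           # rule 2: s(x) - s(y) -> x - y, applied y times
        x, y = x - 1, y - 1
        steps[0] += 1
    steps[0] += 1          # rule 1: x - 0 -> x
    return x

def div(x, y, steps):
    if x == 0:
        steps[0] += 1      # rule 3: 0 / s(y) -> 0
        return 0
    steps[0] += 1          # rule 4: s(x) / s(y) -> s((x - y) / s(y))
    return 1 + div(minus(x - 1, y - 1, steps), y, steps)

steps = [0]
assert div(4, 2, steps) == 2   # 4 / 2 rewrites to s(s(0))
assert steps[0] == 7           # the displayed derivation takes 7 steps
```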
We present a simple method based on a variant of $\ICAP$ in [@GTS05] to estimate a suitable replacement map. Let $\mu$ be a replacement map. Clearly the function $\mu$ is representable as a set of ordered pairs $(f,i)$; below we freely identify $\mu$ with this set. Recall that $\Pos_\mu(t)$ denotes the set of *$\mu$-replacing positions* in $t$ and $\NPos_\mu(t) = \Pos(t) \setminus \Pos_\mu(t)$. Further, a term $t$ is a *$\mu$-replacing term* with respect to a TRS $\RS$ if ${\atpos{t}{p}} \not\in {\NF(\RS)}$ implies $p \in \Pos_\mu(t)$. The set of all $\mu$-replacing terms is denoted by $\MUTERM{\mu}$. In the following $\RS$ will always denote a TRS.
\[d:uargs\] Let $\RS$ be a TRS and let $\mu$ be a replacement map. We define the operator $\Upsilon^\RS$ as follows: $$\Upsilon^\RS(\mu) \defsym \{ (f,i) \mid
\text{$l \to C[f(\seq{r})] \in \RS$ and $\MUCAP{\mu}{l}{r_i} \neq r_i$} \} \tpkt$$ Here $\MUCAP{\mu}{s}{t}$ is inductively defined on $t$ as follows: $$\MUCAP{\mu}{s}{t} =
\begin{cases}
t & \text{if $t = \atpos{s}{p}$ for some $p \in \NPos_\mu(s)$} \tkom\\
u & \text{if $t = f(\seq{t})$ and
$u$ and $l$ unify for no
$l \to r \in \RS$} \tkom \\
y & \text{otherwise} \tkom
\end{cases}$$ where, $u = f(\MUCAP{\mu}{s}{t_1},\ldots,\MUCAP{\mu}{s}{t_n})$, $y$ is a fresh variable, and $\Var(l) \cap \Var(u) = \varnothing$ is assumed.
We define the *innermost usable replacement map* $\IURM{\RS}$ as follows: $\IURM{\RS} \defsym \Upsilon^\RS(\varnothing)$. Moreover, we let the *usable replacement map* $\URM{\RS}$ denote the least fixed point of $\Upsilon^\RS$; its existence follows from the monotonicity of $\Upsilon^\RS$. If $\RS$ is clear from context, we simply write $\imu$, $\tmu$, and $\Upsilon$, respectively. Usable replacement maps satisfy a property essential for runtime complexity analysis. In order to establish it, several preliminary lemmas are necessary.
First we take a look at $\MUCAP{\mu}{s}{t}$. Observe that the function $\MUCAP{\mu}{s}{t}$ replaces a subterm $u$ of $t$ by a fresh variable whenever $u\sigma$ may be a redex for some substitution $\sigma$ with $s\sigma \in \MUTERM{\mu}$. This is exemplified below.
Consider the TRS $\RSdiv$. Let $l \to r$ be rule 4, namely, $l = \m{s}(x) \div \m{s}(y)$ and $r = \m{s}((x - y) \div \m{s}(y))$. Suppose $\mu(f) = \varnothing$ for all functions $f$ and let $w$ and $z$ be fresh variables. The next table summarises $\MUCAP{\mu}{l}{t}$ for each proper subterm $t$ in $r$. To see the computation process, we also indicate the term $u$ in Definition \[d:uargs\].
$$\begin{array}{l|ccccc}
t & x & y & x - y & \m{s}(y) & (x-y) \div \m{s}(y) \\
\hline
u & \text{--} & \text{--} & x - y & \m{s}(y) & w \div \m{s}(y) \\
\MUCAP{\mu}{l}{t} & x & y & w & \m{s}(y) & z
\end{array}$$
By underlining proper subterms $t$ in $r$ such that $\MUCAP{\mu}{l}{t} \neq t$, we have $$\m{s}(\underline{\underline{(x-y)} \div \m{s}(y)})$$ which indicates $(\m{s}, 1), ({\div},1) \in \Upsilon(\mu)$.
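The estimation of usable replacement maps is easily mechanised. The following Python sketch implements a simplified $\mu$-cap in which the unification test is replaced by the coarser check whether the root is a defined symbol; this is a safe over-approximation, and for $\RSdiv$ it computes exactly the maps $\imu$ and $\tmu$ used below. The term encoding is ours.

```python
# Terms are tuples (f, t1, ..., tn); plain strings are variables.

def positions(t, p=()):
    yield p, t
    if not isinstance(t, str):
        for i, s in enumerate(t[1:], 1):
            yield from positions(s, p + (i,))

def mu_positions(t, mu, p=()):
    yield p
    if not isinstance(t, str):
        for i, s in enumerate(t[1:], 1):
            if i in mu.get(t[0], set()):
                yield from mu_positions(s, mu, p + (i,))

def cap_changed(mu, l, t, defined):
    """True iff our simplified MUCAP(mu, l, t) differs from t."""
    non_mu = {p for p, _ in positions(l)} - set(mu_positions(l, mu))
    if any(sub == t for p, sub in positions(l) if p in non_mu):
        return False                 # case 1: t is kept as is
    if isinstance(t, str):
        return True                  # case 3: replaced by a fresh variable
    if t[0] in defined:              # coarse stand-in for the unification test
        return True
    return any(cap_changed(mu, l, s, defined) for s in t[1:])

def upsilon(rules, mu, defined):
    new = {}
    for l, r in rules:
        for _, sub in positions(r):
            if isinstance(sub, str):
                continue
            for i, ri in enumerate(sub[1:], 1):
                if cap_changed(mu, l, ri, defined):
                    new.setdefault(sub[0], set()).add(i)
    return new

x, y, Z = 'x', 'y', ('0',)
R = [(('-', x, Z), x),                                   # rule 1
     (('-', ('s', x), ('s', y)), ('-', x, y)),           # rule 2
     (('/', Z, ('s', y)), Z),                            # rule 3
     (('/', ('s', x), ('s', y)),
      ('s', ('/', ('-', x, y), ('s', y))))]              # rule 4
D = {'-', '/'}

imu = upsilon(R, {}, D)           # innermost usable replacement map
tmu = {}
while upsilon(R, tmu, D) != tmu:  # least fixed point, for full rewriting
    tmu = upsilon(R, tmu, D)

assert imu == {'s': {1}, '/': {1}}
assert tmu == {'s': {1}, '/': {1}, '-': {1}}
```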
The next lemma states the role of $\MUCAP{\mu}{s}{t}$.
\[l:MUCAP\] If $s\sigma \in \MUTERM{\mu}$ and $\MUCAP{\mu}{s}{t} = t$ then $t\sigma \in \NF(\RS)$.
We use induction on $t$. Suppose $s\sigma \in \MUTERM{\mu}$ and $\MUCAP{\mu}{s}{t} = t$. If $t = \atpos{s}{p}$ for some $p \in \NPos_\mu(s)$ then $t\sigma = \atpos{(s\sigma)}{p} \in \NF$ follows by definition of $\MUTERM{\mu}$.
If $t = x \in \VS$, then $\MUCAP{\mu}{s}{x} = x$ entails that $x\sigma$ occurs at a non-$\mu$-replacing position in $s\sigma$, and hence $x\sigma \in \NF$ follows from $s\sigma \in \MUTERM{\mu}$. Thus we may assume $t = f(\seq{t})$. Moreover, by assumption we have:
1. \[en:MUCAP:1\] $\MUCAP{\mu}{s}{t_i} = t_i$ for each $i$, and
2. \[en:MUCAP:2\] there is no rule $l \to r \in \RS$ such that $t$ and $l$ unify.
Due to \[en:MUCAP:2\]) $t\sigma$ is not reducible at the root, and the induction hypothesis together with \[en:MUCAP:1\]) yields $t_i\sigma \in \NF$ for all $i$. Therefore, we obtain $t\sigma \in \NF$.
For a smooth inductive proof of our key lemma we prepare a characterisation of the set of $\mu$-replacing terms $\MUTERM{\mu}$.
\[d:upsilon\] The set $\{ (f,i) \mid \text{$f(\seq{t}) \subterm t$ and $t_i \not\in \NF(\RS)$} \}$ is denoted by $\upsilon(t)$.
\[l:MUTERM\] $\MUTERM{\mu} = \{ t \mid \upsilon(t) \subseteq \mu \}$.
The inclusion from left to right essentially follows from the definitions. Let $t \in \MUTERM{\mu}$ and let $(f,i) \in \upsilon(t)$. We show $(f,i) \in \mu$. By Definition \[d:upsilon\] there is a position $p \in \Pos(t)$ with $\atpos{t}{p} = f(\seq{t})$ and ${\atpos{t}{pi}} \not\in {\NF}$. Thus $pi \in \Pos_\mu(t)$ and $i \in \Pos_\mu(\atpos{t}{p})$. Hence $(f,i) \in \mu$ is concluded.
Next we consider the reverse direction ${\{ t \mid \upsilon(t) \subseteq \mu \}} \subseteq {\MUTERM{\mu}}$. Let $t$ be a minimal term such that $\upsilon(t) \subseteq \mu$ and $t \not\in \MUTERM{\mu}$. One can write $t = f(\seq{t})$. Then, there exists a position $p \in \NPos_\mu(t)$ such that $\atpos{t}{p} \not\in \NF$. Because $\epsilon \not\in \NPos_\mu(t)$ holds in general, $p$ is of the form $iq$ with $i \in \NN$. As $iq \in \NPos_\mu(t)$ one of $(f, i) \not\in \mu$ or $q \in \NPos_\mu(\atpos{t}{i})$ must hold. As $t$ is minimal and ${\atpos{t}{iq}} \not\in {\NF}$ implies that ${\atpos{t}{i}} \not\in {\NF}$, we have $(f, i) \not\in \mu$. However, by Definition \[d:upsilon\], $(f,i) \in \upsilon(t) \subseteq \mu$. Contradiction.
The next lemma about the operator $\Upsilon$ is a key for the main theorem. Note that every subterm of a $\mu$-replacing term is a $\mu$-replacing term.
\[l:Upsilon\] If $l \to r \in \RS$ and $l\sigma \in \MUTERM{\mu}$ then $r\sigma \in \MUTERM{\mu \cup \Upsilon(\mu)}$.
Let $l \to r \in \RS$ and suppose $l\sigma \in \MUTERM{\mu}$. By Lemma \[l:MUTERM\] we have $$\MUTERM{\mu} = \{ t \mid \upsilon(t) \subseteq \mu \} \qquad
\MUTERM{\mu \cup \Upsilon(\mu)} = \{ t \mid {\upsilon(t)} \subseteq {\mu \cup \Upsilon(\mu)}\} \tpkt$$ Hence it is sufficient to show $\upsilon(r\sigma) \subseteq \mu \cup \Upsilon(\mu)$. Let $(f,i) \in \upsilon(r\sigma)$. There is $p \in \Pos(r\sigma)$ with ${\atpos{r\sigma}{p}} = {f(\seq{t})}$ and $t_i \not\in \NF$. If $p$ is below some variable position of $r$, ${\atpos{r\sigma}{p}}$ is a subterm of $l\sigma$, and thus $\upsilon(\atpos{r\sigma}{p}) \subseteq \upsilon(l\sigma) \subseteq \mu$. Otherwise, $p$ is a non-variable position of $r$. We may write $\atpos{r}{p} = f(\seq{r})$ and $r_i\sigma = t_i \not\in \NF$. Due to Lemma \[l:MUCAP\] we obtain $\MUCAP{\mu}{l}{r_i} \neq r_i$. Therefore, $(f,i) \in \Upsilon(\mu)$.
Remark that if $s, t \in \MUTERM{\mu}$ and $p \in \Pos_\mu(s)$ then $s[t]_p \in \MUTERM{\mu}$.
\[l:mu-closed\] The following implications hold.
1. \[en::mu-closed:1\] If $s \in \MUTERM{\imu}$ and $s \ito t$ then $t \in \MUTERM{\imu}$.
2. \[en::mu-closed:2\] If $s \in \MUTERM{\tmu}$ and $s \to t$ then $t \in \MUTERM{\tmu}$.
We show property \[en::mu-closed:1\]). Suppose $s \in \MUTERM{\imu}$ and $s \ito t$ is a rewrite step at $p$. Due to the definition of innermost rewriting, we have $\atpos{s}{p} \in \MUTERM{\varnothing}$. Hence, $\atpos{t}{p} \in \MUTERM{\imu}$ is obtained by Lemma \[l:Upsilon\]. Because $s\in \MUTERM{\imu}$ we have $p \in \Pos_\imu(s)$. Hence due to $\atpos{t}{p} \in \MUTERM{\imu}$ we conclude $t = s[\atpos{t}{p}]_p \in \MUTERM{\imu}$ due to the above remark. The proof of \[en::mu-closed:2\]) proceeds along the same pattern and is left to the reader.
We arrive at the main result of this section.
\[t:mu-inclusion\] Let $\RS$ be a TRS, and let $\Desc(L)$ denote the descendants of the set of terms $L$. Then $\rsisrew{\RS}(\MUTERM{\varnothing}) \subseteq \MUTERM{\imu}$ and $\rssrew{\RS}(\MUTERM{\varnothing}) \subseteq \MUTERM{\tmu}$.
Recall that $\Desc(L) \defsym \{t \mid \text{$\exists s \in L$ such that $s \to^\ast t$}\}$. We focus on the second part of the theorem, where we have to prove that $t \in \MUTERM{\tmu}$, whenever there exists $s \in \MUTERM{\varnothing}$ such that $s \rssrew{\RS} t$. As $\MUTERM{\varnothing} \subseteq \MUTERM{\tmu}$ this follows directly from Lemma \[l:mu-closed\].
Note that $\MUTERM{\varnothing}$ is the set of all argument normalised terms. Therefore, ${\TB} \subseteq {\MUTERM{\varnothing}}$. The following corollary to Theorem \[t:mu-inclusion\] is immediate.
Let $\RS$ be a TRS and let $\muto{\imu}$, $\muto{\tmu}$ denote the $\imu$-step and $\tmu$-step relation, respectively. Then for all terminating terms $t \in \TB$ we have $\dheight(t, {\rsirew{\RS}}) \leqslant \dheight(t, {\muto{\imu}})$ and $\dheight(t, {\rsrew{\RS}}) \leqslant \dheight(t, {\muto{\tmu}})$.
An advantage of the use of context-sensitive rewriting is that the compatibility requirement of monotone algebra in termination or complexity analysis is relaxed to $\mu$-monotone algebra. We illustrate its use in the next example.
\[ex:1:ua\] Recall the TRS $\RSdiv$ given in Example \[ex:div\] above. The usable argument positions are as follows: $$\imu(\m{-}) = \varnothing \quad \imu(\ms) = \imu(\m{\div}) = \{1\} \qquad
\tmu(\ms) = \tmu(\m{-}) = \tmu(\m{\div}) = \{1\} \tpkt$$ Consider the $1$-dimensional RMI $\A$ (i.e., linear polynomial interpretations) with
$$\begin{aligned}
{4}
\m{0}_\A & = 1 & \qquad \m{s}_\A(x) & = x + 2 & \qquad {-}_\A(x, y) & = x + 1 & \qquad {\div}_\A(x, y) & = 3x
\tpkt\end{aligned}$$
which is strictly $\imu$-monotone and $\tmu$-monotone. The rules in $\RSdiv$ are interpreted and ordered as follows. $$\begin{aligned}
{6}
& 1\colon\quad & x + 1 & > x
& \qquad
& 3\colon\quad & 3 & > 1
\\
& 2\colon\quad & x + 3 & > x + 1
& \qquad
& 4\colon\quad & 3x + 6 & > 3x + 5
\tpkt\end{aligned}$$ Therefore, $\RSdiv \subseteq {>_\A}$ holds. By an application of Theorem \[t:rmi\] we conclude that the (innermost) runtime complexity is *linear*, which is optimal.
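The compatibility claimed in this example can be machine-checked. For linear interpretations a coefficient comparison would be exact; the Python sketch below (with our own term encoding) merely evaluates both sides of every rule on a grid of values as a sanity check.

```python
# Evaluate the linear interpretation of R_div on both sides of every rule
# and check strict decrease; terms are tuples, strings are variables.

A = {'0': lambda: 1,
     's': lambda v: v + 2,
     '-': lambda v, w: v + 1,
     '/': lambda v, w: 3 * v}

def ev(t, env):
    if isinstance(t, str):
        return env[t]
    return A[t[0]](*(ev(s, env) for s in t[1:]))

x, y, Z = 'x', 'y', ('0',)
R = [(('-', x, Z), x),
     (('-', ('s', x), ('s', y)), ('-', x, y)),
     (('/', Z, ('s', y)), Z),
     (('/', ('s', x), ('s', y)), ('s', ('/', ('-', x, y), ('s', y))))]

for l, r in R:
    for vx in range(40):
        for vy in range(40):
            env = {'x': vx, 'y': vy}
            assert ev(l, env) > ev(r, env)
```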
We cast the observations in the example into another corollary to Theorem \[t:mu-inclusion\].
\[c:mu-inclusion\] Let $\RS$ be a TRS and let $\A$ be a $d$-degree $\imu$-monotone (or $\tmu$-monotone) RMI compatible with $\RS$. Then the (innermost) runtime complexity function $\Rcpareni{\RS}$ with respect to $\RS$ is bounded by a $d$-degree polynomial.
It suffices to consider the case of full rewriting. Let $s$, $t$ be terms such that $s$ is reachable from a basic term and $s \rsrew{\RS} t$. By Theorem \[t:mu-inclusion\] we have $s \muto{\tmu} t$. Furthermore, by assumption ${\RS} \subseteq {\gord{\A}}$ and for any $f \in \FS$, $f_\A$ is strictly monotone on all positions in $\tmu(f)$. Thus $s \gord{\A} t$ follows. Finally, the corollary follows by an application of Theorem \[t:rmi\].
We link Theorem \[t:mu-inclusion\] to related work by Fernández [@F:2005]. In [@F:2005] it is shown how context-sensitive rewriting is used for proving innermost termination.
\[c:sin\] A TRS $\RS$ is innermost terminating if $\muto{\imu}$ is terminating.
We show the contraposition. If $\RS$ is not innermost terminating, there is an infinite sequence $t_0 \ito t_1 \ito t_2 \ito \cdots$, where we may assume $t_0 \in \MUTERM{\varnothing}$ by innermost normalising the arguments of a minimal term that is not innermost terminating. From Theorem \[t:mu-inclusion\] and Lemma \[l:mu-closed\] we obtain $t_0 \muto{\imu} t_1 \muto{\imu} t_2 \muto{\imu} \cdots$. Hence, $\muto{\imu}$ is not terminating.
One might think that a similar claim holds for full termination if one uses $\tmu$. The next example clarifies that this is not the case.
Consider the famous Toyama’s example $\RS$
$$\begin{aligned}
{3}
\m{f}(\m{a},\m{b},x) & \to \m{f}(x,x,x) & \qquad \m{g}(x,y) & \to x & \qquad \m{g}(x,y) & \to y
\end{aligned}$$
The replacement map $\tmu$ is empty. Thus, the algebra $\A$ over $\NN$
$$\begin{aligned}
{4}
\m{f}_\A(x,y,z) & = \max\{x - y, 0\} & \qquad \m{g}_\A(x,y) & = x + y + 1 & \qquad \m{a}_\A & = 1 & \qquad \m{b}_\A & = 0
\end{aligned}$$
is $\tmu$-monotone and we have $\RS \subseteq {>_\A}$. However, we should not conclude termination of $\RS$, because $\m{f}(\m{a},\m{b},\m{g}(\m{a},\m{b}))$ is non-terminating.
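The non-terminating derivation can be spelled out step by step; in the following sketch terms are encoded as tuples (our encoding), and each line applies exactly one rewrite rule.

```python
# The cyclic derivation witnessing non-termination of Toyama's system;
# a and b are the constants of the system.
g_ab = ('g', ('a',), ('b',))

t0 = ('f', ('a',), ('b',), g_ab)
t1 = ('f', g_ab, g_ab, g_ab)        # f(a, b, x) -> f(x, x, x) at the root
t2 = ('f', ('a',), g_ab, g_ab)      # g(x, y) -> x, applied at position 1
t3 = ('f', ('a',), ('b',), g_ab)    # g(x, y) -> y, applied at position 2
assert t3 == t0                     # back to the start: a cycle
```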
Weak Dependency Pairs {#dependency pairs}
=====================
In Section \[CSR\] we investigated argument positions of rewrite steps. This section is concerned with the contexts surrounding rewrite steps. Recall the derivation: \[eq:5\] $$\begin{aligned}
{3}
\boxed{\mathbf{4} \div \mathbf{2}}
& ~\rsrew{\RSdiv} \m{s}(\,\boxed{(\mathbf{3} - \mathbf{1}) \div \mathbf{2}}\,)
&& ~\rsnrew{\RSdiv}{2} \m{s}(\,\boxed{\mathbf{2} \div \mathbf{2}}\,)
\\
& ~\rsrew{\RSdiv} \m{s}(\m{s}(\,\boxed{(\mathbf{1} - \mathbf{1}) \div \mathbf{2}}\,))
&& ~\rsnrew{\RSdiv}{2} \m{s}(\m{s}(\,\boxed{\m{0} \div \mathbf{2}}\,))
\\
& ~\rsrew{\RSdiv} \m{s}(\m{s}(\m{0})) \tkom\end{aligned}$$ where we boxed the outermost occurrences of defined symbols. Observe that their surrounding contexts are never rewritten. The idea is therefore to simulate rewrite steps from basic terms with new rewrite rules obtained by dropping these unnecessary contexts. In termination analysis this method is known as the dependency pair method [@ArtsGiesl:2000]. We recast its main ingredient, the notion of dependency pairs.
Let $X$ be a set of symbols. We write $\SC{t_1, \ldots, t_n}_X$ to denote $C[t_1,\ldots,t_n]$, whenever $\rt(t_i) \in X$ for all $1 \leqslant i \leqslant n$ and $C$ is an $n$-hole context containing no $X$-symbols. (Note that $C$ may be degenerate: it may contain no hole $\ctx$ at all, or it may consist of a hole only.) Then, every term $t$ can be uniquely written in the form $\SC{t_1, \ldots, t_n}_X$.
\[l:1\] Let $t$ be a terminating term, and let $\sigma$ be a substitution. Then $\dheight(t\sigma, \rsrew{\RS}) = \sum_{1 \leqslant i \leqslant n} \dheight(t_i\sigma, \rsrew{\RS})$, whenever $t = \SC{t_1, \ldots, t_n}_{\DD \cup \VV}$.
The idea is to replace such an $n$-hole context with a fresh $n$-ary function symbol. We define the function $\COM$ as a mapping from tuples of terms to terms as follows: $\COM(\seq{t})$ is $t_1$ if $n = 1$, and $c(t_1,\ldots,t_n)$ otherwise. Here $c$ is a fresh $n$-ary function symbol called *compound symbol*. The above lemma motivates the next definition of *weak dependency pairs*.
\[d:WDP\] Let $t$ be a term. We set $t^\sharp \defsym t$ if $t \in \VV$, and $t^\sharp \defsym f^\sharp(t_1,\dots,t_n)$ if $t = f(\seq{t})$. Here $f^\sharp$ is a new $n$-ary function symbol called *dependency pair symbol*. For a signature $\FS$, we define $\FS^\sharp = \FS \cup \{f^\sharp \mid f\in \FS\}$. Let $\RS$ be a TRS. If $l \rew r \in \RS$ and $r = \SC{\seq{u}}_{\DD \cup \VV}$ then the rewrite rule $l^\sharp \to \COM(u_1^\sharp,\ldots,u_n^\sharp)$ is called a *weak dependency pair* of $\RS$. The set of all weak dependency pairs is denoted by $\WDP(\RS)$.
Note that dependency pair symbols are defined symbols of $\WDP(\RS)$, but not of the original system $\RS$. In the sequel defined symbols refer to the defined function symbols of $\RS$.
\[ex:1:WDP\] The set $\WDP(\RSdiv)$ consists of the next four weak dependency pairs: $$\begin{aligned}
{4}
5\colon &\;& x -^\sharp \m{0} & \to x & \qquad
7\colon &\;& \m{0} \div^\sharp \m{s}(y) &\to \m{c} \\
6\colon && \m{s}(x) -^\sharp \m{s}(y) & \to x -^\sharp y & \qquad
8\colon && \m{s}(x) \div^\sharp \m{s}(y) &\to (x - y) \div^\sharp \m{s}(y)
\tpkt\end{aligned}$$ Here $\m{c}$ denotes a fresh compound symbol of arity $0$.
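The construction of weak dependency pairs is easily implemented. The following Python sketch (term encoding and the generated compound-symbol names are ours) reproduces the four pairs of this example from $\RSdiv$.

```python
# Weak dependency pairs: decompose each right-hand side into its maximal
# subterms rooted by a defined symbol or a variable, mark them, and wrap
# several of them (or none) with a fresh compound symbol.

def parts(r, defined):
    if isinstance(r, str) or r[0] in defined:
        return [r]
    out = []
    for s in r[1:]:
        out += parts(s, defined)
    return out

def sharp(t):
    return t if isinstance(t, str) else (t[0] + '#',) + t[1:]

def wdp(rules, defined):
    pairs = []
    for l, r in rules:
        us = [sharp(u) for u in parts(r, defined)]
        if len(us) == 1:
            rhs = us[0]
        else:                        # fresh compound symbol c_n of arity n
            rhs = ('c%d' % len(us),) + tuple(us)
        pairs.append((sharp(l), rhs))
    return pairs

x, y, Z = 'x', 'y', ('0',)
R = [(('-', x, Z), x),
     (('-', ('s', x), ('s', y)), ('-', x, y)),
     (('/', Z, ('s', y)), Z),
     (('/', ('s', x), ('s', y)), ('s', ('/', ('-', x, y), ('s', y))))]
D = {'-', '/'}

P = wdp(R, D)
assert P[0] == (('-#', x, Z), x)                           # pair 5
assert P[1] == (('-#', ('s', x), ('s', y)), ('-#', x, y))  # pair 6
assert P[2] == (('/#', Z, ('s', y)), ('c0',))              # pair 7
assert P[3] == (('/#', ('s', x), ('s', y)),
                ('/#', ('-', x, y), ('s', y)))             # pair 8
```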
The derivation on page corresponds to the derivation of $\WDP(\RSdiv) \cup \RSdiv$:\[eq:7\]$$\begin{aligned}
{3}
\mathbf{4} \div^\sharp \mathbf{2}
& ~\rsrew{\WDP(\RSdiv)}~ (\mathbf{3} - \mathbf{1}) \div^\sharp \mathbf{2}
&& ~\rsnrew{\RSdiv}{2} \mathbf{2} \div^\sharp \mathbf{2}
\\
& ~\rsrew{\WDP(\RSdiv)}~ (\mathbf{1} - \mathbf{1}) \div^\sharp \mathbf{2}
&& ~\rsnrew{\RSdiv}{2} \m{0} \div^\sharp \mathbf{2}
\\
& ~\rsrew{\WDP(\RSdiv)}~ \m{c}
\tkom\end{aligned}$$ which preserves the length. The next lemma states that this is generally true.
\[l:2\] Let $t \in \TA(\FS,\VS)$ be a terminating term with defined root. Then we obtain: $\dheight(t,\rsrew{\RS})=\dheight(t^\sharp, \rsrew{\WDP(\RS) \cup \RS})$.
We show $\dheight(t, \rsrew{\RS}) \leqslant
\dheight(t^\sharp, \rsrew{\WDP(\RS) \cup \RS})$ by induction on $\dheight(t, {\rsrew{\RS}})$. Let $\ell = \dheight(t, {\rsrew{\RS}})$. If $\ell = 0$, the inequality is trivial. Suppose $\ell > 0$. Then there exists a term $u$ such that $t \rsrew{\RS} u$ and $\dheight(u, \rsrew{\RS}) = \ell - 1$. We distinguish two cases depending on the rewrite position $p$.
1. If $p$ is a position below the root, then clearly $\rt(u) = \rt(t) \in \DD$ and $t^\sharp \rsrew{\RS} u^\sharp$. The induction hypothesis yields $\dheight(u, {\rsrew{\RS}}) \leqslant \dheight(u^\sharp, {\rsrew{\WDP(\RS) \cup \RS}})$, and we obtain $\ell \leqslant \dheight(t^\sharp, \rsrew{\WDP(\RS) \cup \RS})$.
2. If $p$ is a root position, then there exist a rewrite rule $l \to r \in \RS$ and a substitution $\sigma$ such that $t = l\sigma$ and $u = r\sigma$. We can write $r = \SC{\seq{u}}_{\DD \cup \VV}$, and thus by definition $l^\sharp \to \COM(u_1^\sharp,\ldots,u_n^\sharp) \in \WDP(\RS)$ with $t^\sharp = l^\sharp\sigma$. Now, either $u_i \in \VV$ or $\rt(u_i) \in \DD$ for every $1 \leqslant i \leqslant n$. Suppose $u_i \in \VV$. Then $u_i^\sharp\sigma = u_i\sigma$, and clearly no dependency pair symbol occurs in $u_i\sigma$; thus $$\dheight(u_i\sigma, \rsrew{\RS})
= \dheight(u_i^\sharp\sigma, \rsrew{\RS})
= \dheight(u_i^\sharp\sigma, \rsrew{\WDP(\RS) \cup \RS}) \tpkt$$ Otherwise, if $\rt(u_i) \in \DD$ then $u_i^\sharp\sigma = (u_i\sigma)^\sharp$. Hence $\dheight(u_i\sigma, \rsrew{\RS}) \leqslant \dheight(u, \rsrew{\RS}) < \ell$, and we conclude $\dheight(u_i\sigma, \rsrew{\RS}) \leqslant
\dheight(u_i^\sharp\sigma, \rsrew{\WDP(\RS) \cup \RS})$ from the induction hypothesis. Therefore, $$\begin{aligned}
\ell
& = \dheight(u, \rsrew{\RS}) + 1
\\
& = \sum_{1 \leqslant i \leqslant n} \dheight(u_i\sigma, \rsrew{\RS}) + 1
\leqslant \sum_{1 \leqslant i \leqslant n} \dheight(u_i^\sharp\sigma, \rsrew{\WDP(\RS) \cup \RS}) + 1\\
& = \dheight(\COM(u_1^\sharp,\ldots,u_n^\sharp)\sigma,
\rsrew{\WDP(\RS) \cup \RS}) + 1
\leqslant \dheight(t^\sharp, \rsrew{\WDP(\RS) \cup \RS}) \tpkt\end{aligned}$$ Here we used Lemma \[l:1\] for the second equality.
Note that $t$ is $\RS$-reducible if and only if $t^\sharp$ is $\WDP(\RS)\cup\RS$-reducible. Hence, as $t$ is terminating, $t^\sharp$ is terminating with respect to $\rsrew{\WDP(\RS)\cup\RS}$. Thus, similarly, $\dheight(t, \rsrew{\RS}) \geqslant \dheight(t^\sharp, \rsrew{\WDP(\RS) \cup \RS})$ is shown by induction on $\dheight(t^{\sharp}, \rsrew{\WDP(\RS)\cup\RS})$.
In the case of innermost rewriting we need not include collapsing dependency pairs as in Definition \[d:WDP\]. This is guaranteed by the next lemma.
Let $t$ be a terminating term and $\sigma$ a substitution such that $x\sigma$ is a normal form of $\RS$ for all $x \in \Var(t)$. Then $\dheight(t\sigma, \rsrew{\RS}) = \sum_{1 \leqslant i \leqslant n} \dheight(t_i\sigma, \rsrew{\RS})$, whenever $t = \SC{t_1, \ldots, t_n}_\DD$.
Let $\RS$ be a TRS. If $l \rew r \in \RS$ and $r = \SC{\seq{u}}_\DD$ then the rewrite rule $l^\sharp \to \COM(u_1^\sharp,\ldots,u_n^\sharp)$ is called a *weak innermost dependency pair* of $\RS$. The set of all weak innermost dependency pairs is denoted by $\WIDP(\RS)$.
\[ex:1:WIDP\] The set $\WIDP(\RSdiv)$ consists of the next three weak innermost dependency pairs (with respect to $\ito$): $$\begin{aligned}
{4}
&\;& \m{s}(x) -^\sharp \m{s}(y) & \to x -^\sharp y &
&\;& \m{0} \div^\sharp \m{s}(y) & \to \m{c} \\
&& \m{s}(x) \div^\sharp \m{s}(y) &\to (x - y) \div^\sharp \m{s}(y)
\tpkt
& \qquad &&&\end{aligned}$$
The next lemma adapts Lemma \[l:2\] to innermost rewriting.
\[l:3\] Let $t$ be an innermost terminating term in $\TA(\FS,\VS)$ with $\rt(t) \in \DD$. We have $\dheight(t, \irew{\RS}) = \dheight(t^\sharp, \irew{\WIDP(\RS) \cup \RS})$.
Looking at the simulated version of the derivation on page , rules 1 and 2 are used, but neither rule 3 nor rule 4 is used in the $\RS$-steps. In general we can approximate the subsystem of a TRS that can be used in derivations from basic terms by employing the notion of usable rules from the dependency pair method (cf. [@ArtsGiesl:2000; @GTSF06; @HirokawaMiddeldorp:2007]).
We write ${f} \depends {g}$ if there exists a rewrite rule $l \to r \in \RS$ such that $f = \rt(l)$ and $g$ is a defined function symbol in $\Fun(r)$. For a set $\GG$ of defined function symbols we denote by $\RS{\restriction}\GG$ the set of rewrite rules $l \to r \in \RS$ with $\rt(l) \in \GG$. The set $\UU(t)$ of *usable rules* of a term $t$ is defined as $\RS{\restriction}\{ g \mid \text{${f} \depends^* {g}$ for some $f \in \Fun(t)$} \}$. Finally, if $\PP$ is a set of (weak) dependency pairs then $\UU(\PP) = \bigcup_{l \to r \in \PP} \UU(r)$.
\[ex:1:usable\] The set $\UU(\WDP(\RSdiv))$ of usable rules for the weak dependency pairs consists of the two rules: $$\begin{aligned}
{4}
1\colon &\;& x - \m{0} & \to x & \qquad
2\colon &\;& \m{s}(x) - \m{s}(y) &\to x -y
\tpkt\end{aligned}$$ Note that we have that $\UU(\WDP(\RSdiv)) = \UU(\WIDP(\RSdiv))$.
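The computation of usable rules is a simple closure over the relation $\depends$. The Python sketch below (encodings ours) reproduces $\UU(\WDP(\RSdiv)) = \{1, 2\}$.

```python
# Usable rules: close the set of defined symbols occurring in the
# right-hand sides of the pairs under the dependence relation.

def funs(t):
    if isinstance(t, str):
        return set()
    out = {t[0]}
    for s in t[1:]:
        out |= funs(s)
    return out

def usable(pairs, rules, defined):
    todo = {f for _, r in pairs for f in funs(r) if f in defined}
    G = set()
    while todo:
        f = todo.pop()
        if f in G:
            continue
        G.add(f)
        for l, r in rules:
            if l[0] == f:
                todo |= {g for g in funs(r) if g in defined}
    return [(l, r) for l, r in rules if l[0] in G]

x, y, Z = 'x', 'y', ('0',)
R = [(('-', x, Z), x),                                    # rule 1
     (('-', ('s', x), ('s', y)), ('-', x, y)),            # rule 2
     (('/', Z, ('s', y)), Z),                             # rule 3
     (('/', ('s', x), ('s', y)),
      ('s', ('/', ('-', x, y), ('s', y))))]               # rule 4
D = {'-', '/'}
P = [(('-#', x, Z), x),
     (('-#', ('s', x), ('s', y)), ('-#', x, y)),
     (('/#', Z, ('s', y)), ('c',)),
     (('/#', ('s', x), ('s', y)), ('/#', ('-', x, y), ('s', y)))]

assert usable(P, R, D) == R[:2]   # exactly rules 1 and 2 are usable
```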
We show a usable rule criterion for complexity analysis by exploiting the property that the starting terms are basic. Recall that $\TB$ denotes the set of basic terms; we set $\TBS = \{t^{\sharp} \mid t \in \TB \}$.
\[l:5\] Let $\PP$ be a set of weak dependency pairs and let $(t_i)_{i = 0, 1, \ldots}$ be a (finite or infinite) derivation of $\PP \cup \RS$. If $t_0 \in \TBS$ then $(t_i)_{i = 0, 1, \ldots}$ is a derivation of $\PP \cup \UU(\PP)$.
Let $\GG$ be the set of all non-usable symbols with respect to $\PP$. We write $P(t)$ if ${\atpos{t}{q}} \in {\NF(\RS)}$ for all $q \in \Pos_\GG(t)$. First we prove by induction on $i$ that $P(t_i)$ holds for all $i$.
1. Assume $i = 0$. Since $t_0 \in \TBS$, we have $t_0 \in \NF(\RS)$ and thus ${\atpos{t_0}{p}} \in {\NF(\RS)}$ for all positions $p$. The assertion $P(t_0)$ follows trivially.
2. Suppose $i > 0$. By induction hypothesis, $P(t_{i-1})$ holds. Hence the redex contracted in the step $t_{i-1} \to t_i$ cannot stem from a non-usable rule, i.e., there exist $p \in \Pos(t_{i-1})$, a substitution $\sigma$, and $l \rew r \in \UU(\PP) \cup \PP$ such that ${\atpos{t_{i-1}}{p}} = l\sigma$ and $\atpos{t_i}{p} = r\sigma$. In order to show property $P$ for $t_i$, we fix a position $q \in \Pos_\GG(t_i)$. We have to show $\atpos{t_i}{q} \in \NF(\RS)$. We distinguish three subcases:
- Suppose that $q$ is above $p$. Then $\atpos{t_{i-1}}{q}$ is reducible, but this contradicts the induction hypothesis $P(t_{i-1})$.
- Suppose $p$ and $q$ are parallel but distinct. Since $\atpos{t_{i-1}}{q} = \atpos{t_i}{q} \in \NF(\RS)$ holds, we obtain $P(t_i)$.
- Otherwise, $q$ is below $p$. Then, $\atpos{t_i}{q}$ is a subterm of $r\sigma$. Because $r$ contains no $\GG$-symbols by the definition of usable symbols, $\atpos{t_i}{q}$ is a subterm of $x\sigma$ for some $x \in \Var(r) \subseteq \Var(l)$. Therefore, $\atpos{t_i}{q}$ is also a subterm of $\atpos{t_{i-1}}{q}$, from which $\atpos{t_i}{q} \in \NF(\RS)$ follows. We obtain $P(t_i)$.
Hence property $P$ holds for all $t_i$ in the assumed derivation. Thus any reduction step $t_i \rsrew{\RS \cup \PP} t_{i+1}$ can be simulated by a step $t_i \rsrew{\UU(\PP) \cup \PP} t_{i+1}$. From this the lemma follows.
Note that the proof technique adopted for termination analysis [@GTSF06; @HirokawaMiddeldorp:2007] cannot be directly used in this context. The technique transforms terms in a derivation to exclude non-usable rules. However, since the size of the initial term increases, this technique does not suit our purpose. On the other hand, the transformation employed in [@HirokawaMiddeldorp:2007] is adaptable to complexity analysis in the large, cf. [@MS:2010]. The next theorem follows from Lemmas \[l:2\] and \[l:3\] in conjunction with Lemma \[l:5\] above. It adapts the usable rule criteria to complexity analysis.
\[t:dp:usable\] Let $\RS$ be a TRS and let $t \in \TB$. If $t$ is terminating with respect to $\rew$ then $\dheight(t, \rew) = \dheight(t^{\sharp},\rsrew{\PP \cup \UU(\PP)})$, where $\rew$ denotes $\rsrew{\RS}$ or $\irew{\RS}$ depending on whether $\PP = \WDP(\RS)$ or $\PP = \WIDP(\RS)$.
To clarify the applicability of the theorem in complexity analysis, we instantiate the theorem by considering RMIs.
\[c:dp:usable\] Let $\RS$ be a TRS, let $\mu$ be the (innermost) usable replacement map and let $\PP = \WDP(\RS)$ (or $\PP = \WIDP(\RS)$). If $\PP \cup \UU(\PP)$ is compatible with a $d$-degree $\mu$-monotone RMI $\A$, then the (innermost) runtime complexity function $\rc^{(\m{i})}_{\RS}$ with respect to $\RS$ is bounded by a $d$-degree polynomial.
For simplicity we suppose $\PP = \WDP(\RS)$ and let $\A$ be a $\mu$-monotone RMI of degree $d$. Compatibility of $\A$ with $\PP \cup \UU(\PP)$ implies the well-foundedness of the relation $\rsrew{\PP \cup \UU(\PP)}$ on the set of terms $\TBS$, cf. Theorem \[t:mu-inclusion\]. This in turn implies the well-foundedness of $\rsrew{\RS}$, cf. Lemma \[l:5\]. Hence Theorem \[t:dp:usable\] is applicable and we conclude $\dheight(t,\rsrew{\RS}) = \dheight(t^\sharp, \rsrew{\PP \cup \UU(\PP)})$. On the other hand, due to Theorem \[t:rmi\] compatibility with $\A$ implies that $\dheight(t^\sharp, \rsrew{\PP \cup \UU(\PP)}) = \bO(\size{t^\sharp}^d)$. As $\size{t^\sharp} = \size{t}$, we can combine these equalities to conclude polynomial runtime complexity of $\RS$.
The example below applies Corollary \[c:dp:usable\] to the motivating Example \[ex:1\] introduced in Section \[Introduction\].
\[ex:3\] Consider the TRS $\RSdiv$ for division used as running example; the weak dependency pairs $\PP \defsym \WDP(\RSdiv)$ are given in Example \[ex:1:WDP\]. We have $\UU(\PP) = \{ 1, 2 \}$ and let $\SS = \PP \cup \UU(\PP)$. The usable replacement map $\mu \defsym \URM{\SS}$ is defined as follows: $$\begin{aligned}
{3}
\mu(\m{s}) & = \mu(\m{-}) = \mu(\m{-}^\sharp) = \varnothing
& \qquad &
\mu(\div^\sharp) & = \{1\}
\tpkt\end{aligned}$$ Note that $\URM{\SS}$ is smaller than $\URM{\RS}$ on $\FF$ (see Example \[ex:1:ua\]). Consider the $1$-dimensional RMI $\A$ with $\m{0}_\A = \m{c}_\A = \m{d}_\A = 0$, $\m{s}_\A(x) = x + 2$, $\m{-}_\A(x, y) = \m{-}^\sharp_\A(x, y) = x + 1$, and $\div^\sharp_\A(x, y) = x + 1$. The algebra $\A$ is strictly monotone on all usable argument positions and the rules in $\SS$ are interpreted and ordered as follows: $$\begin{aligned}
{9}
& 1\colon\quad & x + 1 & > x
& \qquad
& 5\colon\quad & x + 1 & > x
& \qquad
& 7\colon\quad & 1 & > 0
\\
& 2\colon\quad & x + 3 & > x + 1
& \qquad
& 6\colon\quad & x + 3 & > x + 1
& \qquad
& 8\colon\quad & x + 3 & > x + 2
\tpkt\end{aligned}$$ Therefore, $\SS$ is compatible with $\A$ and the runtime complexity function $\Rc{\RS}$ is linear. Remark that by inspecting the coefficients of the interpretations a more precise bound can be inferred: since all coefficients are at most one, we obtain $\Rc{\RS}(n) \leqslant n + c$ for some $c \in \NN$.
It is worth stressing that it is (often) easier to analyse the complexity of $\PP \cup \UU(\PP)$ than the complexity of $\RS$. This is exemplified by the next example.
\[ex:8\] Consider the TRS $\RSdiff$ $$\begin{aligned}
\m{D}(\m{c}) & \to \m{0}
&
\m{D}(x + y) & \to \m{D}(x) + \m{D}(y)
&
\m{D}(x \times y) & \to
(y \times \m{D}(x)) + (x \times \m{D}(y))
\\
\m{D}(\m{t}) & \to \m{1}
&
\m{D}(x - y) & \to \m{D}(x) - \m{D}(y)
\tpkt\end{aligned}$$ There is no $1$-dimensional $\tmu$-monotone RMI compatible with $\RSdiff$. On the other hand $\WDP(\RSdiff)$ consists of the five pairs $$\begin{aligned}
\m{D}^\sharp(\m{c}) & \to \m{c_1}
&
\m{D}^\sharp(x + y) & \to \m{c_3}(\m{D}^\sharp(x), \m{D}^\sharp(y))
&
\m{D}^\sharp(x \times y) & \to
\m{c_5}(y, \m{D}^\sharp(x), x, \m{D}^\sharp(y))
\\
\m{D}^\sharp(\m{t}) & \to \m{c_2}
&
\m{D}^\sharp(x - y) & \to \m{c_4}(\m{D}^\sharp(x), \m{D}^\sharp(y))
\tkom\end{aligned}$$ and $\UU(\WDP(\RSdiff)) = \varnothing$. The usable replacement map $\tmu$ for $\WDP(\RSdiff) \cup \UU(\WDP(\RSdiff))$ is defined as $\tmu(\m{c_3}) = \tmu(\m{c_4}) = \{1,2\}$, $\tmu(\m{c_5}) = \{2,4\}$, and $\tmu(f) = \varnothing$ for all other symbols $f$. Since the $1$-dimensional $\tmu$-monotone RMI $\A$ with $$\begin{aligned}
&
\m{D}^\sharp_\A(x) = 2x
\qquad \m{c}_\A = \m{t}_\A = 1
\qquad {+}_\A(x,y) = {-}_\A(x,y) = {\times}_\A(x,y) = x + y + 1
\\
&
\m{c_1}_\A = \m{c_2}_\A = 0
\qquad \m{c_3}_\A(x,y) = \m{c_4}_\A(x,y) = x + y
\qquad \m{c_5}_\A(x,y,z,w) = y + w
\tkom\end{aligned}$$ is compatible with $\WDP(\RSdiff)$, linear runtime complexity of $\RSdiff$ is concluded. Remark that this bound is optimal.
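The strict orientation of the five weak dependency pairs by $\A$ can again be checked mechanically; as before the grid evaluation below is only a sanity check, and the encoding is ours.

```python
# Check that the interpretation strictly orients WDP(R_diff).
A = {'D#': lambda v: 2 * v,
     'c': lambda: 1, 't': lambda: 1,
     '+': lambda v, w: v + w + 1,
     '-': lambda v, w: v + w + 1,
     '*': lambda v, w: v + w + 1,
     'c1': lambda: 0, 'c2': lambda: 0,
     'c3': lambda v, w: v + w,
     'c4': lambda v, w: v + w,
     'c5': lambda v, w, u, z: w + z}

def ev(t, env):
    if isinstance(t, str):
        return env[t]
    return A[t[0]](*(ev(s, env) for s in t[1:]))

x, y = 'x', 'y'
pairs = [(('D#', ('c',)), ('c1',)),
         (('D#', ('t',)), ('c2',)),
         (('D#', ('+', x, y)), ('c3', ('D#', x), ('D#', y))),
         (('D#', ('-', x, y)), ('c4', ('D#', x), ('D#', y))),
         (('D#', ('*', x, y)), ('c5', y, ('D#', x), x, ('D#', y)))]

for l, r in pairs:
    for vx in range(30):
        for vy in range(30):
            env = {'x': vx, 'y': vy}
            assert ev(l, env) > ev(r, env)
```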
We conclude this section by discussing the (in-)applicability of standard dependency pairs (see [@ArtsGiesl:2000]) in complexity analysis. For that we recall the definition of standard dependency pairs.
\[d:DP\] The set $\DP(\RS)$ of (standard) *dependency pairs* of a TRS $\RS$ is defined as $\{ l^\sharp \to u^{\sharp} \mid l \to r \in \RS,
\text{$u \subterm r$, $\rt(u)$ is defined, and $u \not\prsubterm l$} \}$.
The next example shows that Lemma \[l:2\] (Lemma \[l:3\]) does not hold if we replace weak (innermost) dependency pairs with standard dependency pairs.
\[ex:6\] Consider the one-rule TRS $\RS$: $\m{f}(\m{s}(x)) \to \m{g}(\m{f}(x), \m{f}(x))$. $\DP(\RS)$ consists of the single pair $\m{f}^\sharp(\m{s}(x)) \to \m{f}^\sharp(x)$. Let $t_n = \m{f}(\m{s}^n(x))$ for each $n \geqslant 0$. Since $t_{n+1} \rsrew{\RS} \m{g}(t_n,t_n)$ holds for all $n \geqslant 0$, it is easy to see $\dheight(t_{n+1}, \rsrew{\RS}) \geqslant 2^n$, while $\dheight(t_{n+1}^\sharp, \rsrew{\DP(\RS) \cup \RS}) = n + 1$.
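The gap between the two derivation heights is easily computed: the recurrence below follows directly from the duplicating rule, and the sketch verifies $\dheight(t_n, \rsrew{\RS}) = 2^n - 1$.

```python
# Derivation height of t_n = f(s^n(x)) in the one-rule TRS of the example:
# f(x) is a normal form, and f(s^{n+1}(x)) -> g(f(s^n(x)), f(s^n(x)))
# duplicates the redex, hence dh(n + 1) = 1 + 2 * dh(n).

def dh(n):
    return 0 if n == 0 else 1 + 2 * dh(n - 1)

assert [dh(n) for n in range(6)] == [0, 1, 3, 7, 15, 31]
assert all(dh(n) == 2 ** n - 1 for n in range(12))
# The dependency pair f#(s(x)) -> f#(x) only allows the n steps
# f#(s^n(x)) -> ... -> f#(x), so the exponential height is invisible there.
```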
The Weight Gap Principle {#semantical gap}
========================
Let $\PP = \WDP(\RSdiv)$ and recall the derivation over $\PP \cup \RSdiv$ on page . This derivation can be represented as a derivation of $\PP$ modulo $\UU(\PP)$: \[eq:relative\] $$\mathbf{4} \div^\sharp \mathbf{2}
~\rsrew{\PP/\UU(\PP)}~ \mathbf{2} \div^\sharp \mathbf{2}
~\rsrew{\PP/\UU(\PP)}~ \m{0} \div^\sharp \mathbf{2}
~\rsrew{\PP/\UU(\PP)}~ \m{c} \tpkt$$ As we will see later, linear runtime complexity of $\UU(\PP)$ and of $\PP/\UU(\PP)$ can easily be obtained. If linear runtime complexity of $\PP \cup \UU(\PP)$ followed from these, linear runtime complexity of $\RS$ could be established in a modular way.
In order to bound the complexity of relative TRSs we define a variant of a reduction pair [@ArtsGiesl:2000]. Note that $\Slow$ denotes the function associated with a given collapsible order.
A $\mu$-*complexity pair* for a relative TRS $\RS/\RSS$ is a pair $({\gtrsim},{\succ})$ such that $\gtrsim$ is a $\mu$-monotone proper order and $\succ$ is a strict order. Moreover ${\gtrsim}$ and ${\succ}$ are compatible, that is, ${\gtrsim \cdot \succ} \subseteq {\succ}$ or ${\succ \cdot \gtrsim} \subseteq {\succ}$. Finally $\succ$ is collapsible on $\rsrew{\RS/\RSS}$ and all compound symbols are $\mu$-monotone with respect to $\succ$.
Let $\PP = \WDP(\RS)$ and $({\gtrsim},{\succ})$ a $\tmu^{\PP \cup \UU(\PP)}$-complexity pair for $\PP/\UU(\PP)$. If $\PP \subseteq {\succ}$ and $\UU(\PP) \subseteq {\gtrsim}$ then $\dheight(t,\rsrew{\PP/\UU(\PP)}) \leqslant \Slow(t)$ for any $t \in \TBS$.
\[ex:relative\] Consider the $1$-dimensional RMI $\A$ with
$$\m{0}_\A = \m{c}_\A = 0 \qquad \m{s}_\A(x) = x + 1 \qquad {-}_\A(x, y) = x \qquad {-}^\sharp_\A(x, y) = {\div}^\sharp_\A(x, y) = x + 1$$
which yields the complexity pair $({\geqord[\geqslant]{\A}},{\gord[>]{\A}})$ for $\PP/\UU(\PP)$. Since ${\PP} \subseteq {\gord[>]{\A}}$ and ${\UU(\PP)} \subseteq {\geqord[\geqslant]{\A}}$ hold, $\comp(n, \TBS, \rsrew{\PP/\UU(\PP)}) = \bO(n)$.
First we show the main theorem of this section.
Let $\A$ be a matrix interpretation and let $\RS/\SS$ be a relative TRS. A *weight gap* on a set $T$ of terms is a number $\Delta \in \NN$ such that $s \in {\to^*_{\RS \cup \SS}}(T)$ and $s \to_\RS t$ implies $[t]_1 - [s]_1 \leqslant \Delta$.
Let $T$ be a set of terms and let $\RS/\SS$ be a relative TRS.
\[t:wgp\] If $\RS/\SS$ is terminating, $\A$ admits a *weight gap* $\Delta$ on $T$, and $\A$ is a matrix interpretation of degree $d$ such that $\SS$ is compatible with $\A$, then there exists $c \in \NN$ such that $
\dheight(t,{\to_{\RS \cup \SS}}) \leqslant
(1 + \Delta) \cdot \dheight(t,{\to_{\RS/\SS}}) + c \cdot |t|^d
$ for all $t \in T$. Consequently, $
\comp(n,T,{\rsrew{\RS \cup \SS}}) =
\bO(\comp(n,T,{\rsrew{\RS/\SS}}) + n^d)
$ holds.
Let $m = \dheight(s, {\rsrew{\RS/\SS}})$ and $n = \size{s}$. Any derivation of $\rsrew{\RS \cup \RSS}$ is representable as follows: $$s = s_0 \to_{\RSS}^{k_0}
t_0 \to_\RS
s_1 \to_\RSS^{k_1}
t_1 \to_\RS \cdots \to_\RSS^{k_m}
t_m \tpkt$$ Without loss of generality we may assume that the derivation is maximal and ground. We observe:
1. \[en:relative:i\] $k_i \leqslant [s_i]_1 - [t_i]_1$ holds for all $0 \leqslant i \leqslant m$. This is because $[s]_1 > [t]_1$ whenever $s \rsrew{\RSS} t$, by the assumption that $\SS$ is compatible with $\A$. By definition of $>$, we conclude $[s]_1 \geqslant [t]_1 + 1$ whenever $s \rsrew{\RSS} t$. From the fact that $s_i \to_{\RSS}^{k_i} t_i$ we thus obtain $k_i \leqslant [s_i]_1 - [t_i]_1$.
2. \[en:relative:ii\] $([s_{i+1}])_1 \leqslant ([t_i])_1 + \Delta$ holds for all $0 \leqslant i < m$ by the assumption.
3. \[en:relative:iii\] There exists a number $c$ such that for any term $s \in T$, $[s]_1 \leqslant c \cdot \size{s}^d$. This follows by the degree of $\A$.
We obtain the following inequalities: $$\begin{aligned}
\dheight(s_0, \rsrew{\RS \cup \SS}) & = m + k_0 + \dots + k_m \\
& \leqslant m + ([s_0]_1 - [t_0]_1) + \dots + ([s_m]_1 - [t_m]_1) \\
& = m + [s_0]_1 + ([s_1]_1 - [t_0]_1) + \dots + ([s_m]_1 - [t_{m-1}]_1) - [t_m]_1 \\
& \leqslant m + [s_0]_1 + ([t_0]_1 + \Delta - [t_0]_1) + \dots
- [t_m]_1\\
& \leqslant m + [s_0]_1 + m\Delta - [t_m]_1\\
& \leqslant m + [s_0]_1 + m\Delta\\
& \leqslant (1+\Delta) m + c \cdot \size{s_0}^d \tpkt\end{aligned}$$ Here property \[en:relative:i\]) is used in the second line, property \[en:relative:ii\]) in the fourth line, and property \[en:relative:iii\]) in the last line.
A question is when a weight gap is admitted. We present two conditions. We start with a simple version for derivational complexity, and then we adapt it for runtime complexity.
We employ a very restrictive form of TMIs. Every $f \in \FS$ is interpreted by the following restricted linear function: $$f_{\A} \colon (\vec{v}_1,\ldots,\vec{v}_n)
\mapsto \mathbf{1} \vec{v}_1 + \ldots + \mathbf{1} \vec{v}_n + \vec{f}
\tpkt$$ I.e., the only matrix employed in this interpretation is the unit matrix $\mathbf{1}$. Such a matrix interpretation is called *strongly linear* (*SLMI* for short).
\[l:weightgap:i\] If $\RS$ is non-duplicating and $\A$ is an SLMI, then $\RS/\SS$ and $\A$ admit a weight gap on all terms.
Let $\Delta \defsym \max \{ [r]_1 \modminus [l]_1 \mid l \to r \in \RS \}$. We show that $\Delta$ gives a weight gap. To this end, we first show the following equality: $$\label{eq:1}
\Delta = \max \{(\eval{\alpha}{\A}(r))_1 \modminus (\eval{\alpha}{\A}(l))_1 \mid
l \to r \in \RS, \alpha \colon \VS \to \A \} \tpkt$$ Although the proof is not difficult, we give the full account in order to utilise it later. Observe that for any matrix interpretation $\A$ and rule ${l \to r} \in {\RS}$, there exist matrices (over $\N$) $L_1,\dots,L_k$, $R_1,\dots,R_k$ and vectors $\vec{l}$, $\vec{r}$ such that: $$\eval{\alpha}{\A}(l) = \sum_{i=1}^k L_i \cdot \alpha(x_i) + \vec{l} \hspace{10ex}
\eval{\alpha}{\A}(r) = \sum_{i=1}^k R_i \cdot \alpha(x_i) + \vec{r} \tkom$$ where $k$ denotes the cardinality of $\Var(l) \supseteq \Var(r)$. Conclusively, we obtain: $$\label{eq:2}
\eval{\alpha}{\A}(r) \modminus \eval{\alpha}{\A}(l) =
\sum_{i=1}^k (R_i \modminus L_i) \alpha(x_i) + (\vec{r} \modminus \vec{l})
\tpkt$$ Here $\modminus$ denotes the natural component-wise extension of the modified minus to vectors.
As $\A$ is an SLMI the matrices $L_i$, $R_i$ are obtained by multiplying or adding unit matrices, where the latter case can only happen if at least one of the variables $x_i$ occurs multiple times in $l$ or $r$. Due to the fact that $l \to r$ is non-duplicating, this effect is canceled out. Thus the right-hand side of is independent of the assignment $\alpha$ and we conclude: $$[r]_1 \modminus [l]_1 = (\eval{\alpha}{\A}(r) \modminus \eval{\alpha}{\A}(l))_1 = (\vec{r} \modminus \vec{l})_1 \tpkt$$ By definition $\Delta = \max\{[r]_1 \modminus [l]_1 \mid l \to r \in \RS \}$ and thus follows.
Let $C[\ctx]$ denote a (possibly empty) context such that $s = C[l\sigma] \rsrew{\RS} C[r\sigma] = t$, where ${l \rew r} \in {\RS}$ and $\sigma$ is a substitution. We prove the lemma by induction on $C$.
1. \[en:weightgap:i:i\] Suppose $C[\ctx] = \ctx$, that is, $s = l\sigma$ and $t = r\sigma$. There exists an assignment $\alpha_1$ such that $[l\sigma] = \eval{\alpha_1}{\A}(l)$ and $[r\sigma] = \eval{\alpha_1}{\A}(r)$. By we conclude for the assignment $\alpha_1$: $(\eval{\alpha_1}{\A}(l))_1 + \Delta \geqslant (\eval{\alpha_1}{\A}(r))_1$. Therefore in sum we obtain $[s]_1 + \Delta \geqslant [t]_1$.
2. \[en:weightgap:i:ii\] Suppose $C[\ctx] = f(t_1,\dots,t_{i-1},C'[\ctx],t_{i+1},\dots,t_n)$. Hence, we obtain: $$\begin{aligned}
& [f(t_1,\dots,C'[l\sigma],\dots,t_n)]_1 + \Delta \\
= {}
& [t_1]_1 + \dots + ([C'[l\sigma]]_1 + \Delta) + \dots +
[t_n]_1 + (\vec{f})_1 \\
\geqslant {}
& [t_1]_1 + \dots + [C'[r\sigma]]_1 + \dots +
[t_n]_1 + (\vec{f})_1 \\
= {}
& [f(t_1,\dots,C'[r\sigma],\dots,t_n)]_1 \tkom\end{aligned}$$ for some vector $\vec{f} \in \N^d$. In the first and last line, we employ the fact that $\A$ is strongly linear. In the second line the induction hypothesis is applied together with the (trivial) fact that $\A$ is strictly monotone on all arguments of $f$ by definition.
Note that the combination of Theorem \[t:wgp\] and Lemma \[l:weightgap:i\] corresponds to (the corrected version of) Theorem 24 in [@HM:2008]. In [@HM:2008] 1-dimensional SLMIs are called *strongly linear interpretations* (*SLIs* for short).
Consider the TRS $\RS$ $$\begin{aligned}
1\colon~ \m{f}(\m{s}(x)) & \to \m{f}(x - \m{s}(\m{0})) &
2\colon~ x - \m{0} & \to x &
3\colon~ \m{s}(x) - \m{s}(y) & \to x - y
\tpkt\end{aligned}$$ $\PP \defsym \WDP(\RS)$ consists of the three pairs $$\begin{aligned}
\m{f}^\sharp(\m{s}(x)) & \to \m{f}^\sharp(x - \m{s}(\m{0})) &
x -^\sharp \m{0} & \to x &
\m{s}(x) -^\sharp \m{s}(y) & \to x -^\sharp y
\tkom\end{aligned}$$ and $\UU(\PP) = \{ 2,3 \}$. Obviously $\PP$ is non-duplicating and there exists an SLI $\A$ with $\UU(\PP) \subseteq {\gord{\A}}$. Thus, Lemma \[l:weightgap:i\] yields a weight gap for $\PP/\UU(\PP)$. By taking the $1$-dimensional RMI $\BB$ with
$$\m{s}_\BB(x) = x + 1 \qquad {-}_\BB(x, y) = x \qquad \m{f}_\BB(x) = \m{f}^\sharp_\BB(x) = x \qquad \m{0}_\BB = 0 \qquad {-}^\sharp_\BB(x, y) = x + 1$$
we obtain $\PP \subseteq {\gord{\BB}}$ and $\UU(\PP) \subseteq {\geqord{\BB}}$. Therefore, $\comp(n, \TBS, {\rsrew{\PP/\UU(\PP)}}) = \bO(n)$. Hence, $\Rc{\RS}(n) = \comp(n, \TBS, {\rsrew{\PP \cup \UU(\PP)}}) = \bO(n)$ is concluded by Theorem \[t:wgp\].
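These orientation claims admit a quick numeric sanity check; the sketch below encodes the $1$-dimensional interpretation as Python functions (the encoding is ours) and samples assignments over the naturals. Sampling does not replace the symbolic check, but it catches mis-orientations immediately.

```python
# The 1-dimensional interpretation B from the example:
#   s(x) = x + 1,  (x - y) = x,  f(x) = f#(x) = x,  0 = 0,  (x -# y) = x + 1.
s = lambda x: x + 1
minus = lambda x, y: x          # interpretation of '-'
sminus = lambda x, y: x + 1     # interpretation of '-#'
fsharp = lambda x: x            # interpretation of 'f#'
zero = 0

for x in range(50):
    for y in range(50):
        # weak dependency pairs P, oriented strictly:
        assert fsharp(s(x)) > fsharp(minus(x, s(zero)))   # f#(s(x)) -> f#(x - s(0))
        assert sminus(x, zero) > x                        # x -# 0 -> x
        assert sminus(s(x), s(y)) > sminus(x, y)          # s(x) -# s(y) -> x -# y
        # usable rules U(P) = {2, 3}, oriented weakly:
        assert minus(x, zero) >= x                        # x - 0 -> x
        assert minus(s(x), s(y)) >= minus(x, y)           # s(x) - s(y) -> x - y
```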
The next lemma shows that there is no advantage in considering SLMIs of dimension $k \geqslant 2$.
If $\RSS$ is compatible with some SLMI $\A$ then $\RSS$ is compatible with some SLI $\BB$.
Let $\A$ be an SLMI of dimension $k$. Further, let $\alpha : \VS \to \N$ denote an arbitrary assignment. We define $\widehat{\alpha} \colon \VS \to \N^k$ as $\widehat{\alpha}(x) = (\alpha(x),0,\dots,0)^\top$ for each variable $x$. We define the SLI $\BB$ by $f_\BB(x_1,\dots,x_n) = x_1 + \cdots + x_n + (\vec{f})_1$. Then, $$\begin{aligned}
f_\BB(x_1,\dots,x_n)
& = \left(
(x_1,0,\dots,0)^\top + \cdots + (x_n,0,\dots,0)^\top + \vec{f}
\right)_1 \\
& = \left(
f_\A((x_1,0,\dots,0)^\top,\dots,(x_n,0,\dots,0)^\top))
\right)_1\end{aligned}$$ Therefore, an easy structural induction shows that $\eval{\alpha}{\BB}(t) = (\eval{\widehat\alpha}{\A}(t))_1$ for all terms $t$. Hence, $\RSS \subseteq {\gord{\BB}}$ whenever $\RSS \subseteq {\gord{\A}}$.
The next example shows that in Lemma \[l:weightgap:i\] SLMIs cannot be simply replaced by RMIs.
\[ex:7\] Consider the TRS $\RSexp$ $$\begin{aligned}
\m{exp}(\mN) & \to \ms(\mN) & \m{d}(\mN) & \to \mN \\
\m{exp}(\m{r}(x)) & \to \m{d}(\m{exp}(x)) & \m{d}(\ms(x)) & \to \ms(\ms(\m{d}(x)))
\tpkt\end{aligned}$$ This TRS formalises the exponentiation function. Setting $t_n = \m{exp}(\m{r}^n(\mN))$ we obtain $\dheight(t_n, \rsrew{\RSexp}) \geqslant 2^n$ for each $n \geqslant 0$. Thus the runtime complexity of $\RSexp$ is exponential.
In order to show the claim, we split $\RSexp$ into two TRSs $\RS = \{\m{exp}(\mN) \to \ms(\mN), \m{exp}(\m{r}(x)) \to \m{d}(\m{exp}(x))\}$ and $\RSS = \{\m{d}(\mN) \to \mN, \m{d}(\ms(x)) \to \ms(\ms(\m{d}(x))) \}$. Then it is easy to verify that the next $1$-dimensional RMI $\A$ is compatible with $\RSS$: $$\mN_{\A} = 0 \qquad \m{d}_{\A}(x) = 3x + 1 \qquad \ms_{\A}(x) = x + 1 \tpkt$$ Moreover an upper-bound of $\dheight(t_n ,{\rsrew{\RS/\RSS}})$ can be estimated by using the following $1$-dimensional TMI $\BB$: $$\mN_{\BB} = 0 \qquad \m{d}_{\BB}(x) = \ms_{\BB}(x) = x \qquad
\m{exp}_{\BB}(x) = \m{r}_{\BB}(x) = x + 1 \tpkt$$ Since ${\rsrew{\RS}} \subseteq {\gord[>]{\BB}}$ and ${\rssrew{\RSS}} \subseteq {\geqord[\geqslant]{\BB}}$ hold, we have ${\rsrew{{\RS}/{\RSS}}} \subseteq {\gord[>]{\BB}}$. Hence $\dheight(t_n, \rsrew{{\RS}/{\RSS}}) \leqslant \eval{\alpha_0}{\BB}(t_n) = n+1$. But clearly from this we cannot conclude a polynomial bound on the derivation length of $\RS \cup \RSS = \RSexp$, as the runtime complexity of $\RSexp$ is exponential.
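The exponential lower bound on the runtime complexity of $\RSexp$ can also be observed operationally; the following sketch (our own step-counting recurrences, mirroring an innermost evaluation) counts the rewrite steps needed to normalise $t_n$.

```python
# d(s^m(N)) -> ... -> s^(2m)(N): m applications of d(s(x)) -> s(s(d(x)))
# plus one application of d(N) -> N; the number of s-symbols doubles.
def d_steps(m):
    return 2 * m, m + 1                 # (s-symbols in result, steps used)

# exp(r^n(N)): one step exp(r(x)) -> d(exp(x)), recurse, then run d.
def exp_steps(n):
    if n == 0:
        return 1, 1                     # exp(N) -> s(N)
    m, k = exp_steps(n - 1)
    m2, k2 = d_steps(m)
    return m2, k + 1 + k2

for n in range(13):
    size, steps = exp_steps(n)
    assert size == 2**n                 # normal form of t_n is s^(2^n)(N)
    assert steps >= 2**n                # hence dheight(t_n, ->_Rexp) >= 2^n
```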
Furthermore, non-duplication of $\RS$ is also essential for Lemma \[l:weightgap:i\].[^4]
Consider the following $\RS \cup \SS$ $$\begin{aligned}
{4}
1\colon &\;& \mf(\ms(x),y) &\to \mf(x,\md(y,y,y)) & \qquad
2\colon &\;& \md(\mN,\mN,x) &\to x
\\
&& && 3\colon &\;& \md(\ms(x),\ms(y),z) &\to \md(x,y,\ms(z))
\tpkt\end{aligned}$$ Let $\RS = \{1\}$ and let $\SS = \{2,3\}$. The following SLI $\A$ is compatible with $\SS$: $$\md_\A(x,y,z) = x + y + z + 1 \qquad
\ms_\A(x) = x + 1 \qquad
\mN_\A = 0 \tpkt$$ Furthermore, the following $\URM{\RS \cup \SS}$-monotone 1-dimensional RMI $\BB$ orients the rule in $\RS$ strictly, while the rules in $\SS$ are weakly oriented. $$\mf_\BB(x,y) = x \qquad \md_\BB(x,y,z) = x+y+z \qquad
\ms_\BB(x) = x + 1 \qquad \mN_\BB = 0 \tpkt$$ Thus, $\comp(n, \TB, {\to_{\RS/\SS}}) = \bO(n)$ is obtained. If the restriction that $\RS$ is non-duplicating could be dropped from Lemma \[l:weightgap:i\], we would conclude $\Rc{\RS \cup \SS}(n) = \bO(n)$. However, it is easy to see that $\Rc{\RS \cup \SS}$ is at least exponential. Setting $t_n \defsym \mf(\ms^n(\mN),\ms(\mN))$, we obtain $\dheight(t_n,\rsrew{\RS \cup \RSS}) \geqslant 2^n$ for any $n \geqslant 1$.
We present a weight gap condition for runtime complexity analysis. When considering the derivation at the beginning of this section (on page ), every step by a weak dependency pair only takes place as an outermost step. Exploiting this fact we can relax the restriction imposed in the above examples. To this end, we introduce a generalised notion of non-duplicating TRSs. Below $
\max\,\{\, ([\alpha]_\A(r))_1 \modminus ([\alpha]_\A(l))_1
\mid \text{$l \to r \in \PP$ and $\alpha : \VV \to \A$} \,\}
$ is referred to as $\WG(\A,\PP)$. We say that a $\mu$-monotone RMI is *adequate* if all compound symbols are interpreted as $\mu$-monotone SLMI.
\[l:weightgap:ii\] Let $\PP = \WDP(\RS)$ and let $\A$ be an adequate $\tmu^{\PP \cup \UU(\PP)}$-monotone RMI. Suppose $\WG(\A,\PP)$ is well-defined on $\NN$. Then, $\PP/\UU(\PP)$ and $\A$ admit a weight gap on $\TBS$.
The proof follows the proof of Lemma \[l:weightgap:i\]. We set $\Delta = \WG(\A,\PP)$. Let $s \rsrew{\PP} t$ with $s \in {\to_{\PP \cup \UU(\PP)}}(\TBS)$. One may write $s = C[l\sigma]$ and $t = C[r\sigma]$ with $l \rew r \in \PP$, where $C$ denotes a context. Note that due to $s \in {\to_{\PP \cup \UU(\PP)}}(\TBS)$ all function symbols above the hole in $C$ are compound symbols. We perform induction on $C$.
1. If $C = \Box$ then $[t]_1 - [s]_1 \leqslant \Delta$ by the definition of $\WG(\A,\PP)$.
2. For the inductive step, $C$ must be of the form $c(u_1,\ldots,u_{i-1},C',u_{i+1},\ldots,u_n)$ with $i \in \mu(c)$. Since $\A$ is adequate, $c_\A$ is an SLMI. The rest of the reasoning is the same as in case \[en:weightgap:i:ii\]) of the proof of Lemma \[l:weightgap:i\].
Consider the following adequate $\URM{\PP \cup \UU(\PP)}$-monotone $1$-dimensional RMI $\BB$:
$$\m{0}_\BB = \m{c}_\BB = 0 \qquad \m{s}_\BB(x) = x + 2 \qquad {-}_\BB(x, y) = {-}^\sharp_\BB(x, y) = {\div}^\sharp_\BB(x, y) = x + 1$$
Since $\WG(\BB,\PP)$ is well-defined (indeed $1$), $\BB$ admits the weight gap of Lemma \[l:weightgap:ii\]. Moreover, $\UU(\PP)$ is compatible with ${\gord{\BB}}$. As $\comp(n, \TBS, {\rsrew{\PP/\UU(\PP)}}) = \bO(n)$ was shown in Example \[ex:relative\], Theorem \[t:wgp\] deduces linear runtime complexity for $\RSdiv$.
The next example shows that in Lemma \[l:weightgap:ii\] the assumption that $\WG(\A,\PP)$ is well-defined cannot be dropped.
Consider the following TRS $\RS$ $$\begin{aligned}
{4}
1\colon~ && \m{f}([\,]) & \to [\,] &
3\colon~ && \m{g}([\,], z) & \to z \\
2\colon~ && \m{f}(x : y) & \to x : \m{f}(\m{g}(y, [\,])) \qquad &
4\colon~ && \m{g}(x : y, z) & \to \m{g}(y, x : z) \end{aligned}$$ whose optimal innermost runtime complexity is quadratic. The weak innermost dependency pairs $\PP \defsym \WIDP(\RS)$ are $$\begin{aligned}
{4}
5\colon~ && \m{f}^\sharp([\,]) & \to \m{c} &
7\colon~ && \m{g}^\sharp([\,], z) & \to \m{d} \\
6\colon~ && \m{f}^\sharp(x : y) & \to \m{f}^\sharp(\m{g}(y, [\,])) \qquad &
8\colon~ && \m{g}^\sharp(x : y, z) & \to \m{g}^\sharp(y, x : z) \end{aligned}$$ and $\UU(\PP) = \{3,4\}$. It is not difficult to show $\comp(n, \TBS, {\rsirew{\PP/\UU(\PP)}}) = \bO(n)$ with a $1$-dimensional RMI. Moreover, the $\IURM{\PP \cup \UU(\PP)}$-monotone $1$-dimensional RMI $\A$ with $$\begin{aligned}
[\,]_\A & = 0 &
{:}_\A(x,y) & = y + 1 &
\m{g}_\A(x,y) & = 2x + y + 1 \\
\m{f}_\A(x) & = \m{f}^\sharp_\A(x) = x &
\m{g}^\sharp_\A(x,y) & = 0 &
\m{c}_\A & = \m{d}_\A = 0\end{aligned}$$ is compatible with $\UU(\PP)$. If Lemma \[l:weightgap:ii\] would be applicable without its well-definedness, linear innermost runtime complexity of $\RR$ would be concluded falsely. Note that $\WG(\A,\PP)$ is *not* well-defined on $\NN$ due to pair 6.
\[c:main\] Let $\RS$ be a TRS, $\PP$ the set of weak (innermost) dependency pairs, and $\mu$ be the (innermost) usable replacement map. Suppose $\BB$ is an RMI such that $(\geqord{\BB},\gord{\BB})$ forms a $\mu$-complexity pair with $\UU(\PP) \subseteq {\geqord{\BB}}$ and $\PP \subseteq {\gord{\BB}}$. Further, suppose $\A$ is an adequate $\mu$-monotone RMI such that $\WG(\A,\PP)$ is well-defined on $\NN$ and $\A$ is compatible with $\UU(\PP)$.
Then the (innermost) runtime complexity function $\rc^{(\m{i})}_{\RS}$ with respect to $\RS$ is polynomial. Here the degree of the polynomial is given by the maximum of the degrees of the used RMIs.
Let $\A$ be an RMI as in the corollary. In order to verify that $\WG(\A,\PP)$ is well-defined, we use the following simple trick in the implementation. Let $l \to r \in \PP$ and let $k$ denote the cardinality of $\Var(l) \supseteq \Var(r)$. Recall the existence of matrices (over $\N$) $L_1,\dots,L_k$, $R_1,\dots,R_k$ and vectors $\vec{l}$, $\vec{r}$ such that $
\eval{\alpha}{\A}(r) \modminus \eval{\alpha}{\A}(l) =
\sum_{i=1}^k (R_i \modminus L_i) \alpha(x_i) + (\vec{r} \modminus \vec{l})
$. Then $\WG(\A,\PP)$ is well-defined if $R_i \modminus L_i = \mathbf{0}$, that is, $R_i \leqslant L_i$ component-wise, for every pair in $\PP$ and all $1 \leqslant i \leqslant k$.
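For $1$-dimensional interpretations this trick boils down to a comparison of coefficients and constants; the sketch below (the data encoding is ours) returns `None` when some coefficient increases, and otherwise an upper bound on the weight gap. The first assertion replays pair $6$ of the example above, where $[\m{f}^\sharp(x : y)] = y + 1$ but $[\m{f}^\sharp(\m{g}(y, [\,]))] = 2y + 1$.

```python
def wg_well_defined(pairs):
    """Each pair is ((L, l0), (R, r0)) where L, R list the coefficients of
    x_1..x_k in [l] and [r], and l0, r0 are the constants.  If some R_i
    exceeds L_i, the increase depends on the assignment and WG(A, P) is not
    well-defined; otherwise max(r0 - l0, 0) over all pairs bounds it."""
    delta = 0
    for (L, l0), (R, r0) in pairs:
        if any(r > l for l, r in zip(L, R)):
            return None                      # increase depends on assignment
        delta = max(delta, max(r0 - l0, 0))  # modified minus on constants
    return delta

# pair 6 above: the coefficient of y grows from 1 to 2, so WG is undefined:
assert wg_well_defined([(([0, 1], 1), ([0, 2], 1))]) is None
# a constant-only increase yields a finite weight gap (here 1):
assert wg_well_defined([(([1], 0), ([1], 1)), (([2], 3), ([1], 0))]) == 1
```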
Weak Dependency Graphs {#DG}
======================
In this section we extend the above refinements by revisiting dependency graphs in the context of complexity analysis. Let $\PP = \WDP(\RSdiv)$ and recall the derivation over $\PP \cup \UU(\PP)$ on page . Looking more closely at this derivation we observe that we do not make use of all weak dependency pairs in $\PP$, but we only employ the pairs $7$ and $8$: \[eq:wdg\] $$\mathbf{4} \div^\sharp \mathbf{2}
~\rsrew{\{8\}/\UU(\PP)}~ \mathbf{2} \div^\sharp \mathbf{2}
~\rsrew{\{8\}/\UU(\PP)}~ \m{0} \div^\sharp \mathbf{2}
~\rsrew{\{7\}/\UU(\PP)}~ \m{c} \tpkt$$ Therefore it is a natural idea to modularise our complexity analysis and apply the previously obtained techniques only to those pairs that are relevant. Dependencies among weak dependency pairs are formulated by the notion of weak dependency graphs, which is an easy variant of *dependency graphs* [@ArtsGiesl:2000].
\[d:DG\] Let $\RS$ be a TRS over a signature $\FS$ and let $\PP$ be the set of weak, weak innermost, or (standard) dependency pairs. The nodes of the *weak dependency graph* $\WDG(\RS)$, *weak innermost dependency graph* $\WIDG(\RS)$, or *dependency graph* $\DG(\RS)$ are the elements of $\PP$ and there is an arrow from $s \to t$ to $u \to v$ if and only if there exist a context $C$ and substitutions $\sigma, \tau \colon \VV \to \TT(\FS, \VV)$ such that $t\sigma \rew^* C[u\tau]$, where $\rew$ denotes $\rsrew{\RS}$ if $\PP = \WDP(\RS)$ or $\PP = \DP(\RS)$, and $\irew{\RS}$ if $\PP = \WIDP(\RS)$.
\[ex:1:wdg\] The weak dependency graph $\WDG(\RSdiv)$ has the following form.
Its nodes are the pairs $5$, $6$, $7$, and $8$; its edges are $6 \to 5$, $8 \to 7$, and the self-loops $6 \to 6$ and $8 \to 8$.
Since weak dependency graphs represent call graphs of functions, grouping mutually recursive parts helps the analysis. A graph is called *strongly connected* if every node is connected to every other node by a (possibly empty) path. A *strongly connected component* (*SCC* for short) is a maximal strongly connected subgraph.[^5]
\[d:1\] Let $\GG$ be a graph and let $\equiv$ denote the equivalence relation induced by its SCCs. Consider the *congruence graph* $\PG{\GG}$ induced by the equivalence relation $\equiv$. The set of all source nodes in $\PG{\GG}$ is denoted by $\Src(\PG{\GG})$. Let $\KK \in \PG{\GG}$ and let $\CC$ denote the SCC represented by $\KK$. Then we write $l \to r \in \KK$ if $l \to r \in \CC$. For nodes $\KK$ and $\LL$ in $\PG{\GG}$ we write $\KK \edge \LL$, if $\KK$ and $\LL$ are connected by an edge. The reflexive (transitive, reflexive-transitive) closure of $\edge$ is denoted as $\redge$ ($\tedge$, $\rtedge$).
Let $\GG$ denote $\WDG(\RSdiv)$. There are 4 SCCs in $\GG$: $\{5\}$, $\{6\}$, $\{7\}$, and $\{8\}$. Thus the congruence graph $\PG{\GG}$ has the following form:
Its nodes are $\{5\}$, $\{6\}$, $\{7\}$, and $\{8\}$; its edges are $\{6\} \edge \{5\}$ and $\{8\} \edge \{7\}$.
Here $\Src(\PG{\GG}) = \{ \{ 6 \}, \{ 8 \} \}$.
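Computing $\PG{\GG}$ is mechanical; the sketch below (graph encoding ours) derives the SCCs from mutual reachability — every node reaches itself via the empty path — and then forms the condensation and its sources, reproducing the graphs above.

```python
def congruence_graph(nodes, edges):
    # reflexive-transitive reachability closure
    reach = {v: {v} for v in nodes}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            if not reach[v] <= reach[u]:
                reach[u] |= reach[v]
                changed = True
    # u and v share an SCC iff they reach each other
    scc = {v: frozenset(w for w in nodes if v in reach[w] and w in reach[v])
           for v in nodes}
    cedges = {(scc[u], scc[v]) for u, v in edges if scc[u] != scc[v]}
    sources = {c for c in set(scc.values())
               if all(t != c for _, t in cedges)}
    return set(scc.values()), cedges, sources

# WDG(R_div): edges 6 -> 5, 6 -> 6, 8 -> 7, 8 -> 8
comps, cedges, srcs = congruence_graph(
    {5, 6, 7, 8}, {(6, 5), (6, 6), (8, 7), (8, 8)})
assert comps == {frozenset({n}) for n in (5, 6, 7, 8)}
assert cedges == {(frozenset({6}), frozenset({5})),
                  (frozenset({8}), frozenset({7}))}
assert srcs == {frozenset({6}), frozenset({8})}
```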
\[ex:2\] Consider the TRS $\RSgcd$ which computes the greatest common divisor.[^6] $$\begin{aligned}
{4}
1\colon && \m{0} \leqslant y & \to \m{true} &
6\colon && \m{gcd}(\m{0}, y) & \to y \\
2\colon && \m{s}(x) \leqslant \m{0} & \to \m{false} &
7\colon && \m{gcd}(\m{s}(x), \m{0}) & \to \m{s}(x) \\
3\colon && \m{s}(x) \leqslant \m{s}(y) & \to x \leqslant y
& \hspace{3ex}
8\colon && \m{gcd}(\m{s}(x), \m{s}(y))
& \to \m{if_{gcd}}(y \leqslant x, \m{s}(x), \m{s}(y))\\
4\colon && x - \m{0} & \to x &
9\colon && \m{if_{gcd}}(\m{true}, \m{s}(x), \m{s}(y))
& \to \m{gcd}(x - y, \m{s}(y)) \\
5\colon && \m{s}(x) - \m{s}(y) & \to x - y &
10\colon && \m{if_{gcd}}(\m{false}, \m{s}(x), \m{s}(y))
& \to \m{gcd}(y - x, \m{s}(x))
\tpkt
\intertext{The set $\WDP(\RSgcd)$ consists of the next ten weak dependency pairs:}
11\colon && \m{0} \leqslant^\sharp y & \to \m{c_1}
& \hspace{3ex}
16\colon && \m{gcd}^\sharp(\m{0}, y) & \to y
\\
12\colon && \m{s}(x) \leqslant^\sharp \m{0} & \to \m{c_2} &
17\colon && \m{gcd}^\sharp(\m{s}(x), \m{0}) & \to x
\\
13\colon && \m{s}(x) \leqslant^\sharp \m{s}(y) & \to x \leqslant^\sharp y &
18\colon && \m{gcd}^\sharp(\m{s}(x), \m{s}(y)) & \to \m{if_{gcd}}^\sharp(y \leqslant x, \m{s}(x), \m{s}(y))
\\
14\colon && \m{s}(x) -^\sharp \m{0} & \to x &
19\colon && \m{if_{gcd}}^\sharp(\m{true}, \m{s}(x), \m{s}(y))
& \to \m{gcd}^\sharp(x - y, \m{s}(y))
\\
15\colon && \m{s}(x) -^\sharp \m{s}(y) & \to x -^\sharp y &
20\colon && \m{if_{gcd}}^\sharp(\m{false}, \m{s}(x), \m{s}(y)) & \to \m{gcd}^\sharp(y - x, \m{s}(x))
\tpkt\end{aligned}$$ The congruence graph $\PG{\GG}$ of $\GG \defsym \WDG(\RSgcd)$ has the following form:
Its nodes are $\{11\}$, $\{12\}$, $\{13\}$, $\{14\}$, $\{15\}$, $\{16\}$, $\{17\}$, and $\{18,19,20\}$; its edges are $\{13\} \edge \{11\}$, $\{13\} \edge \{12\}$, $\{15\} \edge \{14\}$, and $\{18,19,20\} \edge \{16\}$.
Here $\Src(\PG{\GG}) = \{ \{ 13 \}, \{ 15 \}, \{17\}, \{18,19,20\} \}$.
The main result in this section is stated as follows: Let $\RS$ be a TRS, $\PP = \WDP(\RS)$, $\GG = \WDG(\RS)$, and furthermore \[eq:path\] $$\Path(t) \defsym \max\{ \dheight(t, \rsparenirew{\QQ \cup \UU(\QQ)}) \mid
\text{$(\PP_1,\ldots,\PP_k)$ is a path in $\PG{\GG}$ and $\PP_1 \in \Src(\PG{\GG})$} \} \tkom$$ where $\QQ = \bigcup_{i=1}^k \PP_i$. Then, $
\dheight(t,{\rsrew{\RS}}) = \bO(\Path(t))
$ holds for all basic terms $t$. This means that one may decompose $\PP \cup \UU(\PP)$ into several smaller fragments and analyse these fragments separately.
Reconsider the derivation on page . The only dependency pairs employed are from the set $\{7,8\}$. Observe that the order in which these pairs are applied is representable by the path $(\{8\},\{7\})$ in the congruence graph. This observation is cast into the following definition.
\[d:pathbased\] Let $\PP$ be the set of weak (innermost) dependency pairs and let $\GG$ denote the weak (innermost) dependency graph. Suppose $A \colon {s} \rsparenisrew{\PP/\UU(\PP)} {t}$ denote a derivation, such that $s \in \TBS$. If $A$ can be written in the following form: $${s} \rsparenisrew{\PP_1/\UU(\PP)} \cdots \rsparenisrew{\PP_k/\UU(\PP)} {t} \tkom$$ then $A$ is *based on the sequence of nodes $(\PP_1,\ldots,\PP_k)$ (in $\PG{\GG}$)*.
The next lemma is an easy generalisation of the above example.
\[l:15\] Let $\RS$ be a TRS, let $\PP$ be the set of weak (innermost) dependency pairs and let $\GG$ denote the weak (innermost) dependency graph. Suppose that all compound symbols are nullary. Then any derivation $A \colon {s} \rsparenisrew{\PP/\UU(\PP)} {t}$ such that $s \in \TBS$ is based on a path in $\PG{\GG}$.
From Lemma \[l:15\] we see that the above-mentioned modularity result easily follows as long as the arity of the compound symbols is restricted. We lift the assumption that all compound symbols are nullary. Perhaps surprisingly, this generalisation complicates matters. As exemplified by the next example, Lemma \[l:15\] fails if there exist non-nullary compound symbols.
\[ex:dg:1\] Consider the TRS $\RS = \{\m{f}(\m{0}) \to \m{a},
\m{f}(\m{s}(x)) \to \m{b}(\m{f}(x), \m{f}(x))\}$. The set $\WDP(\RS)$ consists of the two weak dependency pairs: $1\colon \m{f}^\sharp(\m{0}) \to \m{c}$ and $2\colon \m{f}^\sharp(\m{s}(x)) \to \m{d}(\m{f}^\sharp(x), \m{f}^\sharp(x))$. The corresponding congruence graph only contains the single edge from $\{2\}$ to $\{1\}$. Writing $t_n$ for $\m{f}^\sharp(\m{s}^n(\m{0}))$, we have the sequence $$\begin{aligned}
t_2 & \to_{\{2\}}^2 \m{d}(\m{d}(t_0, t_0), t_1)
\rsrew{\{1\}} \m{d}(\m{d}(\m{c}, t_0), t_1) \\
& \rsrew{\{2\}} \m{d}(\m{d}(\m{c}, t_0), \m{d}(t_0, t_0))
\to_{\{1\}}^3 \m{d}(\m{d}(\m{c}, \m{c}), \m{d}(\m{c},\m{c})) \tpkt\end{aligned}$$ whereas $(\{2\},\{1\},\{2\},\{1\})$ is not a path in the graph.
Note that the derivation in Example \[ex:dg:1\] can be reordered (without affecting its length) such that the derivation becomes based on the path $(\{2\},\{1\})$. More generally, we observe that a weak (innermost) dependency pair containing an $m$-ary ($m > 1$) compound symbol can induce $m$ *independent* derivations. This allows us to reorder (sub-)derivations. We show this via the following sequence of lemmas.
Let $\RS$ be a TRS, let $\PP$ denote the set of weak (innermost) dependency pairs, and let $\GG$ denote the weak (innermost) dependency graph. The set $\TBC$ is inductively defined as follows: (i) $\TTs \cup \TT \subseteq \TBC$, where $\TTs = \{t^{\sharp} \mid t \in \TT \}$, and (ii) $c(t_1,\ldots,t_n) \in \TBC$, whenever $t_1,\ldots,t_n \in \TBC$ and $c$ is a compound symbol. The next lemma formalises an easy observation.
\[l:12\] Let $\CC$ be a set of nodes in $\GG$ and let $A \colon {t = t_0} \rsparenisrew{\CC/\UU(\PP)} {t_n}$ denote a derivation based on $\CC$ with $t \in \TBC$. Then $A$ has the following form: $t = t_{0} \rsparenirew{\CC/\UU(\PP)} t_{1}
\rsparenirew{\CC/\UU(\PP)} \dots
\rsparenirew{\CC/\UU(\PP)} t_{n}$ where each $t_i \in \TBC$.
A key observation is that two consecutive weak dependency pair steps may be swapped.
\[l:13\] Let $\KK$ and $\LL$ denote two different nodes in $\PG{\GG}$ such that there is no edge from $\KK$ to $\LL$. Let $s \in \TBC$ and suppose the existence of a derivation $A$ of the following form: $${s} \rsparenirew{\KK/\UU(\PP)} \cdot \rsparenirew{\LL/\UU(\PP)} t \tpkt$$ Then there exists a derivation $B$ $${s} \rsparenirew{\LL/\UU(\PP)} \cdot \rsparenirew{\KK/\UU(\PP)} {t} \tkom$$ such that $\card{A} = \card{B}$.
We only show the full rewriting case since the innermost case is analogous. According to Lemma \[l:12\] an arbitrary term $u$ reachable from $s$ belongs to $\TBC$. Writing $\SC{u_1,\ldots,u_i,\ldots,u_m}_{\FS \cup \FS^\sharp}$ for $u$, the $m$-hole context $C$ consists of compound symbols and variables, and $u_1,\ldots,u_m \in \TT \cup \TT^\sharp$. Therefore, $A$ can be written in the following form: $$\begin{aligned}
{3}
s
& ~\to_{\UU(\PP)}^{n_1}~
&& \SC{u_1,\ldots,u_i,\ldots,u_m}_{\FS \cup \FS^\sharp}
&& =: u
\\
& ~\to_\KK~
&& C[u_1,\ldots,u_i',\ldots,u_m]
\\
& ~\to_{\UU(\PP)}^{n_2}~
&& C[v_1,\ldots,v_i,\ldots,v_j,\ldots,v_m]
\\
& ~\to_\LL~
&& C[v_1,\ldots,v_i,\ldots,v_j',\ldots,v_m]
&& ~\to_{\UU(\PP)}^{n_3}~ t \tkom\end{aligned}$$ with $u_i' \to_{\UU(\PP)}^k v_i$. Here $i \neq j$ holds, because $i = j$ would induce $\KK \leadsto \LL$, contradicting the assumption that there is no edge from $\KK$ to $\LL$. An easy induction on $n_2$ shows $$\begin{aligned}
{2}
s
& ~\to_{\UU(\PP)}^{n_1}~ u ~=~
&& C[u_1,\ldots,u_i,\ldots,u_j,\ldots,u_m]
\\
& ~\to_{\UU(\PP)}^{n_2 - k}~
&& C[v_1,\ldots,u_i,\ldots,v_j,\ldots,v_m]
\\
& ~\to_\LL~
&& C[v_1,\ldots,u_i,\ldots,v_j',\ldots,v_m]
\\
& ~\to_\KK~
&& C[v_1,\ldots,u_i',\ldots,v_j',\ldots,v_m]
\\
& ~\to_{\UU(\PP)}^k~
&& C[v_1,\ldots,v_i,\ldots,v_j',\ldots,v_m]
~\to_{\UU(\PP)}^{n_3}~ t \tkom\end{aligned}$$ which is the desired derivation $B$.
The next lemma states that reordering is partly possible.
\[l:14\] Let $s \in \TBC$, and let $A \colon {s} \rsparenisrew{\PP/\UU(\PP)} {t}$ be a derivation based on a sequence of nodes $(\PP_1,\ldots,\PP_k)$ such that $\PP_1 \in \Src(\PG{\GG})$, and let $(\QQ_1,\ldots,\QQ_{\ell})$ be a path in $\PG{\GG}$ with $\{\PP_1,\dots,\PP_k\} = \{\QQ_1,\dots,\QQ_\ell\}$. Then there exists a derivation $B \colon {s} \rsparenisrew{\PP/\UU(\PP)} {t}$ based on $(\QQ_1,\ldots,\QQ_{\ell})$ such that $\card{A} = \card{B}$ and $\PP_1 = \QQ_1$.
According to Lemma \[l:13\], for any derivation $A$ $$s \rsparenisrew{\PP_1/\UU(\PP)} \cdots \rsparenisrew{\PP_n/\UU(\PP)} t \tkom$$ if $\PP_i \edge \PP_{i+1}$ does not hold, there is a derivation $B$ $$s \rsparenisrew{\PP_1/\UU(\PP)} \cdots
\rsparenisrew{\PP_{i+1}/\UU(\PP)} \cdot
\rsparenisrew{\PP_i/\UU(\PP)} \cdots
\rsparenisrew{\PP_n/\UU(\PP)} t
\tkom$$ with $\card{A} = \card{B}$. By assumption $(\QQ_1,\ldots,\QQ_{\ell})$ is a path, whence we obtain $\QQ_1 \edge \cdots \edge \QQ_{\ell}$. By performing bubble sort with respect to $\tedge$, $A$ is transformed into the derivation $B$: $$s \rsparenisrew{\QQ_1/\UU(\PP)} \cdots
\rsparenisrew{\QQ_{\ell}/\UU(\PP)} t \tkom$$ such that $\card{A} = \card{B}$.
The next example shows that there is a derivation that cannot be transformed into a derivation based on a path.
\[ex:dg:2\] Consider the TRS $\RS = \{\m{f} \to \m{b}(\m{g},\m{h}), \m{g} \to \m{a}, \m{h} \to \m{a}\}$. Thus $\WDP(\RS)$ consists of three dependency pairs: $1\colon \m{f}^\sharp \to \m{c}(\m{g}^\sharp,\m{h}^\sharp)$, $2\colon \m{g}^\sharp \to \m{d}$, and $3\colon \m{h}^\sharp \to \m{e}$. Let $\PP \defsym \WDP(\RS)$ and let $\GG \defsym \WDG(\RS)$. Note that $\PG{\GG}$ is identical to $\GG$. We witness that the derivation $$\m{f}^\sharp
\rsrew{\PP} \m{c}(\m{g}^\sharp, \m{h}^\sharp)
\rsrew{\PP} \m{c}(\m{d}, \m{h}^\sharp)
\rsrew{\PP} \m{c}(\m{d}, \m{e}) \tkom$$ is based neither on the path $(\{1\},\{2\})$, nor on the path $(\{1\},\{3\})$.
Lemma \[l:14\] shows that we can reorder a given derivation $A$ that is based on a sequence of nodes that would in principle form a path in the congruence graph $\PG{\GG}$. The next lemma shows that we can guarantee that any derivation is based on a sequence of paths.
\[l:21\] Let $s \in \TBC$ and let $A \colon {s} \rsparenisrew{\PP/\UU(\PP)} {t}$ be a derivation based on $(\PP_1,\ldots,\PP_k, \QQ_1,\ldots,\QQ_\ell)$, such that $(\PP_1,\ldots,\PP_k)$ and $(\QQ_1,\ldots,\QQ_\ell)$ form two disjoint paths in $\PG{\GG}$. Then there exists a derivation $B \colon {s} \rsparenisrew{\PP/\UU(\PP)} {t}$ based on the sequence of nodes $(\QQ_1,\ldots,\QQ_\ell,\PP_1,\ldots,\PP_k)$ such that $\card{A} = \card{B}$.
The lemma follows by an adaptation of the technique in the proof of Lemma \[l:14\].
Lemma \[l:21\] shows that the maximal length of any derivation only differs from the maximal length of any derivation based on a path by a linear factor, depending on the size of the congruence graph $\PG{\GG}$. We arrive at the main result of this section. Recall the definition of $\Path(\cdot)$ on page .
\[t:dg\] Let $\RS$ be a TRS and $\PP$ the set of weak (innermost) dependency pairs. Then, $\dheight(t, \rsparenirew{\RS}) = \bO(\Path(t))$ holds for all $t \in \TBS$.
Let $a$ denote the maximum arity of compound symbols and $K$ the number of SCCs in the weak (innermost) dependency graph $\GG$. We show $\dheight(s, \rsparenirew{\RS}) \leqslant a^{K} \cdot \Path(s)$ holds for all $s \in \TBS$. Theorem \[t:dp:usable\] yields that ${\dheight(s,\rsparenirew{\RS})} = {\dheight(s,\rew)}$, where $\rew$ either denotes $\rsrew{\PP \cup \UU(\PP)}$ or $\irew{\PP \cup \UU(\PP)}$.
Let $A\colon {s} \rss {t}$ be a derivation over $\PP \cup \UU(\PP)$ such that $s \in \TBS$. Then $A$ is based on a sequence of nodes in the congruence graph $\PG{\GG}$ such that there exists a maximal (with respect to subset inclusion) component of $\PG{\GG}$ that includes all these nodes. Let $T$ denote this maximal component. $T$ forms a directed acyclic graph. In order to (over-)estimate the number of nodes in this graph we can assume without loss of generality that $T$ is a tree with root in $\Src(\PG{\GG})$. Note that $K$ bounds the height of this tree. Thus the number of nodes in the component $T$ is less than $$\frac{a^{K} - 1}{a-1} \leqslant a^K \tpkt$$ Due to Lemma \[l:21\] the derivation $A$ is conceivable as a sequence of subderivations based on paths in $\PG{\GG}$. As the number of nodes in $T$ is bounded from above by $a^K$, there exist at most $a^K$ different paths through $T$.
Hence in order to estimate $\card{A}$, it suffices to estimate the length of any subderivation $B$ of $A$, based on a specific path. Let $(\PP_1,\ldots,\PP_k)$ be a path in $\PG{\GG}$ such that $\PP_1 \in \Src(\PG{\GG})$ and let $B \colon u \rew^n v$ denote a derivation based on this path. Let $\QQ \defsym \bigcup_{i=1}^k \PP_i$. By Definition \[d:pathbased\] and the definition of usable rules, the derivation $B$ can be written as: $$u=u_0 \rsparenirew{\PP_1/\UU(\QQ)}
u_{n_1} \rsparenirew{\PP_2/{\UU(\QQ)}}
\cdots \rsparenirew{\PP_k/{\UU(\QQ)}} u_n = v \tkom$$ where $u \in \TBS$ and each $u_i \in \TBC$. Hence $B$ is contained in $u \rsparenisrew{\QQ \cup \UU(\QQ)} v$ and thus $\card{B} \leqslant \Path(u)$ by definition.
As the length of a derivation $B$ based on a specific path can be estimated by $\Path(s)$, we obtain that the length of an arbitrary derivation is less than $a^K \cdot \Path(s)$. This completes the proof of the theorem.
\[c:dg\] Let $\RS$ be a TRS and let $\GG$ denote the weak (innermost) dependency graph. For every path $\bar{P} \defsym (\PP_1,\ldots,\PP_k)$ in $\PG{\GG}$ such that $\PP_1 \in \Src(\PG{\GG})$, we set $\QQ \defsym \bigcup_{i=1}^k \PP_i$ and suppose
1. there exists a $\URM{\QQ \cup \UU(\QQ)}$-monotone ($\IURM{\QQ \cup \UU(\QQ)}$-monotone) and adequate RMI $\A_{\bar{P}}$ that admits the weight gap $\EWG(\A_{\bar{P}},\QQ)$ on $\TBS$ and is compatible with the usable rules $\UU(\QQ)$,
2. there exists a $\URM{\QQ \cup \UU(\QQ)}$-monotone ($\IURM{\QQ \cup \UU(\QQ)}$-monotone) RMI $\BB_{\bar{P}}$ such that $(\geqord{\BB_{\bar{P}}},\gord{\BB_{\bar{P}}})$ forms a complexity pair for $\PP_k/{\PP_1 \cup \cdots \cup \PP_{k-1} \cup \UU(\QQ)}$.
Then the (innermost) runtime complexity of a TRS $\RS$ is polynomial. Here the degree of the polynomial is given by the maximum of the degrees of the used RMIs.
We restrict our attention to weak dependency pairs and full rewriting. First observe that the assumptions imply that any basic term $t \in \TB$ is terminating with respect to $\RS$. Let $\PP$ be the set of weak dependency pairs. (Note that $\PP \supseteq \QQ$.) By Lemma \[l:5\] any infinite derivation with respect to $\RS$ starting in $t$ can be translated into an infinite derivation with respect to $\UU(\PP) \cup \PP$. Moreover, as the number of paths in $\PG{\GG}$ is finite, there exists a path $(\PP_1,\ldots,\PP_k)$ in $\PG{\GG}$ and an infinite rewrite sequence based on this path. This is a contradiction. Hence we can employ Theorem \[t:wgp\] in the following.
Let $(\PP_1,\ldots,\PP_k)$ be an arbitrary, but fixed path in the congruence graph $\PG{\GG}$, let $\QQ = \bigcup_{i=1}^k \PP_i$, and let $d$ denote the maximum of the degrees of the used RMIs. Due to Theorem \[t:wgp\] there exists $c \in \N$ such that: $$\dheight(t^\sharp,\rsrew{\QQ \cup \UU(\QQ)}) \leqslant
(1 + \EWG(\A_{\bar{P}},\QQ)) \cdot \dheight(t^\sharp,\rsrew{\QQ/\UU(\QQ)}) +
c \cdot \size{t}^d \tpkt$$ Due to Theorem \[t:dg\] it suffices to consider a derivation $A$ based on the path $(\PP_1,\ldots,\PP_k)$. Suppose $A \colon s \rsnrew{\QQ/\UU(\QQ)}{n} t$. Then $A$ can be represented as follows: $$s=s_0 \rsnrew{\PP_1/\UU(\PP_1)}{n_1}
s_{n_1} \rsnrew{\PP_2/{\UU(\PP_1) \cup \UU(\PP_2)}}{n_2}
\cdots \rsnrew{\PP_k/{\UU(\PP_1) \cup \cdots \cup \UU(\PP_k)}}{n_k} s_n = t \tkom$$ such that $n = \sum_{i=1}^k n_i$. It is sufficient to bound each $n_i$ from the above. Fix $i \in \{1,\dots,k\}$. Consider the subderivation $$A'\colon
s=s_0 \rsnrew{\PP_1/\UU(\PP_1)}{n_1} s_{n_1} \cdots
\rsnrew{\PP_i/{\UU(\PP_1) \cup \cdots \cup \UU(\PP_i)}}{n_i} s_{n_i}
\tpkt$$ Then $A'$ is contained in $A'' \colon
s \rssrew{\PP_1 \cup \cdots \cup \PP_{i-1} \cup
\UU(\PP_1) \cup \cdots \cup \UU(\PP_i)} \cdot \rsnrew{\PP_i/{\UU(\PP_1) \cup \cdots \cup \UU(\PP_i)}}{n_i} s_{n_i}$. Let $\hat{P_i} \defsym (\PP_1, \ldots, \PP_{i})$. By assumption there exists a $\mu$-monotone complexity pair $(\geqord{\BB_{\hat{P_i}}},\gord{\BB_{\hat{P_i}}})$ such that $\PP_1 \cup \cdots \cup \PP_{i-1} \cup \UU(\PP_1 \cup \cdots \cup \PP_i)
\subseteq {\geqord{\BB_{\hat{P_i}}}}$ and $\PP_i \subseteq {\gord{\BB_{\hat{P_i}}}}$. Hence, we obtain $n_i \leqslant (\eval{\alpha_0}{\BB_{\hat{P_i}}}(s))_1$ and in sum ${n} \leqslant {k \cdot \size{s}^d}$. Finally, defining the polynomial $p$ as follows: $$p(x) \defsym (1 + \EWG(\A_{\bar{P}},\QQ)) \cdot k \cdot x^d + c \cdot x^d \tkom$$ we conclude $\dheight(t^\sharp, \rsrew{\QQ \cup \UU(\QQ)}) \leqslant p(\size{t})$. Note that the polynomial $p$ depends only on the algebras $\A_{\bar{P}}$ and $\BB_{\hat{P_1}}$, …, $\BB_{\hat{P_k}}$.
As the path $(\PP_1,\ldots,\PP_k)$ was chosen arbitrarily, there exists a polynomial $q$, depending only on the employed RMIs such that $\Path(t) \leqslant q(\size{t})$. Thus the corollary follows due to Theorem \[t:dg\].
Let $t$ be an arbitrary term. By definition, the set of paths considered in $\Path(t)$ may contain $2^{\bO(n)}$-many paths, where $n$ denotes the number of nodes in $\PG{\GG}$. However, it suffices to restrict the definition on page to *maximal* paths. With this refinement $\Path(t)$ takes at most $n^2$ paths into account. We employ this fact in the implementation of the WDG method.
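As an illustration of the path sets involved, the following Python sketch (the function name and the graph encoding are ours, not taken from any tool) enumerates all source-rooted paths of a DAG by depth-first search; the restriction to maximal paths simply keeps only those paths that end in a sink. Applied to an encoding of the congruence graph of the gcd example discussed below, it reproduces the eight listed paths.

```python
def all_paths(graph, sources):
    """Enumerate every path of a DAG that starts at a source node (DFS)."""
    paths = []
    def extend(path):
        paths.append(tuple(path))
        for succ in graph.get(path[-1], []):
            extend(path + [succ])
    for src in sources:
        extend([src])
    return paths

# Congruence graph of the gcd example, nodes written as the pairs they contain:
G = {"{13}": ["{11}", "{12}"], "{15}": ["{14}"], "{17}": [], "{18,19,20}": ["{16}"]}
paths = all_paths(G, ["{13}", "{15}", "{17}", "{18,19,20}"])
print(len(paths))  # 8, matching the paths listed for the gcd example
```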
For $\PG{\WDG(\RSgcd)}$ the above set consists of 8 paths: $(\{13\})$, $(\{13\},\{11\})$, $(\{13\},\{12\})$, $(\{15\})$, $(\{15\},\{14\})$, $(\{17\})$, $(\{18,19,20\})$, and $(\{18,19,20\},\{16\})$. In the following we only consider the last three paths, since all other paths are similarly handled.
- Consider $(\{17\})$. Note $\UU(\{17\}) = \varnothing$. By taking an arbitrary SLI $\A$ and the linear restricted interpretation $\BB$ with $\m{gcd}^\sharp_\BB(x,y) = x$ and $\m{s}_\BB(x) = x + 1$, we have $\varnothing \subseteq {>_\A}$, $\varnothing \subseteq {\geqslant_\BB}$, and $\{17\} \subseteq {>_\BB}$.
- Consider $(\{18,19,20\})$. Note $\UU(\{18,19,20\}) = \{1,\ldots,5\}$. The following RMI $\A$ is adequate for $(\{18,19,20\})$ and strictly monotone on $\URM{\PP \cup \UU(\PP)}$. The presentation of $\A$ is succinct as only the signature of the usable rules $\{1,\ldots,5\}$ is of interest. $$\begin{aligned}
\m{true}_\A &= \m{false}_\A = \m{0}_\A = \vec{0}
&
\ms_\A(\vec{x}) & =
\begin{pmatrix}
1 & 1 \\
0 & 1
\end{pmatrix}
\vec x
+
\begin{pmatrix}
3\\
1
\end{pmatrix}
\\
{\leqslant}_\A(\vec{x}, \vec{y})
&=
\begin{pmatrix}
0 & 1\\
0 & 0
\end{pmatrix}
\vec{y} +
\begin{pmatrix}
1\\
3
\end{pmatrix}
&
{-}_\A(\vec{x},\vec{y} )
&= \vec{x} +
\begin{pmatrix}
2\\
3
\end{pmatrix}
\tpkt\end{aligned}$$ Further, consider the RMI $\BB$ giving rise to the complexity pair $({\geqord{\BB}},{\gord{\BB}})$. $$\begin{aligned}
\m{0}_\BB &=
\makebox[0mm][l]{$\m{true}_\BB = \m{false}_\BB =
\m{\leqslant}_\BB (\vec x, \vec y) = \vec{0}$}
\\
\m{s}_\BB(\vec x) &=
\begin{pmatrix}
1 & 3\\
0 & 0
\end{pmatrix}
\vec{x} +
\begin{pmatrix}
3 \\
0
\end{pmatrix}
&&
{-}_\BB(\vec{x}, \vec{y})
& =
\begin{pmatrix}
1 & 0\\
2 & 2
\end{pmatrix}
\vec{x} +
\begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix}
\vec{y}
\\
\m{if_{gcd}}^\sharp_\BB(x,y,z) & =
\begin{pmatrix}
3 & 0\\
0 & 0
\end{pmatrix}
\vec y +
\begin{pmatrix}
3 & 0\\
0 & 0
\end{pmatrix}
\vec z
\\
\m{gcd}^\sharp_\BB(x,y) & =
\makebox[0mm][l]{$
\begin{pmatrix}
3 & 0\\
0 & 0
\end{pmatrix}
\vec x +
\begin{pmatrix}
3 & 0\\
0 & 0
\end{pmatrix}
\vec y +
\begin{pmatrix}
2\\
0
\end{pmatrix}
$
\tpkt
}\end{aligned}$$ We obtain $\{1,\ldots,5\} \subseteq {\gord{\A}}$, $\{1,\ldots,5\} \subseteq {\geqord{\BB}}$, and $\{18,19,20\} \subseteq {\gord{\BB}}$.
- Consider $(\{18,19,20\},\{16\})$. Note $\UU(\{16\}) = \varnothing$. By taking the same $\A$ and also $\BB$ as above, we have $\{1,\ldots,5\} \subseteq {\gord{\A}}$, $\{1,\ldots,5,18,19,20\} \subseteq {\geqord{\BB}}$, and $\{16\} \subseteq {\gord{\BB}}$.
Thus, all path constraints are handled by suitably defined RMIs of dimension 2. Hence, the runtime complexity function of $\RSgcd$ is at most quadratic, which is unfortunately not optimal, as $\Rc{\RSgcd}$ is linear.
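To see how a $2$-dimensional RMI certifies such a bound, one can evaluate numerals under the interpretation $\A$ given above. The following Python sketch (a hand-coded evaluation for illustration, not part of any tool) applies $\ms_\A$ repeatedly to $\m{0}_\A = \vec{0}$; the first component of the resulting vector, which is the one that bounds derivation lengths, grows quadratically in the size of the numeral.

```python
def s_A(x):
    # s_A(x) = [[1,1],[0,1]] x + (3,1), the interpretation of s in the RMI A above
    return (x[0] + x[1] + 3, x[1] + 1)

v = (0, 0)                 # 0_A = true_A = false_A = (0,0)
firsts = []
for n in range(1, 6):      # evaluate s^n(0) for n = 1..5
    v = s_A(v)
    firsts.append(v[0])
print(firsts)  # [3, 7, 12, 18, 25] -- quadratic growth in n
```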
Corollary \[c:dg\] is more powerful than Corollary \[c:main\]. We illustrate it with a small example.
Consider the TRS $\RS$ $$\begin{aligned}
\m{f}(\m{a},\m{s}(x),y) & \to \m{f}(\m{a},x,\m{s}(y)) &
\m{f}(\m{b},x,\m{s}(y)) & \to \m{f}(\m{b},\m{s}(x),y)
\tpkt\end{aligned}$$ Its weak dependency pairs $\WDP(\RS)$ are $$\begin{aligned}
1\colon~ \m{f}^\sharp(\m{a},\m{s}(x),y) & \to \m{f}^\sharp(\m{a},x,\m{s}(y)) &
2\colon~ \m{f}^\sharp(\m{b},x,\m{s}(y)) & \to \m{f}^\sharp(\m{b},\m{s}(x),y)
\tpkt\end{aligned}$$ The corresponding congruence graph consists of the two isolated nodes $\{1\}$ and $\{2\}$. It is not difficult to find suitable $1$-dimensional RMIs for the nodes, and therefore $\Rc{\RS}(n) = \bO(n)$ is concluded. On the other hand, it can be verified that the linear runtime complexity cannot be obtained by Corollary \[c:main\] with a $1$-dimensional RMI.
We conclude this section with a brief comparison of the path analysis developed here and the use of the dependency graph refinement in termination analysis. First we recall a theorem on the dependency graph refinement in conjunction with usable rules and innermost rewriting (see [@GAO:2002], but also [@HirokawaMiddeldorp:2005]). Similar results hold in the context of full rewriting, see [@GTSF06; @HirokawaMiddeldorp:2007].
\[t:GAO02\] A TRS $\RS$ is innermost terminating if for every maximal cycle $\CC$ in the dependency graph $\DG(\RS)$ there exists a reduction pair $(\gtrsim,\succ)$ such that ${\UU(\CC)} \subseteq {\gtrsim}$ and ${\CC} \subseteq {\succ}$.
The following example shows that in the context of complexity analysis it is *not* sufficient to consider each cycle individually.
\[ex:exp\] Consider the TRS $\RSexp$ introduced in Example \[ex:7\]. $$\begin{aligned}
\m{exp}(\mN) & \to \ms(\mN) & \m{d}(\mN) & \to \mN \\
\m{exp}(\m{r}(x)) & \to \m{d}(\m{exp}(x))
& \m{d}(\ms(x)) & \to \ms(\ms(\m{d}(x)))
\tpkt\end{aligned}$$ Recall that the (innermost) runtime complexity of $\RSexp$ is exponential. Let $\PP$ denote the (standard) dependency pairs with respect to $\RSexp$. Then $\PP$ consists of three pairs: $1\colon \m{exp}^\sharp(\m{r}(x)) \to \m{d}^\sharp(\m{exp}(x))$, $2\colon \m{exp}^\sharp(\m{r}(x)) \to \m{exp}^\sharp(x)$, and $3\colon \m{d}^\sharp(\ms(x)) \to \m{d}^\sharp(x)$. Hence the dependency graph $\DG(\RSexp)$ contains two maximal cycles: $\{2\}$ and $\{3\}$.
We define two reduction pairs $(\geqord{\A},\gord{\A})$ and $(\geqord{\BB},\gord{\BB})$ such that the conditions of the theorem are fulfilled. Let $\A$ and $\BB$ be SLIs such that $\m{exp}^\sharp_{\A}(x) = x$, $\m{r}_{\A}(x) = x+1$ and $\m{d}^\sharp_{\BB}(x) = x$, $\ms_{\BB}(x) = x+1$. Hence for any term $t \in \TB$, we have that the derivation heights $\dheight(t^\sharp,\rsirew{\{2\}/\UU(\PP)})$ and $\dheight(t^\sharp,\rsirew{\{3\}/\UU(\PP)})$ are linear in $\size{t}$, while $\dheight(t,\rsirew{\RS})$ is (at least) exponential in $\size{t}$.
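The exponential derivation height can also be checked by a direct step count. Reading $\mN$ as the numeral $\m{0}$ and abbreviating $\ms^v(\m{0})$ by the integer $v$, the following Python sketch (helper names are ours) mirrors the innermost normalisation of $t_n = \m{exp}(\m{r}^n(\m{0}))$ and counts the rewrite steps; one obtains $2^n + 2n$ steps.

```python
def d_steps(v):
    # d(s^v(0)) ->* s^(2v)(0): v uses of d(s(x)) -> s(s(d(x))) plus one of d(0) -> 0
    return 2 * v, v + 1

def exp_steps(n):
    # normalise exp(r^n(0)) innermost; returns (value v of the normal form s^v(0), steps)
    if n == 0:
        return 1, 1                      # exp(0) -> s(0)
    v, s = exp_steps(n - 1)              # first normalise exp(r^(n-1)(0))
    dv, ds = d_steps(v)                  # then double the result with d
    return dv, 1 + s + ds                # plus one root step exp(r(x)) -> d(exp(x))

print([exp_steps(n)[1] for n in range(6)])  # [1, 4, 8, 14, 24, 42] = 2^n + 2n
```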
Observe that the problem exemplified by Example \[ex:exp\] cannot be circumvented by replacing the dependency graph employed in Theorem \[t:GAO02\] with weak (innermost) dependency graphs. The exponential derivation height of terms $t_n$ in Example \[ex:exp\] is not controlled by the cycles $\{2\}$ or $\{3\}$, but achieved through the non-cyclic pair $1$ and its usable rules.
Example \[ex:exp\] shows an exponential speed-up between the maximal number of dependency pair steps within a cycle in the dependency graph and the runtime complexity of the initial TRS. In the context of derivational complexity this speed-up may even increase to a primitive recursive function, cf. [@MS:2010].
While Example \[ex:exp\] shows that the usable rules need to be taken into account fully for any complexity analysis, it is perhaps tempting to think that it should suffice to demand that at least one weak (innermost) dependency pair in each cycle decreases strictly. However this intuition is deceiving as shown by the next example.
Consider the TRS $\RS$ consisting of the rules $\m{f}(\m{s}(x),\m{0}) \to \m{f}(x, \m{s}(x))$ and $\m{f}(x,\m{s}(y)) \to \m{f}(x, y)$. $\WDP(\RS)$ consists of $1\colon \m{f}^\sharp(\m{s}(x),\m{0}) \to \m{f}^\sharp(x, \m{s}(x))$ and $2\colon \m{f}^\sharp(x,\m{s}(y)) \to \m{f}^\sharp(x,y)$, and the weak dependency graph $\WDG(\RS)$ contains two cycles $\{1,2\}$ and $\{2\}$. There are two linear restricted interpretations $\A$ and $\BB$ such that $\{1,2\} \subseteq {\geqslant_\A} \cup {>_\A}$, $\{1\} \subseteq {>_\A}$, and $\{2\} \subseteq {>_\BB}$. Here, however, we must not conclude linear runtime complexity, because the runtime complexity of $\RS$ is at least quadratic.
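The quadratic lower bound can be made concrete by counting weak dependency pair steps. Abbreviating numerals $\ms^a(\m{0})$ by integers, the following Python sketch (our own encoding) runs the pairs $1$ and $2$ starting from $\m{f}^\sharp(\ms^n(\m{0}),\m{0})$; the step count grows like $n(n+3)/2$, i.e. quadratically.

```python
def dp_steps(n):
    # count steps of the weak dependency pairs 1 and 2 from f#(s^n(0), 0)
    steps, a, b = 0, n, 0
    while a > 0 or b > 0:
        if b > 0:            # pair 2: f#(x, s(y)) -> f#(x, y)
            b -= 1
        else:                # pair 1: f#(s(x), 0) -> f#(x, s(x))
            a, b = a - 1, a
        steps += 1
    return steps

print([dp_steps(n) for n in range(5)])  # [0, 2, 5, 9, 14] -- quadratic growth
```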
Experiments {#Experiments}
===========
All described techniques have been incorporated into the *Tyrolean Complexity Tool* $\TCT$, an open source complexity analyser[^7]. The testbed is based on version 8.0.2 of the *Termination Problems Database* (*TPDB* for short). We consider TRSs without theory annotation, where the runtime complexity analysis is non-trivial, that is the set of basic terms is infinite. This testbed comprises 1695 TRSs. All experiments were conducted on a machine that is identical to the official competition server ($8$ AMD Opteron${}^\text{\textregistered}$ 885 dual-core processors with 2.8GHz, $8\text{x}8$ GB memory). As timeout we use 60 seconds. The complete experimental data can be found at <http://cl-informatik.uibk.ac.at/software/tct/experiments>, where also the testbed employed is detailed.
Table \[tab:1\] summarises the experimental results of the here presented techniques for full runtime complexity analysis in a restricted setting. The tests are based on the use of one- and two-dimensional RMIs with coefficients over $\{0,1,\ldots, 7\}$ as direct technique (compare Theorem \[t:rmi\]) as well as in combination with the WDP method (compare Corollaries \[c:dp:usable\] and \[c:main\]) and the WDG method (compare Corollary \[c:dg\]). Weak dependency graphs are estimated by the $\TCAP$-based technique ([@GTS05]). The tests indicate the power of the transformation techniques introduced. Note that for linear and quadratic runtime complexity the latter techniques are more powerful than the direct approach. Furthermore note that the WDG method provides overall better bounds than the WDP method.
\begin{tabular}{lrrrrrr}
\hline
result & direct(1) & direct(2) & WDP(1) & WDP(2) & WDG(1) & WDG(2)\\
\hline
$\OO(1)$ & 16 & 18 & 0 & 0 & 10 & 10\\
$\OO(n)$ & 106 & 113 & 123 & 70 & 130 & 67\\
$\OO(n^2)$ & 106 & 148 & 123 & 157 & 130 & 158\\
timeout (60s) & 20 & 88 & 55 & 127 & 103 & 261\\
\hline
\end{tabular}
However, if we consider RMIs up to dimension 3 the picture becomes less clear, cf. Table \[tab:2\]. Again we compare the direct approach, the WDP and WDG method and restrict to coefficients over $\{0,1,\ldots, 7\}$. Consider for example the test results for cubic runtime complexity with respect to full rewriting. While the transformation techniques are still more powerful than the direct approach, the difference is less significant than in Table \[tab:1\]. On the one hand this is due to the fact that RMIs employing matrices of dimension $k$ may have a degree strictly smaller than $k$, compare Theorem \[t:rmi\]; on the other hand note the increase in timeouts for the more advanced techniques.
Moreover, note the seemingly strange behaviour of the WDG method for innermost rewriting: already for quadratic runtime the WDP method performs better if we only consider the number of yes-instances. This seems to contradict the fact that the WDG method is in theory more powerful than the WDP method. However, the explanation is simple: first, the sets of yes-instances are incomparable, and second, the more advanced technique requires more computation power. If we used a (much) longer timeout, the set of yes-instances for WDP would become a *proper* subset of the set of yes-instances for WDG. For example, the WDG method can prove cubic runtime complexity of the TRS `AProVE_04/Liveness 6.2` from the TPDB, while the WDP method fails to establish this bound.
\begin{tabular}{lrrrrrr}
\hline
& \multicolumn{3}{c}{full} & \multicolumn{3}{c}{innermost}\\
result & direct & WDP & WDG & direct & WDP & WDG\\
\hline
$\OO(1)$ & 18 & 0 & 10 & 20 & 0 & 10\\
$\OO(n)$ & 135 & 141 & 140 & 135 & 142 & 145\\
$\OO(n^2)$ & 161 & 163 & 162 & 173 & 181 & 172\\
$\OO(n^3)$ & 163 & 167 & 169 & 179 & 185 & 178\\
timeout (60s) & 310 & 459 & 715 & 311 & 458 & 718\\
\hline
\end{tabular}
In order to assess the advances of this paper in contrast to the conference versions (see [@HM:2008; @HM:2008b]), we present in Table \[tab:3\] a comparison between RMIs with/without the use of usable arguments and a comparison of the WDP or WDG method with/without the use of the extended weight gap principle. Again we restrict our attention to full rewriting, as the case for innermost rewriting provides a similar picture (see <http://cl-informatik.uibk.ac.at/software/tct/experiments> for the full data).
\begin{tabular}{lrrrrrr}
\hline
result & direct($-$) & direct($+$) & WDP($-$) & WDP($+$) & WDG($-$) & WDG($+$)\\
\hline
$\OO(1)$ & 4 & 18 & 5 & 0 & 10 & 10\\
$\OO(n)$ & 105 & 135 & 102 & 141 & 105 & 140\\
$\OO(n^2)$ & 127 & 161 & 118 & 163 & 119 & 162\\
$\OO(n^3)$ & 130 & 163 & 120 & 167 & 122 & 169\\
timeout (60s) & 306 & 310 & 505 & 459 & 655 & 715\\
\hline
\end{tabular}
Finally, in Table \[tab:4\] we present the overall power obtained for the automated runtime complexity analysis. Here we test the version of $\TCT$ that ran in the international annual termination competition (TERMCOMP)[^8] in 2010 in comparison to the most recent version of $\TCT$, which incorporates all techniques developed in this paper. In addition we compare with a recent version of $\CaT$.[^9]
\begin{tabular}{lrrrrrr}
\hline
& \multicolumn{3}{c}{full} & \multicolumn{3}{c}{innermost}\\
result & $\TCT$ (old) & $\TCT$ (new) & $\CaT$ & $\TCT$ (old) & $\TCT$ (new) & $\CaT$\\
\hline
$\OO(1)$ & 10 & 3 & 0 & 10 & 3 & 0\\
$\OO(n)$ & 393 & 486 & 439 & 401 & 488 & 439\\
$\OO(n^2)$ & 394 & 493 & 452 & 403 & 502 & 452\\
$\OO(n^3)$ & 397 & 495 & 453 & 407 & 505 & 453\\
$\OO(n^4)$ & 397 & 495 & 454 & 407 & 505 & 454\\
\hline
\end{tabular}
The results in Table \[tab:4\] clearly show the increase in power of $\TCT$, which is due to the fact that the techniques developed in this paper have been incorporated.
Conclusion {#Conclusion}
==========
In this article we are concerned with automated complexity analysis of TRSs. More precisely, we establish new and powerful results that allow the assessment of polynomial runtime complexity of TRSs fully automatically. We established the following results: Adapting techniques from context-sensitive rewriting, we introduced *usable replacement maps* that increase the applicability of direct methods. Furthermore, we established the *weak dependency pair method* as a suitable analog of the dependency pair method in the context of (runtime) complexity analysis. Refinements of this method have been presented by the use of the *weight gap principle* and *weak dependency graphs*. In the experiments of Section \[Experiments\] we assessed the viability of these techniques. It is perhaps worth mentioning that our motivating examples (Examples \[ex:1\], \[ex:8\], and \[ex:2\]) could not be handled by any known technique prior to our results.
To conclude, we briefly mention related work. Based on earlier work by Arai and the second author (see [@fsttcs:2005]), Avanzini and the second author introduced a restriction of the recursive path order (RPO) that induces polynomial innermost runtime complexity (see [@AM:2008; @AM:2009]). With respect to derivational complexity, Zankl and Korp generalised a simple variant of our weight gap principle to achieve a modular derivational complexity analysis (see [@ZK:2010; @ZK:2010c]). Neurauter et al. refined in [@NZM:2010] matrix interpretations in the context of derivational complexity (see also [@MSW:2008]). Furthermore, Waldmann studied in [@W:2010] the use of weighted automata in this setting. Based on [@HM:2008; @HM:2008b] Noschinski et al. incorporated a variant of weak dependency pairs (not yet published) into the termination prover AProVE.[^10] Currently this method is restricted to innermost runtime complexity, but allows for a complexity analysis in the spirit of the dependency pair framework. Preliminary evidence suggests that this technique is orthogonal to the methods presented here. While all mentioned results are concerned with *polynomial* upper bounds on the derivational or runtime complexity of a rewrite system, Schnabl and the second author provided in [@MS:2009; @MS:2010; @MS:2011] an analysis of the dependency pair method and its framework from a complexity point of view. The upshot of this work is that the dependency pair framework may induce multiple recursive derivational complexity, even if only simple processors are considered.
Investigations into the complexity of TRSs are strongly influenced by research in the field of ICC, which contributed the use of restricted forms of polynomial interpretations to estimate the complexity, cf. [@BCMT:2001]. Related results have also been provided in the study of term rewriting characterisations of complexity classes (compare [@CichonWeiermann:1997]). Inspired by Bellantoni and Cook’s recursion-theoretic characterisation of the class of all polynomial time computable functions in [@BellantoniCook:1992], Marion [@Marion:2003] defined LMPO, a variant of RPO whose compatibility with a TRS implies that the functions computed by the TRS are polytime computable (compare [@CL:1992]). A remarkable milestone along this line is the quasi-interpretation method by Bonfante et al. [@BMM:2009:tcs]. The method makes use of standard termination methods in conjunction with special polynomial interpretations to characterise the class of polytime computable functions. In conjunction with *sup-interpretations* this method is even capable of making use of *standard* dependency pairs (see [@MP:2009]).
In principle we cannot directly compare our result on *polynomial* runtime complexity of TRSs with the results provided in the setting of ICC: the notion of complexity studied is different. However, due to a recent result by Avanzini and the second author (see [@AM:2010], but compare also [@LM:2009; @LM:2009b]) we know that the runtime complexity of a TRS is an *invariant* cost model. Whenever we have polynomial runtime complexity of a TRS $\RS$, the functions computed by this $\RS$ can be implemented on a Turing machine that runs in polynomial time. In this context, our results provide automated techniques that can be (almost directly) employed in the context of ICC. The qualification only refers to the fact that our results are presented for an abstract form of programs, viz. rewrite systems.
[10]{} url \#1[`#1`]{}urlprefixhref \#1\#2[\#2]{} \#1[\#1]{}
C. Choppy, S. Kaplan, M. Soria, Complexity analysis of term-rewriting systems, Theor. Comput. Sci. 67 (2–3) (1989) 261–282.
D. Hofbauer, C. Lautemann, Termination proofs and the length of derivations, in: Proc. 3rd International Conference on Rewriting Techniques and Applications, no. 355 in LNCS, Springer Verlag, 1989, pp. 167–177.
E.-A. Cichon, P. Lescanne, Polynomial interpretations and the complexity of algorithms, in: Proc. 11th International Conference on Automated Deduction, Vol. 607 of LNCS, 1992, pp. 139–147.
N. Hirokawa, G. Moser, Automated complexity analysis based on the dependency pair method, in: Proc. 4th International Joint Conference on Automated Reasoning, no. 5195 in LNAI, Springer Verlag, 2008, pp. 364–380.
P. Baillot, J.-Y. Marion, S. R. D. Rocca, Guest editorial: Special issue on implicit computational complexity, ACM Trans. Comput. Log. 10 (4).
T. Arts, J. Giesl, Termination of term rewriting using dependency pairs, Theor. Comput. Sci. 236 (2000) 133–178.
N. Hirokawa, G. Moser, Complexity, graphs, and the dependency pair method, in: Proc. 15th International Conference on Logic for Programming Artificial Intelligence and Reasoning, no. 5330 in LNCS, Springer Verlag, 2008, pp. 652–666.
F. Baader, T. Nipkow, Term [R]{}ewriting and [A]{}ll [T]{}hat, Cambridge University Press, 1998.
Te[R]{}e[S]{}e, Term Rewriting Systems, Vol. 55 of Cambridge Tracks in Theoretical Computer Science, Cambridge University Press, 2003.
A. Geser, Relative termination, Ph.D. thesis, Universit[ä]{}t Passau (1990).
R. Thiemann, The [DP]{} framework for proving termination of term rewriting, Ph.D. thesis, University of Aachen, Department of Computer Science (2007).
J. Endrullis, J. Waldmann, H. Zantema, Matrix interpretations for proving termination of term rewriting, J. Automated Reasoning 40 (3) (2008) 195–220.
D. Hofbauer, J. Waldmann, Termination of string rewriting with matrix interpretations, in: Proc. 17th International Conference on Rewriting Techniques and Applications, Vol. 4098 of LNCS, 2006, pp. 328–342.
T. Arts, J. Giesl, A collection of examples for termination of term rewriting using dependency pairs, Tech. Rep. AIB-2001-09, RWTH Aachen (2001).
M. Avanzini, G. Moser, Dependency pairs and polynomial path orders, in: Proc. 20th International Conference on Rewriting Techniques and Applications, Vol. 5595 of LNCS, 2009, pp. 48–62.
F. Neurauter, H. Zankl, A. Middeldorp, Revisiting matrix interpretations for polynomial derivational complexity of term rewriting, in: Proc. 17th International Conference on Logic for Programming Artificial Intelligence and Reasoning, Vol. 6397 of LNCS (ARCoSS), 2010, pp. 550–564.
J. Waldmann, Polynomially bounded matrix interpretations, in: Proc. 21st International Conference on Rewriting Techniques and Applications, Vol. 6 of LIPIcs, 2010, pp. 357–372.
G. Bonfante, A. Cichon, J.-Y. Marion, H. Touzet, Algorithms with polynomial interpretation termination proof, J. Funct. Program. 11 (1) (2001) 33–53.
M. L. Fernández, Relaxing monotonicity for innermost termination, Inform. Proc. Lett. 93 (1) (2005) 117–123.
J. Giesl, R. Thiemann, P. Schneider-Kamp, Proving and disproving termination of higher-order functions, in: Proc. 5th International Workshop on Frontiers of Combining Systems, 5th International Workshop, Vol. 3717 of LNAI, 2005, pp. 216–231.
J. Giesl, R. Thiemann, P. Schneider-Kamp, S. Falke, Mechanizing and improving dependency pairs, J. Automated Reasoning 37 (3) (2006) 155–203.
N. Hirokawa, A. Middeldorp, Tyrolean termination tool: Techniques and features, Inform. and Comput. 205 (2007) 474–511.
G. Moser, A. Schnabl, The derivational complexity induced by the dependency pair method, Logical Methods in Computer Science. Accepted for publication.
J. Giesl, T. Arts, E. Ohlebusch, Modular termination proofs for rewriting using dependency pairs, J. Symbolic Comput. 34 (2002) 21–58.
N. Hirokawa, A. Middeldorp, Automating the dependency pair method, Inform. and Comput. 199 (1,2) (2005) 172–199.
T. Arai, G. Moser, Proofs of termination of rewrite systems for polytime functions, in: Proc. 25th Conference on Foundations of Software Technology and Theoretical Computer Science, no. 3821 in LNCS, Springer Verlag, 2005, pp. 529–540.
M. Avanzini, G. Moser, Complexity analysis by rewriting, in: Proc. 9th International Symposium on Functional and Logic Programming, no. 4989 in LNCS, Springer Verlag, 2008, pp. 130–146.
H. Zankl, M. Korp, Modular complexity analysis via relative complexity, in: Proc. 21st International Conference on Rewriting Techniques and Applications, Vol. 6 of LIPIcs, 2010, pp. 385–400.
H. Zankl, M. Korp, Modular complexity analysis via relative complexity, Logical Methods in Computer Science. Submitted.
G. Moser, A. Schnabl, J. Waldmann, Complexity analysis of term rewriting based on matrix and context dependent interpretations, in: Proc. 28th Conference on Foundations of Software Technology and Theoretical Computer Science, LIPIcs, 2008, pp. 304–315.
G. Moser, A. Schnabl, The derivational complexity induced by the dependency pair method, in: Proc. 20th International Conference on Rewriting Techniques and Applications, Vol. 5595 of LNCS, 2009, pp. 255–269.
G. Moser, A. Schnabl, Termination proofs in the dependency pair framework may induce multiply recursive derivational complexities, in: Proc. 22nd International Conference on Rewriting Techniques and Applications, Vol. 10 of LIPIcs, 2011, pp. 235–250.
E.-A. Cichon, A. Weiermann, Term rewriting theory for the primitive recursive functions., Ann. Pure Appl. Logic 83 (3) (1997) 199–223.
S. Bellantoni, S. Cook, A new recursion-theoretic characterization of the polytime functions, Comput. Complexity 2 (2) (1992) 97–110.
J.-Y. Marion, Analysing the implicit complexity of programs, Inform. and Comput. 183 (2003) 2–18.
G. Bonfante, J.-Y. Marion, J.-Y. Moyen, Quasi-interpretations: A way to control resources, Theor. Comput. Sci. To appear.
J.-Y. Marion, R. P[é]{}choux, Sup-interpretations, a semantic method for static analysis of program resources, ACM Trans. Comput. Log. 10 (4).
M. Avanzini, G. Moser, Closing the gap between runtime complexity and polytime computability, in: Proc. 21st International Conference on Rewriting Techniques and Applications, Vol. 6 of LIPIcs, 2010, pp. 33–48.
U. [Dal Lago]{}, S. Martini, On constructor rewrite systems and the lambda-calculus, in: Proc. 36th ICALP, Vol. 5556 of LNCS, Springer Verlag, 2009, pp. 163–174.
U. [Dal Lago]{}, S. Martini, [D]{}erivational [C]{}omplexity is an [I]{}nvariant [C]{}ost [M]{}odel, in: Proc. 1st FOPARA, 2009.
[^1]: This research is partly supported by FWF (Austrian Science Fund) project P20133, the Grant-in-Aid for Young Scientists Nos. 20800022 and 22700009 of the Japan Society for the Promotion of Science, and Leading Project e-Society (MEXT of Japan), and STARC.
[^2]: <http://cl-informatik.uibk.ac.at/software/tct/>.
[^3]: This is Example 3.1 in Arts and Giesl’s collection of TRSs [@ArtsGiesl:2001].
[^4]: This example is due to Dieter Hofbauer and Andreas Schnabl.
[^5]: We use SCCs in the standard graph theoretic sense, while in the literature SCCs are sometimes defined as *maximal cycles* (e.g. [@GAO:2002; @HirokawaMiddeldorp:2005; @T07]). This alternative definition is of limited use in our context.
[^6]: This is Example 3.6a in Arts and Giesl’s collection of TRSs [@ArtsGiesl:2001].
[^7]: Available at <http://cl-informatik.uibk.ac.at/software/tct>.
[^8]: <http://termcomp.uibk.ac.at/termcomp/>.
[^9]: <http://cl-informatik.uibk.ac.at/software/cat/>.
[^10]: This novel version of AProVE (see <http://aprove.informatik.rwth-aachen.de/>) took part in the category for (innermost) runtime complexity at TERMCOMP in 2010.
---
author:
- 'Ammler-von Eiff, M.'
- 'Reiners, A.'
bibliography:
- 'diffrot.bib'
date: 'Received September 15, 1996; accepted March 16, 1997'
title: 'New measurements of rotation and differential rotation in A-F stars: Are there two populations of differentially rotating stars? [^1][^2]'
---
[The Sun displays differential rotation that is intimately connected to the solar dynamo and hence related to solar activity, the solar cycle, and the solar wind. Considering the detectability and habitability of planets around other stars, it is important to understand the role of differential rotation in other stars. ]{} [We present projected rotational velocities and new measurements of the rotational profile of some 180 nearby stars with spectral types A-F. The results are consolidated by a homogeneous compilation of basic stellar data from photometry and the identification of multiple stellar systems. New and previous measurements of rotation by line profile analysis are compiled and made available. ]{} [The overall broadening profile is derived by analysing the shapes of hundreds of spectral lines with the method of least-squares deconvolution, reducing spectral noise to a minimum. The effect of differential rotation on the broadening profile is best measured in inverse wavelength space by the first two zeros of its Fourier transform. ]{} [Projected rotational velocity $\vsini$ is measured for more than 110 of the sample stars. Rigid and differential rotation can be distinguished in 56 cases where $\vsini>12\,\kmps$. We detect differential rotation rates of $\frac{\delta\Omega}{\Omega}=5\,\%$ and more. Ten stars with significant differential rotation rates are identified. The line shapes of 43 stars are consistent with rigid rotation, even though differential rotation at very low rates might still be possible in these cases. The strongest amount of relative differential rotation (54%) detected by line profile analysis is found among F stars. ]{} [As of now, 33 differential rotators detected by line profile analysis have been confirmed. The frequency of differential rotators decreases towards high effective temperature and rapid rotation. 
There is evidence for two populations of differential rotators, one of rapidly rotating A stars at the granulation boundary with strong horizontal shear and one of mid- to late-F type stars with moderate rates of rotation and less shear. The gap in between can only partly be explained by an upper bound found for the horizontal shear of F stars. Apparently, the physical conditions change at early-F spectral types. The range of horizontal shear observed for mid-type F stars is reproduced by theoretical calculations while there seems to be a discrepancy in period dependence for late-F stars.]{}
Introduction
============
The Sun does not rotate like a rigid body. Instead, angular rotational velocity varies with latitude (latitudinal differential rotation) and with distance from the center (radial differential rotation). Differential rotation is thought to emerge from the interaction between the turbulent motions in the convective envelope and Coriolis forces which are due to rotation. It is believed to be an important ingredient of the widely accepted solar $\alpha$-$\Omega$ dynamo[^3]. Thus, differential rotation is closely connected to magnetic activity.
While latitudinal differential rotation can be observed directly on the solar surface, e.g. by tracing solar spots, radial differential rotation in the Sun is assessed indirectly by helioseismology . The amount of latitudinal surface differential rotation is expressed by angular velocity $\Omega$ as function of latitude $l$. The solar differential rotation is well described by the surface rotation law $$\label{eq:rotlaw}
{\Omega}(l)=\Omega_\mathrm{Equator}(1-\alpha\,{\sin}^2{l})$$ with the parameter $\alpha=0.2$ of relative differential rotation. While relative differential rotation $\alpha$ is the quantity measured in the present work, it is the horizontal shear ${\Delta\Omega}$ which connects the measurements to the description of stellar physics: $$\label{eq:shear}
{\Delta\Omega}=\Omega_{\mathrm{Equ}}-\Omega_{\mathrm{Pol}}=\alpha\Omega_{\mathrm{Equ}}$$
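The two relations above can be evaluated directly. The following Python sketch, which assumes a solar equatorial period of about 25 d purely for illustration, computes the solar horizontal shear from Eqs. (\[eq:rotlaw\]) and (\[eq:shear\]):

```python
import math

def omega(l_deg, omega_eq, alpha):
    """Angular velocity at latitude l for the rotation law
    Omega(l) = Omega_eq * (1 - alpha * sin^2 l)."""
    return omega_eq * (1.0 - alpha * math.sin(math.radians(l_deg)) ** 2)

# Assumed solar values: equatorial period ~25 d, alpha = 0.2
P_eq = 25.0                       # days
omega_eq = 2.0 * math.pi / P_eq   # rad/d
alpha = 0.2

# Horizontal shear: DeltaOmega = Omega(equator) - Omega(pole) = alpha * Omega_eq
delta_omega = omega(0.0, omega_eq, alpha) - omega(90.0, omega_eq, alpha)
print(round(delta_omega, 4))  # ~0.0503 rad/d
```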
The potential to assess stellar differential rotation by asteroseismology has been studied e.g. for white dwarfs , solar-type stars , and B stars using rotationally split frequencies of stellar oscillations.
Stellar differential rotation cannot be observed directly since the surface of stars can usually not be imaged with sufficient resolution. Yet, one may expect active stars to be rotating differentially assuming that Sun-like dynamos also work on other stars. On the other hand, a magnetic field may also inhibit differential rotation through Lorentz forces so that it is not clear what degree of differential rotation to expect in active stars.
In the case of the solar rotation pattern, the angular momentum transport by the gas flow in the convective envelope offers an explanation. Sun-like differential rotation in main-sequence and giant stars has been studied based on mean-field hydrodynamics. According to these models, horizontal shear varies with spectral type and rotational period and is expected to increase strongly towards earlier spectral type, while there is a weaker variation with rotational period. Horizontal shear vanishes for very short and very long periods and becomes highest at a rotational period of 10 days for spectral type F8 and 25 days in the case of a model of the Sun. A strong period dependence is also seen in models of stars with earlier spectral type. These models reproduce observed values of horizontal shear of mid-F type stars of more than $1\radpd$.
Although stars are unresolved, differential rotation can be assessed by indirect techniques through photometry and spectroscopy. Photometric techniques use the imprints of stellar spots in the stellar light curve. If the star rotates differentially, spots at different latitudes will cause different photometric periods which are detected in the light curve and allow one to derive the amount of differential rotation. Stellar spots further leave marks in rotationally-broadened absorption lines of stellar spectra. These features change their wavelength position when the spot is moving towards or receding from the observer due to stellar rotation. Then, Doppler imaging recovers the distribution of surface inhomogeneities by observing the star at different rotational phases. Using results from Doppler imaging, previous studies assessed differential rotation of rapidly rotating late-type stars and confirmed that horizontal shear strongly increases with effective temperature.
Both photometry and Doppler imaging need to follow stellar spots with time and thus are time-consuming. Another method, line profile analysis, takes advantage of the rotational line profile which has a different shape in the presence of differential rotation. While rigid rotation produces line profiles with an almost elliptical shape, solar-like differential rotation results in cuspy line shapes.
In contrast to the methods mentioned above, line profile analysis does not need to follow the rotational modulation of spots. Instead, only one spectrum is sufficient. Therefore, hundreds of stars can be efficiently checked for the presence of differential rotation. Line profile analysis is complementary to other methods in the sense that it works only for non-spotted stars with symmetric line profiles.
The line profile analysis used in the present work was introduced in earlier work. The noise of the stellar line profile virtually vanishes by least-squares deconvolution of a whole spectral range, averaging the profiles of hundreds of absorption lines. The Fourier transform of the rotational profile displays zeros whose positions are characteristic of the rotational law. Thus, the amount of differential rotation can be easily measured in Fourier space. In principle, the influence of each broadener (turbulent broadening, the instrumental profile, the rotational profile) enters as a simple multiplication in Fourier space. Therefore, turbulent broadening and the instrumental profile leave the zeros of the rotational profile unaffected.
Among hundreds of stars with spectral type A to early K, 31 differential rotators have so far been identified by line profile analysis . Actually, many more differential rotators might be present which escaped detection because of low projected rotational velocity $\vsini$, low amounts of differential rotation below the detection limit, shallow or strongly blended line profiles, asymmetric line profiles, or multiplicity.
The amount of differential rotation depends on spectral type and $\vsini$. Previous work noted that the frequency of differential rotators as well as the amount of differential rotation tends to decrease towards earlier spectral types. Convective shear seems strongest close to the convective boundary. While many slow Sun-like rotators ($\vsini\approx10\,$km/s) with substantial differential rotation are found, only a few rapid rotators and hot stars are known to display differential rotation.
The main goal of the present work is to better understand the frequency and strength of differential rotation throughout the HR diagram, in particular the rapid rotators and the hotter stars close to the convective boundary. New measurements are combined with previous assessments of line profile analysis and presented in a catalogue containing the measurements of rotational velocities $\vsini$, characteristics of the broadening profile, differential rotation parameters $\alpha$, and pertinent photometric data (Sect. \[sect:results\]). Furthermore, stellar rotational velocities and differential rotation is studied in the HR diagram (Sect. \[sect:HRD\]) and compared to theoretical predictions (Sect. \[sect:shear\]).
Observations {#sect:obs}
============
The techniques applied in the present work are based on high-resolution spectroscopy with high signal-to-noise ratio. Therefore, this work is restricted to the brightest stars. In order to improve the number statistics of differential rotators throughout the HR diagram, the sample of the present work completes the work of previous years and consists of 182 stars of the southern sky known to rotate faster than $\approx10\,$km/s, to be brighter than $V=6$, and with colours $0.3 < (B-V) < 0.9$. Emphasis was given to suitable targets rather than achieving a complete or unbiased sample.
The spectra were obtained with the FEROS and CES spectrographs at ESO, La Silla (Chile). Five stars were observed with both FEROS and CES. The data was reduced in the same way as in previous work.
Spectra of 158 stars studied in the present work were taken with FEROS at the ESO/MPG-2.2m telescope between October 2004 and March 2005 (ESO programme 074.D-0008). The resolving power of FEROS is fixed to 48,000 covering the visual wavelength range (3600[Å]{}-9200[Å]{}). Spectra of 24 objects with $\vsini\lesssim45\,$km/s were obtained with the CES spectrograph at the ESO-3.6m telescope, May 6-7, 2005 (ESO programme 075.D-0340). A spectral resolving power of 220,000 was used.
Methods
=======
Least-squares deconvolution
---------------------------
The reduced spectra are deconvolved by least-squares deconvolution of a wide spectral region following the procedure established in previous work.
The wavelength region 5440-5880[Å]{} has already been used successfully for FEROS spectra of FGK stars in previous work and is thus used in the present work, too. In the case of A stars, the number of useful lines in this range is insufficient and one has to move to the blue spectral range. Earlier analyses of A stars used the range $4210-4500$[Å]{} with spectra taken with ECHELEC at ESO, La Silla. This wavelength range inconveniently includes the H$\gamma$ line right in the centre. In the present work, we take advantage of the wide wavelength coverage of FEROS and find that the range $4400-4800\,$[Å]{} between H$\gamma$ and H$\beta$ contains a sufficient number of useful spectral lines. The CES spectra span a clearly narrower wavelength range of about 40[Å]{}. Previous CES-based work succeeded in the ranges 5770-5810[Å]{} and $6225-6270$[Å]{}. Within the present work, the full range covered by our CES spectra, $6140-6175$[Å]{}, is used.
The deconvolution process derives the average line profile from a multitude of observed line profiles. The spectral regions used with the FEROS and CES spectra display a sufficient number of useful absorption lines and are unaffected by strong features and telluric bands. Although the deconvolution algorithm successfully disentangles lines blended by rotation, the deblending degenerates if lines are too close to each other. However, the deconvolution was found to be robust in the wavelength ranges used. The regions can be used homogeneously for all the stars analyzed .
A template is generated as input to the processing, containing information about the approximate strength and wavelength positions of spectral lines. The line list is drawn from the Vienna Astrophysical Line Data-Base [VALD; @Kupka00; @Kupka99; @Ryabchikova97; @Piskunov95] based on a temperature estimate derived from spectral type. VALD is queried with the [*extract stellar*]{} feature and a detection limit of 0.02.
The optimization of the line profile (by regularised least-squares fitting of each pixel of the line profile) alternates with the optimization of the equivalent widths (by Levenberg-Marquardt minimization of the difference between observed and template equivalent widths). Only fast rotators with $\vsini\gtrsim40\,$km/s are accessible with the FEROS spectra since then the rotational profile dominates all other broadening agents. In the case of the CES spectra, much narrower profiles are traced at $\vsini$ as low as $\approx10\,$km/s. Therefore, these spectra are deconvolved using Physical Least Squares Deconvolution ([*PLSD*]{}), accounting for the different thermal broadening of the spectral lines of the involved atomic species.
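The alternating scheme described above is specific to the pipeline used here, but the core of least-squares deconvolution can be sketched as a regularized linear inverse problem: the observed spectrum is modeled as a line-pattern matrix applied to one common broadening profile. The following Python sketch is illustrative only; the line positions, depths, noise level, and regularization parameter are made-up values, not those of the actual pipeline.

```python
import numpy as np

def lsd_profile(spec, line_pos, line_depth, npix, lam=1e-3):
    """Least-squares deconvolution sketch: model spec ~ M @ z, where M
    places a copy of the common profile z (npix pixels) at each line
    position, scaled by the line depth; solve for z with Tikhonov
    regularization."""
    n, half = len(spec), npix // 2
    M = np.zeros((n, npix))
    for p, d in zip(line_pos, line_depth):
        for j in range(npix):
            i = p + j - half
            if 0 <= i < n:
                M[i, j] += d
    A = M.T @ M + lam * np.eye(npix)      # regularized normal equations
    return np.linalg.solve(A, M.T @ spec)

# Synthetic check: a Gaussian profile stamped at three made-up lines
rng = np.random.default_rng(1)
z_true = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)
lines, depths = [60, 140, 220], [0.9, 0.5, 0.7]
spec = np.zeros(300)
for p, d in zip(lines, depths):
    spec[p - 10:p + 11] += d * z_true
spec += 0.001 * rng.standard_normal(spec.size)  # photon-noise stand-in
z_rec = lsd_profile(spec, lines, depths, 21)
print(np.max(np.abs(z_rec - z_true)) < 0.05)  # True: profile recovered
```

Averaging over many lines in this way is what suppresses the noise of the common profile relative to any single line.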
Analysis of the overall broadening profile in Fourier space {#sect:fourier}
-----------------------------------------------------------
The shape of the deconvolved rotational profile is characteristic of the type of rotation. Although differential rotation may be distinguished in wavelength space, the analysis works best in the frequency domain. The Fourier transform of the rotational profile displays characteristic zeros, and the ratio of the first two zeros depends on the amount of differential rotation. The convolution with other broadening agents, e.g. the instrumental profile, corresponds to a multiplication in Fourier space and thus preserves the zeros. The projected rotational speed has to be sufficiently high to resolve the frequency of the second zero. The Nyquist frequency of [*FEROS*]{} spectra with a resolving power of 48,000 is 0.08s/km and for CES it is 0.37s/km. In order to properly trace the broadening profile to the maximum of the second side lobe, $\vsini$ has to be at least 45$\kmps$ (FEROS) and 12$\kmps$ (CES), respectively. Zero positions can be measured at even lower $\vsini$, but then they lie beyond the actual sampling limit and are affected by noise features, so that an interpretation is not possible.
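The role of the zero positions can be illustrated numerically. For rigid rotation without limb darkening, the broadening profile is a semicircle whose Fourier transform is proportional to $J_1(2\pi\sigma\,\vsini)/\sigma$, so the first two zeros sit at $2\pi\sigma\,\vsini\approx3.832$ and $7.016$, giving a ratio $q_2/q_1\approx1.83$; limb darkening lowers this ratio somewhat, and solar-like differential rotation lowers it further. A numerical sketch (illustrative only, a linear limb-darkening law is assumed):

```python
import numpy as np

def rigid_profile(v, vsini, eps):
    """Rotational broadening profile for rigid rotation with a linear
    limb-darkening coefficient eps; zero outside |v| >= vsini."""
    x2 = np.clip(1.0 - (v / vsini) ** 2, 0.0, None)
    g = 2.0 * (1.0 - eps) * np.sqrt(x2) + 0.5 * np.pi * eps * x2
    return g / (np.pi * vsini * (1.0 - eps / 3.0))

def fourier_zeros(vsini, eps, nzeros=2):
    """Locate the first zeros of the profile's Fourier transform via
    sign changes of the (real, even) cosine transform on a sigma grid."""
    v = np.linspace(-vsini, vsini, 4001)
    dv = v[1] - v[0]
    g = rigid_profile(v, vsini, eps)
    sig = np.linspace(1e-4, 3.0 / vsini, 6000)
    zeros, prev = [], np.sum(g) * dv  # transform at sigma ~ 0 is ~1
    for s in sig[1:]:
        f = np.sum(g * np.cos(2.0 * np.pi * s * v)) * dv
        if prev * f < 0.0:
            zeros.append(s)
            if len(zeros) == nzeros:
                break
        prev = f
    return zeros

q1, q2 = fourier_zeros(45.0, 0.0)  # vsini in km/s, no limb darkening
print(round(q2 / q1, 2))           # ~1.83, the rigid-rotation value
```

Running the same sketch with a cuspy, differentially broadened profile would shift $q_2/q_1$ below the rigid-rotation range, which is the measurement principle used here.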
Derivation of the projected rotational velocity $\vsini$
--------------------------------------------------------
The projected rotational velocity is taken as the average of the values derived from each one of the two zero positions in the Fourier transform. If the second zero is no longer resolved reliably (according to the $\vsini$ limits given above), $\vsini$ is obtained from the first zero only.
The formal error bar does not reflect systematic uncertainties which correspond to a relative error of $\approx5\%$ . In order to obtain a conservative error estimate, the error of 5% is adopted. The formal error is taken if it is larger than 5%.
Derivation of the parameter of differential rotation $\alpha$ {#sect:alpha}
-------------------------------------------------------------
The parameter $\alpha$ of differential rotation is calculated from the ratio of the zeros of the Fourier transform $\qfrac$ using the modeled relation between $\qfrac$ and $\alpha$ from the literature. Unknown inclination represents the largest uncertainty when interpreting measurements of rotational broadening from line profile analysis. In order to reflect this uncertainty, the parameter $\alpha$ of differential rotation was calculated in the present work for inclination angles $10\deg$ and $90\deg$. Period is assessed accordingly, assuming inclinations of $10\deg$ and $90\deg$. Additional uncertainties originate from the unknown limb darkening. Values of the differential rotation parameter are also derived from published calibrations for comparison and consistency with previous work. There, the function $\alphasini$ is obtained which minimizes the scatter due to unknown inclination.
A detection limit of $\alpha=5\,\%$ originates from the unknown limb darkening and the consequent inability to pin down differential rotation rates lower than $\approx5\,\%$. In the following, all stars with $\alpha\gtrsim5\,\%$ are considered Sun-like differential rotators, in agreement with previous work. Adopting a Sun-like rotation law (Eq. \[eq:rotlaw\]), the maximum possible value of Sun-like differential rotation is $\alpha=100\,\%$ implying that the difference of the rotation rates of the poles and the equator equals the angular velocity at the equator. Negative values of $\alpha$ would imply anti-solar differential rotation with angular velocities being higher at higher latitudes. However, negative values of $\alpha$ are not interpreted in terms of true differential rotation. Rather, it may be understood as rigid rotation in the presence of a cool polar spot mimicking differential rotation. In principle, the cuspy line profile of solar-like differential rotation might also be mimicked by spots, namely in the configuration of a belt of cool spots around the equator. However, this is a hypothetical case which has never been observed.
There are further effects causing spurious signatures of differential rotation. In the case of very rapid rotation ($\vsini>200\kmps$), the spherical shape of the stellar surface is distorted by centrifugal forces and the resulting temperature variations modify the surface flux distribution. This so-called gravitational darkening occurs in particular in the case of rapidly-rotating A and F stars . In the case of close binaries, the spherical shape of the star can be modified by tidal elongation and thus change the surface flux distribution.
It is actually the absolute horizontal shear $\Delta\Omega$ which is related to the stellar interior physics while it is only the relative value $\alpha=\frac{\Delta\Omega}{\Omega}=\frac{\Delta{\Omega}P}{2\pi}$ that can be measured by line profile analysis. Rotational periods are derived from the photometric radius and the measured $\vsini$, assuming inclinations of $10\deg$ and $90\deg$ to be consistent with the values of $\alpha$ used. Thus, values of horizontal shear obtained for both inclination angles are used to account for the uncertainty about inclination. Because of the relation to $\alpha$, very low values of horizontal shear may be detected in the case of very slow rotators, although this is limited by the minimum $\vsini$ required for line profile analysis. In the case of very rapid rotators, on the other hand, differential rotation might be undetectable even at strong horizontal shear.
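The chain from the observables to the shear can be written out explicitly. The sketch below uses made-up stellar parameters (radius, $\vsini$, $\alpha$) purely for illustration and evaluates $P=2\pi R\sin i/\vsini$ and $\Delta\Omega=2\pi\alpha/P$ for the two inclinations adopted above:

```python
import math

R_SUN_KM = 6.957e5  # solar radius in km
DAY_S = 86400.0

def period_days(radius_rsun, vsini_kms, incl_deg):
    """Rotational period from photometric radius and measured vsini:
    P = 2*pi*R*sin(i) / vsini (the true equatorial velocity is
    vsini / sin(i))."""
    circ = 2.0 * math.pi * radius_rsun * R_SUN_KM * math.sin(math.radians(incl_deg))
    return circ / vsini_kms / DAY_S

def shear(alpha, period_d):
    """Horizontal shear DeltaOmega = alpha * 2*pi / P in rad/d."""
    return alpha * 2.0 * math.pi / period_d

# Hypothetical mid-F star: R = 1.3 R_sun, vsini = 20 km/s, alpha = 0.2
for i in (10.0, 90.0):
    P = period_days(1.3, 20.0, i)
    print(f"i={i:4.1f} deg: P = {P:5.2f} d, dOmega = {shear(0.2, P):.3f} rad/d")
```

The low-inclination case yields a shorter period (the star rotates faster than $\vsini$ suggests) and hence a larger shear for the same $\alpha$, which is why both inclinations are carried through the analysis.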
Peculiar line shapes and multiplicity {#sect:peculiar}
=====================================
The deconvolved profiles of many stars show peculiarities like asymmetries, spots, or multiple components. We largely treat these profiles following the approach of previous work. Instead of skipping all stars with peculiar profiles, we tried to get as much information as possible from the line profile, tagging the results with a flag which indicates the type of the peculiarity.
Many profiles are asymmetric in that the shape of the blue wing of the overall profile is different from the red wing. In these cases, the analysis is repeated for each of the wings separately. The average of the results is adopted with conservative error bars.
In the case of distortions caused by multiples, three cases are distinguished. In the first case, the components are separated well and can be analyzed each. In the second case, the profile is blended but dominated by one component. If the secondary component can be fully identified it is removed or a line wing unaffected by the secondary is taken. Third, if the components cannot be disentangled, then only an upper limit on $\vsini$ is derived from the overall broadening profile.
In some cases, the deconvolved profile is symmetric, even though the star is a known or suspected multiple. This may affect both the assessment of global stellar properties and the interpretation of line profiles:
- In the case of two very similar but spatially unresolved components, the colours and effective temperature will be the same as for a single object. However, luminosity will be multiplied by the number of components. This results in a considerable shift in the HR diagram and thus to a wrong assessment of luminosity class (and ${\log}g$).
- In the case of very different, unresolved components, the derived parameters will be dominated by the primary, i.e. the brightest component, but a considerable shift in the HR diagram is still possible.
- In favourable cases, spatially unresolved components show up as separate spectral components and can be separated for the rotational analysis – as was discussed above. In the worst-case scenario, there is no relative shift between the spectral components forming an apparently single symmetric profile. This might be the case if components of a spectroscopic multiple are almost on the line of sight at the time of observation or if the components form a wide but spatially unresolved multiple involving slow orbital motion.
Therefore, in order to detect analyses which might be affected by multiplicity, a catalogue of multiple systems was searched. Suspicious analyses are identified by one of two flags: one ’photometric’ flag ’x’ and one ’spectroscopic’ flag ’y’. The photometric data is considered spurious and flagged by ’x’ if the star has spatially unresolved components. In some cases, the spectral types of the components are similar but no individual brightness measurements are given in the literature. In such a case, we derive an estimate of the primary magnitude by applying an offset of $+0\fm75$ to the given total magnitude of the system. If the star is a known or suspected spectroscopic multiple, results from apparent single-component line profiles might be spurious and are thus flagged by ’y’. The analysis also gets the ’y’ flag if there are known visual components closer than $3\farcs0$ and at a brightness difference of less than $5\,$mag in the $V$ band. These values are based on experience with slit and fibre spectrographs under typical seeing conditions without adaptive optics.
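The magnitude offset applied above corresponds to two equally bright, unresolved components: the primary contributes half of the combined flux, so it is fainter than the system magnitude by $\Delta m = 2.5\log_{10}2 \approx 0.75$:

```python
import math

# Two equally bright, spatially unresolved components: the total flux is
# twice the primary's, so the primary is fainter than the combined
# magnitude by delta_m = 2.5 * log10(2)
delta_m = 2.5 * math.log10(2.0)
print(round(delta_m, 3))  # 0.753
```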
Discussion of measurements {#sect:results}
==========================
\[sect:basic\]
Results for all stars of the sample are presented in Table \[tab:results\]. For clarity, only one peculiarity flag is given with the results. Our choice is that flags indicating multiplicity override flags denoting asymmetries or bumps. In particular, spectral components of multiples might also be affected by asymmetries. Therefore, the reader is advised to check the other flag, too, which indicates the type of the measurement.
![\[fig:vsini\] Measurements of $\vsini$ and $\qfrac$ of the present work. Open symbols indicate measurement from FEROS spectra and filled symbols data obtained from CES spectra. Squares show measurements of fully resolved rotational profiles. Circles denote the cases where only $\vsini$ was measured reliably. The triangles show measurements of upper limits to rotational velocities. The vertical lines indicate the sampling limits of CES (solid) and FEROS (dashed).](figs/vsini){width="\figwidth"}
Figure \[fig:vsini\] displays the measurements of $\qfrac$ and $\vsini$ obtained in the present work. $\vsini$ measurements range from $4\,\kmps$ to $\approx300\,\kmps$ while assessments of $\qfrac$ can be as low as $\approx\,1$ and reach values of up to 2.7. As is discussed in Sect. \[sect:fourier\], the measurements of $\qfrac$ are indicative of the mode of rotation only at $\vsini\gtrsim12\,\kmps$ (CES) and $\vsini\gtrsim45\,\kmps$ (FEROS), in total 56 objects. If one zero of the Fourier transform is beyond the sampling limit, $\vsini$ can still be measured. In total, $\vsini$ was measured for 114 objects. If both zeros of the Fourier transform are beyond the sampling limits of CES and FEROS, respectively, the $\vsini$ derived must be interpreted as an upper limit. Indeed, these values are the lowest measured, not much below $4\,\kmps$, and cluster at the low-velocity end. The few upper limits at $\qfrac\approx2.5$ are all due to peculiar line profiles caused by binarity or spots. Upper limits on $\vsini$ were derived in 68 cases.
Measurements of multiple systems {#sect:mult}
--------------------------------
Among the sample, there are 21 stars with indications of multiplicity detected in the overall broadening profile. Although multiplicity complicates the line profile analysis severely, a number of interesting objects could be measured.
One major achievement of the present work is the analysis of resolved profiles of spectroscopic binaries. Among the 21 stars in the sample with line profiles indicative of multiplicity, rigid rotation could be assessed for three objects.
In addition, there are nine spectroscopic binaries with both spectroscopic components analysed. The line wings could be analysed separately and the results are shown in Table \[tab:results\].
One of these objects, HD64185, displays signatures of differential rotation. It is a binary system, possibly even triple. Two spectroscopic components are detected in the present work and analysed separately. The stronger broad component (considered in the present work) indicates differential rotation with the strongest absolute horizontal shear encountered in the present work’s sample, while the weaker and narrower component is consistent with rigid rotation. However, as HD64185 is possibly a triple, it cannot be excluded that the line profile of HD64185A is a blend mimicking the signatures of differential rotation. Individual stellar parameters of the components are not available so that the position in the HR diagram might be erroneous. Therefore, the measured relative differential rotation and horizontal shear are excluded from the studies in the present work. Nevertheless, additional studies of this star are encouraged since HD64185A shows the strongest horizontal shear among the differentially rotating F-type stars.
Among the nine spectroscopic binaries with both components measured, HD147787 and HD155555 are particularly interesting since they have been studied by line profile analysis before and are analysed once more in the present work. This means that line profiles obtained at different orbital phases can be compared. HD147787 was previously found to be asymmetric. The present work resolved two spectroscopic components based on a CES spectrum and measured $\vsini$ for both. HD147787 is a spectroscopic binary with a period of 40 days according to @2008MNRAS.389..869E. HD155555 did not stand out particularly in previous work but the present work detects two spectroscopic components. It is a known short-period pre-main sequence spectroscopic binary. Earlier work also derived $\vsini$ of both components which agree with the values assessed in the present work within the error bars. @2008MNRAS.387.1525D measured significant rates of differential rotation for both components from Doppler imaging. As these translate to amounts of relative differential rotation of a few percent only, they are below the detection limit of line profile analysis. In the present work, the quality of the deconvolved profiles is insufficient and affected by asymmetries, so that the rotational profile could not be measured anyway, even though the rotational velocities are sufficiently large. The asymmetries probably are due to the spottedness which on the other hand enabled the Doppler imaging analyses by @2008MNRAS.387.1525D.
We also mention one further object which does not display resolved spectroscopic components but has been studied before. It was suspected to be a blended multiple in previous work. In fact, it is a spectroscopic binary with a close visual companion at a separation of $0\farcs066$ and a period of 3 years [@2000AJ....119.3084H; @2008MNRAS.389..869E]. Based on CES spectroscopy, the present work detects two blended spectroscopic components. Only an upper limit on $\vsini$ of the primary is derived.
It is equally important to also discuss known multiples with an apparent single line profile. Signatures of differential rotation have to be regarded with care if the star does not show resolved spectroscopic components although it is a known multiple. A catalogue of multiple systems was searched systematically for multiples among the stars of the sample. Such evidence indicates the need of further studies but is incapable of firmly rejecting candidates of differential rotation by itself. The findings are discussed in the following for the stars with signatures of differential rotation:
- One candidate is a visual binary with similar components and a separation of 1433. It cannot be excluded that the light of both components entered the slit of the spectrograph which would produce a composite spectrum. The spectrum does not indicate any spectroscopic multiplicity but the line profile could still be composed of two components. Additional studies of this object are encouraged since it is located right in the gap of differential rotators in the HR diagram at early-F spectral types (Sect. \[sect:populations\]).
- Signatures of differential rotation were detected for HD105452 in earlier work. This cannot be reproduced in the present work; the spectrum displays an asymmetric profile instead. HD105452 is a known spectroscopic binary. Possibly, the blended line profile mimicked differential rotation in older spectra at an orbital phase when there is no displacement of spectral lines. Another explanation could be that the emitted flux is modified by tidal elongation depending on orbital phase (see Sect. \[sect:alpha\]).
- One star is listed as single in the literature but displays an asymmetric line profile in the present work. The line wings are analysed separately and both wings indicate differential rotation.
- One star is a spectroscopic binary with a short period of $2.696\,$d. Although it appears single in the spectrum available to this work, one cannot exclude that the broadening profile is the sum of two blended profiles or affected by tidal elongation. The star deserves particular attention since the line profile indicates the second largest amount of differential rotation ($\alpha=44\,\%$) among the F stars of the sample.
We also mention measurements of rigid rotators which might be affected by multiplicity. There is one group of three stars which appear single in the spectrum but are listed as binaries with unknown component parameters. This is also the case for one further star which shows signatures expected for anti-solar differential rotation or a polar cap.
The multiples are excluded from the studies of relative differential rotation and horizontal shear presented in Sect. \[sect:HRD\] and \[sect:shear\] since their position in the HR diagram and the line profile measurements might be erroneous. This also concerns six differential rotators detected in previous work. Nevertheless, the data might still be correct but this needs to be proven by further studies. The measurements obtained for these stars are tabulated in Table \[tab:results\]. These data, flagged with ’m’, ’x’, or ’y’, offer a starting point for future studies.
Comparison with previous work and discussion of systematic uncertainties {#sect:uncertainties}
------------------------------------------------------------------------
Systematic errors may occur which can not be assessed a priori. This particularly concerns instrumental effects introduced by the spectrographs used. A comparison to the previous results obtained by Reiners et al. for stars in common is helpful as these results are based on data taken at another epoch with different instrumentation.
In the present work, new CES and FEROS spectra have been taken of stars which have been analysed before. The more recent of these analyses are mostly based on the same spectra, while the earliest are based on ECHELEC spectra with a resolution even lower than that of FEROS and FOCES.
Figure \[fig:vsini\_resid\] displays differences between $\vsini$ measurements of the present work and previous work and of measurements from different instruments. While most new measurements agree with previous assessments within a few $\kmps$, discrepancies of more than 5$\kmps$ appear in the cases of HD80671, HD105452, HD124425, and HD198001 when comparing to earlier measurements which are based on FEROS, FOCES, and ECHELEC spectra. HD80671 and HD105452 are particular in that they are spectroscopic binaries. The shape of the rotational profile varies because of the orbital motion of the components. Consequently, the $\vsini$ assessed from different spectra will be different. In particular, the spectra of HD105452 analysed in the present work display an asymmetric profile. While HD80671 was previously flagged as an object with a blended profile, the profile is resolved in the present work, but only an upper limit can be determined for the apparently very low intrinsic $\vsini$. HD124425 and HD198001 were measured at lower spectral resolution in previous work.
Figure \[fig:q2q1\_resid\_vsini\] shows discrepancies of $\qfrac$ measurements together with the corresponding $\vsini$ measurements since these tell us whether a rotational profile can actually be resolved. There are nine residuals larger than 0.10. Only three of them cannot be explained by peculiar line profiles or insufficient spectral resolution. One of these cases is HD105452 as can be expected from the comparison of $\vsini$ measurements above. Measurements for one star differ by 0.16 which might be related to the fact that the CES spectrum shows a slight asymmetry; the star is a known candidate for spottedness. The profile of another star suffers from bad quality in the present work so that the discrepancy of 0.24 with respect to the earlier value should not be given too much weight and the older measurement be preferred instead. The remaining residuals larger than 0.10 are probably due to the use of different spectrographs: two stars were analysed in the present work based on spectra with different resolution (CES and FEROS) with $\vsini$ below the sampling limit of FEROS, and three further stars were analyzed at higher resolving power in previous work.
Generally, in the case of profiles without peculiarities, discrepancies between different measurements are less than $5\,\kmps$ in $\vsini$ and $0.10$ in $\qfrac$.
Distribution of measurements of differential rotation {#sect:distrib}
-----------------------------------------------------
Figures \[fig:vdistrib\] and \[fig:qdistrib\] show the distribution of projected rotational velocities $\vsini$ and of the indicator $\qfrac$ of differential rotation for all stars of the sample. In the present work, the rotational profiles of 56 stars are studied. In total, there are 300 objects with rotational profiles studied by line profile analysis, also yielding precise projected rotational velocities $\vsini$. Many of the stars studied in the present work display line profiles which cannot be studied by line profile analysis because of asymmetries caused by spots or multiplicity. However, information on $\vsini$ can still be obtained: $\vsini$ is inferred for 68 stars, and upper limits for a further 68 objects. These stars are no longer considered in the present work, but the $\vsini$ measurements are tabulated in Table \[tab:results\].
In the present work, ten stars show clear signatures of differential rotation between $\alpha=10$% and 54%. While three of these are new discoveries, seven objects belong to the samples of and and are now identified or confirmed based on high-resolution CES spectra. Including previous work, there are 33 differential rotators detected by line profile analysis (excluding those possibly affected by multiplicity).
Three further stars with extremely fast rotation show line profiles indicative of differential rotation. However, in these cases, the feature is plausibly caused by gravitational darkening in the regime of rigid rotation (see Sect. \[sect:alpha\]).
The amount of differential rotation of six stars is below 6%, which is below our detection limit and consistent with rigid rotation. Yet, as is the case for many other objects with undetected differential rotation, small rates of (relative) differential rotation – and even strong amounts of horizontal shear, as discussed later on – are still possible. Two F stars ( and HD124425) with $T_\mathrm{eff}\approx6500\,$K and $\vsini\lesssim30\,\kmps$ display the lowest values of $\qfrac$ measured so far by line profile analysis, corresponding to differential rotation rates of 54% and 44%, respectively. Five stars show signatures which can be explained by anti-solar differential rotation or, more plausibly, by rigid rotation with cool polar caps. In total, there are 43 stars with spectra consistent with rigid rotation.
Rotation and differential rotation in the HR diagram {#sect:HRD}
====================================================
The distribution of differential rotators in the HR diagram is of particular interest since it relates the frequency of differential rotation to their mass and evolutionary status.
To construct the HR diagram, homogeneous photometric and stellar data are used for the sample of the present work. $(B-V)$ colours, $V$ band magnitudes and parallaxes were drawn from the Hipparcos catalogue when available and from Tycho-2 otherwise. Bolometric corrections and effective temperatures $\Teff$ were calculated from $(B-V)$ using the calibration of . Absolute $V$ band magnitudes $M_\mathrm{V}$ were inferred from the apparent magnitudes and the parallaxes. Radii are calculated from effective temperature and bolometric magnitude which is derived from absolute $V$ band magnitude.
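The final step, the radius from effective temperature and bolometric magnitude, follows directly from the Stefan–Boltzmann law. A minimal sketch (not the paper's actual pipeline; it assumes the bolometric magnitude has already been formed with the appropriate bolometric correction, and adopts nominal solar reference values):

```python
import math

MBOL_SUN = 4.74      # IAU nominal solar bolometric magnitude
TEFF_SUN = 5772.0    # K, nominal solar effective temperature

def radius_from_teff_mbol(teff, m_bol):
    """Stellar radius in solar units from Teff [K] and bolometric magnitude.

    L/Lsun = 10**(-0.4*(Mbol - Mbol_sun)); R/Rsun = sqrt(L/Lsun)*(Tsun/Teff)**2
    """
    lum = 10.0 ** (-0.4 * (m_bol - MBOL_SUN))
    return math.sqrt(lum) * (TEFF_SUN / teff) ** 2

# Sanity check: a star with solar parameters recovers 1 Rsun.
print(radius_from_teff_mbol(5772.0, 4.74))  # → 1.0
```

The same relation applied to, e.g., an F star with $\Teff=6500\,$K and $M_\mathrm{bol}=3.0$ yields roughly $1.8\,R_\odot$.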
The study of differential rotation throughout the HR diagram done in the present work includes previous analyses by Reiners (2003-2006). Not all photometric and basic stellar data used in the present work are equally available from there, so the data need to be complemented as consistently as possible and without changing previously obtained results. Therefore, photometry was adopted from Reiners (2003-2006) when available and otherwise completed according to the present work. Stellar parameters of previously analysed stars are adopted from Reiners when available. Otherwise, they are derived in the same way as described above for the data of the present sample. Table \[tab:comp\_diffrot\] presents rotational data and basic stellar data for all stars studied by line profile analysis.
Some stars have been studied more than once by line profile analysis. It is not always the most recent result that has been adopted in Table \[tab:comp\_diffrot\]. Where possible, the most plausible and conclusive result is used, for example in cases of peculiar profiles where the line profile could later be resolved into separate components. Generally, data based on spectra with higher resolution are preferred, as are data with more supplemental information given by flags.
Approximate surface gravities for all stars are inferred from photometry and from mass estimates which in turn are derived from effective temperatures using the calibration for dwarf stars of @Gray05 [appendix B]. Thus, the surface gravities will be upper limits only for giants and lower limits in the case of unresolved binaries. The distribution of effective temperature and surface gravity is shown in Fig. \[fig:tgdistrib\].
![\[fig:hrd\_vsini\]HR diagram with the stars from previous (circles) and the present (squares and triangles) work. Symbol size scales with projected rotational velocity measured by the FT method. The granulation boundary according to is indicated by the hashed region. Evolutionary tracks for 1.0, 1.5, and 2.0$\Msun$ and the early-main sequence of are added, using bolometric corrections by . Stars with photometry possibly affected by multiplicity (flags ’x’ and ’m’ in Tables \[tab:results\] and \[tab:comp\_diffrot\]) are not included.](figs/hrd_vsini){width="\figwidth"}
Not all parts of the HR diagram are equally accessible to line profile analysis. Figure \[fig:hrd\_vsini\] shows the distribution of the analyzed stars in the HR diagram in terms of rotational speed. The location of the stars is compared to evolutionary models of and the granulation boundary according to . The granulation boundary indicates the location where deep convective envelopes form and the approximate onset of magnetic braking. Accordingly, the figure illustrates that Sun-like and cooler main-sequence stars are efficiently braked. Also the late-type giant stars display slow rotation. Consequently, these and most cool dwarf stars are not accessible to the study of the rotational profile. Therefore, the present study is generally restricted to main-sequence and slightly evolved stars of spectral types A and F.
Two populations of differential rotators {#sect:populations}
----------------------------------------
{width="\textwidth"}
The HR diagram in Fig. \[fig:hrd\_diffrot\] shows that the present work adds several rigid rotators in the hot star region between 8000 and 10000K. Three stars are detected – the A stars HD30739, HD43940, and HD129422 at the transition to spectral type F – with line shapes indicative of differential rotation. However, the low values of $\qfrac$ might be caused to a certain extent by gravitational darkening in the regime of rigid body rotation since all three objects rotate very rapidly with $\vsini>200\,\kmps$. This scenario is suggested by for the rapid rotator HD44892 which is the hottest and most luminous differential rotator in Fig. \[fig:hrd\_diffrot\]. In the cases of HD6869, HD60555, and HD109238 they argue, however, that this cannot be the sole explanation for the low values of $\qfrac$ since rotational speed would exceed breakup velocity. These stars are all located at the granulation boundary and display the strongest absolute horizontal shear . There are two more differential rotators close to the granulation boundary at its cool side, Cl\*IC4665V102 and . V102 displays the strongest horizontal shear $\Delta\Omega$ of the whole sample. argue that a mechanism may be responsible for the strong shear at the granulation boundary which is different from the mechanism at work in stars with deeper convective envelopes at cooler effective temperatures.
At early-F spectral types, there is evidence for a lack of differential rotators (shaded area in Fig. \[fig:hrd\_diffrot\]). At later F and early-G spectral types, however, a dense population of differential rotators exists on the main sequence and a scattered population towards higher luminosity. At these later spectral types, there are HD104731 and HD124425, the stars with the highest values of relative differential rotation $\alpha$ measured by line profile analysis. The object with strongest horizontal shear in this region, however, is HD64185A and it is the only object there that displays a shear strength comparable to the stars at the granulation boundary. This result has to be regarded with care since HD64185A is a component of a spectroscopic binary, or possibly a triple (Sect. \[sect:mult\]).
Dependence of relative differential rotation on stellar parameters {#sect:depend}
------------------------------------------------------------------
Fig. \[fig:compt\] displays the basic quantity measured, i.e. the ratio of the zeros of the Fourier transform $\qfrac$, vs. effective temperature $\Teff$, including measurements of Reiners (2003-2006). Values of $\qfrac$ between 1.72 and 1.85 are consistent with rigid body rotation when accounting for unknown limb darkening. The value of 1.76 represents rigid body rotation when assuming a linear limb darkening law with a Sun-like limb darkening parameter $\varepsilon=0.6$. In the regime of rigid rotation and marginal solar/anti-solar differential rotation, the present work does not change the overall distribution found by Reiners but the lack of early-F type differential rotators between 6700 and 7000K becomes more pronounced. Nevertheless, the strongest differential rotators are located close to this gap.
The figure corroborates evidence of two different populations of differential rotators, one with moderate to rapid rotation at the cool side of this gap and one with extremely rapid rotation at the granulation boundary at the hot side. At the cool side, stars are generally rotating more slowly than stars on the hot side of the gap.
On average, the fraction of differential rotators among stars with known rotational profiles significantly decreases with increasing effective temperature (Fig. \[fig:tfrac\]). Those fractions are estimated from the number counts in bins of effective temperature. It is assumed that the number counts originate from a binomial probability distribution with an underlying parameter $p$ which is the probability that a randomly chosen star among the stars with measured rotational broadening is a differential rotator. From this distribution and the actual number counts of differential rotators in each temperature bin, $2\sigma$ confidence intervals on the true value of $p$ are derived [following @Hengst67].
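The confidence intervals on the underlying binomial parameter $p$ can be reproduced with an exact (Clopper–Pearson) inversion of the binomial distribution. The sketch below uses only the standard library and a 95% level as a stand-in for the $2\sigma$ ($\approx95.45$%) intervals of the text; the function names are ours:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def _bisect(f, lo, hi, tol=1e-9):
    """Root of a decreasing function f on [lo, hi] by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion.

    Lower limit solves P(X >= k | p) = alpha/2; upper solves P(X <= k | p) = alpha/2.
    """
    lo = 0.0 if k == 0 else _bisect(
        lambda p: alpha / 2 - (1 - binom_cdf(k - 1, n, p)), 0.0, 1.0)
    hi = 1.0 if k == n else _bisect(
        lambda p: binom_cdf(k, n, p) - alpha / 2, 0.0, 1.0)
    return lo, hi

# Example count from Table [tab:shear]: 2 differential rotators among
# 9 mid-F stars with P < 1 d.
print(clopper_pearson(2, 9))  # ≈ (0.028, 0.600)
```

With only nine stars in the bin, the interval on the true fraction is correspondingly wide, which is why the comparisons in the text are made at the $2\sigma$ level.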
Fig. \[fig:compv\] displays the ratio of the zeros of the Fourier transform $\qfrac$ vs. the projected rotational velocity $\vsini$ in a way similar to Fig. \[fig:compt\]. The dependence of the frequency of differential rotation on $\Teff$ and $\vsini$ cannot be studied independently since $\vsini$ and $\Teff$ are strongly correlated and thus degenerate in what concerns their effect on differential rotation . Therefore, it is not a surprise that Fig. \[fig:compv\] looks very similar to Fig. \[fig:compt\]. Differential rotation is common among slow rotators and becomes rare at rapid rotation (see Fig. \[fig:vfrac\]). From Figs. \[fig:compt\] and \[fig:compv\] one notices that almost all differential rotators at the granulation boundary are fast rotators with $\vsini\gtrsim100\,\kmps$.
|  | differential rotators | rigid rotators | combined |
|:---|:---:|:---:|:---:|
| fraction of hot stars ($T_\mathrm{eff}\,>\,7000\,$K) | $45^{+18}_{-17}$% | $88^{+4}_{-6}$% | $82^{+5}_{-6}$% |
| fraction of giant stars ($\log{g}\,<\,3.5$) | $12^{+17}_{-8}$% | $11^{+6}_{-4}$% | $12^{+5}_{-4}$% |
| fraction of rapid rotators ($\vsini\,>\,50\,\kmps$) | $42^{+19}_{-17}$% | $88^{+4}_{-6}$% | $81^{+5}_{-6}$% |
The strength of differential rotation, in contrast to its frequency, does not vary with effective temperature or projected rotational velocity. The lowest measured values of $\qfrac$ tend to be smaller at cooler effective temperatures and slower rotation, but these trends are not significant.
In summary, the fraction of differential rotators decreases significantly with increasing $\Teff$ and $\vsini$. In other words, the fraction of hot and rapidly rotating stars among differential rotators is significantly less than among rigid rotators (see Table \[tab:frac\]).
In terms of surface gravity, the fraction of differential rotators increases towards high gravity, but with the data at hand, the differences are not significant (Figs. \[fig:compg\], \[fig:gfrac\]). There are roughly as many giants among the differential rotators as among the rigid rotators and the whole sample. We finally remind the reader that the sample is comprehensive but incomplete and biased towards the selection of suitable targets.
Discussion of horizontal shear {#sect:shear}
==============================
Measured shear and rotational period {#sect:shear_period}
------------------------------------
![\[fig:alpha\_P\]The figure displays relative differential rotation $\alphaimax$ vs. estimated period for all differential rotators detected by profile analysis. The dotted and dashed-dotted lines indicate the dependence of $\alpha$ on period for constant absolute shear $\Delta\Omega=1.0\,\radpd$ and $0.1\,\radpd$, resp. Open symbols indicate dwarfs and filled symbols evolved stars with ${\log}g\leq3.5$. Symbol size scales with effective temperature as indicated in the legend. The straight solid lines relate the data points to the values derived for 10$\deg$ thus illustrating the error due to inclination. Errors intrinsic to $\alphaimax$ and $\alphaimin$ are omitted for clarity but might be substantial. The scaling of symbol size with effective temperature allows one to compare with the theoretical predictions for an F8 star (vertically hashed) and a G2 star (horizontally hashed) . The vertical lines indicate period limits accessible to measurement. The vertical solid line gives the shortest rotational period of the stars with a measurement of the rotational profile while the other vertical lines denote the periods corresponding to the $\vsini$ limit of CES for a dwarf star (short-dashed) of 1 solar radius and an evolved star (long-dashed) of 8 solar radii. The FEROS limit will be at even shorter periods. The location of the Sun is also shown ($\odot$). The grey solid line gives the upper envelope to mid and late-F type differential rotators discussed in the text.](figs/alphai_P){width="\figwidth"}
![\[fig:shear\_P\]The figure displays the absolute shear $\Delta\Omegaimax$ vs. estimated projected rotation period. The dash-dotted and dotted lines indicate absolute shear for constant $\alpha=0.05$ (detection limit) and 1.0, resp. The other curves and the symbols have the same meaning as in Fig. \[fig:alpha\_P\].](figs/sheari_P){width="\figwidth"}
In order to study the strength of differential rotation, the measured quantity $\qfrac$ is converted to relative differential rotation $\alpha$ as described in Sect. \[sect:alpha\] and via rotational period to absolute horizontal shear $\Delta\Omega$ (Eq. \[eq:shear\]). Rotation periods are given in Table \[tab:comp\_diffrot\] and are estimated from the $\vsini$ derived and the photometric radius.
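The period estimate can be sketched as follows. Note that using $\vsini$ in place of the equatorial velocity $2{\pi}R/P$ makes the result a projected period $P/\sin i$, i.e. an upper limit to the true rotation period; the solar radius adopted here is the IAU nominal value, and the function names are ours:

```python
import math

R_SUN_KM = 6.957e5   # nominal solar radius in km
DAY_S = 86400.0      # seconds per day

def period_days(radius_rsun, vsini_kms):
    """Projected rotation period P/sin(i) in days from R [Rsun] and vsini [km/s].

    Since vsini <= v_eq, this is an upper limit to the true rotation period.
    """
    circumference_km = 2.0 * math.pi * radius_rsun * R_SUN_KM
    return circumference_km / vsini_kms / DAY_S

def shear_rad_per_day(alpha, period_d):
    """Absolute horizontal shear dOmega = alpha * 2*pi / P, in rad/d."""
    return alpha * 2.0 * math.pi / period_d

# A 1 Rsun dwarf with vsini = 50 km/s rotates in about a day:
print(period_days(1.0, 50.0))          # ≈ 1.01 d
print(shear_rad_per_day(0.1, 1.01))    # ≈ 0.62 rad/d
```

This illustrates why the sample of rapid rotators clusters at periods of a few days at most: even a modest $\vsini$ of a few tens of $\kmps$ on a dwarf radius implies a period of only a few days.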
When interpreting the measured relative differential rotation $\alpha$, unknown inclination is certainly one of the most important uncertainties. Unknown inclination effectively turns the measurements of $\alpha$ into upper limits because $\qfrac$ is an approximate function of $\alphasini$. However, assuming a uniform distribution of inclination angles, and according to the statistics of a $\sini$ distribution, far fewer stars will have low values of $\sini$, while most will have $\sini\lesssim1$. Although the uncertainties introduced by unknown limb darkening and the measurement error of $\qfrac$ might be substantial, only the error bar due to inclination is indicated here (represented by the values $\alphaimax$ and $\alphaimin$ introduced in Sect. \[sect:alpha\]).
Figs. \[fig:alpha\_P\] and \[fig:shear\_P\] present the relation of differential rotation with rotational period. The measurements from line profile analysis cover rather short periods of $\approx1-10\,$d. This is not surprising since the study only accesses rapid rotators. Relative differential rotation ranges from the detection limit at 5% to values of more than 60% when assuming an inclination angle of 90$\deg$. This corresponds to absolute horizontal shear of the order of $0.1-1\,\radpd$.
At each rotational period, a different range of horizontal shear is accessible, as given by the parallel dotted and dash-dotted lines in Fig. \[fig:shear\_P\]. The detection limit of $\alpha$ corresponds to a minimum detectable shear which decreases with increasing period; e.g., a shear of $\approx0.01\,\radpd$ can be detected at a period of about 30 d and above. The line indicating $\alpha=100\,$% corresponds to the extreme case that the difference in angular velocity between the poles and the equator is as large as the equatorial rate of rotation. If this is considered the maximum attainable relative differential rotation, no shear higher than $\approx0.2\,\radpd$ will be seen at periods larger than about 30 d, for example. Substantial amounts of shear of the order of $1\,\radpd$ and more can only be seen at periods shorter than $\approx10\,$d. It is worth noting that line profile analysis allows one to detect the weakest horizontal shear at long rotation periods; the measurement is then only limited by $\vsini$.
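These limits follow directly from $\Delta\Omega=\alpha\,\Omega=2\pi\alpha/P$ evaluated at the detection threshold $\alpha=0.05$ and at the maximal $\alpha=1$. A minimal sketch reproducing the numbers quoted above (the function name is ours, not the paper's):

```python
import math

ALPHA_MIN, ALPHA_MAX = 0.05, 1.0  # detection limit and maximal relative shear

def shear_window(period_d):
    """Detectable range of absolute horizontal shear [rad/d] at a period [d]."""
    omega = 2.0 * math.pi / period_d
    return ALPHA_MIN * omega, ALPHA_MAX * omega

lo, hi = shear_window(30.0)
print(lo, hi)   # ≈ 0.010 and 0.21 rad/d, as quoted for P ≈ 30 d

# Conversely, a shear of 1 rad/d stays below the alpha = 1 line only for
# P < 2*pi ≈ 6.3 d, and exceeds the alpha = 0.05 threshold only for
# P > 2*pi*0.05 ≈ 0.31 d.
```

The same scaling underlies the detection-limit argument for the early-F stars later in the text.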
The accessible range in horizontal shear is not fully covered with measurements. Certainly, at long periods, the corresponding $\vsini$ will be too slow for the measurement by line profile analysis as is indicated by the vertical lines in Fig. \[fig:alpha\_P\] and \[fig:shear\_P\] for a dwarf and a giant star. At short periods of the order of 1d, some structure becomes apparent which is related to the frequency distribution in the HR diagram discussed above. In detail, two different groups of differential rotators stand out. The first group comprises the rapid differential rotators at the granulation boundary. The second group consists of the cooler differential rotators. Both groups cover values of relative differential rotation of similar magnitude (Figure \[fig:alpha\_P\]) but the second group is located at longer periods. Figure \[fig:shear\_P\], however, shows that the first group of stars displays stronger shear than the second group. already noticed that the objects at the granulation boundary have high $\vsini$ around $100\,\kmps$ and that all of these cluster at short periods at high horizontal shear. suggest that an alternative process might be at work that is responsible for the strong shear observed in the first group.
An upper envelope can be identified for the horizontal shear of the second group, while the four established differential rotators at the granulation boundary remain clearly above it. identifies an upper envelope to the cooler F-type stars which rises between periods of 0.5 and 3 days and then declines towards larger rotational periods at constant $\alpha$. point out that the rising part of the upper bound to the horizontal shear of the second group between periods of 0.5 and 3 days could also be described by a plateau at $\Delta\Omega\approx0.7\,\radpd$. Fig. \[fig:shear\_P\] strengthens this interpretation and shows an updated version of the upper bound of suggesting a slightly higher plateau close to values of $\Delta\Omega\approx1\,\radpd$. The envelope is transformed to the $\alpha$ scale (Fig. \[fig:alpha\_P\]). The shear of all the F stars in the second group is below $1\,\radpd$ at periods of up to 3 days, while for longer periods, the shear is below the curve of maximum $\alpha=1$.
It should be noted here that values of inclination very different from $90\deg$ may allow for shear measurements greater than 1. However, the measured shear scales with $\sqrt\sini$ and the probability to find an object with $\sini$ very different from 1 is small.
There are three particular objects, HD17094, HD44892, and HD182640 that would be located at the upper envelope in Fig. \[fig:shear\_P\]. HD17094 and HD182640, however, are possibly affected by multiplicity while gravitational darkening might be important in the case of HD44892. Therefore, these objects are not considered.
In contrast to the F stars, the shear of the stars at the granulation boundary at the transition from spectral type A to F shows a clear period dependence – though based on a few data points only – and seems to line up roughly parallel to the curve $\alpha=1$. It cannot be excluded, however, that the line profiles of the hot stars with short rotational periods of less than a day are rather due to gravitational darkening, in parts at least, than due to differential rotation.
Comparison to theoretical predictions
-------------------------------------
Figures \[fig:alpha\_P\] and \[fig:shear\_P\] compare the measurements to models of who modeled differential rotation for an F8 and a G2 star and later for hotter F stars . They describe a strong dependence on rotational period. notice maxima in the period dependence of shear in the F8 and G2 models. The maximum of the F8 star is higher and at shorter rotational periods. They further point out that the increase of maximum horizontal shear with increasing effective temperature is much stronger than the variation due to rotational period.
The comparison to the predictions of is complicated since these cover longer periods of 4d and more. There is only a small overlap with the present work between 4 and 10 days. In this range, the measurements of late-F type stars scatter widely and are larger than the prediction for the F8 star. Only HD114642 agrees nicely with the prediction for an F8 star although the rotational period of this particular object is not much different from the other late-F differential rotators. None of the stars studied displays rotational shear below the shear predicted for an F8 star which is in agreement with the predictions as none of the differential rotators identified has spectral type G or later. A cool, slowly-rotating G-type star like the Sun (which is also highlighted in Figs. \[fig:alpha\_P\] and \[fig:shear\_P\]) is beyond the detection limit.
Although the observed $\alpha$ is larger than predicted at spectral type late-F, the total range of relative differential rotation $\alpha$ observed roughly agrees with the total range predicted by . A close inspection of Figs. \[fig:alpha\_P\] and \[fig:shear\_P\] shows that the apparently higher observed shear is not necessarily due to underestimated shear in the models; it might also be due to disagreements in period. The observed values of $\alpha$ occur at much shorter rotational periods than the predicted ones, which translates into higher values of shear. In principle, there is a large uncertainty in the observed periods because of unknown inclination. However, inclinations very different from $90\deg$ will result in even shorter periods, as indicated by the straight solid lines in Figs. \[fig:alpha\_P\] and \[fig:shear\_P\].
While a clear trend with rotational period is not discernible among the observations of the F stars, the upper envelope to $\alpha$ at least partly has a shape similar to that of the curves predicted by theory. It can be noticed from Fig. \[fig:alpha\_P\] that the rising parts of these curves roughly follow the lines of constant horizontal shear.
The situation looks different for hotter F stars. Among later-type stars, also modeled F stars up to $1.4\,\Msun$, corresponding to spectral type F5 on the main sequence. A total range in horizontal shear is predicted (not shown in the figures) which roughly agrees with the horizontal shear observed. Again, a strong variation will be involved due to the dependence on rotational period. The rotational periods assumed in the calculations are not available to us, so we cannot study the behaviour of relative differential rotation $\alpha$. The theoretical calculations agree with the observations in that horizontal shear of $1.0\,\radpd$ is not exceeded. So far, there are no predictions from models of more massive, rapidly rotating stars, which would however be very interesting.
There are basic principles, both physical and conceptual, which strongly constrain observable shear. According to , as early F-type stars have thin convective envelopes, strong horizontal shear of the order of $1\,\radpd$ can only be sustained at rotational periods of the order of 1 day. This means that rotation necessarily has to be fast in order to sustain strong shear. On the other hand, shear that strong cannot appear at slow rotation. Fig. \[fig:shear\_P\] shows that only at short periods does such strong shear correspond to relative differential rotation below $100\,\%$; the relative differential rotation of a slower rotator with similarly strong shear would exceed $100\,\%$.
Horizontal shear and effective temperature
------------------------------------------
![\[fig:shear\_teff\]The figure displays the absolute shear $\Delta\Omega$ vs. effective temperature for all measurements of differential rotation including previous measurements by other methods , compared to the location of the Sun ($\odot$). The dashed line represents the fit by . Circles identify results from line profile analysis as given in the caption to Fig. \[fig:alpha\_P\]. Values derived from the $\alphasini$ calibration instead of $\alphaimax$ or $\alphaimin$ are presented in order to be consistent with previously published versions of this figure. Error bars due to unknown inclination are omitted for clarity.](figs/shear_teff){width="\figwidth"}
Figure \[fig:shear\_teff\] updates fig. 5 of and displays the relation of horizontal shear and effective temperature. The measurements of the present work are added.
The trend detected by is generally followed by the results from line profile analysis, which fill the plot at spectral types F and earlier, although the hotter differential rotators clearly stand out and display stronger shear. At spectral type F8, corresponding to an effective temperature of $\approx6200\,$K, horizontal shear covers a wide range of $\approx0.1-1.0\,\radpd$, while the stars at the granulation boundary are above that and reach $\Delta\Omega\approx3\,\radpd$.
Implications for the frequency of differential rotators
-------------------------------------------------------
An interesting question is how far the attainable values of horizontal shear and the rotational periods are related to the frequency of differential rotation seen across the HR diagram. The detectable values of $\alpha$ play a crucial role. Assuming the same shear $\Delta\Omega$ at different rotational periods, $\alpha$ will be lower at shorter periods and no longer detectable below a certain period that depends on shear. Therefore, stars with less shear will be more easily detected at slow rotation while they might be undetected at fast rotation . As temperature and rotation are correlated, this might explain the general trend of less detections of differential rotation at higher effective temperature and faster rotation.
|  |  | early-F ($7050-6700\,$K) | mid-F ($6700-6350\,$K) |
|:---|:---|:---:|:---:|
| P$<1\,$d | total | 26 | 9 |
|  | diff. rot. | 0 | 2 |
| P$>1\,$d | total | 48 | 55 |
|  | diff. rot. | 1 | 15 |
Together with the upper bound to the horizontal shear of F stars, this might also explain the lack of differential rotators among early-F type stars. Adopting the upper bound to the horizontal shear of F stars of $\Delta\Omega\approx1\,\radpd$ in Fig. \[fig:shear\_P\] and a detection limit of $\alpha=0.05$, differential rotation can only be detected in stars rotating slower than $\Omega=\frac{\Delta\Omega}{\alpha}\approx\frac{1\,\radpd}{0.05}=20\,\radpd$, i.e. at periods longer than $P=\frac{2\pi}{\Omega}\approx0.31$ days. This means that no differentially rotating F star is expected at periods much less than a day. This limit cannot be probed by the measurements since the shortest detected period among the stars with measured rotational profile is $\approx0.4\,$d. It is instructive to look more closely at stars with $6700<\Teff\leq7050\,$K, corresponding to early-F spectral types, and compare them to stars with $6350<\Teff\leq6700\,$K in the mid-F range. Furthermore, we distinguish in both groups stars with rotational periods shorter than a day and stars with longer periods (Table \[tab:shear\]). First of all, the numbers in the table agree with the finding that hotter stars tend to rotate faster: the fraction of early-F type stars with rotational periods less than 1 d is larger than the fraction of mid-F type stars at the same periods.
Indeed there is no single rapid differential rotator with a rotational period of less than a day among the early-F stars. However, there are two differential rotators among the mid-F stars, approximately one fourth. Similarly, there are 15 differentially rotating mid-F stars at periods larger than 1 day, again one fourth, but there is only one early-F star. While this agrees with the previous finding that the frequency of differential rotators decreases towards hotter stars, it is somewhat unexpected that the fractions are the same for both period bins. This indicates that the rotational periods and the implied detection limit of $\alpha$ do not fully explain the gap of differential rotation among early-F stars.
In a certain respect, the lack of differential rotation among early-F stars contrasts with the results from modeling. According to models by , horizontal shear will be higher at hotter effective temperatures, which would facilitate the detection of differential rotation. Therefore, a substantial fraction of differential rotators must be expected for early-F type stars even though they rotate faster than mid-F stars and even though, consequently, the detection limit on shear is higher. This is not the case, however. A possible explanation might be found in the details of the period dependence, and in particular the exact rotational periods at the predicted maximum of horizontal shear. For example, there will be fewer detections of differential rotation among early-F stars if the corresponding maximum is below $1\,\radpd$ and located at periods shorter than 1 day, and if at the same time the maximum for mid-F stars is at longer periods. More modeling of early-F stars at very short periods is clearly needed.
![\[fig:shear\_teff\_lim\] The figure enlarges the hot portion of Fig. \[fig:shear\_teff\] and displays the absolute shear $\Delta\Omega$ vs. effective temperature for measurements by line profile analysis. Symbols identify measurements of differential rotation from line profile analysis as given in the caption to Fig. \[fig:alpha\_P\], but here assuming an inclination of 90$\deg$. In addition, the ranges of shear accessible to observation are indicated by short horizontal lines. The cases of $\alpha=0.05$ and $\alpha=1$ are distinguished by black and grey, respectively. These ranges are determined by rotational period alone, in the way given in Fig. \[fig:shear\_P\]. Therefore, the detectable ranges can be assessed not only for those objects which display differential rotation but for all objects with the rotational profile measured, including the stars that are considered rigid rotators. Among these limits indicated by short horizontal lines, stars with periods longer than a day are distinguished from stars with shorter periods by dotted lines. The two solid (black and grey) flexed lines represent envelopes to the bulk of the limits. These are based on the maximum $\vsini$ observed, which is most probably due to $\sini{\lesssim}1$. Error bars are omitted for clarity. The dotted line denotes the upper bound on the shear of F stars identified in Fig. \[fig:shear\_P\]. The dashed line reproduces the fit of .](figs/sheari_teff_lim){width="\figwidth"}
Figure \[fig:shear\_teff\_lim\] enlarges the hot portion of Fig. \[fig:shear\_teff\] and shows the temperature effect more clearly. The range of shear accessible to line profile analysis is shown for all stars with rotational profiles measured. The lower limits to measurable shear are derived from the rotational period determined for each star assuming $\alpha=0.05$. The upper limits are inferred the same way assuming $\alpha=1.0$. Therefore, these two limits enclose the whole range of measurable shear for each star. In accordance with Fig. \[fig:shear\_P\], the highest values of shear are accessed at very short rotational periods $P<1\,$d.
Error bars due to inclination are omitted. Lower values of inclination will essentially increase the shear measurements but also raise the limits and the upper bound $\Delta\Omega=1\,\radpd$ identified in Fig. \[fig:shear\_P\].
The distribution of the limits with respect to effective temperature actually reflects the correlation with $\vsini$, as periods were calculated from $\vsini$ and photometric radii. The detection limits generally decline towards cooler effective temperatures since rotational periods increase and thus smaller amounts of shear are detectable. At the same time, extremely large values, like those of the transition objects at the granulation boundary, can no longer be detected.
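The chain above, from $\vsini$ and photometric radius to a projected period and from the period to the detectable range of shear, can be sketched numerically. The relations used here are the standard definitions assumed for illustration ($P=2\pi R/\vsini$ for $\sini=1$ and $\Delta\Omega=\alpha\,2\pi/P$), and the example star is hypothetical, not an entry from the tables of this work:

```python
import numpy as np

# Illustrative computation of the detection limits on shear.
# Assumed standard relations (not values taken from this work):
#   P = 2*pi*R / vsini              (projected rotational period, sin i = 1)
#   Omega = 2*pi / P  [rad/d],      Delta_Omega = alpha * Omega
R_SUN_KM = 6.957e5
DAY_S = 86400.0

def projected_period_days(vsini_kms, radius_rsun):
    """Rotational period in days implied by v sin i and radius, for sin i = 1."""
    circumference_km = 2.0 * np.pi * radius_rsun * R_SUN_KM
    return circumference_km / vsini_kms / DAY_S

def shear_limits(period_days, alpha_min=0.05, alpha_max=1.0):
    """Detectable range of horizontal shear Delta_Omega in rad/d."""
    omega = 2.0 * np.pi / period_days
    return alpha_min * omega, alpha_max * omega

# Example: a rapidly rotating early-F star with R = 1.4 R_sun, vsini = 100 km/s
P = projected_period_days(vsini_kms=100.0, radius_rsun=1.4)
lo, hi = shear_limits(P)   # P < 1 d, so only shear above ~0.44 rad/d is detectable
```

For such a star the projected period is about 0.7 d, so the lower detection limit already sits near $0.44\,\radpd$, consistent with the narrowing of the detectable range towards hotter, faster rotators.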
A major conclusion from Fig. \[fig:shear\_teff\_lim\] is that the fit of in Fig. \[fig:shear\_teff\] cannot be tested with our data. The detectable range of shear is constrained by selection effects. The lower bound results from the detection limit on $\alpha(=0.05)$ while the upper bound is given by $\alpha=1$. The correlation of faster rotation with higher $\Teff$ explains at least in parts the trend of increasing shear with hotter effective temperature.
The case of high inclination deserves closer inspection since Fig. \[fig:shear\_teff\_lim\] shows values of shear assuming an inclination of $90\deg$ to facilitate comparison with Fig. \[fig:shear\_P\]. It is a reasonable assumption that the stars with the highest $\vsini$ measured will be those with $\sini{\lesssim}1$. Consistently, these stars will have the shortest rotational periods so that the upper bound to $\vsini$ corresponds to a lower bound to periods. Shear is derived from periods and from $\alpha$ measured by line profile analysis such that a lower bound to periods corresponds to an upper envelope to the limits on shear. The upper bounds to the bulk of the limits are indicated by the black and grey flexed lines in Fig. \[fig:shear\_teff\_lim\].
These upper bounds consist of a rising part at temperatures of mid-F spectral types and a flat part at hotter temperatures. The bound to the lower limit of detectable shear (black solid line) and the upper bound to the shear of F stars of $1\,\radpd$ identified in Sect. \[sect:shear\_period\] (dotted line) define a detectable range of shear which becomes very narrow towards hotter temperatures ($\approx0.5-1\,\radpd$ at $\Teff\approx6600\,$K). Therefore, differentially rotating F stars with horizontal shear below $0.5\,\radpd$ will not be detected if their projected rotational period is shorter than a day. However, F stars with shear higher than $\approx0.5\,\radpd$ should still be detectable even at rotational periods that short. Furthermore, the number of lower and upper limits in Fig. \[fig:shear\_teff\_lim\] at effective temperatures above $6600\,$K is large so that there are many suitable objects at early-F spectral types. However, all these seem to be rigid rotators or, at least, display undetectable rates of differential rotation. Therefore, the detection limits to differential rotation do not fully explain the lack of differential rotators at early-F spectral types.
In summary, three issues have been identified which play an important role in explaining the lack of differential rotators among early-F type stars: an apparent upper bound to horizontal shear of F-type stars, the detection limit on shear which depends on rotational period, and the correlation of rotational periods and effective temperature. However, these cannot fully explain the lack of differential rotators among the early-F type stars. A direct dependence on temperature is apparent. We speculate that the physical conditions and/or mechanisms of differential rotation change at early-F spectral types. This change might also be related to the detection of extremely large horizontal shear at the transition to A spectral types.
Summary and outlook
===================
In the present work, stellar rotation of some 180 stars has been studied through the analysis of the Fourier transform of the overall spectral line broadening profile, extending previous work of Reiners 2003-2006. In total, the rotation of 300 stars has been studied by line profile analysis. Among these, 33 differential rotators have been identified.
In order to study frequency and strength of differential rotation throughout the HR diagram, particular emphasis was given to the compilation of astrometric and photometric data which has been drawn homogeneously from the Hipparcos and Tycho-2 catalogues. Complications introduced by multiplicity were discussed in detail and affected measurements flagged. Multiplicity might explain exceptional amounts of differential rotation, e.g. in the case of HD64185A. The flagged objects were not included in the discussion. It will be worthwhile to further study these objects since they are located in interesting regions of the HR diagram.
The results were found to be in good agreement with previous observations for stars in common. Substantial deviations can be explained by instrumental limitations or peculiar line profiles due to binarity or spots. Otherwise discrepancies exceed 5$\kmps$ in $\vsini$ and 0.10 in $\qfrac$ only when $\vsini$ is below the limits for line profile analysis (e.g. $12\,\kmps$ for CES and $45\,\kmps$ for FEROS). A compilation of all stars with new or previous rotation measurements by line profile analysis was presented.
The present work detected several rigid rotators among A stars. Some rapidly rotating A stars show line profiles probably affected by gravitational darkening. As of now, no differential rotators are known with effective temperatures higher than $7400\,$K.
The frequency of detections of differential rotators generally decreases with increasing effective temperature and increasing projected rotational velocity. Effective temperature and rotational velocity are correlated because of the dependence of magnetic braking on spectral type. This prevents us from identifying which quantity, temperature or rotation, is responsible.
On the other hand, the fraction of differential rotators does not vary notably with surface gravity. In the present work, surface gravity was estimated from photometry to be used as an indicator of the luminosity class. The study of the rotational behavior of evolved stars will certainly benefit from a more accurate assessment of surface gravity and allow for a comparison with theoretical predictions . The analysis of giants will be particularly promising since at the same $\vsini$ longer rotation periods can be studied. Of course, the challenge is to find giants with sufficiently large $\vsini$ to be accessible to line profile analysis.
The lack of differential rotators around $T_\mathrm{eff}=6750\,$K (spectral type F2-F5) detected in previous work was consolidated. At the cool side of this gap, there is a large population of differential rotators. The differential rotator with the highest differential rotation parameter measured so far by line profile analysis (HD104731, $\alphasini=54$%) is right at the cool edge of this gap. This object and also other strong differential rotators in this region exhibit moderate rotational velocity. At the hot side of the gap at the location of the granulation boundary, all differential rotators are rapid rotators ($\vsini\gtrsim100\,\kmps$) and show strongest horizontal shear ($\Delta\Omega\gtrsim1\,\radpd$).
Theoretical calculations are at hand for F-stars and were compared to the observations. While the observed range of horizontal shear in mid-F type stars agrees with the predicted range, the observed values in late-F type stars were found to be much larger than predicted. This was explained rather by a discrepancy in period dependence than in absolute shear strength. Further observations with other techniques are needed to measure differential rotation at periods of 20d and more. Strong shear seems possible for very rapid rotators only. Models of more massive and rapidly rotating stars are clearly needed.
Even more, it was found that the observations of F stars do not follow a clear period dependence. Still, the upper envelope which traces the upper bound to horizontal shear and the curve of relative differential rotation $\alpha=1$ exhibit a shape similar to the predicted period dependence of an F8 and a G2 star.
Regarding the comparison with theoretical predictions, it is worth discussing the role of inclination. In the present work, the uncertainties of inclination were properly accounted for by deriving relative differential rotation and horizontal shear assuming different, extreme values of inclination. A remarkable fact is that a better assessment of inclination will not allow for a better agreement with the predicted period dependence. Instead, limb darkening and the assumed surface rotation law might be more critical sources of uncertainties in this respect. This does not mean that there is no need to better assess inclination, as it causes a large part of the uncertainty about measurements of differential rotation.
The horizontal shear of F stars remains below a plateau of $\Delta\Omega\approx1\,\radpd$. This might explain the low frequency of differential rotation among hotter and rapidly rotating F stars considering the detection limit on relative differential rotation $\alpha$, but cannot fully account for the gap of differential rotation at early-F spectral types. We assume that conditions change in early-F type stars so that differential rotation is inhibited, while at somewhat hotter temperatures, at the transition to spectral type A, differential rotation reappears with extreme rates of differential rotation.
The shear of rapidly rotating A stars was found to be higher than $1\,\radpd$ and possibly increases with decreasing rotational period. So far, there are no predictions from models which could explain the shear of A-type stars. Although set out to understand the strong shear detected at the granulation boundary by , the stellar models discussed do not go beyond 1.4$\Msun$, i.e. mid-F type stars.
The strength of differential rotation generally follows the increasing trend with effective temperature identified by , but this trend can actually not be tested with our data: the spread is large, and the detection limits on shear follow the same trend with temperature since $\vsini$ and $\Teff$ are correlated.
As the surface rotation law is identified as a possible source of uncertainty, it is important to emphasize that the present work is based on the assumption of a very simple surface rotation law (Eq. \[eq:rotlaw\]). This assumption is reasonable since it is motivated by the solar case and has been consistently used by similar studies. However, @2008JPhCS.118a2029K point out that in the case of the Sun there are other measurements favouring a rotation law dominated by a ${\cos}^4$ term of co-latitude instead of the ${\cos}^2$ term [@1984SoPh...94...13S]. The adoption of such a rotation law will introduce a second parameter of differential rotation and might change the shear measurements from line profile analysis.
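The two candidate surface rotation laws discussed above can be written side by side. The sketch below assumes the simple form $\Omega(\theta)=\Omega_\mathrm{eq}(1-\alpha\cos^2\theta)$ for Eq. \[eq:rotlaw\] in co-latitude $\theta$, and a single-parameter $\cos^4$ alternative; both the functional forms and the parameter values are illustrative assumptions, not the exact laws or fits of the cited measurements:

```python
import numpy as np

# Illustrative comparison of the two surface rotation laws, written in
# co-latitude theta (theta = 0 at the pole, pi/2 at the equator).
def omega_cos2(theta, omega_eq, alpha):
    """Simple solar-like law of Eq. (rotlaw): Omega_eq * (1 - alpha * cos^2 theta)."""
    return omega_eq * (1.0 - alpha * np.cos(theta) ** 2)

def omega_cos4(theta, omega_eq, alpha4):
    """Alternative law dominated by a cos^4 term of co-latitude."""
    return omega_eq * (1.0 - alpha4 * np.cos(theta) ** 4)

theta = np.linspace(0.0, np.pi / 2, 91)      # pole to equator
d2 = omega_cos2(theta, omega_eq=2.0, alpha=0.2)
d4 = omega_cos4(theta, omega_eq=2.0, alpha4=0.2)
```

For equal parameter values the two laws agree at the pole and the equator but differ at intermediate co-latitudes, which is why adopting the $\cos^4$ law would change the shear inferred from line profile analysis.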
We would like to thank our referee, Dr. Pascal Petit, for a very clear and constructive report. M.A. thanks Dr. Theo Pribulla for discussion which helped to improve the analysis of the A stars among the sample. M.A. and A.R. acknowledge research funding granted by the [*Deutsche Forschungsgemeinschaft*]{} (DFG) under the project RE 1664/4-1. M.A. further acknowledges support by DLR under the projects 50OO1007 and 50OW0204. This research has made use of the [*extract stellar*]{} request type of the Vienna Atomic Line Database (VALD), the SIMBAD database, operated at CDS, Strasbourg, France, and NASA’s Astrophysics Data System Bibliographic Services.
[^1]: Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID’s 074.D-0008 and 075.D-0340.
[^2]: Tables \[tab:results\] and \[tab:comp\_diffrot\] are available online only.
[^3]: The parameter $\alpha$ of the dynamo must not be confused with the parameter $\alpha$ of differential rotation!
---
abstract: 'The present knowledge of the structure of the photon is presented with emphasis on measurements of the photon structure obtained from deep inelastic electron-photon scattering at colliders. This review covers the leptonic and hadronic structure of quasi-real and also of highly virtual photons, based on measurements of structure functions and differential cross-sections. Future prospects of the investigation of the photon structure in view of the ongoing LEP2 programme and of a possible linear collider are addressed. The most relevant results in the context of measurements of the photon structure from photon-photon scattering at LEP and from photon-proton and electron-proton scattering at HERA are summarised.'
address: 'CERN, CH-1211 Genève 23, Switzerland'
author:
- Richard Nisius
title: 'The Photon Structure from Deep Inelastic Electron-Photon Scattering'
---
---
abstract: 'We present a probabilistic deep learning methodology that enables the construction of predictive data-driven surrogates for stochastic systems. Leveraging recent advances in variational inference with implicit distributions, we put forth a statistical inference framework that enables the end-to-end training of surrogate models on paired input-output observations that may be stochastic in nature, originate from different information sources of variable fidelity, or be corrupted by complex noise processes. The resulting surrogates can accommodate high-dimensional inputs and outputs and are able to return predictions with quantified uncertainty. The effectiveness of our approach is demonstrated through a series of canonical studies, including the regression of noisy data, multi-fidelity modeling of stochastic processes, and uncertainty propagation in high-dimensional dynamical systems.'
address: |
Department of Mechanical Engineering and Applied Mechanics,\
University of Pennsylvania,\
Philadelphia, PA 19104, USA
author:
- Yibo Yang
- Paris Perdikaris
bibliography:
- 'ref.bib'
title: 'Conditional deep surrogate models for stochastic, high-dimensional, and multi-fidelity systems'
---
Probabilistic deep learning ,Generative adversarial networks ,Variational inference ,Multi-fidelity modeling ,Data-driven surrogates
Introduction {#sec:Introduction}
============
The analysis of complex systems can often be significantly expedited through the use of surrogate models that aim to minimize the need of repeatedly evaluating the true data-generating process, let that be a costly experimental assay or a large-scale computational model. The task of building surrogate models in essence defines a [*supervised learning*]{} problem in which one aims to distill the governing relation between system inputs and outputs. A successful completion of this task yields a simple and cheap mechanism for predicting the system’s response for a new, previously unobserved input, which can be subsequently used to accelerate downstream tasks such as optimization loops, uncertainty quantification, and sensitivity analysis studies.
Fueled by recent developments in data analytics and machine learning, data-driven approaches to building surrogate models have been gaining great popularity among diverse scientific disciplines. We now have a collection of techniques that have enabled progress across a wide spectrum of applications, including design optimization [@forrester2007multi; @robinson2008surrogate; @alexandrov2001approximation], the design of materials [@sun2010two; @sun2011multi] and supply chains [@celik2010dddas], model calibration [@perdikaris2016model; @perdikaris2015data; @perdikaris2015calibration], and uncertainty quantification [@eldred2009comparison; @ng2012multifidelity; @padron2014multi; @biehler2015towards; @peherstorfer2016optimal; @peherstorfer2016multifidelity; @peherstorfer2016survey; @narayan2014stochastic; @zhu2014computational; @bilionis2013multi; @parussini2017multi; @perdikaris2016multifidelity]. Such approaches are built on the premise of treating the true data-generating process as a black-box, and try to construct parametric surrogates of some form $\bm{y}=f_{\theta}(\bm{x})$ directly from observed input-output pairs $\{\bm{x},\bm{y}\}$. Except perhaps for Gaussian process regression models [@rasmussen2004gaussian] that rely on a probabilistic formulation for quantifying predictive uncertainty, most existing approaches use theoretical error bounds to assess the accuracy of the surrogate model predictions and are formulated based on rather limiting assumptions on the form $f$ (e.g., linear or smooth nonlinear). Despite the growing popularity of data-driven surrogate models, a key challenge that still remains open pertains to cases where the entries in $\bm{x}$ and $\bm{y}$ are high-dimensional objects with multiple modalities: vectors with hundreds/thousands of entries, images with thousands/millions of pixels, graphs with thousands of nodes, or even continuous functions and vector fields. 
Even less well understood is how to build surrogate models for stochastic systems, and how to retain predictive robustness in cases where the observed data is corrupted by complex noise processes.
In this work we aim to formulate, implement, and study novel probabilistic surrogate models in the context of probabilistic data fusion and multi-fidelity modeling of stochastic systems. Unlike existing approaches to surrogate and multi-fidelity modeling, the proposed methods scale well with respect to the dimension of the input and output data, as well as the total number of training data points. The resulting generative models provide enhanced capabilities in learning arbitrarily complex conditional distributions and cross-correlations between different data sources, and can accommodate data that may be corrupted by correlated and non-Gaussian noise processes. To achieve these goals, we put forth a regularized adversarial inference framework that goes beyond Gaussian and mean field approximations, and has the capacity to seamlessly model complex statistical and functional dependencies in the data, remain robust with respect to non-Gaussian measurement noise, discover nonlinear low-dimensional embeddings through the use of latent variables, and is applicable across a wide range of supervised tasks.
This paper is structured as follows. In section \[sec:VI\] we provide a comprehensive description of conditional generative models and recent advances in variational inference that have enabled their scalable training. In section \[sec:VAE\], we review recent findings that pinpoint the limitations of mean-field variational inference approaches and motivate the use of implicit parametrizations and adversarial learning schemes. Sections \[sec:density\_ratio\]-\[sec:predictions\] provide a comprehensive discussion on how such schemes can be trained on paired input-output observations $\{\bm{x},\bm{y}\}$ to yield effective approximations of the conditional density $p(\bm{y}|\bm{x})$. In section \[sec:results\] we test the effectiveness of our approach on a series of canonical studies, including the regression of noisy data, multi-fidelity modeling of stochastic processes, and uncertainty propagation in high-dimensional dynamical systems. Finally, section \[sec:discussion\] summarizes our concluding remarks, while in \[sec:appendix\] we provide a comprehensive collection of systematic studies that aim to elucidate the robustness of the proposed algorithms with respect to different parameter settings. All data and code accompanying this manuscript will be made available at <https://github.com/PredictiveIntelligenceLab/CADGMs>.
Methods {#sec:Methods}
=======
The focal point of this work is formulating, learning, and exploiting general probabilistic models of the form $p(\bm{y}|\bm{x})$. On one hand, conditional probability models $p(\bm{y}|\bm{x})$ aim to capture the statistical dependence between realizations of deterministic or stochastic input/output pairs $\{\bm{x},\bm{y}\}$, and encapsulate a broad class of problems generally referred to as [*supervised learning*]{} problems. Take for example the setting in which we would like to characterize the properties of a material using molecular dynamics simulations. There, $\bm{x}\in\mathbb{R}^{3N}$ corresponds to a thermodynamically valid configuration of all the $N$ particles in the system, $p(\bm{x})$ is the Boltzmann distribution, and $\bm{y}\in\mathbb{R}^{M}$ is a collection of $M$ correlated quantities of interest that characterize the macroscopic behavior of the system (e.g., Young’s modulus, ion mobility, optical spectrum properties, etc.). Given some realizations $\{\bm{x}_s,\bm{y}_s\}$, $s=1,\dots,S$, our goal is to [*learn*]{} a conditional probability model $p(\bm{y}|\bm{x})$ that not only allows us to accurately predict $\bm{y}^{\ast}$ for a new $\bm{x}^{\ast}$ (e.g., by estimating the expectation $\mathbb{E}_{p}(\bm{y}^{\ast}|\bm{x},\bm{y}, \bm{x}^{\ast})$), but, more importantly, characterizes the complete statistical dependence of $\bm{y}$ on $\bm{x}$, thus allowing us to quantify the uncertainty associated with our predictions. As $N$ is typically very large, this defines a challenging high-dimensional regression problem. Coming to our rescue is our ability to extract meaningful and robust representations of the original data that exploit its structure through the use of latent variables.
Variational inference for conditional deep generative models {#sec:VI}
------------------------------------------------------------
The introduction of latent variables allows us to express $p(\bm{y}|\bm{x})$ as an infinite mixture model,
$$\label{eq:PSC}
p(\bm{y}|\bm{x}) = \int p(\bm{y},\bm{z}|\bm{x}) d\bm{z} = \int p(\bm{y}|\bm{x},\bm{z})p(\bm{z}|\bm{x}) d\bm{z},$$
where $p(\bm{z}|\bm{x})$ is a prior distribution on the latent variables. Essentially, this construction postulates that every output $\bm{y}$ in the observed physical space is generated by a transformation of the inputs $\bm{x}$ and a set of latent variables $\bm{z}$, i.e. $y = f_{\theta}(x,\bm{z})$, where $f_{\theta}$ is a parametrized nonlinear transformation (see figure \[fig:CDGM\]). This construction generalizes the classical observation model used in regression, namely $y = f_{\theta}(x) + \epsilon$, which can be viewed as a simplified case corresponding to an additive noise model.
Equation \[eq:PSC\] resembles a mixture model as for every possible value of $\bm{z}$, we add another conditional distribution to $p(\bm{y}|\bm{x})$, weighted by its probability. Now, it is interesting to ask what the latent variables $\bm{z}$ are, given an input/output pair $\{\bm{x},\bm{y}\}$. Namely, from a Bayesian standpoint, we would like to know the posterior distribution $p(\bm{z}|\bm{x},\bm{y})$. However, in general, the relationship between $\bm{z}$ and $\{\bm{x},\bm{y}\}$ can be highly non-linear and both the dimensionality of our observations $\{\bm{x},\bm{y}\}$, and the dimensionality of the latent variables $\bm{z}$, can be quite large. Since both marginal and posterior probability distributions require evaluation of the integral in equation \[eq:PSC\], they are intractable.
![[*Building probabilistic surrogates using conditional generative models:*]{} We assume that each observed data pair in the physical space $(x,y)$ is generated by a deterministic nonlinear transformation of the inputs $x$ and a set of latent variables $\bm{z}$, i.e. $y = f_{\theta}(x,\bm{z})$. This construction generalizes the classical observation model used in regression, namely $y = f_{\theta}(x) + \epsilon$, which can be viewed as a simplified case corresponding to an additive noise model.[]{data-label="fig:CDGM"}](conditional_dgm.pdf){width="\textwidth"}
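As a concrete illustration of the construction in figure \[fig:CDGM\], the following sketch pushes prior draws of $\bm{z}$ through a small network to sample from $p(\bm{y}|\bm{x})$ at a fixed input. The two-layer architecture and its random weights are placeholders for illustration, not the networks trained in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of the latent-variable construction y = f_theta(x, z): each
# sample of y is a deterministic transform of the input x and a prior draw
# z ~ N(0, 1). The two-layer network and its random weights are placeholders.
W1 = rng.normal(size=(16, 2))
b1 = np.zeros(16)
W2 = rng.normal(size=(1, 16))
b2 = np.zeros(1)

def f_theta(x, z):
    h = np.tanh(W1 @ np.array([x, z]) + b1)
    return (W2 @ h + b2).item()

# Sampling the conditional p(y|x) at a fixed x amounts to pushing prior draws
# of z through the network -- the infinite mixture of Eq. (PSC).
x_star = 0.5
samples = np.array([f_theta(x_star, rng.normal()) for _ in range(2000)])
```

Even at a single input $x^{\ast}$ the outputs are scattered, since every latent draw selects a different member of the mixture; histogramming `samples` approximates $p(y|x^{\ast})$.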
The seminal work of Kingma and Welling [@kingma2013auto] introduced an effective framework for approximating the true underlying conditional $p(\bm{y}|\bm{x})$ with a parametrized distribution $p_{\theta}(\bm{y}|\bm{x})$ that depends on a set of parameters $\theta$. Specifically, they introduced a parametrized approximating distribution $q_{\phi}(\bm{z}|\bm{x},\bm{y})$ to approximate the true intractable posterior $p(\bm{z}|\bm{x},\bm{y})$, and derived a computable variational objective for estimating the model parameters $\{\theta, \phi\}$ using stochastic optimization [@kingma2013auto]. This objective, often referred to as the evidence lower bound (ELBO), provides a tractable lower bound to the marginal likelihood of the model, and takes the form [@sohn2015learning]
$$\label{eq:ELBO}
-\log p_{\theta}(\bm{y}|\bm{x}) \le \mathbb{KL}\left[q_{\phi}(\bm{z}|\bm{x},\bm{y})||p(\bm{z}|\bm{x})\right] -
\mathbb{E}_{\bm{z}\sim q_{\phi}(\bm{z}|\bm{x},\bm{y})} \left[\log p_{\theta}(\bm{y}|\bm{x},\bm{z})\right],$$
where $\mathbb{KL}\left[q_{\phi}(\bm{z}|\bm{x},\bm{y})||p(\bm{z}|\bm{x})\right]$ denotes the Kullback-Leibler divergence between the approximate posterior $q_{\phi}(\bm{z}|\bm{x},\bm{y})$ and the prior over the latent variables $p(\bm{z}|\bm{x})$ [@kingma2013auto; @sohn2015learning]. Due to the resemblance of this approach to neural network auto-encoders [@vincent2008extracting; @vincent2010stacked], the model proposed by Kingma and Welling has been coined as the variational auto-encoder, and the resulting approximate distributions $q_{\phi}(\bm{z}|\bm{x},\bm{y})$ and $p_{\theta}(\bm{y}|\bm{x},\bm{z})$ are usually referred to as the [*encoder*]{} and [*decoder*]{} distributions, respectively.
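For the diagonal-Gaussian parametrization described above, both terms of the bound in equation \[eq:ELBO\] are easy to evaluate: the Kullback-Leibler term has a closed form against a standard-normal prior, and the expected log-likelihood can be estimated with a single reparametrized sample. The sketch below assumes one-dimensional variables, a prior $p(\bm{z}|\bm{x})=\mathcal{N}(0,I)$, and a unit-variance Gaussian decoder; these are illustrative simplifications rather than the setup of the paper:

```python
import numpy as np

# Single-sample Monte Carlo estimate of the negative ELBO in Eq. (ELBO),
# assuming a diagonal-Gaussian encoder q_phi(z|x,y) = N(mu, sigma^2), a
# standard-normal prior p(z|x) = N(0, I), and a unit-variance Gaussian decoder
# p_theta(y|x,z) = N(g(x,z), I). Shapes and the decoder mean g are placeholders.
def neg_elbo(y, mu, log_sigma2, g, x, rng):
    sigma2 = np.exp(log_sigma2)
    # KL[N(mu, sigma^2) || N(0, 1)] in closed form, summed over latent dims
    kl = 0.5 * np.sum(sigma2 + mu ** 2 - 1.0 - log_sigma2)
    # Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I)
    z = mu + np.sqrt(sigma2) * rng.normal(size=mu.shape)
    # log N(y | g(x, z), I), including the -0.5*log(2*pi) constant per dim
    log_lik = -0.5 * np.sum((y - g(x, z)) ** 2) - 0.5 * y.size * np.log(2.0 * np.pi)
    return kl - log_lik

rng = np.random.default_rng(1)
g = lambda x, z: x + z                      # toy decoder mean
val = neg_elbo(y=np.array([0.3]), mu=np.zeros(1), log_sigma2=np.zeros(1),
               g=g, x=np.array([0.2]), rng=rng)
```

In practice this quantity is averaged over mini-batches and minimized with stochastic gradients with respect to the encoder and decoder parameters.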
In a short period of time, this line of work has sparked great interest, and has led to remarkable results in very diverse applications – ranging from the design optimization of light emitting diodes [@gomez2016design], to the design of new molecules [@gomez2018automatic], to the calibration of cosmological surveys [@ravanbakhsh2017enabling], to RNA sequencing [@lopez2017deep], to analyzing cancer gene expressions [@way2017extracting] – all involving the approximation of very high-dimensional probability densities. It has also led to many fundamental studies that aim to further elucidate the capabilities and limitations of such models [@bousquet2017optimal; @pu2017symmetric; @rosca2018distribution; @zheng2018degeneration; @kingma2016improved; @rezende2015variational], enhance the interpretability of their results [@higgins2016beta; @zhao2017infovae; @chen2018isolating], as well as establish formal connections with well studied topics in mathematics and statistics, including importance sampling [@burda2015importance; @klys2018joint] and optimal transport [@genevay2017gan; @villani2008optimal; @el2012bayesian].
In the original work of Kingma and Welling [@kingma2013auto] the encoder and decoder distributions, $q_{\phi}(\bm{z}|\bm{x},\bm{y})$ and $p_{\theta}(\bm{y}|\bm{x},\bm{z})$, respectively, were both assumed to be Gaussian with a mean and a diagonal covariance that were parametrized using feed-forward neural networks. Although this facilitates a straightforward evaluation of the lower bound in equation \[eq:ELBO\], it can result in a poor approximation of the true posterior $p(\bm{z}|\bm{x}, \bm{y})$ when the latter is non-Gaussian and/or multi-modal, as well as a poor reconstruction of the observed data [@rezende2015variational]. To this end, several methods have been proposed to overcome these limitations, including more expressive likelihood models [@van2016conditional], more flexible variational approximations [@rezende2015variational; @kingma2016improved; @burda2015importance], as well as reformulations that aim to make the variational bound of equation \[eq:ELBO\] more tight [@liu2016stein; @mescheder2017adversarial; @makhzani2015adversarial; @tolstikhin2017wasserstein; @pu2017symmetric]. Overall, we must underline that such variational inference techniques are trading the rigorous asymptotic convergence guarantees that sampling-based methods like Markov Chain Monte Carlo enjoy, in favor of enhanced computational efficiency and performance, although new unifying ideas are aiming to bridge the gap between the two formulations [@titsias2017learning; @blei2017variational]. This trade-off becomes critical in tackling realistic large-scale problems, but it mandates careful validation of these tools to systematically assess their performance.
In the next section we will revisit recent ideas in adversarial learning that enable us to overcome the limitations of classical mean field approximations [@wainwright2008graphical; @blei2017variational], and allow us to perform variational inference with arbitrarily flexible approximating distributions. These developments are unifying two of the most pioneering contributions in modern machine learning, namely variational auto-encoders and generative adversarial networks [@mescheder2017adversarial; @pu2017symmetric; @goodfellow2014generative]. Then, we will show how these techniques can be adapted to form the foundations of the proposed work, namely probabilistic data fusion and multi-fidelity modeling, and demonstrate how these tools can be used to accelerate the computational modeling of complex systems.
Adversarial learning with implicit distributions {#sec:VAE}
------------------------------------------------
The recent works of Pu [*et al.*]{} [@pu2017symmetric] and Rosca [*et al.*]{} [@rosca2018distribution] revealed that some drawbacks of the original formulation of Kingma and Welling [@kingma2013auto] can be attributed to the form of the variational objective in equation \[eq:ELBO\]. Specifically, they showed that minimizing $\mathbb{KL}\left[q_{\phi}(\bm{z}|\bm{x})||p(\bm{z})\right]$ minimizes an upper bound on $\mathbb{KL}\left[q_{\phi}(\bm{z})||p(\bm{z})\right]$, where $q_{\phi}(\bm{z})=\int q_{\phi}(\bm{z}|\bm{x}) q(\bm{x})d\bm{x}$ is the marginal posterior over the latent variables $\bm{z}$, and $q(\bm{x})$ is the distribution of the observed data. By bringing $q_{\phi}(\bm{z})$ closer to $p(\bm{z})$, the model distribution $p_{\theta}(\bm{x}) = \int
p_{\theta}(\bm{x}|\bm{z})p(\bm{z})d\bm{z}$ is brought closer to the marginal reconstruction distribution $\int p_{\theta}(\bm{x}|\bm{z})q_{\phi}(\bm{z})d\bm{z}$. Variational inference models learn to sample by maximizing reconstruction quality – via the likelihood term $\mathbb{E}_{\bm{z}\sim q_{\phi}(\bm{z}|\bm{x})} \left[\log p_{\theta}(\bm{x}|\bm{z})\right]$ – and reducing the gap between samples and reconstructions – via the $\mathbb{KL}$ term in equation \[eq:ELBO\]. Failure to match $q_{\phi}(\bm{z})$ and $p(\bm{z})$ results in regions in latent space that have high mass under $p(\bm{z})$ but not under $q_{\phi}(\bm{z})$. This means that prior samples $\bm{z} \sim p(\bm{z})$ passed through the decoder to obtain a model sample, are likely to be far in latent space from inputs the decoder saw during training. It is this distribution mismatch that results in poor generalization performance from the decoder, and hence bad model samples.
Additional findings [@li2018learning] suggest that these shortcomings can be overcome by introducing a new variational objective that aims to match the joint distribution of the generative model $p_{\theta}(\bm{x},\bm{y})$ with the joint empirical distribution of the observed data $q(\bm{x},\bm{y})$. Matching the joint implies that the respective marginal and conditional distributions are also encouraged to match. Here, we argue that matching the joint distribution of the generated data $p_{\theta}(\bm{x},\bm{y})$ with the joint distribution of the observed data $q(\bm{x},\bm{y})$ by minimizing the reverse Kullback-Leibler divergence $\mathbb{KL}[p_{\theta}(\bm{x},\bm{y})||q(\bm{x},\bm{y})]$ is a promising approach to train the conditional generative model presented in equation \[eq:PSC\]. To this end, the reverse Kullback-Leibler divergence reads as
$$\begin{aligned}
\label{eq:KL}
\mathbb{KL}[p_{\theta}(\bm{x},\bm{y})||q(\bm{x},\bm{y})] & = -\mathbb{H}(p_{\theta}(\bm{x},\bm{y})) - \mathbb{E}_{p_{\theta}(\bm{x},\bm{y})}[\log(q(\bm{x},\bm{y}))],\end{aligned}$$
where $\mathbb{H}(p_{\theta}(\bm{x},\bm{y}))$ denotes the entropy of the conditional generative model. The second term can be further decomposed as
$$\begin{aligned}
\mathbb{E}_{p_{\theta}(\bm{x},\bm{y})}[\log(q(\bm{x},\bm{y}))] = & \int_{\mathcal{S}_{p_{\theta}}\cap \mathcal{S}_q} \log(q(\bm{x},\bm{y})) p_{\theta}(\bm{x},\bm{y}) d\bm{x}d\bm{y} \ + \label{eq:decomposition} \\
& \int_{\mathcal{S}_{p_{\theta}}\cap \mathcal{S}_q^{o}} \log(q(\bm{x},\bm{y})) p_{\theta}(\bm{x},\bm{y}) d\bm{x} d\bm{y} \nonumber,\end{aligned}$$
where $\mathcal{S}_{p_{\theta}}$ and $\mathcal{S}_q$ denote the support of the distributions $p_{\theta}(\bm{x},\bm{y})$ and $q(\bm{x},\bm{y})$, respectively, while $\mathcal{S}_q^{o}$ denotes the complement of $\mathcal{S}_q$. Notice that by minimizing the Kullback-Leibler divergence in equation \[eq:KL\] we introduce a mechanism that balances the effect of two competing objectives. Specifically, maximization of the entropy term $\mathbb{H}(p_{\theta}(\bm{x},\bm{y}))$ encourages $p_{\theta}(\bm{x},\bm{y})$ to spread over its support set as widely as possible, while the second integral term in equation \[eq:decomposition\] introduces a strong (negative) penalty when the support of $p_{\theta}(\bm{x},\bm{y})$ and $q(\bm{x},\bm{y})$ do not overlap. Hence, the support of $p_{\theta}(\bm{x},\bm{y})$ is encouraged to spread only up to the point that $\mathcal{S}_{p_{\theta}}\cap \mathcal{S}_q^{o} = \emptyset$, implying that $\mathcal{S}_{p_{\theta}}\subseteq \mathcal{S}_{q}$. When $\mathcal{S}_{p_{\theta}}$ is a strict subset of $\mathcal{S}_{q}$, the pathological issue of “mode-collapse" (commonly encountered in the training of generative adversarial networks [@goodfellow2014generative]) is manifested [@salimans2016improved]. A visual sketch of this argument is illustrated in figure \[fig:joint\_distribution\_matching\].
The issue of mode collapse will also be present if one seeks to directly minimize the reverse Kullback-Leibler objective in equation \[eq:KL\], as this provides no control on the relative importance of the two terms in the right hand side of equation \[eq:KL\]. As discussed in [@li2018learning], we may rather minimize $-\lambda \mathbb{H}(p_{\theta}(\bm{x},\bm{y})) - \mathbb{E}_{p_{\theta}(\bm{x},\bm{y})}[\log(q(\bm{x},\bm{y}))]$, with $\lambda \ge 1$ to allow for control of how much emphasis is placed on mitigating mode collapse. It is then clear that the entropic regularization introduced by $\mathbb{H}(p_{\theta}(\bm{x},\bm{y}))$ provides an effective mechanism for controlling and mitigating the effect of mode collapse, and, therefore, potentially enhancing the robustness of adversarial inference procedures for learning $p_{\theta}(\bm{x},\bm{y})$.
![[*Joint distribution matching:*]{} Schematic illustration of the proposed inference objective for joint distribution matching via minimization of the reverse KL-divergence. Penalizing a lower bound of the generative model entropy $\mathbb{H}(p_{\theta}(\bm{x},\bm{y}))$ provides a mechanism for mitigating the pathology of mode collapse in training adversarial generative models.[]{data-label="fig:joint_distribution_matching"}](joint_pdf_matching.pdf){width="\textwidth"}
Minimization of equation \[eq:KL\] with respect to the generative model parameters $\theta$ presents two fundamental difficulties. First, the evaluation of both distributions $p_{\theta}(\bm{x},\bm{y})$ and $q(\bm{x},\bm{y})$ typically involves intractable integrals in high dimensions, and we may only have samples drawn from the two distributions, not their explicit analytical forms. Second, the differential entropy term $\mathbb{H}(p_{\theta}(\bm{x},\bm{y}))$ is intractable as $p_{\theta}(\bm{x},\bm{y})$ is not known a-priori. In the next sections we revisit the unsupervised formulation put forth in [@li2018learning] and derive a tractable inference procedure for learning $p_{\theta}(\bm{x},\bm{y})$ from scattered observation pairs $\{\bm{x}_{i}, \bm{y}_{i}\}$, $i = 1,\dots,N$.
### Density ratio estimation by probabilistic classification {#sec:density_ratio}
By definition, the computation of the reverse Kullback-Leibler divergence in equation \[eq:KL\] involves computing an expectation over a log-density ratio, i.e. $$\mathbb{KL}[p_{\theta}(\bm{x},\bm{y})||q(\bm{x},\bm{y})] := \mathbb{E}_{p_{\theta}(\bm{x},\bm{y})}\left[\log\left(\frac{p_{\theta}(\bm{x},\bm{y})}{q(\bm{x},\bm{y})}\right)\right].$$ In general, given samples from two distributions, we can approximate their density ratio by constructing a binary classifier that distinguishes between samples from the two distributions. To this end, we assume that $N$ data points are drawn from $p_{\theta}(\bm{x},\bm{y})$ and are assigned a label $c=+1$. Similarly, we assume that $N$ samples are drawn from $q(\bm{x},\bm{y})$ and assigned label $c=-1$. Consequently, we can write these probabilities in a conditional form, namely $$p_{\theta}(\bm{x},\bm{y}) = \rho(\bm{x},\bm{y}|c=+1), \ \ q(\bm{x},\bm{y}) = \rho(\bm{x},\bm{y}|c=-1),$$ where $\rho(\bm{x},\bm{y}|c=+1)$ and $\rho(\bm{x},\bm{y}|c=-1)$ are the class-conditional densities, and a binary classifier $T(\bm{x},\bm{y})$ is trained to predict the posterior class probability $\rho(c=+1|\bm{x},\bm{y})$. Using Bayes rule, it is then straightforward to show that the density ratio of $p_{\theta}(\bm{x},\bm{y})$ and $q(\bm{x},\bm{y})$ can be computed as
$$\begin{aligned}
\frac{p_{\theta}(\bm{x},\bm{y})}{ q(\bm{x},\bm{y})} & = \frac{\rho(\bm{x},\bm{y}|c=+1)}{\rho(\bm{x},\bm{y}|c=-1)} \nonumber\\
& = \frac{\rho(c=+1|\bm{x},\bm{y})\rho(\bm{x},\bm{y})}{\rho(c=+1)} \bigg/ \frac{\rho(c=-1|\bm{x},\bm{y})\rho(\bm{x},\bm{y})}{\rho(c=-1)} \nonumber\\
& = \frac{\rho(c=+1|\bm{x},\bm{y})}{\rho(c=-1|\bm{x},\bm{y})} = \frac{\rho(c=+1|\bm{x},\bm{y})}{1 - \rho(c=+1|\bm{x},\bm{y})} \nonumber\\
& = \frac{T(\bm{x},\bm{y})}{1-T(\bm{x},\bm{y})}.\end{aligned}$$
This simple procedure suggests that we can harness the power of deep neural network classifiers to obtain accurate estimates of the reverse Kullback-Leibler divergence in equation \[eq:KL\] directly from data and without the need to assume any specific parametrization for the generative model distribution $p_{\theta}(\bm{x},\bm{y})$.
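As a minimal illustration of this classification-based estimator, the numpy sketch below recovers the log-density ratio between two univariate Gaussians, for which the true log ratio $\log p(x)/q(x) = 0.5 - x$ is known in closed form. The toy densities, sample sizes, and optimizer settings are illustrative assumptions, not part of the method above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: p ~ N(0, 1) (label c = +1), q ~ N(1, 1) (label c = -1 -> 0).
# The true log-density ratio is log p(x)/q(x) = 0.5 - x, so a logistic
# classifier with features [x, 1] is well-specified for this problem.
N = 2000
xp = rng.normal(0.0, 1.0, N)                   # samples from p
xq = rng.normal(1.0, 1.0, N)                   # samples from q
X = np.column_stack([np.concatenate([xp, xq]), np.ones(2 * N)])
y = np.concatenate([np.ones(N), np.zeros(N)])  # 1 = "drawn from p"

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Plain gradient descent on the (convex) logistic loss.
w = np.zeros(2)
for _ in range(2000):
    w -= 0.5 * X.T @ (sigmoid(X @ w) - y) / len(y)

def log_density_ratio(x):
    # log[T / (1 - T)] reduces to the classifier logit for balanced classes.
    return w[0] * x + w[1]

# The fitted logit should track the true log ratio 0.5 - x.
est = log_density_ratio(0.5)
```

In this well-specified Gaussian case the logistic-regression logit converges to the exact log-density ratio; in the method above $T$ is a deep network, but the same identity $p/q = T/(1-T)$ applies.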
### Entropic regularization bound {#sec:entropy_bound}
Here we follow the derivation of Li [*et. al*]{} [@li2018learning] to construct a computable lower bound for the entropy $\mathbb{H}(p_{\theta}(\bm{x},\bm{y}))$. To this end, we start by considering random variables $(\bm{x}, \bm{y}, \bm{z})$ under the joint distribution $$p_{\theta}(\bm{x}, \bm{y}, \bm{z}) = p_{\theta}(\bm{x}, \bm{y}|\bm{z}) p(\bm{z}) = p_{\theta}(\bm{y}|\bm{x}, \bm{z}) p(\bm{x}, \bm{z}),$$ where $p_{\theta}(\bm{y}|\bm{x}, \bm{z}) = \delta(\bm{y}-f_{\theta}(\bm{x}, \bm{z}))$, and $\delta(\cdot)$ is the Dirac delta function. The mutual information between $(\bm{x}, \bm{y})$ and $\bm{z}$ satisfies the information theoretic identity $$\mathbb{I}(\bm{x}, \bm{y}; \bm{z}) = \mathbb{H}(\bm{x}, \bm{y})-\mathbb{H}(\bm{x}, \bm{y}|\bm{z}) =
\mathbb{H}(\bm{z}) - \mathbb{H}(\bm{z}|\bm{x}, \bm{y}),$$ where $\mathbb{H}(\bm{x}, \bm{y})$, $\mathbb{H}(\bm{z})$ are the marginal entropies and $\mathbb{H}(\bm{x}, \bm{y}|\bm{z})$, $\mathbb{H}(\bm{z}|\bm{x}, \bm{y})$ are the conditional entropies [@akaike1998information]. Since in our setup $\bm{x}$ is a deterministic variable independent of $\bm{z}$, and samples of $p_{\theta}(\bm{y}|\bm{x}, \bm{z})$ are generated by a deterministic function $f_{\theta}(\bm{x}, \bm{z})$, it follows that $\mathbb{H}(\bm{x}, \bm{y}|\bm{z}) = 0$. We therefore have $$\label{eq:info_identity}
\mathbb{H}(\bm{x}, \bm{y}) = \mathbb{H}(\bm{z}) - \mathbb{H}(\bm{z}|\bm{x}, \bm{y}),$$ where $\mathbb{H}(\bm{z}) := -\int \log p(\bm{z}) p(\bm{z}) d\bm{z}$ does not depend on the generative model parameters $\theta$.
Now consider a general variational distribution $q_{\phi}(\bm{z}|\bm{x}, \bm{y})$ parametrized by a set of parameters $\phi$. Then,
$$\begin{aligned}
\mathbb{H}(\bm{z}|\bm{x}, \bm{y}) = & -\mathbb{E}_{p_{\theta}(\bm{x}, \bm{y}, \bm{z})}[\log(p_{\theta}(\bm{z}|\bm{x}, \bm{y}))] \nonumber\\
= & -\mathbb{E}_{p_{\theta}(\bm{x}, \bm{y}, \bm{z})}[\log(q_{\phi}(\bm{z}|\bm{x}, \bm{y}))] \nonumber \\
& -\mathbb{E}_{p_{\theta}(\bm{x}, \bm{y})}[\mathbb{KL}[p_{\theta}(\bm{z}|\bm{x}, \bm{y})||q_{\phi}(\bm{z}|\bm{x}, \bm{y})]] \nonumber \\
\le & -\mathbb{E}_{p_{\theta}(\bm{x}, \bm{y}, \bm{z})}[\log(q_{\phi}(\bm{z}|\bm{x}, \bm{y}))]. \label{eq:entropy_bound}\end{aligned}$$
If we view $\bm{z}$ as a set of latent variables, then $q_{\phi}(\bm{z}|\bm{x}, \bm{y})$ is a variational approximation to the true intractable posterior over the latent variables $p_{\theta}(\bm{z}|\bm{x}, \bm{y})$. Therefore, if $q_{\phi}(\bm{z}|\bm{x}, \bm{y})$ is introduced as an auxiliary inference model associated with the generative model $p_{\theta}(\bm{x}, \bm{y})$, for which $\bm{y} = f_{\theta}(\bm{x}, \bm{z})$ and $\bm{z}\sim p(\bm{z})$, then we can use equations \[eq:info\_identity\] and \[eq:entropy\_bound\] to bound the entropy term in equation \[eq:KL\] as $$\mathbb{H}(p_{\theta}(\bm{x},\bm{y})) \ge \mathbb{H}(p(\bm{z})) + \mathbb{E}_{p_{\theta}(\bm{x}, \bm{y}, \bm{z})}[\log(q_{\phi}(\bm{z}|\bm{x}, \bm{y}))].$$ Note that the inference model $q_{\phi}(\bm{z}|\bm{x}, \bm{y})$ plays the role of a variational approximation to the true posterior over the latent variables, and appears naturally using information theoretic arguments in the derivation of the lower bound.
### Adversarial training objective {#sec:ADVI}
By leveraging the density ratio estimation procedure described in section \[sec:density\_ratio\] and the entropy bound derived in section \[sec:entropy\_bound\], we can derive the following loss functions for minimizing the reverse Kullback-Leibler divergence with entropy regularization $$\begin{aligned}
\mathcal{L}_{\mathcal{D}}(\psi) = & \ \mathbb{E}_{q(\bm{x})p(\bm{z})}[\log\sigma(T_{\psi}(\bm{x},f_{\theta}(\bm{x},\bm{z})))] + \nonumber \\ & \ \mathbb{E}_{q(\bm{x},\bm{y})}[\log(1-\sigma(T_{\psi}(\bm{x},\bm{y})))] \label{eq:discriminator_loss}\\
\mathcal{L}_{\mathcal{G}}(\theta, \phi) = & \ \mathbb{E}_{q(\bm{x},\bm{y})p(\bm{z})}[T_{\psi}(\bm{x}, f_{\theta}(\bm{x},\bm{z}))+ (1-\lambda)\log(q_{\phi}(\bm{z}|\bm{x},f_{\theta}(\bm{x},\bm{z}))) + \\ & \beta\|f_{\theta}(\bm{x},\bm{z}) - \bm{y}\|^2] \label{eq:generator_loss},\end{aligned}$$ where $\sigma(x)=1/(1+e^{-x})$ is the logistic sigmoid function. For supervised learning tasks we can consider an additional penalty term controlled by the parameter $\beta$ that encourages a closer fit to the observed individual data points. Notice how the binary cross-entropy objective of equation \[eq:discriminator\_loss\] aims to progressively improve the ability of the classifier $T_{\psi}(\bm{x},\bm{y})$ to discriminate between “fake" samples $(\bm{x},f_{\theta}(\bm{x},\bm{z}))$ obtained from the generative model $p_{\theta}(\bm{x},\bm{y})$ and “true" samples $(\bm{x},\bm{y})$ originating from the observed data distribution $q(\bm{x},\bm{y})$. Simultaneously, the objective of equation \[eq:generator\_loss\] aims at improving the ability of the generator $f_{\theta}(\bm{x},\bm{z})$ to generate increasingly more realistic samples that can “fool" the discriminator $T_{\psi}(\bm{x},\bm{y})$. Moreover, the encoder $q_{\phi}(\bm{z}|\bm{x},f_{\theta}(\bm{x},\bm{z}))$ not only serves as an entropic regularization term that allows us to stabilize model training and mitigate the pathology of mode collapse, but also provides a variational approximation to the true posterior over the latent variables. The way it naturally appears in the objective of equation \[eq:generator\_loss\] also encourages the cycle-consistency of the latent variables $\bm{z}$; a process that is known to result in disentangled and interpretable low-dimensional representations of the observed data [@friedman2001elements].
In theory, the optimal set of parameters $\{\theta^{\ast}, \phi^{\ast}, \psi^{\ast}\}$ corresponds to the Nash equilibrium of the two player game defined by the loss functions in equations \[eq:discriminator\_loss\] and \[eq:generator\_loss\], for which one can show that the exact model distribution and the exact posterior over the latent variables can be recovered [@goodfellow2014generative; @pu2017symmetric]. In practice, although there is no guarantee that this optimal solution can be attained, the generative model can be trained by alternating between optimizing the two objectives in equations \[eq:discriminator\_loss\] and \[eq:generator\_loss\] using stochastic gradient descent as
$$\begin{aligned}
& \mathop{\max}_{\psi} \ \mathcal{L}_{\mathcal{D}}(\psi) \label{eq:discriminator}\\
& \mathop{\min}_{\theta, \phi} \ \mathcal{L}_{\mathcal{G}}(\theta, \phi) \label{eq:generator}.\end{aligned}$$
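A minimal one-dimensional sketch of these alternating updates is given below, with a linear generator and discriminator, manually derived gradients, and the entropy/encoder term omitted for brevity. The data distribution, parametrizations, and step sizes are toy assumptions; they are only meant to show the ascent/descent structure of the game.

```python
import numpy as np

rng = np.random.default_rng(1)

y_real = rng.normal(2.0, 0.5, 512)   # observed data, q(y) = N(2, 0.5^2)
b = -1.0                             # generator: g(z) = b + 0.5 z, z ~ N(0, 1)
a, c = 0.0, 0.0                      # discriminator logit: T(y) = a y + c

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr = 0.05
for _ in range(2000):
    z = rng.normal(size=512)
    y_fake = b + 0.5 * z
    # Discriminator ascent on E[log s(T(fake))] + E[log(1 - s(T(real)))],
    # with fakes labeled c = +1 as in the text.
    s_fake = sigmoid(a * y_fake + c)
    s_real = sigmoid(a * y_real + c)
    a += lr * (np.mean((1.0 - s_fake) * y_fake) - np.mean(s_real * y_real))
    c += lr * (np.mean(1.0 - s_fake) - np.mean(s_real))
    # Generator descent on E[T(g(z))]; since T is linear in y, dT/db = a.
    b -= lr * a

# Near equilibrium the generated mean b settles close to the data mean 2.
```

The generator chases regions where the discriminator logit is low (i.e. where the real data lives), while the discriminator relaxes toward an uninformative classifier as the two distributions overlap.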
### Predictive distribution {#sec:predictions}
Once the model is trained we can characterize the statistics of the outputs $\bm{y}$ by sampling latent variables from the prior $p(\bm{z})$ and passing them through the generator to yield conditional samples $\bm{y} = f_{\theta}(\bm{x}, \bm{z})$ that are distributed according to the predictive model distribution $p_{\theta}(\bm{y}|\bm{x})$. Note that although the explicit form of this distribution is not known, we can efficiently compute any of its moments via Monte Carlo sampling. The cost of this prediction step is negligible compared to the cost of training the model, as it only involves a single forward pass through the generator function $f_{\theta}(\bm{x},\bm{z})$. Typically, we compute the mean and variance of the predictive distribution at a new test point $\bm{x}^{\ast}$ as
$$\begin{aligned}
\mu_{\bm{y}}(\bm{x}^{\ast}) & = \mathbb{E}_{p_{\theta}}[\bm{y}|\bm{x}^{\ast}, \bm{z}] \approx \frac{1}{N_s}\sum\limits_{i=1}^{N_s} f_{\theta}(\bm{x}^{\ast}, \bm{z}_i), \label{eq:predictive_mean} \\
\sigma^{2}_{\bm{y}}(\bm{x}^{\ast}) & = \mathbb{V}\text{ar}_{p_{\theta}}[\bm{y}|\bm{x}^{\ast}, \bm{z}] \approx \frac{1}{N_s}\sum\limits_{i=1}^{N_s} [f_{\theta}(\bm{x}^{\ast}, \bm{z}_i) - \mu_{\bm{y}}(\bm{x}^{\ast})]^2, \label{eq:predictive_variance}\end{aligned}$$
where $\bm{z}_i \sim p(\bm{z})$, $i = 1,\dots,N_s$, and $N_s$ corresponds to the total number of Monte Carlo samples.
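The Monte Carlo estimates of equations \[eq:predictive\_mean\] and \[eq:predictive\_variance\] can be sketched as follows. The generator $f_{\theta}(x, z) = \sin(x) + 0.1z$ is a hypothetical closed-form stand-in for a trained network, chosen so that the exact moments (mean $\sin(x)$, variance $0.01$) are known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained generator: f_theta(x, z) = sin(x) + 0.1 z, so with
# z ~ N(0, 1) the exact predictive moments are mean sin(x), variance 0.01.
def f_theta(x, z):
    return np.sin(x) + 0.1 * z

def predictive_moments(x_star, n_samples=20000):
    z = rng.normal(size=n_samples)   # z_i ~ p(z)
    samples = f_theta(x_star, z)     # one cheap forward pass per sample
    return samples.mean(), samples.var()

mu, var = predictive_moments(1.0)
```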
Results {#sec:results}
=======
Here we present a diverse collection of demonstrations to showcase the broad applicability of the proposed methods. Moreover, in appendix \[sec:appendix\] we provide a comprehensive collection of systematic studies that aim to elucidate the robustness of the proposed algorithms with respect to different parameter settings. In all examples presented in this section we have trained the models for 20,000 stochastic gradient descent steps using the Adam optimizer [@kingma2014adam] with a learning rate of $10^{-4}$, while performing one discriminator update for every five generator updates. Unless stated otherwise, we have also fixed the entropic regularization and the residual penalty parameters to $\lambda = 1.5$ and $\beta = 0.0$, respectively. The proposed algorithms were implemented in Tensorflow v1.10 [@abadi2016tensorflow], and computations were performed in single precision arithmetic on a single NVIDIA Tesla P100 GPU card. All data and code accompanying this manuscript will be made available at <https://github.com/PredictiveIntelligenceLab/CADGMs>.
Regression of noisy data {#sec:regression}
------------------------
We begin our presentation with an example in which the observed data is generated by a deterministic process but the observations are stochastically perturbed by random noise. Specifically, we consider the following three distinct cases:
1. [*Gaussian homoscedastic noise:*]{} $$g(x) = \log(10 (|x-0.03|+0.03))\sin(\pi (|x-0.03|+0.03)) + \delta,$$ where $\delta$ corresponds to $5\%$ uncorrelated zero-mean Gaussian noise.
2. [*Gaussian heteroscedastic noise:*]{} $$g(x) = \log(10 (|x-0.03|+0.03))\sin(\pi (|x-0.03|+0.03)) + \delta(x),$$ where $\delta(x) = \frac{\epsilon}{\exp (2(|x-0.03|+0.03))}$, and $\epsilon\sim N(0, 0.5^2)$.
3. [*Non-additive, non-Gaussian noise:*]{} $$g(x) = \log(10 (|x-0.03|+0.03))\sin(\pi (|x-0.03|+0.03) + 2\delta(x)) + \delta(x)$$ where $\delta(x) = \frac{\epsilon}{\exp (2(|x-0.03|+0.03))}$, and $\epsilon\sim N(0, 0.5^2)$.
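The three synthetic training sets above can be generated in a few lines of numpy. The exact scaling of the "$5\%$" homoscedastic noise (here taken relative to the signal's standard deviation) is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200
x = rng.uniform(-2.0, 2.0, N)
s = np.abs(x - 0.03) + 0.03                 # shared shifted absolute-value term
f = np.log(10.0 * s) * np.sin(np.pi * s)    # noise-free signal

# (1) homoscedastic: 5% zero-mean Gaussian noise (scaled by the signal std,
#     our reading of the "5%" specification)
y_homo = f + 0.05 * np.std(f) * rng.normal(size=N)
# (2) heteroscedastic: input-dependent Gaussian noise amplitude
delta = rng.normal(0.0, 0.5, N) / np.exp(2.0 * s)
y_hetero = f + delta
# (3) non-additive, non-Gaussian: the noise also enters inside the sinusoid
y_nonadd = np.log(10.0 * s) * np.sin(np.pi * s + 2.0 * delta) + delta
```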
In all cases, we assume access to $N=200$ training pairs $\{\bm{x}_i, \bm{y}_i\}$, $i=1,\dots,N$ randomly sampled in the interval $x\in[-2,2]$ according to the empirical data distribution $q(\bm{x},\bm{y})$. Then, our goal is to approximate the conditional distribution $p_{\theta}(\bm{y}|\bm{x},\bm{z})$ using a generative model $\bm{y} = f_{\theta}(\bm{x}, \bm{z})$, $\bm{z}\sim p(\bm{z})$, that combines the original inputs $\bm{x}$ and a set of latent variables $\bm{z}$ to predict the outputs $\bm{y}$.
As described in section \[sec:Methods\], the outputs $\bm{y}$ are generated by pushing the inputs $\bm{x}$ and the latent variables $\bm{z}$ through a deterministic generator function $f_{\theta}(\bm{x},\bm{z})$, typically parametrized by deep neural networks. Moreover, a discriminator network is used to minimize the reverse KL-divergence between the generative model distribution $p_{\theta}(\bm{x},\bm{y})$ and the empirical data distribution $q(\bm{x},\bm{y})$. Finally, we introduce an auxiliary inference network to model the approximate posterior distribution over the latent variables, namely $q_{\phi}(\bm{z}|\bm{x}, \bm{y})$ that encodes the observed data $(\bm{x}, \bm{y})$ into a latent space using a deterministic mapping $\bm{z} = f_{\phi}(\bm{x}, \bm{y})$, also modeled using a deep neural network.
The proposed conditional generative model is constructed using fully connected feed-forward architectures for the encoder and generator networks with 3 hidden layers and 100 neurons per layer, while the discriminator architecture has 2 hidden layers with 100 neurons per layer. All activations use a hyperbolic tangent non-linearity, and we have not employed any additional modifications such as L2 regularization, dropout or batch-normalization [@goodfellow2016deep]. During model training, in each epoch we update the discriminator twice and the encoder and generator once using stochastic gradient updates with the Adam optimizer [@kingma2014adam] and a learning rate of $10^{-4}$ using the entire data batch. Finally, we set the entropic regularization penalty parameter $\lambda = 1.5$.
Figure \[fig:regression\] summarizes our results for all cases obtained using:
1. The proposed conditional generative model described above.
2. A simple Gaussian process model with a Gaussian likelihood and a squared exponential covariance function trained using exact inference [@rasmussen2004gaussian].
3. A Bayesian neural network having the same architecture as the generator network described above and trained using mean-field stochastic variational inference [@neal2012bayesian].
We observe that the proposed conditional generative model returns robust predictions with sensible uncertainty estimates for all cases. On the other hand, the basic Gaussian process and Bayesian neural network models perform equally well for the simple uncorrelated noise case, but suffer from over-fitting and fail to return reasonable uncertainty estimates for the more complex heteroscedastic and non-additive cases. These predictions could in principle be improved with the use of more elaborate priors, likelihoods and inference procedures, however such remedies often hamper the practical applicability of these methods. In contrast, the proposed conditional generative model appears to be robust across these inherently different cases without requiring any modifications or specific assumptions regarding the nature of the noise process.
![[*Regression of noisy data:*]{} Training data (black crosses) and the exact noise-free solution (blue solid line) versus the predictive mean (red dashed line) and two standard deviations (orange shaded region) obtained by: (a) the proposed conditional generative model, (b) a Gaussian process regression model, and (c) a Bayesian neural network. [*Top row panels:*]{} Gaussian homoscedastic noise, [*Middle row panels:*]{} Gaussian heteroscedastic noise, [*Bottom row panels:*]{} Non-additive, non-Gaussian noise.[]{data-label="fig:regression"}](regression.pdf){width="\textwidth"}
Multi-fidelity modeling of stochastic processes {#sec:multifidelity}
-----------------------------------------------
In this section we demonstrate how the proposed methodology can be adapted to accommodate the setting of supervised learning from data of variable fidelity. Whether it involves a synthesis of expensive experiments and simplified analytical models, multi-scale/multi-resolution computational models, or historical data and expert opinion, the concept of multi-fidelity modeling provides effective pathways for accelerating the analysis of systems that are prohibitively expensive to evaluate. As discussed in section \[sec:Introduction\], these methods have been successful in a wide spectrum of applications including design optimization [@forrester2007multi; @robinson2008surrogate; @alexandrov2001approximation; @sun2010two; @sun2011multi], model calibration [@perdikaris2016model; @perdikaris2015data; @perdikaris2015calibration], and uncertainty quantification [@eldred2009comparison; @ng2012multifidelity; @padron2014multi; @biehler2015towards; @peherstorfer2016optimal; @peherstorfer2016multifidelity; @peherstorfer2016survey; @narayan2014stochastic; @zhu2014computational; @bilionis2013multi; @parussini2017multi; @perdikaris2016multifidelity].
Except perhaps for Gaussian process regression models, most existing approaches to multi-fidelity modeling attempt to construct deterministic surrogates of some form $\bm{y}=f(\bm{x})$, and use theoretical error bounds to quantify the accuracy of the surrogate model predictions. For instance, a multi-fidelity problem can be formulated by considering $\bm{y}:=\{y_l\}$ and $\bm{x}:=\{x,\lambda,y_1,y_2,\dots,y_{l-1}\}$, where $y_l$ is the output of our highest fidelity information source, $(y_1,y_2,\dots,y_{l-1})$ are predictions of lower fidelity models, $x$ is a vector of space-time coordinates, and $\lambda$ is a vector of uncertain parameters. Despite their growing popularity, the applicability of multi-fidelity modeling techniques is typically limited to systems that are governed by deterministic input-output relations. To the best of our knowledge, this is the first attempt of applying the concept of multi-fidelity modeling to expedite the statistical characterization of correlated stochastic processes.
Without loss of generality, and to keep our presentation clear, we will focus on a setting involving two correlated stochastic processes. Intuitively, one can think of the following example scenario. We want to characterize the statistics of a random quantity of interest (e.g., velocity fluctuations of a turbulent flow near a wall boundary) by recording its value at a finite set of locations and for a finite number of random realizations. However, these recordings may be hard/expensive to obtain as they may require a set of sophisticated and well calibrated sensors, or a set of fully resolved computational simulations. At the same time, it might be easier to obtain more measurements either by probing the same quantity of interest using a set of cheaper/uncalibrated sensors (or simplified/coarser computational models), or by probing an auxiliary quantity of interest that is statistically correlated to our target variable but is much easier to record (e.g., sensor measurements of pressure on the wall boundary). Then our goal is to synthesize these measurements and construct a predictive model that can fully characterize the statistics of the target stochastic process.
More formally, we assume that we have access to a number of [*high-fidelity*]{} input-output pairs $(\bm{x}_H,\bm{y}_H)$ corresponding to a finite number of realizations of the target stochastic process, measured at a handful input locations $x_H$ using high-fidelity sensors. Moreover, we also have access to [*low-fidelity*]{} input-output pairs $(\bm{x}_L,\bm{y}_L)$ corresponding to a finite number of realizations of either the target stochastic process or an auxiliary process that is statistically correlated with the target, albeit probed for a much larger collection of inputs. Then our goal is to learn the conditional distribution $p_{\theta}(\bm{y}_H|\bm{x}_H, \bm{y}_L, \bm{z})$ using a generative model $\bm{y}_H = f_{\theta}(\bm{x}_H, \bm{y}_L, \bm{z})$, $\bm{z}\sim p(\bm{z})$.
We will illustrate this work-flow using a synthetic example involving data generated from two correlated Gaussian processes in one input dimension
$$\begin{aligned}
\left[ \begin{array}{c} f_{L}(x) \\ f_{H}(x) \end{array} \right]
& \sim \mathcal{N}\left(\left[\begin{array}{c} \mu_L(x) \\ \mu_H(x) \end{array} \right],
\left[ \begin{array}{c c} K_{LL} & K_{LH}
\\ K_{LH}' & K_{HH}
\end{array} \right]\right),\end{aligned}$$
with mean and covariance functions given by $$\begin{aligned}
\mu_L(x) &= 0.5\mu_H(x) + 10(x-0.5) - 5 \\
\mu_H(x) &= (6x-2)^2\sin(12x-4) \label{eq:mu_H}\\
K_{LL} & = k(x,x;\theta_{L}) \\
K_{LH} & = \rho k(x,x;\theta_{L}) \\
K_{HH} & = \rho^2 k(x,x;\theta_{L}) + k(x,x;\theta_{H}).\end{aligned}$$ Here $\theta_{L}=(\sigma_{f_L}^2, l_L^2)$ and $\theta_{H}=(\sigma_{f_H}^2, l_H^2)$ correspond to two different sets of hyper-parameters of a square exponential kernel $$\label{eq:rbf_kernel}
k(x,x';\theta) = \sigma_f^2\exp\left(-\frac{(x-x')^2}{2l^2}\right).$$ Moreover, $\rho$ is a parameter that controls the degree to which the two stochastic processes exhibit linear correlations [@kennedy2000predicting; @perdikaris2017nonlinear]. In this example we have considered $\sigma_{f_L}^2 = 0.1, l_L^2 = 0.5, \sigma_{f_H}^2 = 0.5, l_H^2=0.5$ and $\rho=0.8$, and generated a training data-set consisting of 50 realizations of $f_L(x)$ and $f_H(x)$ recorded using a set of sensors fixed at locations $x_L = x_H = [0, 0.4, 0.6, 1.0]$ (see figure \[fig:MuFi\](a)).
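Joint realizations of the two processes on the sensor grid can be drawn directly from the block covariance defined above, as sketched below; the jitter level and number of samples are implementation choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x1, x2, sigma2, l2):
    # squared exponential kernel of equation (eq:rbf_kernel)
    return sigma2 * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / l2)

def mu_H(x):
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def mu_L(x):
    return 0.5 * mu_H(x) + 10.0 * (x - 0.5) - 5.0

x = np.array([0.0, 0.4, 0.6, 1.0])                  # sensor locations
sL2, lL2, sH2, lH2, rho = 0.1, 0.5, 0.5, 0.5, 0.8   # hyper-parameters

K_LL = k(x, x, sL2, lL2)
K_LH = rho * K_LL
K_HH = rho ** 2 * K_LL + k(x, x, sH2, lH2)
K = np.block([[K_LL, K_LH], [K_LH.T, K_HH]])        # joint 8 x 8 covariance

mean = np.concatenate([mu_L(x), mu_H(x)])
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(K)))   # small jitter for stability
samples = mean + (L @ rng.normal(size=(len(K), 50))).T   # 50 joint draws
```

Each row of `samples` stacks one low-fidelity realization and the corresponding high-fidelity realization at the four sensors, mirroring the training set shown in figure \[fig:MuFi\](a).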
We employ a conditional generative model constructed using simple feed-forward neural networks with 3 hidden layers and 100 neurons per layer for both the generator and the encoder, and 2 hidden layers with 100 neurons per layer for the discriminator. The activation function in all cases is chosen to be a hyperbolic tangent non-linearity. Moreover, we have chosen a one-dimensional latent space with a standard normal prior, i.e. $z\sim\mathcal{N}(0,1)$. Model training is performed using the Adam optimizer [@kingma2014adam] with a learning rate of $10^{-4}$ for all the networks. For each stochastic gradient descent iteration, we perform one discriminator update and five generator updates, while we fix the entropic regularization penalty parameter to $\lambda = 1.5$. Notice that during model training the algorithm only requires to see joint observations of $f_L(x)$ and $f_H(x)$ at a fixed set of input locations $x$ (see figure \[fig:MuFi\](a)). However, during prediction at a new test point $x^{\ast}$ one needs to first sample $y_L^{\ast} = f_L(x^{\ast})$, and then use the generative model to produce samples $y_H^{\ast} = f_{\theta}(x^{\ast}, y_L^{\ast}, z)$, $z\sim\mathcal{N}(0,1)$.
The results of this experiment are summarized in figures \[fig:MuFi\](b) and \[fig:MuFi\_KL\]. Specifically, in figure \[fig:MuFi\](b) we observe a qualitative agreement between the second order sufficient statistics for the predicted and the exact high-fidelity processes. The effectiveness of the multi-fidelity approach becomes evident when we compare our results against a single-fidelity conditional generative model trained only on the high-fidelity data. The result of this experiment is presented in figure \[fig:MuFi\_KL\](a) where it is clear that the generative model fails to correctly capture the target stochastic process. To make this comparison quantitative, we have estimated the forward and reverse Kullback-Leibler divergence for a collection of one-dimensional marginal distributions corresponding to different spatial locations in $x\in[0,1]$. To this end, we have employed a Gaussian approximation for the predicted marginal densities of the generative model and compared them against the exact Gaussian marginal densities of the target high-fidelity process using the analytical expression for the KL-divergence between two Gaussian distributions $p_1(x)\sim\mathcal{N}(\mu_1, \sigma_1^2)$ and $p_2(x)\sim\mathcal{N}(\mu_2, \sigma_2^2)$, $$\begin{aligned}
\mathbb{KL}[p_1(x)||p_2(x)] &= - \int p_1(x)\log p_2(x) dx + \int p_1(x)\log p_1(x) dx \nonumber \\
&= \frac{1}{2}\log(2\pi\sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2}(1 + \log(2\pi\sigma_1^2)) \nonumber \\
& = \log\frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2}. \label{eq:KL_div}\end{aligned}$$ The result of this comparison is shown in figure \[fig:MuFi\_KL\](b) for both the single- and multi-fidelity cases. Clearly, the appropriate utilization of the low-fidelity data results in significant accuracy gains for the multi-fidelity case, while the single-fidelity model is not able to generalize well and suffers from large errors in KL-divergence in all locations away from the training data.
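The closed-form expression in equation \[eq:KL\_div\] is straightforward to implement and sanity-check:

```python
import numpy as np

def gaussian_kl(mu1, sigma1, mu2, sigma2):
    # KL[N(mu1, sigma1^2) || N(mu2, sigma2^2)], matching equation (eq:KL_div)
    return (np.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * sigma2 ** 2)
            - 0.5)

# Sanity checks: the divergence vanishes for identical Gaussians, equals 0.5
# for a unit mean shift at unit variance, and is asymmetric in its arguments.
```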
![[*Multi-fidelity modeling of stochastic processes:*]{} (a) Sample realizations of the low- and high-fidelity stochastic processes (red and blue lines, respectively) along with the sensor measurements at $x = [0, 0.4, 0.6, 1.0]$ used to train the generative model (black and green crosses, respectively). (b) Predicted mean (red dashed line) and two standard deviations (yellow band) for the high-fidelity stochastic process versus the exact solution (blue solid line and green band, respectively).[]{data-label="fig:MuFi"}](MuFi.pdf){width="\textwidth"}
![[*Multi-fidelity modeling of stochastic processes:*]{} (a) Predicted mean (red dashed line) and two standard deviations (yellow band) of a single-fidelity conditional generative model versus the exact solution (blue solid line and green band, respectively). (b) Comparison of the KL-divergence and Reverse-KL-divergence between the exact marginal densities and the predictions of the single- and multi-fidelity conditional generative models.[]{data-label="fig:MuFi_KL"}](MuFi_comparison.pdf){width="\textwidth"}
Uncertainty propagation in high-dimensional dynamical systems {#sec:highdimUQ}
-------------------------------------------------------------
In this section we aim to demonstrate how the proposed inference framework can leverage modern deep learning techniques to tackle high-dimensional uncertainty propagation problems involving complex dynamical systems. To this end, we will consider the temporal evolution of the non-linear time-dependent Burgers equation in one spatial dimension, subject to random initial conditions. The equation and boundary conditions read as $$\label{eq:Burgers}
\begin{aligned}
&u_t + u u_x - \nu u_{xx} = 0, \quad\quad x\in[-7, 3], t\in[0,50],\\
&u(t,-7) = u(t,3) = 0,\\
\end{aligned}$$ where the viscosity parameter is chosen as $\nu = 0.5$ [@burgers1948mathematical]. We will evolve the system starting from a random initial condition generated by a conditional Gaussian process [@rasmussen2004gaussian] that constrains the initial sample paths to satisfy zero Dirichlet boundary conditions, i.e. $u(0,x)\sim\mathcal{GP}(\mu(x), \Sigma(x))$, with $$\label{eq:Burgers_initial}
\begin{aligned}
\mu(x) &= k(x,x_b)K^{-1}y_b, \\
\Sigma(x) &= k(x,x) - k(x,x_b)K^{-1}k(x_b,x),\\
\end{aligned}$$ where $x_b$ and $y_b$ are column vectors corresponding to zero data near the domain boundaries, and $K$ is a covariance matrix constructed by evaluating the square exponential kernel (see equation \[eq:rbf\_kernel\]) with fixed variance and length-scale hyper-parameters $\sigma_f^2 = 0.005$ and $l^2 = 1$, respectively (see figure \[fig:Burgers\_init\]). The resulting solution to this problem is a continuous spatio-temporal random field $u(x,t)$ whose statistical description defines a non-trivial infinite-dimensional uncertainty propagation problem. As we will describe below, we will leverage the capabilities of convolutional neural networks in order to construct a scalable surrogate model that is capable of providing a complete statistical characterization of the random field $u(x,t)$ for any time $t$ and for a finite collection of spatial locations $x$.
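Sampling the boundary-constrained initial conditions amounts to forming the conditional mean and covariance of equation \[eq:Burgers\_initial\] and drawing correlated Gaussian samples. Placing the zero conditioning data exactly at the two endpoints $x=-7$ and $x=3$, and the jitter levels, are assumptions of this sketch (the text only states "near the domain boundaries").

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x1, x2, sigma2=0.005, l2=1.0):
    # squared exponential kernel with the hyper-parameters quoted in the text
    return sigma2 * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / l2)

x = np.linspace(-7.0, 3.0, 128)      # spatial grid
x_b = np.array([-7.0, 3.0])          # assumed boundary conditioning points
y_b = np.zeros(2)                    # zero Dirichlet data

K_bb = k(x_b, x_b) + 1e-10 * np.eye(2)
K_xb = k(x, x_b)
A = K_xb @ np.linalg.inv(K_bb)
mu = A @ y_b                         # identically zero since y_b = 0
Sigma = k(x, x) - A @ K_xb.T         # conditional covariance

# Draw 100 boundary-pinned initial conditions u(0, x).
L = np.linalg.cholesky(Sigma + 1e-8 * np.eye(len(x)))
u0 = mu[:, None] + L @ rng.normal(size=(len(x), 100))
```

By construction the sample paths are pinned to (numerically) zero at the conditioning points, matching the realizations shown in figure \[fig:Burgers\_init\].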
![[*Uncertainty propagation in high-dimensional dynamical systems:*]{} 100 representative samples of a conditional Gaussian process used as initial conditions for the Burgers equation.[]{data-label="fig:Burgers_init"}](Initial_Burgers.pdf){width="60.00000%"}
We generate a data-set consisting of 100 sample realizations of the system in the interval $t\in [0, 50]$ using a high-fidelity Fourier spectral method [@kassam2005fourth] on a regular spatial grid consisting of 128 points and 256 time-steps. Our goal here is to use a subset of this data to train a deep generative model for approximating the conditional density $p_{\theta}(\bm{u}|t,\bm{z})$, $\bm{z}\sim p(\bm{z})$, where the vector $\bm{u}\in\mathbb{R}^{128}$ corresponds to the collocation of the continuous field $u(x,t)$ at the 128 spatial grid-points for a given temporal snapshot at time $t$. We use data from 64 randomly selected temporal snapshots to train the generative model, and the rest will be used for validating our results.
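The paper's reference solver is the fourth-order exponential time-differencing scheme of [@kassam2005fourth]; as a simplified stand-in, the sketch below integrates the same equation with a Fourier pseudo-spectral discretization and a classical RK4 time-stepper on a periodic extension of the domain. The step count, the test initial condition, and the RK4 substitution are our own assumptions:

```python
import numpy as np

def burgers_spectral(u0, t_final, nu=0.5, domain_length=10.0, n_steps=4000):
    """Integrate u_t + u*u_x = nu*u_xx with Fourier differentiation and RK4 in time.

    Periodic boundaries are assumed; since the boundary-conditioned initial data
    decay to zero at the domain edges, this approximates the Dirichlet problem."""
    n = len(u0)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=domain_length / n)  # wavenumbers
    dt = t_final / n_steps

    def rhs(u):
        u_hat = np.fft.fft(u)
        u_x = np.real(np.fft.ifft(1j * k * u_hat))
        u_xx = np.real(np.fft.ifft(-(k ** 2) * u_hat))
        return -u * u_x + nu * u_xx

    u = u0.copy()
    for _ in range(n_steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

x = np.linspace(-7.0, 3.0, 128, endpoint=False)
u0 = 0.1 * np.exp(-x ** 2)   # a smooth initial condition decaying at the boundaries
uT = burgers_spectral(u0, t_final=5.0)
```

With $\nu=0.5$ the dynamics are strongly diffusive, so the $L^2$ norm of the solution decays monotonically, which gives a cheap sanity check on the integrator.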
To exploit the gridded structure of the data we employ 1d-convolutional neural networks [@krizhevsky2012imagenet] which allow us to construct a multi-resolution representation of the data that can capture local spatial correlations [@lecun2015deep; @mallat2016understanding]. To this end, the generator network is constructed using 5 transposed convolution layers with channel sizes of $[512, 256, 128, 64, 32]$, kernel size 4, stride 2, and a hyperbolic tangent activation function in all layers except the last. For the encoder we use 5 convolutional layers with channel sizes of $[32, 64, 128, 128, 256]$, each with a kernel size of 5, stride 2, followed by a batch normalization layer [@ioffe2015batch] and a hyperbolic tangent activation. The last layer of the encoder is a fully connected layer that returns outputs with the same dimension of $\bm{z}$. Here, we choose the latent space dimension to be 32, i.e. $\bm{z}\in \mathbb{R}^{32}$, with an isotropic normal prior, $p(\bm{z})\sim\mathcal{N}(\bm{0},\bm{I})$. Finally, for the discriminator we use 4 convolution layers with the channel sizes of $[32, 64, 128, 256]$, each with kernel size of 5, stride 2, and a hyperbolic tangent activation function in all layers except the last. The last layer of the discriminator is a fully connected layer to convert the final output into scalar class probability predictions that aim to correctly distinguish between real and generated samples in the 128-dimensional output space.
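The paper does not state the padding values or the spatial length of the latent seed fed to the generator. Assuming the common convention of padding 1 for the transposed convolutions (kernel 4, stride 2) and padding 2 for the strided convolutions (kernel 5, stride 2), each layer exactly doubles or halves the spatial length; the arithmetic sketch below checks that five doublings map a hypothetical length-4 seed to the 128-dimensional output:

```python
def conv_transpose1d_len(l_in, kernel=4, stride=2, padding=1):
    """Output length of a 1d transposed convolution (standard convention)."""
    return (l_in - 1) * stride - 2 * padding + kernel

def conv1d_len(l_in, kernel=5, stride=2, padding=2):
    """Output length of a strided 1d convolution (floor division convention)."""
    return (l_in + 2 * padding - kernel) // stride + 1

# Generator: five transposed convolutions with channels [512, 256, 128, 64, 32],
# each doubling the spatial length when kernel=4, stride=2, padding=1.
length = 4
for _channels in [512, 256, 128, 64, 32]:
    length = conv_transpose1d_len(length)
print(length)  # 128, matching the 128 spatial grid points

# Encoder/discriminator direction: each strided convolution halves the length.
length = 128
for _channels in [32, 64, 128, 256]:
    length = conv1d_len(length)
print(length)  # 8
```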
Notice that the time variable is treated as a continuous label corresponding to each time instant $t$, and it is incorporated in our work-flow as follows. For the discriminator and the encoder, we broadcast time as a vector having the same size as the data and treat it as an additional input channel. For the decoder, we broadcast time as a vector having the same size as the latent variable and concatenate the two together. We use the Adam [@kingma2014adam] optimizer with a learning rate of $10^{-4}$ for all the networks. In each epoch, we perform one discriminator update followed by one generator update. Finally, we set the entropic regularization penalty to $\lambda = 1.5$ and the data fit penalty to $\beta = 0.5$ (see equation \[eq:generator\_loss\]).
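The two broadcasting schemes can be sketched in NumPy (array shapes and function names are ours; the batch size of 10 is arbitrary):

```python
import numpy as np

def condition_encoder_input(u_batch, t):
    """Broadcast the scalar time t as an extra input channel (discriminator/encoder)."""
    n, _channels, length = u_batch.shape
    t_channel = np.full((n, 1, length), t)
    return np.concatenate([u_batch, t_channel], axis=1)

def condition_latent(z_batch, t):
    """Concatenate a broadcast copy of t to the latent code (decoder/generator)."""
    t_vec = np.full(z_batch.shape, t)
    return np.concatenate([z_batch, t_vec], axis=1)

u = np.zeros((10, 1, 128))   # a batch of 10 single-channel snapshots
z = np.zeros((10, 32))       # a batch of 32-dimensional latent codes
print(condition_encoder_input(u, 0.5).shape)  # (10, 2, 128)
print(condition_latent(z, 0.5).shape)         # (10, 64)
```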
Figure \[fig:Burgers\_samples\] provides a visual comparison between reference trajectory samples obtained by high-fidelity simulations of equation \[eq:Burgers\] and trajectories generated by sampling the trained conditional generative model $p_{\theta}(\bm{u}|t,\bm{z})$. A more detailed comparison is provided in figure \[fig:Burgers\_marginals\] in terms of one-dimensional slices taken at four distinct time instances that were not used during model training. In both figures we observe a very good qualitative agreement between the reference and the predicted solutions, indicating that the conditional generative model is able to correctly capture the statistical structure of the system. These results are indicative of the ability of the proposed method to approximate a non-trivial 128-dimensional distribution using only scattered measurements from 100 sample realizations of the system.
![[*Uncertainty propagation in high-dimensional dynamical systems:*]{} (a) Exact sample trajectories of the Burgers equation. (b) Samples generated by the conditional generative model $p_{\theta}(\bm{u}|t,\bm{z})$. The comparison corresponds to 16 different temporal snapshots and depicts 10 samples per snapshot. Each sample is a 128-dimensional vector.[]{data-label="fig:Burgers_samples"}](Burgers_samples.pdf){width="\textwidth"}
![[*Uncertainty propagation in high-dimensional dynamical systems:*]{} Mean (solid blue line) and two standard deviations (green shaded region) of reference simulated trajectories of the Burgers equation versus the predictions of the conditional generative model $p_{\theta}(\bm{u}|t,\bm{z})$ (red dashed line and yellow shaded region, respectively). Results are reported for four temporal instances that were not used during model training: (a) $t = 12.5$, (b) $t = 25$, (c) $t = 37.5$, and (d) $t = 50$.[]{data-label="fig:Burgers_marginals"}](Burgers_marginals.pdf){width="\textwidth"}
Discussion {#sec:discussion}
==========
We have presented a statistical inference framework for constructing scalable surrogate models for stochastic, high-dimensional and multi-fidelity systems. Leveraging recent advances in deep learning and stochastic variational inference, the proposed regularized inference procedure goes beyond mean-field and Gaussian approximations, can accommodate implicit models capable of approximating arbitrarily complex distributions, and is able to mitigate the issue of mode collapse that often hampers the performance of adversarial generative models. These elements enable the construction of conditional deep generative models that can be effectively trained on scattered and noisy input-output observations, and provide accurate predictions and robust uncertainty estimates. The latter not only serves as a measure for a-posteriori error estimation, but is also a key enabler of downstream tasks such as active learning [@cohn1996active] and Bayesian optimization [@shahriari2016taking]. Moreover, the use of latent variables adds flexibility in learning from data-sets that may be corrupted by complex noise processes, and offers a general platform for nonlinear dimensionality reduction. Taken all together, these developments aspire to provide a new set of probabilistic tools for expediting the analysis of stochastic systems, as well as to act as a unifying glue between experimental assays and computational modeling.
Our goal for this work is to present a new viewpoint on building surrogate models with a particular emphasis on the methodological foundations of the proposed algorithms. To this end, we confined the presentation to a diverse collection of canonical studies that were designed to highlight the broad applicability of the proposed tools, as well as to provide a test bed for systematic studies that elucidate their practical performance. In the process of gaining a deeper understanding of their advantages and limitations, future studies will focus on realistic large-scale applications in computational mechanics and beyond.
Acknowledgements {#acknowledgements .unnumbered}
================
This work received support from the US Department of Energy under the Advanced Scientific Computing Research program (grant DE-SC0019116) and the Defense Advanced Research Projects Agency under the Physics of Artificial Intelligence program.
Introduction
============
Hopf algebras with $R$-matrices, so called *quasitriangular Hopf algebras*, give rise to tensor categories with a braiding $c\colon V\otimes W\stackrel{\sim}{\longrightarrow} W\otimes V$. Of particular interest are braided tensor categories where the braiding fulfills a certain non-degeneracy condition, see Definition \[def:FactorizabilityOfCats\], which is equivalent to the fact that there are no *transparent objects* $V$, i.e., no objects where the double-braiding $c^2\colon V\otimes W\stackrel{\sim}{\longrightarrow} V\otimes W$ is the identity for all $W$. A ${\mathbb{C}}$-linear tensor category with a nondegenerate braiding, as well as finiteness conditions and another natural transformation $\theta\colon V\stackrel{\sim}{\longrightarrow} V$ (twist), is called a *modular tensor category*. Note that we do not require the category to be semisimple.
Modular tensor categories have many interesting applications: They give rise to topological invariants and mapping class group actions [@KL01; @Tur94]. For example, the standard generators $T$, $S$ of the mapping class group of the torus $\mathrm{SL}_2({\mathbb{Z}})$ are constructed from $\theta$ and $c^2$, respectively. A different source for modular tensor categories in mathematical physics is vertex algebras. There are only a few known example classes of modular tensor categories, in particular non-semisimple ones.
The aim of the present article is to provide modular tensor categories from *small quantum groups* $u_q({\mathfrak{g}})$ at a primitive $\ell$-th root of unity $q$ for a finite-dimensional simple complex Lie algebra ${\mathfrak{g}}$. Lusztig [@Lus90] has constructed these finite-dimensional Hopf algebras and provided an ansatz for an $R$-matrix $R_0\bar{\Theta}$, where the fixed element $\bar{\Theta} \in u_q({\mathfrak{g}})^{-} \otimes u_q({\mathfrak{g}})^{+}$ is constructed from a dual basis of PBW generators, while $R_0 \in u_q({\mathfrak{g}})^0 \otimes u_q({\mathfrak{g}})^0$ is a free parameter subject to some constraints. He gives one canonical solution for $R_0$ whenever $\ell$ has no common divisors with root lengths; otherwise there are cases where no $R$-matrix exists [@KS11] and the quantum group becomes more interesting [@Len14], involving, e.g., the dual root system. Of particular interest in conformal field theory [@FGST06; @FT10; @GR15] is the most extreme case where all root lengths $(\alpha,\alpha)$ divide $\ell$. In particular, our article addresses the question which modular tensor categories appear in these cases. We find indeed, e.g., in Lemma \[lm:Lusztigkernel\], that these extremal cases give especially nice $R$-matrices, although in general they are not factorizable and will require modularization (see for example [@Bru00]) to match the CFT side.
But even if $\ell$ has no common divisors with the root lengths, the resulting braided tensor category may not fulfill the non-degeneracy condition and hence provides no modular tensor category.
Both obstacles (existence and non-degeneracy) can be resolved by extending the Cartan part of the quantum group by a choice of a lattice $\Lambda_R\subseteq \Lambda \subseteq \Lambda_W$ between root- and weight-lattice, respectively by a choice of a subgroup of the fundamental group $\pi_1:=\Lambda_W/\Lambda_R$, corresponding to a choice of a Lie group between the adjoint and the simply-connected form. These extensions are already present in [@Lus90] as the choice of two lattices $X$, $Y$ with pairing $X\times Y\to {\mathbb{C}}^\times$ (root datum). In this way the number of possible $R$-matrices increases, and the purpose of our paper is to study them all.
In a previous article [@LN14b] we have already constructed some solutions $R_0$ in this spirit (under some additional assumptions). As it turns out, the solutions can be parametrized by subgroups $H_1,H_2\subset \pi_1$ and group pairings between $H_1$, $H_2$, and the set of solutions depends on the common divisors of $\ell$ not only with the root lengths, but also with divisors of the Cartan matrix. Some cases admit no braided structure, while others have multiple inequivalent solutions. An interesting observation was, for example, that $B_n$ behaves differently for $n$ odd or even, and that $D_{2n}$ with non-cyclic fundamental group allows several more solutions with non-symmetric $R_0$.
In the present article we conclude this effort: First we introduce more systematic techniques that allow us to compute a list of all quasitriangular structures (without additional assumptions, so we find more solutions). Then our new techniques allow us to determine which of these choices fulfill the non-degeneracy condition. We also determine which cases have a ribbon structure. A main role in the first part is played by a natural pairing $a_\ell$ on the fundamental group $\pi_1$ which depends only on the common divisors of $\ell$ with the fundamental group and encapsulates the essential $\ell$-dependence. The non-degeneracy of the braiding then turns out to depend only on the $2$-torsion of the abelian group in question.
Our result produces a list of modular tensor categories for representations of quantum groups. Moreover we use our methods to explicitly describe the group of transparent objects if the category is not modular, which is for example a prerequisite for modularization.
We now discuss our methods and results in more detail:
In Section \[section2\] we briefly recall the Lie theory and Hopf algebra preliminaries: For every finite-dimensional (semi-)simple complex Lie algebra ${\mathfrak{g}}$ and a primitive $\ell$-th root of unity $q$ Lusztig has introduced in [@Lus90] the *small quantum group* $u_q({\mathfrak{g}})$ which has a triangular decomposition $u_q^+u_q^0u_q^-$ where the (exponentiated) *Cartan algebra* $u_q^0$ is the groupring of the root lattice $\Lambda_R$ modulo some suitable sublattice and $u_q^\pm$ are generated by *simple root vectors* $E_{\alpha_i}$, $F_{\alpha_i}$ fulfilling $q$-deformed Serre relations. In [@Lus93 Section 32] he gives an ansatz for an $R$-matrix in the form $R_0\bar{\Theta}$, where $\bar{\Theta}$ consists of dual PBW bases and $R_0\in u_q^0\otimes u_q^0$ is an arbitrary element in the Cartan part that has to fulfill certain relations.
Our goal is to study the existence and non-degeneracy of $R$-matrices of this form for the quantum group $u_q({\mathfrak{g}},\Lambda,\Lambda')$ with any choice of lattice between root- and weight-lattice and any possible choice of quotient by a subgroup $\Lambda'\subseteq \Lambda$ in the Cartan part $u^0={\mathbb{C}}[\Lambda/\Lambda']$. Later, we prove that $\Lambda'$ is in fact unique if we want a quasitriangular structure (Corollary \[cor:LambdaPrime\]).
The $R_0$-matrix has the following interpretation: It is an $R$-matrix for the groupring ${\mathbb{C}}[\Lambda/\Lambda']$ and it appears as the braiding between highest-weight vectors in our $u_q({\mathfrak{g}})$-modules. Thus the previous theorem clarifies which choices for an $R$-matrix for the group ring lift to the quantum group.
In Section \[section3\] we address the question of constructing quasi-triangular $R$-matrices. First we briefly recall the following general combinatorial result in [@LN14b]:
The $R_0$-matrix is necessarily of the form $$\begin{gathered}
f(\mu,\nu)=\frac 1d q^{-(\mu, \nu)}g(\bar{\mu},\bar{\nu})\delta_{\bar{\mu}\in H_1}\delta_{\bar{\nu}\in H_2},\end{gathered}$$ where $H_1$, $H_2$ are subgroups of $\Lambda/\Lambda_R \subseteq \pi_1$ with $|H_1|=|H_2|=:d$ $($not necessarily isomorphic!$)$ and $g\colon H_1\times H_2\to{\mathbb{C}}^{\times}$ is a pairing of groups.
Then we proceed differently than in the previous article: Using the previous result, we prove in Lemma \[lm:NondegGroupPairing\] that the quasitriangularity of $R$ is equivalent to the assertion that the group pairing $\hat{f}:=|\Lambda/\Lambda'|\cdot f$ between the preimages $G_i:=\Lambda_i/\Lambda'$ of the groups $H_i$ is *non-degenerate* (which is no surprise). In particular we show that this condition fixes $\Lambda'$ uniquely. In later applications we often encounter $\hat{f}$ as a natural identification of $G_1$ and the dual $\hat{G}_2$, e.g., when studying representation theory.
To find all solutions $f$ with this property we develop a machinery to push $\hat{f}$ into the fundamental group $\pi_1$, which encapsulates all the $\ell$-dependence: In Definition \[def:matrix\_A\_l\] we give an abstract characterization of a *centralizer transfer map* $$\begin{gathered}
A_\ell\colon \ \Lambda/\Lambda_R \stackrel{\sim}{\longrightarrow} \operatorname{Cent}_{\Lambda}^\ell(\Lambda_R)/\operatorname{Cent}_{\Lambda_R}^\ell(\Lambda)\end{gathered}$$ (without proving that it always exists). In a generic case this is just multiplication by $\ell$, but it depends severely on common divisors of $\ell$ with root lengths and divisors of the Cartan matrix. With this matrix we can transfer $q^{-(\mu,\nu)}$ to a natural form $a_g^\ell$ on the fundamental group. We prove that $\hat{f}$ is non-degenerate iff $a_g^\ell(\mu,\nu)=q^{-(\mu, A_\ell(\nu))} \cdot g(\mu, A_\ell(\nu))$ is non-degenerate. This explains why the set of solutions, say for fundamental group ${\mathbb{Z}}_n$, always looks like the subset of invertible elements ${\mathbb{Z}}_n^\times$, but shifted (namely by $A_\ell$) depending on $\ell$ and the root system in question.
In Section \[section4\] the remaining computational work is done for quasitriangularity: We calculate a list containing $a_g^\ell$ for all simple ${\mathfrak{g}}$, depending on common divisors of $\ell$ with root lengths and with divisors of the Cartan matrix. We thus write down all solutions for $f$ and hence all $R$-matrices. The calculation starts with the Smith normal form of the Cartan matrix in question and distinguishes three cases: For $\Lambda=\Lambda_W$ we have a generic construction; the case $A_n$ with its large fundamental group ${\mathbb{Z}}_{n+1}$ is treated by hand, as is $D_{2n}$ with non-cyclic fundamental group, which contains the only cases allowing $\Lambda_1\neq \Lambda_2$.
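As a small worked illustration of the first step (our own example; $A_2$ is not one of the delicate cases above), row and column operations bring the Cartan matrix into Smith normal form, whose diagonal reads off the fundamental group:

```latex
% Example (ours): swap the rows, add twice the new first row to the second row,
% add twice the first column to the second column, then negate the first row:
C=\begin{pmatrix}2&-1\\-1&2\end{pmatrix}
\longrightarrow\begin{pmatrix}-1&2\\2&-1\end{pmatrix}
\longrightarrow\begin{pmatrix}-1&2\\0&3\end{pmatrix}
\longrightarrow\begin{pmatrix}-1&0\\0&3\end{pmatrix}
\longrightarrow\begin{pmatrix}1&0\\0&3\end{pmatrix},
% so the Smith normal form is diag(1,3) and
% \pi_1(A_2)=\Lambda_W/\Lambda_R\cong\operatorname{coker}(C)\cong{\mathbb{Z}}_3.
```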
In Section \[section5\] we address our main issue of factorizability with our new tools:
In Section \[section5.1\] we introduce factorizability. Then we calculate the monodromy matrix $R_{21}R$ for an arbitrary choice of $R$-matrix in terms of the $R_0$-part. This gives a purely lattice-theoretic problem equivalent to the factorizability of such an $R$-matrix. Then we prove in the main Theorem \[thm:InvertyMonodromymat\] that factorizability is equivalent to the non-degeneracy of a symmetrization $\operatorname{Sym}_G\big(\hat{f}\big)$ of $\hat{f}$. As will turn out later, the radical of this form is isomorphic to the group of transparent objects.
In Section \[section5.2\] we restrict ourselves to the *symmetric case* where $H_1=H_2$ and $f$, $g$ are symmetric. Other cases appear only in some of the non-cyclic ${\mathbb{Z}}_2\times{\mathbb{Z}}_2$-extensions for type ${\mathfrak{g}}=D_{2n}$; they are dealt with in Section \[section5.3\] and give surprising new solutions.
The main result for the symmetric case is that the radical of the form $\operatorname{Sym}_G\big(\hat{f}\big)$ is in this case simply the $2$-torsion of $\Lambda/\Lambda'$ (Example \[ex:RadSymf\]) and that this is non-degenerate precisely for odd $\ell$ and odd $\Lambda/\Lambda_R$ as well as for ${\mathfrak{g}}=B_n$, $\Lambda=\Lambda_R$, $\ell\equiv 2$ ${\rm mod}~4$ including $A_1$.
In Section \[section5.4\] we prove the following result:
The transparent objects in the category of representations of the Hopf algebra $u_q({\mathfrak{g}},\Lambda)$ with $R$-matrix given by Lusztig’s ansatz are $1$-dimensional objects ${\mathbb{C}}_\chi$ and are the $f$-transformed of the radical of $\operatorname{Sym}_G\big(\hat{f}\big)$: $$\begin{gathered}
\chi(\mu)=f(\mu,\xi),\qquad \xi\in \operatorname{Rad}\big(\operatorname{Sym}_G\big(\hat{f}\big)\big).
\end{gathered}$$
In the following we summarize our results by a table containing all quasitriangular quantum groups $u_q({\mathfrak{g}},\Lambda)$ with their group of transparent objects. In Section \[section6\] we show that all our quasitriangular quantum groups admit a ribbon element. The factorizable solutions, and thus modular tensor categories, are obtained for $\ell$ odd, $\Lambda=\Lambda_R$, and in the following new factorizable cases:
($\ell$ odd, $E_6$, $\Lambda=\Lambda_W$) and ($\ell\equiv 2$ ${\rm mod}~4$, ${\mathfrak{g}}=B_n$, $\Lambda=\Lambda_R$) (including $A_1$) and ($\ell$ odd, ${\mathfrak{g}}=D_{2n}$, $\Lambda_1\neq \Lambda_2$). All other cases can be modularized as discussed in Question \[q:modularize\].
The columns of the following table are labeled by
1. the finite-dimensional simple complex Lie algebra ${\mathfrak{g}}$,
2. the natural number $\ell$, determining the root of unity $q=\exp\big( {\frac{2 \pi i}{\ell}}\big) $,
3. the number of possible $R$-matrices for the Lusztig ansatz,
4. the subgroups $H_i \subseteq H=\Lambda/\Lambda_R$ introduced in Theorem \[thm:solutionsgrpeq\],
5. the subgroups $H_i$ in terms of generators given by multiples of fundamental dominant weights $\lambda_i \in \Lambda_W$,
6. the group pairing $g\colon H_1 \times H_2 \to {\mathbb{C}}^\times$ determined by its values on generators,
7. the group of transparent objects $T \subseteq \Lambda/\Lambda'$ introduced in Lemma \[lm\_transparent\].
[Table: for each simple Lie algebra ${\mathfrak{g}}$ the columns list $\ell$, the number \# of $R$-matrices, $H_i\cong$, the subgroups $H_i$ $(i=1,2)$, the pairing $g$, and the group of transparent objects $T \subseteq \Lambda/\Lambda'$. The cell entries did not survive extraction; the surviving row labels are $B_{n\geq 2}$ $(\pi_1={\mathbb{Z}}_{2})$, $C_{n\geq 3}$ $(\pi_1={\mathbb{Z}}_{2})$, $D_{2n\geq 4}$ $(\pi_1={\mathbb{Z}}_{2}\times{\mathbb{Z}}_2)$, $D_{2n+1\geq 5}$ $(\pi_1={\mathbb{Z}}_{4})$, $E_{6}$ $(\pi_1={\mathbb{Z}}_{3})$ and $E_{7}$ $(\pi_1={\mathbb{Z}}_{2})$.]
In the case $D_{2n}$, $\Lambda=\Lambda_W$, $g$ is uniquely defined by a $(2\times 2)$-matrix $K^g \in \mathfrak{gl}(2,{\mathbb{F}}_2)$, such that $g(\lambda_{2(n-1)+i}, \lambda_{2(n-1)+j})=\exp \big( \frac{2 \pi i K^g_{ij}}{2} \big)$ for $i,j\in \{1,2\}$.
Preliminaries {#section2}
=============
Lie theory {#section2.1}
----------
Throughout this article, ${\mathfrak{g}}$ denotes a finite-dimensional simple complex Lie algebra. We fix a choice of simple roots $\Delta=\{ \alpha_i\,|\,i \in I\}$, so that the Cartan matrix $C$ is given by $C_{ij}=2\frac{(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)}$, where $(\,,\,)$ denotes the normalized Killing form. For a root $\alpha$, we define $d_\alpha:=\frac{(\alpha,\alpha)}{2}$ and set $d_i=d_{\alpha_i}$. By $\Lambda_R:=\mathbb{Z}[\Delta]$ and $\Lambda_R^\vee:=\mathbb{Z}[\Delta^\vee]$ we denote the (co)root lattice of ${\mathfrak{g}}$.
By $\Lambda_W$, we denote the *weight lattice* spanned by fundamental dominant weights $\lambda_i$, which are defined by the equation $(\lambda_i,\alpha_j)=\delta_{i,j}d_i$. Finally, we define the *co-weight lattice* $\Lambda_W^\vee$ as the $\mathbb{Z}$-span of the elements $\lambda_i^\vee:=\frac{\lambda_i}{d_i}$. The quotient $\pi_1:=\Lambda_W/\Lambda_R$ is called the *fundamental group* of ${\mathfrak{g}}$.
One can easily see that the Killing form restricts to a perfect pairing $(\,,\,)\colon \Lambda^\vee_W \times \Lambda_R \to \mathbb{Z}$ and that we get a string of inclusions $\Lambda_R \subseteq \Lambda_R^\vee \subseteq \Lambda_W \subseteq \Lambda_W^\vee$.
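As a minimal illustration (ours): for ${\mathfrak{g}}=A_1$ with simple root $\alpha$ normalized to $(\alpha,\alpha)=2$ one has $d_1=1$, hence $\alpha^\vee=\alpha$ and $\lambda_1^\vee=\lambda_1$, and the chain of lattices collapses to

```latex
% Example (ours): the lattice chain for g = A_1.
\Lambda_R={\mathbb{Z}}\alpha=\Lambda_R^\vee,\qquad
\lambda_1=\tfrac{1}{2}\alpha,\qquad
\Lambda_W={\mathbb{Z}}\tfrac{1}{2}\alpha=\Lambda_W^\vee,\qquad
\pi_1=\Lambda_W/\Lambda_R\cong{\mathbb{Z}}_2.
```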
Lusztig’s ansatz for $\boldsymbol{R}$-matrices {#section2.2}
----------------------------------------------
The starting point for our work [@LN14b] was Lusztig’s ansatz in [@Lus93 Section 32.1] for a universal $R$-matrix of $U_q({\mathfrak{g}})$. Namely, for a specific element $\bar\Theta\in U_q^{\geq 0}\otimes U_q^{\leq 0}$ from a dual basis and a suitable (not further specified) element in the coradical $R_0\in U_q^0\otimes U_q^0$ we are looking for $R$-matrices of the form $$\begin{gathered}
R=R_0\bar\Theta.\end{gathered}$$ We remark that there is no claim that all possible $R$-matrices are of this form. However they are an interesting source of examples, motivated by the interpretation of $u_q({\mathfrak{g}})$ as a quotient of a Drinfeld double and thus well-behaved with respect to the triangular decomposition. This ansatz has been successfully generalized to general diagonal Nichols algebras in [@AY13]. In our more general setting $U_q({\mathfrak{g}},\Lambda,\Lambda')$, we have $$\begin{gathered}
R_0\in {\mathbb{C}}[\Lambda/\Lambda']\otimes{\mathbb{C}}[\Lambda/\Lambda'].\end{gathered}$$
This ansatz has been worked out by Müller in his dissertation [@Mue98a; @Mue98b; @Mue98b+] for small quantum groups $u_q({\mathfrak{g}})$ which we will use in the following, leading to a system of quadratic equations on $R_0$ that is equivalent to $R$ being an $R$-matrix:
\[thm:Theta\] Let $u:=u_q({\mathfrak{g}})$.
1. There is a unique family of elements $\Theta_\beta\in u_{\beta}^-\otimes u_{\beta}^+$, $\beta\in\Lambda_R$, such that $\Theta_0=1\otimes 1$ and $\Theta=\sum_{\beta}\Theta_{\beta}\in u\otimes u$ satisfies $\Delta(x)\Theta = \Theta \bar\Delta(x)$ for all $x\in u$.
2. Let $B$ be a vector space-basis of $u^-$, such that $B_{\beta}=B\cap u^-_{\beta}$ is a basis of $u^-_{\beta}$ for all $\beta$. Here, $u_{\beta}^-$ refers to the natural $\Lambda_R$-grading of $u^-$. Let $\{b^* \,|\, b\in B_{\beta}\}$ be the basis of $u_{\beta}^+$ dual to $B_{\beta}$ under the non-degenerate bilinear form $(\,\cdot\,,\,\cdot\,)\colon u^-\otimes u^+\to {\mathbb{C}}$. We have $$\begin{gathered}
\Theta_{\beta} = (-1)^{{\rm tr} \beta} q_{\beta} \sum_{b\in B_{\beta}} b^-\otimes b^{*+} \in u_{\beta}^- \otimes u_{\beta}^+.
\end{gathered}$$
\[thm:R0\] Let $\Lambda'\subset\{\mu\in\Lambda\,|\, K_{\mu} \text{ central in } u_q({\mathfrak{g}},\Lambda)\}$ be a subgroup of $\Lambda$, and let $G_1$, $G_2$ be subgroups of $G:=\Lambda/\Lambda'$ containing $\Lambda_R/\Lambda'$. In the following, $\mu,\mu_1,\mu_2\in G_1$ and $\nu,\nu_1,\nu_2\in G_2$.
The element $R=R_0\bar\Theta$ with an arbitrary $R_0= \!\sum\limits_{\mu,\nu} f(\mu,\nu) K_{\mu} \otimes K_{\nu}$ is an $R$-matrix for $u_q({\mathfrak{g}},\Lambda,\Lambda')$, if and only if for all $\alpha\in \Lambda_R$ and $\mu$, $\nu$ the following holds: $$\begin{gathered}
f(\mu+ \alpha, \nu) = q^{-(\nu, \alpha)} f(\mu,\nu), \qquad f(\mu, \nu+ \alpha) = q^{-(\mu, \alpha)} f(\mu,\nu), \label{f01} \\
\sum_{\nu_1+\nu_2 = \nu} f(\mu_1,\nu_1)f(\mu_2,\nu_2) = \delta_{\mu_1,\mu_2} f(\mu_1,\nu),\qquad
\sum_{\mu_1+\mu_2 = \mu} f(\mu_1,\nu_1)f(\mu_2,\nu_2) = \delta_{\nu_1,\nu_2} f(\mu,\nu_1),\nonumber\\ \sum_{\mu} f(\mu,\nu) = \delta_{\nu,0},\qquad \sum_{\nu} f(\mu,\nu) = \delta_{\mu,0}.\nonumber \end{gathered}$$
Conditions for the existence of $\boldsymbol{R}$-matrices {#section3}
=========================================================
A first set of conditions on $\boldsymbol{\Lambda/\Lambda'}$ {#section3.1}
------------------------------------------------------------
The target of our efforts is a Hopf algebra called the small quantum group $u_q({\mathfrak{g}},\Lambda,\Lambda')$ with Cartan part $u_q^0={\mathbb{C}}[\Lambda/\Lambda']$. It is defined, e.g., in [@LN14b] and depends on the lattices $\Lambda$, $\Lambda'$ defined below. For $\Lambda=\Lambda_R$ the root lattice, this is the usual small quantum group; the choice of $\Lambda'$ differs in the literature.
In the previous section we have discussed how an $R$-matrix $R=R_0\bar\Theta$ for the quantum group $u_q({\mathfrak{g}},\Lambda,\Lambda')$ can be obtained from an $R_0$-matrix of the form $$\begin{gathered}
R_0= \sum_{\mu,\nu\in \Lambda} f(\mu,\nu) K_{\mu} \otimes K_{\nu}\in {\mathbb{C}}[\Lambda/\Lambda']\otimes{\mathbb{C}}[\Lambda/\Lambda'].\end{gathered}$$ In the following we collect necessary and sufficient conditions for $R=R_0\bar{\Theta}$ to be an $R$-matrix.
We fix once-and-for-all a finite-dimensional simple complex Lie algebra ${\mathfrak{g}}$ and a lattice $\Lambda$ between root- and weight-lattice $$\begin{gathered}
\Lambda_R\subseteq \Lambda \subseteq \Lambda_W.\end{gathered}$$ These choices have a nice geometric interpretation as quantum groups associated to different Lie groups associated to the Lie algebra ${\mathfrak{g}}$.
Another interesting choice is $\Lambda_R \subseteq \Lambda \subseteq \Lambda_W^\vee \cong \Lambda_R^*$, which would below pose no additional complications and may produce further interesting factorizable $R$-matrices.
We fix once-and-for-all a primitive $\ell$-th root of unity $q$. For $\Lambda_1,\Lambda_2 \subseteq \Lambda_W^\vee$ we define the sublattice $$\begin{gathered}
\operatorname{Cent}_{\Lambda_1}(\Lambda_2):= \{\,\nu \in \Lambda_1 \, | \, (\nu,\mu) \in \ell \cdot \mathbb{Z} \ \forall\, \mu \in \Lambda_2 \}.
\end{gathered}$$ Informally, this is the centralizer with respect to the braiding $q^{-(\nu,\mu)}$.
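For ${\mathfrak{g}}=A_1$ this centralizer can be computed directly: with $(m\alpha,n\alpha)=2mn$ one finds $\operatorname{Cent}_{\Lambda_R}(\Lambda_R)=\frac{\ell}{\gcd(\ell,2)}\,{\mathbb{Z}}\alpha$. The brute-force check below is our own illustration, not code from the paper:

```python
def cent_generator_A1(ell):
    """Smallest m > 0 with m*alpha in Cent_{Lambda_R}(Lambda_R) for g = A_1.

    The condition is (m*alpha, n*alpha) = 2*m*n in ell*Z for every n."""
    m = 1
    while any((2 * m * n) % ell != 0 for n in range(ell)):
        m += 1
    return m

print(cent_generator_A1(5))  # 5: for odd ell the centralizer is ell * Z * alpha
print(cent_generator_A1(6))  # 3: for even ell it is (ell / 2) * Z * alpha
```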
Contrary to [@LN14b] we do not fix $\Lambda'$, but we prove later in Corollary \[cor:LambdaPrime\] that there is a necessary choice for $\Lambda'$. In this way, we get more solutions than in [@LN14b]. The only condition necessary to ensure that the Hopf algebra $u_q({\mathfrak{g}},\Lambda,\Lambda')$ is well-defined is $\Lambda'\subseteq \operatorname{Cent}_{\Lambda_R}(\Lambda_R)$.
\[thm:solutionsgrpeq\] The $R_0$-matrix is necessarily of the form $$\begin{gathered}
f(\mu,\nu)=\frac{1}{d|\Lambda_R/\Lambda'|}\cdot q^{-(\mu, \nu)}g(\bar{\mu},\bar{\nu})\delta_{\bar{\mu}\in
H_1}\delta_{\bar{\nu}\in H_2},
\end{gathered}$$ where $H_1$, $H_2$ are subgroups of $H:=\Lambda/\Lambda_R \subseteq \pi_1$ with equal cardinality $|H_1|=|H_2|=:d$ $($not necessarily isomorphic!$)$ and $g\colon H_1\times H_2\to{\mathbb{C}}^{\times}$ is a pairing of groups.
The necessity of this form (in particular that the support of $f$ is indeed a subgroup!) amounts to a combinatorial problem of its own interest, which we solved for $\pi_1$ cyclic in [@LN14a] and for ${\mathbb{Z}}_2\times {\mathbb{Z}}_2$ by hand; a closed proof for all abelian groups would be interesting.
Let $g\colon G \times H \to \mathbb{C}^\times$ be a finite group pairing, then the *left radical* is defined as $$\begin{gathered}
\operatorname{Rad}_L(g):=\{ \lambda \in G \,|\, g(\lambda,\eta)=1 \, \forall \, \eta \in H \}.
\end{gathered}$$ Similarly, the right radical is defined as $$\begin{gathered}
\operatorname{Rad}_R(g):=\{ \eta \in H \,|\, g(\lambda,\eta)=1 \, \forall \, \lambda \in G \}.
\end{gathered}$$ The pairing $g$ is called *non-degenerate* if $\operatorname{Rad}_L(g)=0$. If in addition $\operatorname{Rad}_R(g)=0$, $g$ is called *perfect*.
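As an illustration of these notions (the group and the pairing are our own choices), radicals of a pairing on a cyclic group can be computed exactly by working with the exponents:

```python
def rad_left(n, k):
    """Left radical of the pairing g(a, b) = exp(2*pi*i*a*b*k/n) on Z_n x Z_n.

    g(a, b) = 1 for all b  iff  a*b*k = 0 mod n for all b; integer arithmetic
    on the exponents keeps the computation exact (no complex floats)."""
    return [a for a in range(n) if all((a * b * k) % n == 0 for b in range(n))]

print(rad_left(4, 1))  # [0]    -> the pairing is non-degenerate (in fact perfect)
print(rad_left(4, 2))  # [0, 2] -> the pairing is degenerate
```

Since this pairing is symmetric, the right radical coincides with the left one, so for $k=1$ the pairing is perfect.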
For an $R_0$-matrix of this form, a sufficient condition is that it fulfills the so-called *diamond-equations* (see [@LN14b Definition 2.7]) for each element $0\neq \zeta\in (\operatorname{Cent}(\Lambda_R)\cap\Lambda)/\Lambda'$.
However, we will now go into a different, more systematic direction that makes use of the following observation:
\[lm:NondegGroupPairing\] An $R_0$-matrix of the form given in Theorem [\[thm:solutionsgrpeq\]]{} is a solution to the equations in Theorem [\[thm:R0\]]{}, and hence produces an $R$-matrix $R_0\bar{\Theta}$ iff the restriction to the support $$\begin{gathered}
\hat{f}:=d |\Lambda_R/\Lambda'|\cdot f\colon \ G_1 \times G_2 \to {\mathbb{C}}^\times
\end{gathered}$$ is a *perfect* group pairing, where $G_i:=\Lambda_i/\Lambda' \subseteq \Lambda/\Lambda'=:G$.
We first show that a solution whose restriction to the support is a non-degenerate pairing solves the equations.
The first equations are obviously fulfilled for the form assumed $$\begin{gathered}
f(\mu+ \alpha, \nu) = q^{-(\nu, \alpha)} f(\mu,\nu),\qquad f(\mu, \nu+ \alpha) = q^{-(\mu, \alpha)} f(\mu,\nu).\end{gathered}$$ For the other equations the sums get contributions only from the support $\Lambda_1/\Lambda' \times \Lambda_2/\Lambda'$. The quantities $f(\mu,\nu)\cdot d|\Lambda_R/\Lambda'|$ for fixed $\nu$ (or $\mu$) are characters on the respective support, and by the assumed non-degeneracy all $\nu\neq 0$ give rise to different nontrivial characters. Then the second and third relations follow from orthogonality of characters. Note that since $|H_1|=|H_2|$ (equality of the latter was an assumption!) we were able to choose the right normalization.
For the other direction assume a solution of the given form to the equations. Then already the third equation shows that no $f(-,\nu)$ may be the trivial character; hence the form on the support is nondegenerate, and hence perfect since $|G_1|=|G_2|$.
\[cor:LambdaPrime\] A first consequence of the perfectness of $\hat{f}$ $($i.e., a necessary condition for quasi-triangularity$)$ is $$\begin{gathered}
\operatorname{Cent}_{\Lambda_R}(\Lambda_1)=\operatorname{Cent}_{\Lambda_R}(\Lambda_2)=\Lambda'.\end{gathered}$$ This fixes $\Lambda'$ uniquely. Moreover in cases $\Lambda_1\neq \Lambda_2$, which can only happen for ${\mathfrak{g}}=D_{2n}$, where $\pi_1$ is noncyclic, we get an additional constraint relating $\Lambda_1$, $\Lambda_2$.
In our case, the only possibility for $\Lambda_1 \neq \Lambda_2$, s.t. $G_1 \cong G_2$ is ${\mathfrak{g}}=D_{2n}$. In this case, we have $\operatorname{Cent}_{\Lambda_R}(\Lambda_W)=\operatorname{Cent}_{\Lambda_R}(\Lambda_R)$ and thus the above condition is always fulfilled.
Our main goal for the new approach to quasitriangularity, as well as to the later modularity, is to reduce this non-degeneracy condition for $\hat{f}$ to a non-degeneracy condition for $g$ on $H_1,H_2\subset \pi_1$ that can be checked explicitly.
A natural form on the fundamental group {#section3.2}
---------------------------------------
We now define for each triple $(\Lambda,\Lambda_1,\Lambda_2)$ and each $\ell$th root of unity $q$ a natural pairing $a_\ell$ on the subgroups $H_i:=\Lambda_i/\Lambda_R$ of the fundamental group $\pi_1:=\Lambda_W/\Lambda_R$. The simplest example is $a_\ell=e^{-2\pi i(\mu,\nu)}$. In general it is a transportation of the natural form $q^{-(\mu,\nu)}$ (which does not factorize over $\Lambda_R$) to $H_i$ by a suitable isomorphism $A_\ell$.
This isomorphism $A_\ell$ will encapsulate the crucial dependence on the common divisors of $\ell$, $|H|$ and the root lengths $d_i$; moreover, for different $H$ these forms are *not* simply restrictions of one another.
Then, we can moreover transport any given pairing $g$ together with $q^{-(\mu,\nu)}$ along the isomorphism $A_\ell$ to the $H_i$ and thus define forms $a_g^\ell$ on $H$. The main result of this section, Corollary \[cor:factorizableAell\], is that the non-degeneracy condition in Lemma \[lm:NondegGroupPairing\] for $R_0(f)$ depending on $H_i$, $g$ is equivalent to $a_g^\ell$ being non-degenerate.
Let $\Lambda \subseteq \Lambda_W^\vee$ be a sublattice, s.t. $\Lambda_R \subseteq \Lambda$. By $\hat{\Lambda} \subset \Lambda_W^\vee$ we denote the unique sublattice, s.t. the symmetric bilinear form $(\,\cdot\,,\,\cdot\,)\colon \Lambda_W^\vee \times \Lambda_W^\vee \to \mathbb{Q}$ induces a commuting diagram[$$\begin{tikzcd}
\Lambda_R \arrow[hookrightarrow]{r}\arrow{d}{\cong} & \hat{\Lambda} \arrow[hookrightarrow]{r}\arrow{d}{\cong} & \Lambda_W^\vee \arrow{d}{\cong} \\
{\Lambda_W^\vee}^* \arrow[hookrightarrow]{r} & \Lambda^* \arrow[hookrightarrow]{r} & \Lambda_R^* ,
\end{tikzcd}$$ where $\Lambda^*:= \operatorname{Hom}_\mathbb{Z}(\Lambda,\mathbb{Z})$. In particular, we have $\hat{\Lambda}_R=\Lambda_W^\vee$ and $\hat{\Lambda}_W^\vee=\Lambda_R$.]{}
\[def:matrix\_A\_l\] A *centralizer transfer map* is a group endomorphism $A_\ell \in \operatorname{End}_{\mathbb{Z}}(\Lambda)$, s.t.
1. $A_\ell(\Lambda) \stackrel{!}{=} \Lambda \cap \ell \cdot \hat{\Lambda}_R=\operatorname{Cent}_{\Lambda}^\ell(\Lambda_R)$,
2. $A_\ell(\Lambda_R) \stackrel{!}{=} \Lambda_R \cap \ell \cdot \hat{\Lambda}=\operatorname{Cent}_{\Lambda_R}^\ell(\Lambda)$.
Such a $A_\ell$ induces a group isomorphism $$\begin{gathered}
\Lambda/\Lambda_R \stackrel{\sim}{\longrightarrow} \operatorname{Cent}_{\Lambda}^\ell(\Lambda_R)/\operatorname{Cent}_{\Lambda_R}^\ell(\Lambda).
\end{gathered}$$ Of course $A_\ell$ is not unique.
Are there abstract arguments for the existence of these isomorphisms and for their explicit form?
We will calculate explicit expressions for $A_\ell$ depending on the cases in the next section. At this point we give the generic answers:
For $\Lambda=\Lambda_W^\vee$ we have $A_\ell=\ell\cdot\operatorname{id}$.
For $\Lambda=\Lambda_R$ the two conditions are equivalent, so existence is trivial (resp. obviously the two trivial groups are isomorphic) and we may simply take for $A_\ell$ any base change between left and right side. The expression may however be nontrivial.
\[lm:AellForGCD=1\] Assume $\gcd(\ell, |\Lambda_W^\vee/\Lambda|)=1$, then $A_\ell=\ell\cdot \operatorname{id}$. In particular this is the case if $\ell$ is prime to all root lengths and all divisors of the Cartan matrix.
Moreover if $\ell=\ell_1\ell_2$ with $\gcd(\ell_1, |\Lambda_W^\vee/\Lambda|)=1$, then $A_\ell=\ell_1\cdot A_{\ell_2}$.
This means we only have to calculate $A_\ell$ for all divisors $\ell$ of $|\Lambda_W^\vee/\Lambda|$, which is a subset of all divisors of root lengths times divisors of the Cartan matrix.
For the first condition we need to show for any $\lambda\in \Lambda_W^\vee$ that $\ell\lambda\in\Lambda$ already implies $\lambda\in\Lambda$. But if by assumption the order of the quotient group $\Lambda_W^\vee/\Lambda$ is prime to $\ell$, then $\ell\cdot$ is an isomorphism on this abelian group, and the assertion follows. For the second condition the same argument applies, noting that $|\hat{\Lambda}/\Lambda_R|=|\Lambda_W^\vee/\Lambda|$.
For the second claim we simply consider the inclusion chains $$\begin{gathered}
A_\ell(\Lambda) \subset \Lambda \cap \ell_2 \cdot \hat{\Lambda}_R\subset \Lambda \cap \ell \cdot \hat{\Lambda}_R,\\
A_\ell(\Lambda_R)\subset \Lambda_R \cap \ell_2 \cdot \hat{\Lambda}\subset \Lambda_R \cap \ell \cdot \hat{\Lambda},\end{gathered}$$ where a first isomorphism is given by $A_{\ell_2}$ and again $\ell_1\cdot$ is a second isomorphism because it is prime to the index.
Our main result of this chapter is the following:
\[thm:radical\] Let $\Lambda_R \subseteq \Lambda_1$, $\Lambda_2 \subseteq \Lambda_W$ be intermediate lattices, s.t. the condition in Corollary [\[cor:LambdaPrime\]]{} is fulfilled, i.e., $\operatorname{Cent}_{\Lambda_R}(\Lambda_1)=\operatorname{Cent}_{\Lambda_R}(\Lambda_2)=\Lambda'$. Assume we have a centralizer transfer map $A_\ell$.
1. The following form is well defined on the quotients: $$\begin{aligned}
a_g^\ell\colon \ & \Lambda_1/\Lambda_R \times \Lambda_2/\Lambda_R \longrightarrow \mathbb{C}^\times, \\
& (\bar{\lambda},\bar{\mu}) \longmapsto q^{-(\lambda,A_\ell(\mu))} \cdot g(\lambda,A_\ell(\mu)).
\end{aligned}$$
2. Let $$\begin{gathered}
\operatorname{Cent}_{\Lambda_1}^g(\Lambda_2):=\big\{\lambda \in \Lambda_1 \,|\, q^{(\lambda,\mu)}=g(\lambda,\mu) \; \forall\, \mu \in \Lambda_2 \big\}.\end{gathered}$$ Then the inclusion $\operatorname{Cent}_{\Lambda_1}^g({\Lambda_2}) \hookrightarrow \Lambda_1$ induces an isomorphism $$\begin{gathered}
\operatorname{Cent}_{\Lambda_1}^g({\Lambda_2})/\Lambda' \cong \operatorname{Rad}\big(a_g^\ell\big).\end{gathered}$$
\[cor:factorizableAell\] The quasitriangularity conditions for a choice $R_0$ are by Lemma [\[lm:NondegGroupPairing\]]{} equivalent to the non-degeneracy of the group pairing on $\Lambda_1/\Lambda' \times \Lambda_2/\Lambda'$: $$\begin{gathered}
\hat{f}(\lambda,\mu)=q^{-(\lambda,\mu)} g(\lambda,\mu).
\end{gathered}$$ By the previous theorem this condition is now equivalent to the nondegeneracy of $ a_g^\ell$.
This condition on the fundamental group, which is a finite abelian group and mostly cyclic, can be checked explicitly once $a_g^\ell$ has been calculated.
The first part of the theorem is a direct consequence of the definition of the centralizer transfer matrix $A_\ell$. For the second part, we first notice that by assumption we have a commutative diagram of finite abelian groups $$\begin{tikzcd}
\Lambda_R/\Lambda' \arrow[hookrightarrow]{r}\arrow{d}{q^{-(\cdot,\cdot)}} &\Lambda_1/\Lambda' \arrow[twoheadrightarrow]{r}\arrow{d}{\hat{f}} &\Lambda_1/\Lambda_R \arrow{d}{\hat{f}'}\\
\left(\Lambda_2/\operatorname{Cent}_{\Lambda_2}(\Lambda_R) \right)^\wedge \arrow[hookrightarrow]{r}& (\Lambda_2/\Lambda')^\wedge \arrow[twoheadrightarrow]{r}& \left(\operatorname{Cent}_{\Lambda_2}(\Lambda_R) /\Lambda' \right)^\wedge,
\end{tikzcd}$$ where $G^\wedge$ denotes the dual group of a group $G$.
Now, by the five lemma we know that $\hat{f}$ is an isomorphism if and only if the induced map $\hat{f}'$ is an isomorphism. Post-composing this map with the dualized centralizer transfer matrix $A_\ell^\wedge\colon (\operatorname{Cent}_{\Lambda_2}(\Lambda_R) /\Lambda' )^\wedge \cong (\Lambda_2/\Lambda_R)^\wedge$ gives $a_g^\ell$.
Explicit calculation for every $\boldsymbol{{\mathfrak{g}}}$ {#section4}
============================================================
In the following, we want to compute the endomorphism $A_\ell \in \operatorname{End}_\mathbb{Z}(\Lambda)$ and the pairing $a_\ell$ on the fundamental group explicitly in terms of the Cartan matrices and the common divisors of $\ell$ with root lengths and divisors of the Cartan matrix. We will finally give a list for all ${\mathfrak{g}}$.
Technical tools {#section4.1}
---------------
We choose the basis of simple roots $\alpha_i$ for $\Lambda_R$ and the dual basis of fundamental coweights $\lambda_i^\vee$ for the dual lattice $\Lambda_W^\vee$ with $(\alpha_i,\lambda_j^\vee)=\delta_{i,j}$.
For any choice $\Lambda\subset \Lambda_W\subset \Lambda_W^\vee$, let $A_\Lambda$ be a *basis matrix*, i.e., any ${\mathbb{Z}}$-linear isomorphism $\Lambda_W^\vee\to \Lambda$ sending the basis $\lambda_i^\vee$ of $\Lambda_W^\vee$ to some basis $\mu_i$ of $\Lambda$. It is unique up to pre-composition of a unimodular matrix $U \in \mathrm{SL}_n(\mathbb{Z})$.
The dual basis $A_{\hat{\Lambda}}$ of $\hat{\Lambda}$ is defined by $$\begin{gathered}
\big(A_{\hat{\Lambda}}\big(\lambda_i^\vee\big),A_{\Lambda}\big(\lambda_j^\vee\big)\big)= \delta_{ij}.\end{gathered}$$ Explicitly, $A_{\hat{\Lambda}}$ is given by $ A_{\hat{\Lambda}} = \big(A_{\Lambda}^{-1}A_R\big)^T,$ where $(A_R)_{ij}=(\alpha_i,\alpha_j)$. Now, let $A_\Lambda=P_\Lambda S_\Lambda Q_\Lambda$ be the unique Smith decomposition of $A_\Lambda$, which means: $P_\Lambda$, $Q_\Lambda$ are unimodular and $S_\Lambda$ is diagonal with diagonal entries $(S_\Lambda)_{ii}=:d^\Lambda_i$, such that $d^\Lambda_i \,|\, d^\Lambda_j$ for $i<j$.
\[ex:ExplicitFormOfSNF\] For the root lattice the $d^{\Lambda_R}_i$ are the divisors of scalar product matrix $(\alpha_i,\alpha_j)$. Their product is $$\begin{gathered}
\prod_i d^{\Lambda_R}_i=\big|\Lambda_W^\vee/\Lambda_R\big|= \Big( \prod_i d_i\Big) \cdot |\pi_1|, \qquad d_i=\frac{(\alpha_i,\alpha_i)}{2}.\end{gathered}$$ For the coweight lattice all $d^{\Lambda_{W}^\vee}_i=1$. For the weight lattice we recover the familiar $d^{\Lambda_{W}}_i=d_i$.
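The divisors $d^\Lambda_i$ in the Smith decomposition can be computed from the standard fact that $D_k:=d^\Lambda_1\cdots d^\Lambda_k$ equals the gcd of all $k\times k$ minors. The following Python sketch illustrates this for the Gram matrices $(\alpha_i,\alpha_j)$ of $A_3$ and $B_2$, which we have written out by hand as example inputs (this is an illustration of the example above, not the paper's computation):

```python
from math import gcd
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minors_gcd(M, k):
    """gcd of all k x k minors of the square integer matrix M."""
    n, g = len(M), 0
    for rows in combinations(range(n), k):
        for cols in combinations(range(n), k):
            g = gcd(g, abs(det([[M[i][j] for j in cols] for i in rows])))
    return g

def invariant_factors(M):
    """Divisors d_1 | d_2 | ... : d_k = D_k / D_{k-1},
    where D_k is the gcd of all k x k minors (D_0 = 1)."""
    D = [1] + [minors_gcd(M, k) for k in range(1, len(M) + 1)]
    return [D[k] // D[k - 1] for k in range(1, len(M) + 1)]

# Gram matrices (alpha_i, alpha_j), written out by hand:
A3 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]   # A_3: expect (1, 1, 4)
B2 = [[4, -2], [-2, 2]]                      # B_2: product 4 = (prod d_i)*|pi_1| = 2*2
```

For $A_3$ this returns $(1,1,4)$, matching $\big|\Lambda_W^\vee/\Lambda_R\big|=|\pi_1|=4$ in the simply-laced case.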
Without loss of generality, we will assume the basis matrices $A_\Lambda$ to be symmetric, i.e., $A_\Lambda=A_\Lambda^T$. We then have the following lemma:
\[lm:ExplicitFormOfCentralizers\] Let $\Lambda_R \subseteq \Lambda \subseteq \Lambda_W^\vee$ be an intermediate lattice. We define the matrices $$\begin{gathered}
A_{\rm Cent}:= \big(P_\Lambda^T\big)^{-1} D_\ell P_{\Lambda}^{-1}, \qquad D_\ell:= \operatorname{Diag}\left( \frac{\ell}{\operatorname{gcd}\big(\ell,d_i^\Lambda\big)}\right).\end{gathered}$$ Then, $$\begin{gathered}
\operatorname{Cent}_{\Lambda_R}(\Lambda)=A_RA_{\rm Cent}\Lambda_W^\vee, \qquad \operatorname{Cent}_{\Lambda}(\Lambda_R)=A_\Lambda A_{\rm Cent}\Lambda_W^\vee.\end{gathered}$$
We compute explicitly, $$\begin{gathered}
\operatorname{Cent}_{\Lambda_R}(\Lambda) = \Lambda_R \cap \ell \cdot \hat{\Lambda}=A_R \Lambda_W^\vee \cap \big(A_{\Lambda}^{-1}A_R\big)^T \ell \Lambda_W^\vee\\
\hphantom{\operatorname{Cent}_{\Lambda_R}(\Lambda)}{} =\big(A_{\Lambda}^{-1}A_R\big)^T \big(\big(\big(A_{\Lambda}^{-1}A_R\big)^T\big)^{-1}A_R \Lambda_W^\vee \cap \ell \Lambda_W^\vee\big)=A_R A_{\Lambda}^{-1} \big(A_\Lambda \cap \ell \Lambda_W^\vee\big)\\
\hphantom{\operatorname{Cent}_{\Lambda_R}(\Lambda)}{}
=A_R \big(P_\Lambda S_\Lambda P_\Lambda^T\big)^{-1} \big(P_\Lambda S_\Lambda P_\Lambda^T \Lambda_W^\vee\cap \ell \Lambda_W^\vee\big)
=A_R \big(P_\Lambda^T\big)^{-1} S_\Lambda^{-1} \big(S_\Lambda \Lambda_W^\vee\cap \ell \Lambda_W^\vee\big) \\
\hphantom{\operatorname{Cent}_{\Lambda_R}(\Lambda)}{}
=A_R \big(P_\Lambda^T\big)^{-1} S_\Lambda^{-1} \operatorname{Diag}(\operatorname{lcm}(S_{\Lambda_{ii}},\ell))\Lambda_W^\vee
=A_R \big(P_\Lambda^T\big)^{-1} D_\ell \Lambda_W^\vee=A_RA_{\rm Cent}\Lambda_W^\vee.
\end{gathered}$$ On the other hand, $$\begin{gathered}
\operatorname{Cent}_{\Lambda}(\Lambda_R) = \Lambda \cap \ell \hat{\Lambda}_R = \Lambda \cap \ell \Lambda_W^\vee = A_\Lambda \Lambda_W^\vee \cap \ell \Lambda_W^\vee = P_\Lambda S_\Lambda P_\Lambda^T \Lambda_W^\vee \cap \ell \Lambda_W^\vee \\
\hphantom{\operatorname{Cent}_{\Lambda}(\Lambda_R)}{}
= P_\Lambda \big(S_\Lambda \Lambda_W^\vee \cap \ell \Lambda_W^\vee\big) = P_\Lambda S_\Lambda D_\ell \Lambda_W^\vee
= A_\Lambda \big(P_\Lambda^T\big)^{-1} D_\ell \Lambda_W^\vee=A_\Lambda A_{\rm Cent}\Lambda_W^\vee.
\end{gathered}$$ In particular, this means $A_{\hat{\Lambda}}\operatorname{Cent}_{\Lambda}(\Lambda_R)=\operatorname{Cent}_{\Lambda_R}(\Lambda)$.
Case $\boldsymbol{\Lambda=\Lambda_W}$ {#section4.2}
-------------------------------------
In order to exhaust all cases that appear in our setting, we continue with $\Lambda=\Lambda_W$:
\[lm:AellForLambda=LambdaW\] In the case $\Lambda=\Lambda_W$, the centralizer transfer matrix $A_\ell$ is of the following form: $$\begin{gathered}
A_\ell = \begin{cases}
A_{\Lambda_W}A_{\rm Cent} Q_C^TP_C^{-1}A_{\Lambda_W}^{-1}, &\operatorname{gcd}(\ell,|\pi_1|) \neq 1,\\
\ell \cdot \operatorname{id}, & \text{else}.
\end{cases}\end{gathered}$$ Here, $C=P_CS_CQ_C$ denotes the Smith decomposition of the Cartan matrix of ${\mathfrak{g}}$.
As we noted in Example \[ex:ExplicitFormOfSNF\], we have $A_{\Lambda_W}=\operatorname{Diag}(d_i)$, for $d_i$ being the $i$th root length. Since $d_i \in \{1,p\}$ for some prime number $p$, up to a permutation $A_{\Lambda_W}$ is already in Smith normal form: this means that $P_{\Lambda_W}$ is a permutation matrix of the form $(P_{\Lambda_W})_{ij}=\delta_{j,\sigma(i)}$ for some $\sigma \in S_n$, s.t. $d_{\sigma(1)}\leq \cdots \leq d_{\sigma(n)}$. It follows that $A_{\rm Cent}=\operatorname{Diag}\big( \frac{\ell}{\gcd(\ell,d_i)}\big)$.
Using the definition $C_{ij}=\frac{(\alpha_i,\alpha_j)}{d_i}$, in the case $\operatorname{gcd}(\ell,|\pi_1|) \neq 1$ we obtain $$\begin{gathered}
A_{\rm Cent}C^T=CA_{\rm Cent}.\end{gathered}$$ Thus, $$\begin{gathered}
A_\ell A_R= A_{\Lambda_W}A_{\rm Cent} Q_C^TP_C^{-1}A_{\Lambda_W}^{-1}A_R =A_RC^{-1}A_{\rm Cent}Q_C^TP_C^{-1}C \\
\hphantom{A_\ell A_R}{} =A_R A_{\rm Cent} \big(C^T\big)^{-1}Q_C^TP_C^{-1}C=A_R A_{\rm Cent}.\end{gathered}$$ By the previous lemma, this proves the first condition for $A_\ell$. The second condition follows immediately from the previous lemma.
The case $\operatorname{gcd}(\ell,|\pi_1|) = 1$ follows from Lemma \[lm:AellForGCD=1\] and the fact that $|\pi_1|=|\Lambda_W^\vee/\Lambda_R^\vee|$.
Case $\boldsymbol{A_n}$ {#section4.3}
-----------------------
In the following example, we treat the case ${\mathfrak{g}}=A_n$ with fundamental group $\Lambda_W/\Lambda_R={\mathbb{Z}}_{n+1}$ for general intermediate lattices $\Lambda_R \subseteq \Lambda \subseteq \Lambda_W$.
\[ex:AellForAn\]In order to compute the centralizer transfer map $A_\ell$, we first compute the Smith decomposition of $A_R$: $$\begin{gathered}
A_R=
\begin{pmatrix}
2 & -1& 0 & \dots & & 0 \\
-1 & 2 &-1 & 0 & & 0 \\
0 & -1& 2 & \ddots & \ddots & \vdots \\
0 &0 & \ddots & \ddots & -1 & 0 \\
\vdots & & \ddots & -1 & 2 & -1 \\
0 & \dots & & 0 & -1 & 2
\end{pmatrix}\\
\hphantom{A_R}{} =
\arraycolsep=4pt\def\arraystretch{0.8}
\begin{pmatrix}
-1 & 0& 0 & \dots & & 0 \\
2 & -1 &0 & & & 0 \\
0 & 2& -1 & \ddots & & \vdots \\
0 &0 & \ddots & \ddots & & 0 \\
\vdots & & \ddots & 2 & -1 & 0 \\
0 & \dots & & 0 & 2 & 1
\end{pmatrix}\!
\begin{pmatrix}
1 & 0& 0 & \dots & & 0 \\
0 & 1 &0 & & & 0 \\
0 & 0& 1 & \ddots & & \vdots \\
\vdots& & \ddots & \ddots & & \\
& & \ddots & & 1 & 0 \\
0 & \dots & & & 0 & n+1
\end{pmatrix}\!
\begin{pmatrix}
-2 & 1& 0 & \dots & & 0 \\
-3 & 0 &1 & \ddots & & 0 \\
-4 & 0& 0 & \ddots & & \vdots \\
\vdots& \vdots & & \ddots & 1 & 0 \\
-n & & & & 0 & 1 \\
1 & 0 & \dots & & 0 & 0
\end{pmatrix}.
\end{gathered}$$ A sublattice $\Lambda_R \subsetneq \Lambda \subsetneq \Lambda_W$ is uniquely determined by a divisor $d \,|\, n+1$, so that $\Lambda/\Lambda_R \cong {\mathbb{Z}}_d$ and is generated by the multiple $\hat{d}\lambda_n$, where $\hat{d}:=\frac{n+1}{d}$. Then $$\begin{gathered}
d_i^\Lambda =
\begin{cases}
1,& i<n,\\
d, & i=n.
\end{cases}
\end{gathered}$$ Since $A_n$ is simply-laced with cyclic fundamental group, the formula $A_\Lambda=P_R S_\Lambda P_R^T$ gives us symmetric basis matrices of sublattices $\Lambda_R \subseteq \Lambda \subseteq \Lambda_W$. We also substitute the above basis matrix of the root lattice $A_R$ by $A_R(Q_R)^{-1}P_R^T$. It is then easy to see that the definition $A_\ell:=P_R D_\ell P_R^{-1}$ gives a centralizer transfer matrix. We calculate it explicitly $$\begin{gathered}
(A_\ell)_{ij}=\big(P_R D_\ell P_R^{-1}\big)_{ij}=
\begin{cases}
\delta_{ij}, &i<n, \\
\displaystyle (n+1-j)\left( \frac{\ell}{\operatorname{gcd}(\ell,d)} -1 \right) , &i=n \text{ and } j<n, \\
\displaystyle\frac{\ell}{\operatorname{gcd}(\ell,d)} , & i=j=n.
\end{cases}
\end{gathered}$$ Now a form $g$ is uniquely determined by a $d$th root of unity $g(\chi,\chi)=\exp \big(\frac{2 \pi i\cdot k}{d}\big)=\zeta_{d}^k$ with some $k$. Then we calculate the form $a_g^\ell$ on the generator $$\begin{gathered}
a_g^\ell(\chi,\chi)=q^{-(\chi,A_\ell(\chi))} g(\chi,A_\ell(\chi))
= q^{-\frac{(n+1)^2 \cdot \ell}{d^2 \operatorname{gcd}(\ell,\hat{d})}(\lambda_n^\vee,\lambda_n^\vee)} \cdot g(\chi,\chi)^{\frac{ \ell}{\operatorname{gcd}(\ell,\hat{d})}}\\
\hphantom{a_g^\ell(\chi,\chi)}{}
= \exp \left(\frac{2 \pi i \cdot(k \ell - \hat{d}n)}{d \cdot \operatorname{gcd}(\ell,\hat{d})} \right).
\end{gathered}$$ For example the trivial $g$ (i.e., $k=0$) gives an $R$-matrix for all lattices $\Lambda$ (defined by $\hat{d}d=n+1$) iff $\frac{\hat{d}}{\operatorname{gcd}(\ell,\hat{d})}$ is coprime to $d$. For $\ell$ coprime to the divisor $n+1$ this amounts to all lattices associated to decompositions of $n+1$ into two coprime factors.
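On the cyclic group ${\mathbb{Z}}_d$ nondegeneracy of $a_g^\ell$ amounts to its value on the generator having order exactly $d$. The following Python sketch evaluates the closed formula above numerically (the function names are ours; we only re-use the displayed exponent $(k\ell-\hat{d}n)/(d\cdot\gcd(\ell,\hat{d}))$):

```python
from math import gcd

def a_order(n, d, ell, k):
    """Order of a_g^ell(chi, chi) = exp(2*pi*i*(k*ell - dhat*n) / (d*gcd(ell, dhat)))
    as a root of unity; here dhat = (n+1)/d for g = A_n and Lambda/Lambda_R = Z_d."""
    assert (n + 1) % d == 0
    dhat = (n + 1) // d
    den = d * gcd(ell, dhat)
    num = (k * ell - dhat * n) % den
    return den // gcd(num, den)   # gcd(0, den) = den, giving order 1

def gives_R_matrix(n, d, ell, k):
    """Nondegenerate on Z_d iff the generator value has full order d."""
    return a_order(n, d, ell, k) == d
```

For $k=0$ this reproduces the criterion just stated: e.g., for $A_5$, $\ell=5$, the lattices with $d=2$, $\hat{d}=3$ work, while for $A_3$, $\ell=3$, the lattice with $d=\hat{d}=2$ does not.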
Case $\boldsymbol{D_n}$ {#section4.4}
-----------------------
Finally, we consider the root lattice $D_n$. Since we have $\pi_1(D_{2n\geq 4}) \cong {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ and $\pi_1(D_{2n+1\geq 5}) \cong {\mathbb{Z}}_4$, it is appropriate to split this investigation into two steps. We start with $D_{2n\geq 4}$. In order to compute the respective Smith decompositions, we used the software *Wolfram Mathematica*.
In the case $D_{2n\geq 4}$, we have three different possibilities for the lattices $\Lambda_R \subseteq \Lambda_1,\Lambda_2 \subseteq \Lambda_W$:
1. $\Lambda_1 \neq \Lambda_2$, $H_1 \cong H_2 \cong {\mathbb{Z}}_2$: In this case, the subgroups $\Lambda_i/\Lambda_R \subseteq \pi_1$ are spanned by the fundamental weights $\lambda_{2(n-1)+i}$. As in the case $A_n$, we define the centralizer transfer map $A_\ell:=P_R D_\ell P_R^{-1}$ on $H_2$. This is possible since the symmetric basis matrix $A_{\Lambda_2}=P_R S_{\Lambda_2} P_R^T$ of $\Lambda_2$ is already in Smith normal form. Using the software *Wolfram Mathematica* in order to compute $P_R$, we obtain $A_\ell(\lambda_{2n})=\frac{\ell}{\operatorname{gcd}(2,\ell)}\lambda_{2n}$. Combining this with $(\lambda_{2n-1},\lambda_{2n})=\frac{n-1}{2}$, we get $$\begin{gathered}
a_g^\ell(\lambda_{2n-1},\lambda_{2n})=\exp \left(\frac{2 \pi i\cdot (k\ell-2(n-1))}{2\cdot \operatorname{gcd}(2,\ell)} \right)
\end{gathered}$$ for $g(\lambda_{2n-1},\lambda_{2n})=\exp \big( \frac{2 \pi i k}{2} \big) $.
2. $\Lambda_1 = \Lambda_2$, $H_i \cong {\mathbb{Z}}_2$: Without loss of generality, and in order to use the same definition for $A_\ell$ as above, we choose $\Lambda_i$, s.t. the group $\Lambda_i/\Lambda_R$ is spanned by $\lambda_{2n}$. Combining the above result $A_\ell(\lambda_{2n})=\frac{\ell}{\operatorname{gcd}(2,\ell)}\lambda_{2n}$ with $(\lambda_{2n},\lambda_{2n})=\frac{n}{2}$, we obtain[$$\begin{gathered}
a_g^\ell(\lambda_{2n},\lambda_{2n})=\exp \left(\frac{2 \pi i\cdot (k\ell-2n)}{2\cdot \operatorname{gcd}(2,\ell)} \right)\end{gathered}$$ for $g(\lambda_{2n},\lambda_{2n})=\exp \big( \frac{2 \pi i k}{2} \big) $.]{}
3. $\Lambda_1=\Lambda_2=\Lambda_W$, $H \cong {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$: A group pairing $g\colon (\mathbb{Z}_2 \times \mathbb{Z}_2) \times (\mathbb{Z}_2 \times \mathbb{Z}_2) \to \mathbb{C}^\times$ is uniquely defined by a matrix $K \in \mathfrak{gl}(2,\mathbb{F}_2)$, so that $$\begin{gathered}
g(\lambda_{2(n-1)+i},\lambda_{2(n-1)+j})=\exp \left( \frac{2 \pi i K_{ij}}{2} \right).
\end{gathered}$$ Since $D_n$ is simply-laced, we have $A_\ell=\ell \cdot \operatorname{id}$. Using $(\lambda_{2(n-1)+i},\lambda_{2(n-1)+j}) \text{ mod } 2=\delta_{i+j \text{odd}}$, we obtain $$\begin{gathered}
a_g^\ell(\lambda_{2(n-1)+i},\lambda_{2(n-1)+j}) =\exp \left( \frac{2 \pi i \cdot K_{ij}\ell}{2}\right)(-1)^{i+j}.
\end{gathered}$$
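For $\Lambda_1=\Lambda_2=\Lambda_W$ the pairing lives on ${\mathbb{Z}}_2\times{\mathbb{Z}}_2$, so nondegeneracy is a determinant condition over ${\mathbb{F}}_2$ on the exponent matrix of $a_g^\ell$. A hedged Python sketch of this check (the 0-based indexing of $K$ and the reduction to ${\mathbb{F}}_2$ are our own bookkeeping; the parity of $i+j$ is unchanged by the index shift):

```python
def d2n_weight_lattice_nondegenerate(K, ell):
    """Pairing a_g^ell(lam_i, lam_j) = exp(pi*i*K[i][j]*ell) * (-1)^(i+j)
    on Z_2 x Z_2 (0-based i, j); its exponent matrix over F_2 is
    B[i][j] = (K[i][j]*ell + i + j) mod 2, and the pairing is
    nondegenerate iff det(B) = 1 over F_2."""
    B = [[(K[i][j] * ell + i + j) % 2 for j in range(2)] for i in range(2)]
    return (B[0][0] * B[1][1] - B[0][1] * B[1][0]) % 2 == 1
```

In particular, for even $\ell$ the contribution of $K$ vanishes and one is left with the off-diagonal form $(-1)^{i+j}$, which is nondegenerate.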
The last step is the case $D_{2n+1\geq 5}$:
Since it is simply-laced and its fundamental group is cyclic, the case $D_{2n+1\geq 5}$ can be treated very similarly to $A_n$. We distinguish two cases:
1. $\Lambda_1=\Lambda_2$, $H_i ={\langle2\lambda_{2n+1}\rangle} \cong {\mathbb{Z}}_2$. As in the case $A_n$, we define the centralizer transfer map $A_\ell:=P_R D_\ell P_R^{-1}$ on $H_2$. Using $(\lambda_{2n+1},\lambda_{2n+1})=\frac{2n+1}{4}$, we obtain $$\begin{gathered}
a_g^\ell(2\lambda_{2n+1},2\lambda_{2n+1})=\exp \left(\frac{2 \pi i\cdot (k\ell-2(2n+1))}{2\cdot \operatorname{gcd}(2,\ell)} \right)
\end{gathered}$$ for $g(2\lambda_{2n+1},2\lambda_{2n+1})=\exp \big( \frac{2 \pi i k}{2} \big) $.
2. $\Lambda_1=\Lambda_2=\Lambda_W$, $H ={\langle\lambda_{2n+1}\rangle} \cong {\mathbb{Z}}_4$. By an analogous argument as above, we obtain $$\begin{gathered}
a_g^\ell(\lambda_{2n+1},\lambda_{2n+1})=\exp \left(\frac{2 \pi i\cdot (k\ell-(2n+1))}{4} \right)
\end{gathered}$$ for $g(\lambda_{2n+1},\lambda_{2n+1})=\exp \big( \frac{2 \pi i k}{4} \big) $.
Table of all quasitriangular quantum groups {#section4.5}
-------------------------------------------
In the following table, we list all simple Lie algebras and check for which non-trivial choices of $\Lambda$, $\Lambda_i$, $\ell$ and $g$ the element $R_0\bar{\Theta}$ is an $R$-matrix. As before, we define $H_i:=\Lambda_i/\Lambda_R$ and $H:=\Lambda/\Lambda_R$. In the cyclic case, if $x_i$ are generators of the $H_i$, then the pairing is uniquely defined by an element $1 \leq k \leq |H_i|$, s.t. $g(x_1,x_2)=\exp \big( \frac{2 \pi i k}{|H_i|} \big)$. In the case $D_{2n}$, $\Lambda=\Lambda_W$, $g$ is uniquely defined by a $2\times 2$-matrix $K \in \mathfrak{gl}(2,{\mathbb{F}}_2)$, s.t. $g(\lambda_{2(n-1)+i},\lambda_{2(n-1)+j})=\exp \big( \frac{2 \pi i K^g_{ij}}{2} \big)$ for $i,j\in \{1,2\}$.
The columns of the following table are labeled by
1. the finite-dimensional simple complex Lie algebra ${\mathfrak{g}}$,
2. the natural number $\ell$, determining the root of unity $q=\exp\big( {\frac{2 \pi i}{\ell}}\big) $,
3. the number of possible $R$-matrices for the Lusztig ansatz,
4. the subgroups $H_i \subseteq H=\Lambda/\Lambda_R$ introduced in Theorem \[thm:solutionsgrpeq\],
5. the subgroups $H_i$ in terms of generators given by multiples of fundamental dominant weights $\lambda_i \in \Lambda_W$,
6. the group pairing $g\colon H_1 \times H_2 \to {\mathbb{C}}^\times$ determined by its values on generators,
7. the group pairing $a_g^\ell$ introduced in Theorem \[thm:radical\] determined by its values on generators.
${\mathfrak{g}}$ $\ell$ \# $H_i\cong$ $H_i\,{\scriptstyle (i=1,2)}$ $g$ $a_g^\ell$
------------------------------------------------ -------- ---- ------------ ------------------------------- ----- ------------
$A_{n\geq 1}$
$\pi_1={\mathbb{Z}}_{n+1}$
$B_{n\geq 2}$
$\pi_1={\mathbb{Z}}_{2}$
$C_{n\geq 3}$
$\pi_1={\mathbb{Z}}_{2}$
$D_{2n\geq 4}$
$\pi_1={\mathbb{Z}}_{2} \times {\mathbb{Z}}_2$
$D_{2n+1\geq 5}$
$\pi_1={\mathbb{Z}}_{4}$
$E_{6}$
$\pi_1={\mathbb{Z}}_{3}$
$E_{7}$
$\pi_1={\mathbb{Z}}_{2}$
: Solutions for $R_0$-matrices.[]{data-label="tbl:Solutions"}
The Lie algebras $E_8$, $F_4$ and $G_2$ have trivial fundamental groups and thus have no non-trivial solution. We want to emphasize once more that the choice $\Lambda_i=\Lambda_R$ always leads to a quasitriangular quantum group.
The following lemma connects our results with Lusztig’s original result:
\[lm:Lusztigkernel\] In Lusztig’s definition of a quantum group he uses the quotient by $$\begin{gathered}
\Lambda'_{\rm Lusz}=2\operatorname{Cent}_{\Lambda_R}(2\Lambda_W).
\end{gathered}$$ This coincides with our choice $\Lambda'=\operatorname{Cent}_{\Lambda_R}(\Lambda_1 + \Lambda_2)$ if and only if $$\begin{gathered}
\label{LusztigsChiceLambda'equation}
2\gcd\big(\ell,d_i^\Lambda\big)=\gcd\big(\ell,2d_i^W\big),
\end{gathered}$$ where the $d_i^\Lambda$ denote the invariant factors of $\Lambda_W^\vee/\Lambda$ and the $d_i^W$ denote the invariant factors of $\Lambda_W^\vee/\Lambda_W$ $($i.e., ordered root lengths$)$.
In particular, for $\ell$ odd these choices never coincide. For $\Lambda=\Lambda_W$, $\Lambda'=\Lambda'_{\rm Lusz}$ holds if and only if $2d_i \,|\, \ell$. This is the most extreme case of divisibility and it is precisely the case appearing in logarithmic conformal field theories.
We first note that in our cases, $\Lambda'=\operatorname{Cent}_{\Lambda_R}(\Lambda_1 + \Lambda_2)=\operatorname{Cent}_{\Lambda_R}(\Lambda)$. We have $$\begin{gathered}
2\operatorname{Cent}_{\Lambda_R}(2\Lambda_W) = 2\big(\Lambda_R \cap \widehat{2\Lambda_W}\big) \\
\hphantom{2\operatorname{Cent}_{\Lambda_R}(2\Lambda_W)}{} =A_R2\left(\Lambda_W^\vee \cap A_W^{-1} \frac{\ell}{2}\Lambda_W^\vee\right) =A_R \operatorname{Diag}\left( \frac{2 \ell}{\gcd(\ell,2d_i^W)}\right) \Lambda_W^\vee.
\end{gathered}$$ By Lemma \[lm:ExplicitFormOfCentralizers\], this coincides with $\Lambda'$ if and only if equation (\[LusztigsChiceLambda’equation\]) holds.
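The condition $2\gcd\big(\ell,d_i^\Lambda\big)=\gcd\big(\ell,2d_i^W\big)$ is elementary to test numerically. The following Python sketch confirms the two claims after the lemma; the invariant factors are supplied by hand, e.g., $d_i=(1,2)$ for $B_2$ with $\Lambda=\Lambda_W$, which is our own example choice:

```python
from math import gcd

def lusztig_coincides(ell, d_lambda, d_w):
    """Check 2*gcd(ell, d_i^Lambda) == gcd(ell, 2*d_i^W) for all i."""
    return all(2 * gcd(ell, dl) == gcd(ell, 2 * dw)
               for dl, dw in zip(d_lambda, d_w))

# For Lambda = Lambda_W the two factor lists agree (d_i^Lambda = d_i^W = d_i),
# and the condition reduces to 2*d_i | ell; for odd ell it always fails.
```

For $B_2$ with $\Lambda=\Lambda_W$ this holds exactly when $4\,|\,\ell$, i.e., $2d_i\,|\,\ell$ for all $i$.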
Factorizability of quantum group $\boldsymbol{R}$-matrices {#section5}
==========================================================
We first recall the definition of factorizable braided tensor categories and factorizable Hopf algebras, respectively.
\[def:FactorizabilityOfCats\] A braided tensor category $\mathcal{C}$ is *factorizable* if the canonical braided tensor functor $G\colon \mathcal{C}\boxtimes \mathcal{C}^\text{op} \to \mathcal{Z}(\mathcal{C})$ is an equivalence of categories.
In [@Sch01], Schneider gave a different characterization of factorizable Hopf algebras in terms of their Drinfeld doubles, leading to the following definition and theorem:
\[def:FactorizableHopfAlgs\]A finite-dimensional quasitriangular Hopf algebra $(H,R)$ is called *factorizable* if its *monodromy matrix* $M:=R_{21}\cdot R \in H\otimes H$ is non-degenerate, i.e., the following linear map is bijective $$\begin{gathered}
H^* \to H,\qquad \phi\mapsto (\operatorname{id}\otimes \phi)(M).
\end{gathered}$$ Equivalently, this means we can write $M=\sum_i R_1^i\otimes R_2^i$ for two bases $\{R_1^i\}$, $\{R_2^i\}$ of $H$.
\[thm:FactorizabilityOfCats\] Let $(H,R)$ be a finite-dimensional quasitriangular Hopf algebra. Then the category of finite-dimensional $H$-modules $H-\mathsf{mod}_{fd}$ is factorizable if and only if $(H,R)$ is a factorizable Hopf algebra.
Shimizu [@Shi16] has recently proven a number of equivalent characterizations of factorizability for arbitrary (in particular non-semisimple) braided tensor categories. Besides the two previous characterizations (equivalence to Drinfeld center and nondegeneracy of the monodromy matrix), factorizability is equivalent to the fact that the so-called transparent objects are all trivial, see Theorem \[thm\_Shimizu\] below, which will become visible during our analysis later.
Monodromy matrix in terms of $\boldsymbol{R_0}$ {#section5.1}
-----------------------------------------------
In order to obtain conditions for the factorizability of the quasitriangular small quantum groups $(u_q({\mathfrak{g}},\Lambda,\Lambda'),R_0(f)\bar{\Theta})$ as in Theorem \[thm:R0\] in terms of ${\mathfrak{g}}$, $q$, $\Lambda$ and $f$, we start by calculating the monodromy matrix $M:=R_{21} \cdot R \in
u_q({\mathfrak{g}},\Lambda,\Lambda')\otimes u_q({\mathfrak{g}},\Lambda,\Lambda')$ in general as far as possible:
For $R=R_0(f)\bar\Theta$ as in Theorem [\[thm:R0\]]{}, the factorizability of $R$ is equivalent to the invertibility of the following complex-valued matrix $m$ with entries indexed by elements $\mu,\nu\in \Lambda/\Lambda'$: $$\begin{gathered}
m_{\mu,\nu}:=\sum_{\mu',\nu'\in\Lambda/\Lambda'}f(\mu-\mu',\nu-\nu')f(\nu',\mu').
\end{gathered}$$
We first plug in the expressions for $R_0$ from Theorem \[thm:solutionsgrpeq\] and $\bar\Theta$ from Theorem \[thm:R0\] and simplify: $$\begin{gathered}
M:=R_{21} \cdot R =(R_0)_{21}\cdot \bar\Theta_{21} \cdot R_0\cdot \bar\Theta\\
\hphantom{M}{} =\left(\sum_{\mu_1,\nu_1\in \Lambda} f(\mu_1,\nu_1)K_{\nu_1}\otimes
K_{\mu_1}\right) \left( \sum_{\beta_1\in\Lambda_R^+}(-1)^{{\rm tr} \beta_1} q_{\beta_1}
\sum_{b_1\in B_{\beta_1}}b_1^{*+}\otimes b_1^- \right)\\
\hphantom{M=}{}
\times \left(\sum_{\mu_2,\nu_2\in \Lambda} f(\mu_2,\nu_2)K_{\mu_2}\otimes
K_{\nu_2}\right) \left( \sum_{\beta_2\in\Lambda_R^+}(-1)^{{\rm tr} \beta_2} q_{\beta_2}
\sum_{b_2\in B_{\beta_2}}b_2^-\otimes b_2^{*+}\right)\\
\hphantom{M}{}
=\sum_{\beta_1,\beta_2\in\Lambda_R^+}\!(-1)^{{\rm tr}( \beta_1+\beta_2)} q_{\beta_1}q_{\beta_2}\! \left(\sum_{\mu_1,\mu_2,\nu_1,\nu_2\in \Lambda}\!
f(\mu_1,\nu_1)f(\mu_2,\nu_2)q^{\beta_1(\nu_2-\mu_2)}K_{\nu_1+\mu_2}\otimes K_{\mu_1+\nu_2} \right)\! \\
\hphantom{M=}{}
\times \left(\sum_{b_1\in B_{\beta_1},b_2\in B_{\beta_2}} b_1^{*+}b_2^{-}\otimes b_1^{-}b_2^{*+}\right),
\end{gathered}$$ where $\Lambda_R^+={\mathbb{N}}_0[\Delta]$. The last equation holds since $b_1^-\in u_{\beta_1}^-$ and hence fulfills $K_{\nu_2}b_1^-=q^{-\beta_1\nu_2}b_1^-K_{\nu_2}$ and similarly for $b_1^{*+}$. We have two triangular decompositions $$\begin{gathered}
u_q=u_q^0u_q^-u_q^+,\qquad u_q=u_q^0u_q^+u_q^- ,
\end{gathered}$$ and the $\Lambda_R^+$-gradation on $u_q^{\pm}$ induces a gradation $$\begin{gathered}
u_q \otimes u_q \cong \bigoplus_{\beta_1,\beta_2} \big(u^0\otimes u^0\big)\big({u_q^+}_{\beta_1}{u_q^-}_{\beta_2}\otimes {u_q^-}_{\beta_1}{u_q^+}_{\beta_2}\big).
\end{gathered}$$ The factorizability of $R$ is equivalent to the invertibility of $M$ interpreted as a matrix indexed by the PBW basis. The grading implies a block matrix form of $M$, so the invertibility of $M$ is equivalent to the invertibility of $M^{\beta_1,\beta_2} \in (u_q \otimes u_q)_{(\beta_1,\beta_2)}$ for every $\beta_1,\beta_2 \in \Lambda_R^+$, as follows: $$\begin{gathered}
M^{\beta_1,\beta_2}:= \left(\sum_{\mu_1,\mu_2,\nu_1,\nu_2\in \Lambda}
f(\mu_1,\nu_1)f(\mu_2,\nu_2)q^{\beta_1(\nu_2-\mu_2)}K_{\nu_1+\mu_2}\otimes K_{\mu_1+\nu_2} \right)\\
\hphantom{M^{\beta_1,\beta_2}:=}{}\times
\left(\sum_{b_1\in B_{\beta_1},b_2\in B_{\beta_2}} b_1^{*+}b_2^{-}\otimes b_1^{-}b_2^{*+}\right).
\end{gathered}$$ Since the second sum in $M^{\beta_1,\beta_2}$ runs over a basis in ${u_q^+}_{\beta_1}{u_q^-}_{\beta_2}\otimes {u_q^-}_{\beta_1}{u_q^+}_{\beta_2}$, the invertibility of $M$ is equivalent to the invertibility, for all $\beta_1 \in\Lambda_R^+$, of the following element: $$\begin{gathered}
M_0^{\beta_1}:=\sum_{\mu_1,\mu_2,\nu_1,\nu_2\in \Lambda/\Lambda'}q^{\beta_1(\nu_2-\mu_2)}
f(\mu_1,\nu_1)f(\mu_2,\nu_2)K_{\nu_1+\mu_2}\otimes K_{\mu_1+\nu_2} \\
\hphantom{M_0^{\beta_1}}{} =\sum_{\mu,\nu\in\Lambda/\Lambda'} K_\nu \otimes K_\mu
\cdot \left(\sum_{\mu',\nu'\in\Lambda/\Lambda'} q^{\beta_1(\mu'-\nu')}
f(\mu-\mu',\nu-\nu')f(\nu',\mu')\right).
\end{gathered}$$ Since $K_\nu\otimes K_\mu$ is a vector space basis of $u_q^0\otimes u_q^0={\mathbb{C}}[\Lambda/\Lambda']\otimes{\mathbb{C}}[\Lambda/\Lambda']$, this in turn is equivalent to the invertibility of the following family of matrices $m^{\beta_1}$ for all $\beta_1\in\Lambda_R^+$ with rows/columns indexed by elements $\mu,\nu\in \Lambda/\Lambda'$: $$\begin{gathered}
m^{\beta_1}_{\mu,\nu}:=\sum_{\mu',\nu'\in\Lambda/\Lambda'} f(\mu-\mu',\nu-\nu')f(\nu',\mu')q^{\beta_1(\mu'-\nu')}.
\end{gathered}$$ We now use the fact that $R$ was indeed an $R$-matrix. By property (\[f01\]) in Theorem \[thm:R0\] we have $$\begin{gathered}
m^{\beta_1}_{\mu,\nu}=\sum_{\mu',\nu'\in\Lambda/\Lambda'}f(\mu-\mu',\nu-\nu')f(\nu'+\beta_1,\mu')q^{-\beta_1\nu'}.
\end{gathered}$$ Since the invertibility of a matrix $m_{\mu,\nu}$ is equivalent to the invertibility of any matrix $m_{\mu,\nu+\beta_1}$, we may substitute $\nu'\mapsto \nu'+\beta_1$, $\nu\mapsto \nu+\beta_1$, pull the constant factor $q^{-\beta_1^2}$ in front (which also does not affect invertibility) and hence eliminate the first $\beta_1$ from the condition. Hence the invertibility of $R$ is equivalent to the invertibility of the following family of matrices $\tilde{m}^{\beta_1}$ for all $\beta_1\in\Lambda_R^+$: $$\begin{gathered}
\tilde{m}^{\beta_1}_{\mu,\nu}:=\sum_{\mu',\nu'\in\Lambda/\Lambda'} f(\mu-\mu',\nu-\nu')f(\nu',\mu')q^{-\beta_1\nu'}.
\end{gathered}$$ We may now use the same procedure to eliminate the second $\beta_1$, hence the invertibility of $R$ is equivalent to the invertibility of the following matrix with rows/columns indexed by elements $\mu,\nu\in \Lambda/\Lambda'$: $$\begin{gathered}
m_{\mu,\nu}:=\sum_{\mu',\nu'\in\Lambda/\Lambda'}f(\mu-\mu',\nu-\nu')f(\nu',\mu').
\end{gathered}$$ This was the assertion we wanted to prove.
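As an illustrative sanity check (our own sketch, not part of the original argument), the criterion can be tested numerically in the simplest cyclic case $G_1=G_2={\mathbb{Z}}_n$ with the assumed pairing $f(a,b)=\zeta^{ab}$, $\zeta={\rm e}^{2\pi i/n}$; overall scalar normalizations do not affect invertibility. The matrix $m$ turns out to be invertible precisely when $n$ is odd:

```python
import numpy as np

def m_matrix(n):
    """Build m[mu, nu] = sum_{mu', nu' in Z_n} f(mu-mu', nu-nu') f(nu', mu')
    for the assumed pairing f(a, b) = zeta^{a*b} on Z_n x Z_n."""
    zeta = np.exp(2j * np.pi / n)
    m = np.zeros((n, n), dtype=complex)
    for mu in range(n):
        for nu in range(n):
            m[mu, nu] = sum(zeta ** ((mu - mup) * (nu - nup) + nup * mup)
                            for mup in range(n) for nup in range(n))
    return m

# The criterion: m is invertible exactly when n is odd.
for n in range(2, 8):
    print(n, np.linalg.matrix_rank(m_matrix(n)) == n)
```

For even $n$ the inner sum forces $2\nu'\equiv\nu$, so the columns with $\nu$ odd vanish and $m$ is singular; for odd $n$ one obtains $n$ times a column-permuted Fourier matrix, which is invertible.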
Let $g\colon G_1 \times G_2 \to \mathbb{C}^\times$ be a group pairing. It induces a symmetric form on the product $G_1 \times G_2$, which we denote by $\operatorname{Sym}(g)$: $$\begin{aligned}
\operatorname{Sym}(g)\colon \ & (G_1 \times G_2)^{\times 2} \longrightarrow \mathbb{C}^\times, \\
& ((\mu_1,\mu_2),(\nu_1,\nu_2)) \longmapsto g(\mu_1,\nu_2)g(\nu_1,\mu_2).
\end{aligned}$$
\[lm:Sym(f)nondeg\] If $g\colon G_1 \times G_2 \to \mathbb{C}^\times$ is a perfect pairing of abelian groups, then the symmetric form $\operatorname{Sym}(g)$ is perfect.
By assumption, $g \times g$ defines an isomorphism from $G_1 \times G_2$ to $\widehat{G_2} \times \widehat{G_1}$. The symmetric form $\operatorname{Sym}(g)$ is given by the composition of this isomorphism with the canonical isomorphism $\widehat{G_2} \times \widehat{G_1} \cong \widehat{G_1 \times G_2}$. This proves the claim.
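Numerically (our own sketch, not from the source), the lemma can be illustrated for $G_1=G_2={\mathbb{Z}}_n$ with the perfect pairing $g(a,b)=\zeta^{ab}$: the character matrix of $\operatorname{Sym}(g)$ on $({\mathbb{Z}}_n)^2$ has full rank for every $n$, confirming perfection:

```python
import itertools
import numpy as np

def sym_gram(n):
    """Character matrix of Sym(g)((a1,a2),(b1,b2)) = g(a1,b2) g(b1,a2)
    on (Z_n)^2 for the perfect pairing g(a,b) = zeta^{a*b}.  Sym(g) is
    perfect iff this n^2 x n^2 matrix has full rank."""
    zeta = np.exp(2j * np.pi / n)
    elems = list(itertools.product(range(n), repeat=2))
    return np.array([[zeta ** (a1 * b2 + b1 * a2) for (b1, b2) in elems]
                     for (a1, a2) in elems])

for n in (2, 3, 4):
    print(n, np.linalg.matrix_rank(sym_gram(n)) == n * n)  # True for every n
```

The underlying reason is that the exponent $a_1b_2+a_2b_1$ is the hyperbolic form on $({\mathbb{Z}}_n)^2$, whose Gram matrix has unit determinant over ${\mathbb{Z}}_n$.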
Consider for a finite abelian group $G$ and subgroups $G_1,G_2 \leq G$ the canonical exact sequence $$\begin{gathered}
\label{al:exactSequence}
0\to G_1 \cap G_2 \to G_1 \times G_2\to G_1+G_2 \to 0.\end{gathered}$$ For $\mu \in G_1+G_2$, we denote its fiber by $$\begin{gathered}
(G_1 \times G_2)_{\mu}:=\{(\mu_1,\mu_2)\in G_1 \times G_2 \,|\, \mu_1+\mu_2=\mu \}.\end{gathered}$$ Moreover, we define $$\begin{gathered}
\operatorname{Rad}:= \big\{ (\mu_1,\mu_2) \in G_1 \times G_2 \,|\, \operatorname{Sym}\big(\hat{f}\big)((\mu_1,\mu_2),x)=1 \; \forall\, x \in (G_1 \times G_2)_0 \big\}, \\
\operatorname{Rad}_\mu:= \operatorname{Rad} \cap (G_1 \times G_2)_{\mu}, \\
\operatorname{Rad}_0^\perp := \{ \mu_1 + \mu_2 \in G \,|\, (\mu_1,\mu_2) \in \operatorname{Rad} \}.\end{gathered}$$
We have two split exact sequences: $$\begin{gathered}
0\to \operatorname{Rad}_0 \to \operatorname{Rad}\to \operatorname{Rad}_0^\perp \to 0,\\
0\to \operatorname{Rad}_0^\perp \to G \to \operatorname{Rad}_0 \to 0.
\end{gathered}$$
The first sequence is exact by definition of the three groups. Moreover, we know $$\begin{gathered}
\operatorname{Rad} = \ker\big(\hat{\iota} \circ \operatorname{Sym}\big(\hat{f}\big)\big) \cong \ker(\hat{\iota}) = \text{im}(\hat{\pi}) \cong \hat{G} \cong G,\end{gathered}$$ where $\hat{\iota}$, $\hat{\pi}$ denote the duals of the inclusion and projection in (\[al:exactSequence\]). In Example \[ex:RadSymf\] we will see that in the case $G_1=G_2=G$, $\hat{f}$ symmetric, $\operatorname{Rad}_0$ is the $2$-torsion subgroup of $G$, and the second map in the second exact sequence is just the projection, hence both diagrams split in this case. If $\hat{f}$ is asymmetric, we will see in Section \[section5.3\] that $\operatorname{Rad}_0$ is isomorphic to ${\mathbb{Z}}_2^k$ for some $k\geq 2$, thus $$\begin{gathered}
\operatorname{Rad}_0^\perp \longrightarrow \operatorname{Rad}, \qquad x \longmapsto \sum_{\tilde{x} \in \operatorname{Rad}_x} \tilde{x}\end{gathered}$$ is a section of the first exact sequence. Here we used that the sum over all elements in ${\mathbb{Z}}_2^k$ vanishes. Again, it follows that both diagrams split. Finally, if $G_1 \neq G_2$ (i.e., in the case $D_{2n}$), then $\hat{f}=q^{-(\cdot,\cdot)}$ on $G_1 \cap G_2$. By the same argument as in Example \[ex:RadSymf\], $\operatorname{Rad}_0$ is the $2$-torsion subgroup of $G_1 \cap G_2$. But we have $G \cong G_1 \cap G_2 \times \pi_1 $ in this case, hence both sequences split.
Using the projection $\alpha\colon G \to \operatorname{Rad}_0^\perp$ and the inclusion $\beta\colon \operatorname{Rad}_0^\perp \to \operatorname{Rad}$ from the above lemma, we can define a symmetric form on $G$: $$\begin{aligned}
\operatorname{Sym}_G\big(\hat{f}\big)\colon \ & G \times G \longrightarrow {\mathbb{C}}^\times, \\
& (\mu,\nu) \longmapsto \operatorname{Sym}\big(\hat{f}\big)(\beta\circ\alpha(\mu),\beta\circ\alpha(\nu)).
\end{aligned}$$ Moreover, we have $\operatorname{Rad}\big(\operatorname{Sym}_G\big(\hat{f}\big)\big) \cong \operatorname{Rad}_0$.
\[thm:InvertyMonodromymat\] We have shown in Theorem [\[thm:R0\]]{} and Lemma [\[lm:NondegGroupPairing\]]{} that the assumption that $R=R_0(f)\bar\Theta$ is an $R$-matrix is equivalent to the existence of subgroups $G_1,G_2\subset \Lambda/\Lambda'$ of the same order $d$, such that $f$ restricts, up to a scalar, to a non-degenerate pairing $\hat{f}\colon G_1\times G_2\to{\mathbb{C}}^\times$ and vanishes otherwise.
In this notation the matrix $m$ as defined in the previous lemma can be rewritten as $$\begin{gathered}
m_{\mu,\nu}=\frac{1}{d^2|\Lambda_R/\Lambda'|^2} \sum_{\substack{\tilde{\mu} \in (G_1 \times G_2)_{\mu}\\\tilde{\nu} \in (G_1 \times G_2)_{\nu}}} \operatorname{Sym}\big(\hat{f}\big)(\tilde{\mu},\tilde{\nu}).\end{gathered}$$ It is invertible if and only if $\operatorname{Rad}_0=0$. In this case, $$\begin{gathered}
m_{\mu,\nu} = \frac{|G_1 \cap G_2|}{d^2|\Lambda_R/\Lambda'|^2} \operatorname{Sym}_G\big(\hat{f}\big)(\mu,\nu).\end{gathered}$$
We first note that $\operatorname{Rad}_0=0$ implies $\operatorname{Rad}_0^\perp = G$ and thus $G = G_1 + G_2$. Together with Corollary \[cor:LambdaPrime\] this implies
$$\begin{gathered}
\Lambda'=\operatorname{Cent}_{\Lambda_R}(\Lambda).
\end{gathered}$$
Before we prove the theorem, we first give a simple example:
\[ex:RadSymf\] Let $G_1=G_2=G$ (correspondingly $\Lambda_1=\Lambda_2=\Lambda$) and assume $\hat{f}$ is symmetric non-degenerate, then the radical measures $2$-torsion: $$\begin{gathered}
\operatorname{Rad}\big(\operatorname{Sym}_G\big(\hat{f}\big)\big) \cong \operatorname{Rad}_0=\{\mu\in G\,|\, 2\mu=0\}.\end{gathered}$$ Again, this is the only case appearing for cyclic fundamental groups. Hence in all cases except ${\mathfrak{g}}=D_{2n}$ factorizability is equivalent to $|\Lambda/\Lambda'|$ being odd.
The first part of the theorem follows by applying Lemma \[lm:NondegGroupPairing\] to the matrix $m$ as given in the previous lemma. Now, assume that $m$ is invertible. We must have $G=G_1+G_2$, otherwise the matrix has zero-columns and rows, differently formulated: the fibers $(G_1 \times G_2)_{\mu}$ in the short exact sequence must be non-empty for all $\mu \in G$. If on the other hand, $\operatorname{Rad}_0=0$, then $\operatorname{Rad}_0^\perp = G$ and thus $G_1+G_2=G$ must also hold, thus we assume this from now on. By the short exact sequence the fiber $(G_1 \times G_2)_{0} \cong G_1 \cap G_2$, other fibers are of the explicit form $\tilde{\mu}+(G_1 \times G_2)_{0}$ for some choice of representative $\tilde{\mu}$. Therefore, $$\begin{gathered}
m_{\mu,\nu}= \frac{1}{d^2|\Lambda_R/\Lambda'|^2} \sum_{\substack{\tilde{\mu} \in (G_1 \times G_2)_{\mu}\\\tilde{\nu} \in (G_1 \times G_2)_{\nu}}} \operatorname{Sym}\big(\hat{f}\big)(\tilde{\mu},\tilde{\nu})\\
\hphantom{m_{\mu,\nu}}{}
= \frac{1}{d^2|\Lambda_R/\Lambda'|^2} \sum_{\tilde{\nu} \in (G_1 \times G_2)_{\nu}} \operatorname{Sym}\big(\hat{f}\big)(\tilde{\mu},\tilde{\nu}) \sum_{\tilde{\eta} \in (G_1 \times G_2)_{0}} \operatorname{Sym}\big(\hat{f}\big)(\tilde{\eta},\tilde{\nu}) \\
\hphantom{m_{\mu,\nu}}{} = \frac{|G_1 \cap G_2|}{d^2|\Lambda_R/\Lambda'|^2} \sum_{\tilde{\nu} \in (G_1 \times G_2)_{\nu}} \operatorname{Sym}\big(\hat{f}\big)(\tilde{\mu},\tilde{\nu}) \cdot \delta_{\operatorname{Sym}(\hat{f})(\tilde{\nu},\_)|_{G_1\cap G_2}=1}=(*).
\end{gathered}$$ Fix as above a representative $\tilde{\nu}$ of the fiber of $\nu$, i.e., $\tilde{\nu} \in (G_1 \times G_2)_{\nu}$ such that $\operatorname{Sym}(\hat{f})(\tilde{\nu},\_)|_{G_1\cap G_2}$ $=1$ holds. Two elements fulfilling this property differ by an element in the subgroup $\operatorname{Rad}_0 \leq G_1 \cap G_2$, thus $$\begin{gathered}
(*)=\frac{|G_1 \cap G_2|}{d^2|\Lambda_R/\Lambda'|^2} \operatorname{Sym}\big(\hat{f}\big)(\tilde{\mu},\tilde{\nu}) \sum_{\tilde{\xi} \in \operatorname{Rad}_0} \operatorname{Sym}\big(\hat{f}\big)(\tilde{\xi},\tilde{\nu}) \cdot \delta_{\operatorname{Sym}(\hat{f})(\tilde{\nu},\_)|_{G_1\cap G_2}=1} \\
\hphantom{(*)}{} = \frac{|G_1 \cap G_2||\operatorname{Rad}_0|}{d^2|\Lambda_R/\Lambda'|^2} \operatorname{Sym}\big(\hat{f}\big)(\tilde{\mu},\tilde{\nu}) \cdot \delta_{\operatorname{Sym}(\hat{f})(\tilde{\nu},\_)|_{G_1\cap G_2}=1}\, \delta_{\operatorname{Sym}(\hat{f})(\tilde{\mu},\_)|_{\operatorname{Rad}_0}=1}.\end{gathered}$$ Since $m$ is symmetric, we have $$\begin{gathered}
m_{\mu,\nu}=\frac{|G_1 \cap G_2||\operatorname{Rad}_0|}{d^2|\Lambda_R/\Lambda'|^2} \operatorname{Sym}\big(\hat{f}\big)(\tilde{\mu},\tilde{\nu}) \cdot \delta_{\operatorname{Sym}(\hat{f})(\tilde{\nu},\_)|_{G_1\cap G_2}=1} \delta_{\operatorname{Sym}(\hat{f})(\tilde{\mu},\_)|_{G_1\cap G_2}=1} \\
\hphantom{m_{\mu,\nu}}{} =\frac{|G_1 \cap G_2||\operatorname{Rad}_0|}{d^2|\Lambda_R/\Lambda'|^2} \operatorname{Sym}_G\big(\hat{f}\big)(\mu,\nu) \delta_{\operatorname{Rad}_\mu \neq \varnothing}\delta_{\operatorname{Rad}_\nu \neq \varnothing}\end{gathered}$$ and this is invertible if and only if $\operatorname{Rad}_0 \cong \operatorname{Rad}\big(\operatorname{Sym}_G\big(\hat{f}\big)\big)=0$.
Factorizability for symmetric $\boldsymbol{R_0(f)}$ {#section5.2}
---------------------------------------------------
For $R_0 = \sum_{\mu,\nu} f(\mu,\nu) K_\mu \otimes K_\nu$ being the Cartan part of an $R$-matrix, assume that $\hat{f}=|G|f$ on $G$ is symmetric. We have shown in Example \[ex:RadSymf\] that factorizability is equivalent to $|G|$ being odd.
We now want to give a necessary and sufficient condition for this:
\[lm:conditionFor|G|odd\] Let $\Lambda_R \subseteq \Lambda \subseteq \Lambda_W$ be an arbitrary intermediate lattice for a certain irreducible root system. Then the order of the group $G=\Lambda/\operatorname{Cent}_{\Lambda_R}(\Lambda)$ is odd if and only if both of the following conditions are satisfied:
1. $|\Lambda/\Lambda_R|$ is odd,
2. $\ell$ is either odd or $(\ell \equiv 2$ [mod]{} $4$, ${\mathfrak{g}}=B_n$, $\Lambda=\Lambda_R)$, where $B_n$ includes $A_1$ as the case $n=1$.
We saw that in all our cases, there exists an isomorphism $$\begin{gathered}
\Lambda/\Lambda_R \cong \operatorname{Cent}_{\Lambda}(\Lambda_R)/\operatorname{Cent}_{\Lambda_R}(\Lambda).\end{gathered}$$ Moreover, from Lemma \[lm:ExplicitFormOfCentralizers\] we know that $|\Lambda/\operatorname{Cent}_{\Lambda}(\Lambda_R)|=\det (D_\ell)$, where $D_\ell$ is the diagonal matrix $\operatorname{Diag}\big( \frac{\ell}{\operatorname{gcd}(\ell,d_i^\Lambda)}\big)$ with $d_i^\Lambda$ being the invariant factors of the lattice $\Lambda$ (i.e., the diagonal entries of the Smith normal form of a basis matrix of $\Lambda$). Thus, $$\begin{gathered}
|G|= |\Lambda/\operatorname{Cent}_{\Lambda_R}(\Lambda)| = |\Lambda/\operatorname{Cent}_{\Lambda}(\Lambda_R)||\operatorname{Cent}_{\Lambda}(\Lambda_R)/\operatorname{Cent}_{\Lambda_R}(\Lambda)| \\
\hphantom{|G|}{} = |\Lambda/\operatorname{Cent}_{\Lambda}(\Lambda_R)||\Lambda/\Lambda_R| = \det (D_\ell) |\Lambda/\Lambda_R|
= \prod_{i=1}^n \frac{\ell}{\operatorname{gcd}(\ell,d_i^\Lambda)} |\Lambda/\Lambda_R|.\end{gathered}$$ Clearly, this term is odd if $\ell$ and $|\Lambda/\Lambda_R|$ are odd. In the case ($\ell \equiv 2$ mod $4$, ${\mathfrak{g}}=B_n$, $\Lambda=\Lambda_R$), the Smith normal form $S_R$ of the basis matrix $A_R$ is given by $2\cdot \operatorname{id}$. Thus, $|G|$ is odd in this case. On the other hand, let $|G|$ be odd:
We first consider the case $\ell$ *even*. A necessary condition for $|\Lambda/\Lambda'|$ odd is that the multiplicity $m_\ell$ of the prime $2$ in $\prod\limits_{i=1}^n \frac{\ell}{\operatorname{gcd}(\ell,d_i^\Lambda)}$ is at most the multiplicity $m_{\pi_1}$ of the prime $2$ in $|\pi_1|$. We check this condition for rank $n>1$:
- For ${\mathfrak{g}}$ simply-laced (or triply-laced ${\mathfrak{g}}=G_2$) we have all $d_i=1$, hence $n\,|\, m_\ell$ (equality for $\ell=2$ ${\rm mod}~4$). The cases $D_n$ with $m_{\pi_1}=2$ have rank $n\geq 4$, all others except $A_n$ have $m_{\pi_1}=0,1$, so the necessary condition $m_\ell\leq m_{\pi_1}$ is never fulfilled. The cases $A_n$ have $2^{m_{\pi_1}}|(n+1)\leq (m_\ell+1)\stackrel{!}{\leq} (m_{\pi_1}+1)$ which can only be true in rank $n=1$ treated above.
- For ${\mathfrak{g}}$ doubly-laced of rank $n>1$, we always have $m_{\pi_1}=0,1$ but $m_\ell$ can be considerably smaller than above, namely for $\ell=2$ ${\rm mod}~4$ equal to the number of short simple roots $d_{\alpha_i}=1$ (otherwise $m_\ell$ again increases by $n$ for every factor $2$ in $\ell$), hence the necessary condition $m_\ell\leq m_{\pi_1}$ can be fulfilled only for $B_n$ (which would also include $A_1$ above for $n=1$). More precisely, since $m_\ell=m_{\pi_1}$ and the decomposition for $\Lambda/\Lambda'$ has an additional factor $|\Lambda/\Lambda_R|$, it can only be odd for $\Lambda=\Lambda_R$.
On the other hand, if $\ell$ is *odd*, then the whole product term is odd. But since $|G|$ was assumed to be odd, also $|\Lambda/\Lambda'|$ must be odd.
\[cor:FactForLambda=LambdaR\] Let $\Lambda=\Lambda_R$. In the previous section we have seen that $\hat{f}=q^{-(\cdot,\cdot)}$ gives always an $R$-matrix in this case. By the proof of the previous lemma, we have $$\begin{gathered}
\operatorname{Rad}_0 \cong \prod_{i=1}^n {\mathbb{Z}}_{\gcd\big(2,\frac{\ell}{\gcd(\ell,d_i^R)} \big) },
\end{gathered}$$ where the $d_i^R$ denote the invariant factors of $\Lambda_W^\vee/\Lambda_R$.
Factorizability for $\boldsymbol{D_{2n}}$, $\boldsymbol{R_0}$ antisymmetric {#section5.3}
---------------------------------------------------------------------------
The split case ${\mathfrak{g}}=D_{2n}$, $G=G_1 \times G_2$ is clearly factorizable, so the only remaining case for which we have to check factorizability is ${\mathfrak{g}}=D_{2n}$, $\Lambda=\Lambda_W$ for $\hat{f}$ being not symmetric. We know that in this case, the corresponding form $g$ on $\Lambda/\Lambda_R$ is uniquely defined by a $2\times 2$-matrix $K \in \mathfrak{gl}(2,{\mathbb{F}}_2)$, s.t. $g(\lambda_{2(n-1)+i},\lambda_{2(n-1)+j})=\exp \big( \frac{2 \pi i K_{ij}}{2} \big)$ for $i,j\in \{1,2\}$. From this we see that if $g$ is not symmetric, it must be antisymmetric, i.e., $g(\mu,\nu)=g(\nu,\mu)^{-1}$. Thus, the following lemma applies in this case, and hence there are no factorizable $R$-matrices for $D_{2n}$, $\Lambda=\Lambda_W$.
\[lm:FactForAsymmPairing\]For ${\mathfrak{g}}$ simply-laced and $\Lambda=\Lambda_W$, let $\hat{f}=q^{-(\cdot,\cdot)}g\colon G \times G \to {\mathbb{C}}^\times $ be a non-degenerate form as in Theorem [\[thm:solutionsgrpeq\]]{} and Lemma [\[lm:NondegGroupPairing\]]{}, s.t. the form $g\colon\pi_1 \times \pi_1 \to {\mathbb{C}}^\times$ is asymmetric. Then, $$\begin{gathered}
\operatorname{Rad}_0 \cong \bigoplus_{i=1}^{n} {\mathbb{Z}}_{\gcd(2,\ell d_i^R)},
\end{gathered}$$ where the $d_i^R$ denote the invariant factors of $\pi_1$. In particular, $\operatorname{Rad}_0=0$ holds if and only if $\gcd(2, \ell|\pi_1|)=1$.
We recall the definition of $\operatorname{Rad}_0\big(\operatorname{Sym}_G\big(\hat{f}\big)\big) $ in this case: $$\begin{gathered}
\operatorname{Rad}_0(\operatorname{Sym}_G(\hat{f})) = \big\{ \mu \in G \,|\, f(\nu,\mu)^{-1}=f(\mu,\nu) \ \forall \, \nu \in G \big\} \\
\hphantom{\operatorname{Rad}_0(\operatorname{Sym}_G(\hat{f}))}{}
= \big\{ \mu \in G \,|\, q^{(\nu,\mu)}g(\nu,\mu)^{-1}=q^{-(\mu,\nu)}g(\mu,\nu) \ \forall \, \nu \in G \big\} \\
\hphantom{\operatorname{Rad}_0(\operatorname{Sym}_G(\hat{f}))}{}
= \big\{ \mu \in G \,|\, q^{(\nu,\mu)}=q^{-(\mu,\nu)} \ \forall \, \nu \in G \big\} \\
\hphantom{\operatorname{Rad}_0(\operatorname{Sym}_G(\hat{f}))}{}
= \big\{ \mu \in G \,|\, q^{(2\mu,\nu)}=1 \ \forall \, \nu \in G \big\} \\
\hphantom{\operatorname{Rad}_0(\operatorname{Sym}_G(\hat{f}))}{}
= \big\{ \mu \in G \,|\, 2\mu \in \operatorname{Cent}_{2\Lambda_W}(\Lambda_W)/2\operatorname{Cent}_{\Lambda_R}(\Lambda_W) \big\} = (\ast) .
\end{gathered}$$ Since ${\mathfrak{g}}$ is simply-laced, we have $\Lambda_W = \Lambda_W^\vee$, thus $$\begin{gathered}
(\ast)\cong \operatorname{Cent}_{2\Lambda_W}(\Lambda_W)/2\operatorname{Cent}_{\Lambda_R}(\Lambda_W)
= (2 \Lambda_W \cap \ell A_R \Lambda_W)/2\ell A_R \Lambda_W \\
\hphantom{(\ast)}{} = P_R \operatorname{Diag}\big(\operatorname{lcm}\big(2,\ell d_i^R\big)\big)\Lambda_W / P_R 2 \ell S_R \Lambda_W
= \Lambda_W / \operatorname{Diag}\big(\gcd\big(2,\ell d_i^R\big)\big) \Lambda_W.
\end{gathered}$$ This proves the claim.
Transparent objects in non-factorizable cases {#section5.4}
---------------------------------------------
In this section, we determine the transparent objects in the representation category of $u_q({\mathfrak{g}},\Lambda)$ with our $R$-matrix given by $R_0\bar{\Theta}$ and $R_0=\frac{1}{|\Lambda/\Lambda'|}\sum\limits_{\mu,\nu\in\Lambda/\Lambda'} \hat{f}(\mu,\nu)\, K_\mu \otimes K_\nu$, where $\hat{f}$ is a group pairing $\Lambda_1/\Lambda'\times \Lambda_2/\Lambda'\to {\mathbb{C}}^\times$.
Let $\mathcal{C}$ be a braided monoidal category with braiding $c$. An object $V \in \mathcal{C}$ is called *transparent* if the double braiding $c_{W,V}\circ c_{V,W}$ is the identity on $V \otimes W$ for all $W \in \mathcal{C}$.
The following theorem by Shimizu gives a very important characterization of factorizable categories:
\[thm\_Shimizu\] A braided finite tensor category is factorizable if and only if the transparent objects are direct sums of finitely many copies of the unit object.
In particular, for a Hopf algebra $H$ the representation category $H-\mathsf{mod}_{fd}$ is factorizable if and only if the transparent objects are multiples of the trivial representation and vice versa.
Since in our cases $\Lambda_1\neq \Lambda_2$ can only appear in $D_{2n}$, and we know those are factorizable, we shall in the following restrict ourselves to the case $\Lambda_1=\Lambda_2=\Lambda$. The proof below works also in the more general case, but requires more notation. As usual we first reduce the Hopf algebra question to the group ring and then solve the group theoretical problem.
If a $u_q({\mathfrak{g}})$-module $V$, with a highest-weight vector $v$ and $K_\mu v=\chi(K_\mu)v$, is a transparent object, then necessarily the $1$-dimensional $\Lambda/\Lambda'$-module ${\mathbb{C}}_\chi$ is a transparent object over the Hopf algebra ${\mathbb{C}}[\Lambda/\Lambda']$ with $R$-matrix $R_0$. If $V$ is $1$-dimensional, then $V$ is transparent if and only if ${\mathbb{C}}_\chi$ is.
Let $V$ be transparent. For every $\psi\colon \Lambda/\Lambda'\to{\mathbb{C}}^\times$ we have another finite-dimensional module $W:=u_q({\mathfrak{g}})\otimes_{u_q({\mathfrak{g}})^+} {\mathbb{C}}_\psi$ with highest weight vector $w=1\otimes 1_\psi$, against which we can test this assumption: $$\begin{gathered}
c^2\colon \ V\otimes W\to W\otimes V\to V\otimes W.
\end{gathered}$$ We calculate the effect of $c^2$ on the highest-weight vectors $v\otimes w$: $$\begin{gathered}
c^2(v\otimes w)=\tau_{W\otimes V} R_0\bar{\Theta} \tau_{V\otimes W} R_0\bar{\Theta} (v\otimes w).\end{gathered}$$ Because $v$, $w$ were assumed highest-weight vectors, the $\bar{\Theta}$ act trivially. Hence it follows that ${\mathbb{C}}_\chi$, ${\mathbb{C}}_{\psi}$ have a trivial double braiding over the Hopf algebra ${\mathbb{C}}[\Lambda/\Lambda']$ with $R$-matrix $R_0$. Since we could achieve this result for any $\psi$, this means that ${\mathbb{C}}_\chi$ is transparent, as asserted.
Now, let $V={\mathbb{C}}_\chi$ be $1$-dimensional over $u_q({\mathfrak{g}})$ and transparent over ${\mathbb{C}}[\Lambda/\Lambda']$, and let $w$ be any element in any module $W$; then again the two $\bar{\Theta}$ act trivially, once because $v=1_\chi$ is a highest weight vector, and once because it is also a lowest weight vector. But if the double braiding of $v=1_\chi$ with any element $w$ is trivial, then $V={\mathbb{C}}_\chi$ is already transparent over $u_q({\mathfrak{g}})$.
\[lm\_transparent\]${\mathbb{C}}_\chi$ is a transparent object over the Hopf algebra ${\mathbb{C}}[\Lambda/\Lambda']$ with $R$-matrix $R_0$ iff it is an $f$-transformed of the radical of $\operatorname{Sym}_G\big(\hat{f}\big)$, i.e., $$\begin{gathered}
\chi(\mu)=f(\mu,\xi),\qquad \xi\in \operatorname{Rad}_0.\end{gathered}$$
Since $f$ is nondegenerate, we can assume $\chi(\mu)=f(\mu,\xi)$ and wish to prove ${\mathbb{C}}_\chi$ is transparent iff $\xi\in \operatorname{Rad}_0$. We test transparency against any module ${\mathbb{C}}_\psi$ and also write $\psi(\mu)=f(\lambda,\mu)$ (note the order of the arguments). We evaluate the double braiding on $1_\chi\otimes 1_\psi$ and get the following scalar factor, which needs to be $=1$ for all $\psi$ in order to make ${\mathbb{C}}_\chi$ transparent: $$\begin{gathered}
\frac{1}{|G|^2}\sum_{\mu,\nu} \chi(\mu)\psi(\nu)\sum_{\substack{\mu_1+\mu_2=\mu \\ \nu_1+\nu_2=\nu}}\operatorname{Sym}\big(\hat{f}\big)((\mu_1,\mu_2),(\nu_1,\nu_2))\\
\qquad{} =\frac{1}{|G|^2}\sum_{\mu,\nu} f(\mu,\xi)f(\lambda,\nu)\sum_{\substack{\mu_1+\mu_2=\mu \\ \nu_1+\nu_2=\nu}}f(\mu_1,\nu_1)f(\nu_2,\mu_2)\\
\qquad{} =\frac{1}{|G|^2}\sum_{\mu,\nu} f(\mu,\xi)f(\lambda,\nu)\sum_{\nu_1,\mu_1}f(\mu_1,\nu_1) f(\nu,\mu)f^{-1}(\nu_1,\mu)f^{-1}(\nu,\mu_1)f(\nu_1,\mu_1)\\
\qquad{} =\frac{1}{|G|}\sum_{\nu}f(\lambda,\nu) \sum_{\nu_1,\mu_1}f(\mu_1,\nu_1)\delta_{\xi=-\nu+\nu_1}f^{-1}(\nu,\mu_1)f(\nu_1,\mu_1)\\
\qquad{} =\frac{1}{|G|}\sum_{\nu}f(\lambda,\nu) \sum_{\mu_1}f(\mu_1,\xi+\nu)f(\xi,\mu_1) =f^{-1}(\lambda,\xi)f^{-1}(\xi,\lambda)=\operatorname{Sym}_G\big(\hat{f}\big)(\lambda,\xi). \end{gathered}$$ This scalar factor of the double braiding is equal $+1$ for all $\lambda$ (and hence all ${\mathbb{C}}_\psi$) iff $\xi\in \operatorname{Rad}_0$ as asserted.
The previous two lemmas combined imply that any irreducible transparent $u_q({\mathfrak{g}})$-module necessarily has the characters $\chi(\mu)= f(\mu,\xi)$, $\xi\in\operatorname{Rad}_0$, as highest weights, and conversely, if such a character $\chi$ gives rise to a $1$-dimensional $u_q({\mathfrak{g}})$-module (i.e., $\chi|_{2\Lambda_R}=1$), then this is a guaranteed transparent object. Hence the final step is to give more closed expressions for the $f$-transformed characters $\chi$ of the radical, depending on the case, and to check the $1$-dimensionality condition.
In all cases where $f$ is symmetric we have seen in Example \[ex:RadSymf\] that $\operatorname{Rad}_0\big(\operatorname{Sym}_G\big(\hat{f}\big)\big)$ is the $2$-torsion subgroup of $\Lambda/\Lambda'$, so in these cases $\chi$ gives rise to a $1$-dimensional object.
If $f$ is symmetric $($true for all cases except $D_{2n})$ then the transparent objects are all $1$-dimensional ${\mathbb{C}}_\chi$ where the characters $\chi$ are the $f$-transformed of the elements in the radical of the bimultiplicative form $\operatorname{Sym}\big(\hat{f}\big)|_{G}$ on $G=\Lambda/\Lambda'$. In particular the group of transparent objects is isomorphic to this radical as an abelian group.
In the case of symmetric $f$ $($all cases except $D_{2n})$ the fact that $\operatorname{Rad}_0$ is the $2$-torsion of $\Lambda/\Lambda'$ and $f$-transformation is a group isomorphism shows:
The group $T$ of transparent objects consists of ${\mathbb{C}}_\chi$ where $\chi|_{2\Lambda}=1$, i.e., the two-torsion of the character group.
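As a toy check of this description (our own sketch over the group algebra ${\mathbb{C}}[{\mathbb{Z}}_n]$, with the assumed symmetric pairing $\hat f(\mu,\nu)=\zeta^{-\mu\nu}$): the double braiding of two one-dimensional characters $\chi(K_\mu)=\zeta^{a\mu}$, $\psi(K_\nu)=\zeta^{b\nu}$ acts by the scalar $\zeta^{2ab}$, so the transparent characters are exactly the $2$-torsion of the character group:

```python
import numpy as np

def double_braiding_scalar(n, a, b):
    """Scalar by which the double braiding acts on C_chi (x) C_psi over
    C[Z_n] with R_0 = (1/n) sum_{mu,nu} zeta^{-mu*nu} K_mu (x) K_nu,
    where chi(K_mu) = zeta^{a*mu} and psi(K_nu) = zeta^{b*nu}."""
    zeta = np.exp(2j * np.pi / n)
    R = sum(zeta ** (-mu * nu + a * mu + b * nu)
            for mu in range(n) for nu in range(n)) / n
    Rflip = sum(zeta ** (-mu * nu + b * mu + a * nu)
                for mu in range(n) for nu in range(n)) / n
    return R * Rflip  # equals zeta^{2ab}

def transparent_chars(n):
    """Characters a with trivial double braiding against all b:
    exactly the 2-torsion {a : 2a = 0 mod n}."""
    return [a for a in range(n)
            if all(abs(double_braiding_scalar(n, a, b) - 1) < 1e-9
                   for b in range(n))]

print(transparent_chars(5))  # [0]     -- n odd: only the unit object
print(transparent_chars(4))  # [0, 2]  -- n even: the 2-torsion
```

This matches the statement that for odd $|\Lambda/\Lambda'|$ the only transparent object is the unit, i.e., the category is factorizable.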
The remaining case in $D_{2n}$ with $f$ nonsymmetric has been done by hand in Lemma \[lm:FactForAsymmPairing\].
Quantum groups with a ribbon structure {#section6}
======================================
In [@Mue98b+ Theorem 8.23], the existence of ribbon structures for $u_q({\mathfrak{g}},\Lambda)$ is proven. In this section we construct a ribbon structure for all cases. In the proof, we use several auxiliary results from [@Mue98b+].
Let $u_q({\mathfrak{g}},\Lambda)$ be a quasitriangular Hopf algebra, with an $R$-matrix satisfying the conditions in Theorem [\[thm:R0\]]{} and let $u:=S(R_{(2)})R_{(1)}$. Then $v:=K_{\nu_0}^{-1}u$ is a ribbon element in $u_q({\mathfrak{g}},\Lambda)$.
We consider the natural $\mathbb{N}_0[\alpha_i \,| \, i \in I]$-grading on the Borel parts $u^\pm:=u_q(\mathfrak{g},\Lambda)^\pm$ [@Lus93]. Since $u^\pm$ is finite-dimensional, there exists a maximal $\nu_0 \in \mathbb{N}_0[\alpha_i \,| \, i \in I]$, s.t. the homogeneous component $u_{\nu_0}^\pm$ is non-trivial. More explicitly $\nu_0$ is of the form $$\begin{gathered}
\nu_0= \sum_{\alpha \in \Phi^+} (\ell_\alpha -1) \alpha,
\end{gathered}$$ where $\ell_\alpha:= \frac{\ell}{\operatorname{gcd}(\ell,2d_\alpha)}$.
Using the formulas $u=\big(\sum f(\mu,\nu)K_{\mu+\nu}\big)^{-1} \vartheta$ and $S(u)=\big(\sum f(\mu,\nu) K_{\mu+\nu}\big)^{-1}S(\vartheta)$, where $\vartheta=\sum \bar{\Theta}^{(2)}S^{-1}\big(\bar{\Theta}^{(2)}\big)$, Müller proves the formula $K_{-\nu_0}^2=u^{-1}S(u)$. Using the fact that $u$ commutes with all grouplike elements, this implies $v^2=uS(u)$. In order to show that $v$ is central, we first show that $K_{\nu_0+2\rho}^{-1}$ is a central element. By the $K,E$-relations, this is equivalent to $$\begin{gathered}
\nu_0 + 2\rho \in \operatorname{Cent}_{\Lambda}(\Lambda_R),\end{gathered}$$ where $\rho=\frac{1}{2} \sum\limits_{\alpha \in \Phi^+}\,\alpha$ is the Weyl vector.
We calculate directly that this is always the case: $$\begin{gathered}
q^{(\nu_0 + 2\rho,\beta)}=q^{\sum\limits_{\alpha\in\Phi^+} (\ell_\alpha-1+1) (\alpha,\beta)}
=q^{\ell\sum\limits_{\alpha\in\Phi^+}\frac{1}{\operatorname{gcd}(\ell,2d_\alpha)}\cdot 2d_\alpha(\alpha^\vee,\beta)}=1.
\end{gathered}$$ Since $K_{2\rho}ux=xK_{2\rho}u$ holds for all $x \in u_q({\mathfrak{g}},\Lambda)$ (see [@Mue98b+ Lemmas 8.22 and 8.19]), we have $$\begin{gathered}
vx=K_{\nu_0}^{-1}ux=K^{-1}_{\nu_0+2\rho}K_{2\rho}ux
=K^{-1}_{\nu_0+2\rho}xK_{2\rho}u=xK^{-1}_{\nu_0+2\rho}K_{2\rho}u=xv,
\end{gathered}$$ hence $v$ is central.
Open questions {#section7}
==============
It was surprising to us that the case $D_{2n}=\mathfrak{so}_{4n}({\mathbb{C}})$ has so many more solutions than the other cases, in particular with non-symmetric $R_0$, due to the non-cyclic fundamental group. Do these additional modular tensor categories appear elsewhere? Does the non-symmetry have interesting implications for the category?
Our procedure would be similarly possible for any diagonal Nichols algebra. The Lusztig ansatz can in these cases be found in [[@AY13]]{}.
\[q:modularize\] In each case where $u_q({\mathfrak{g}},\Lambda),R$ is not factorizable, we can modularize $($see [[@Bru00])]{} the corresponding representation category and get a modular tensor category, which should be representations over some “quasi-quantum group” $u_q({\mathfrak{g}},\tilde{\Lambda},\omega),R$ which is a quasi-Hopf algebra where the group ring ${\mathbb{C}}[\tilde{\Lambda}]$ is deformed by a $3$-group-cocycle $\omega$. Can we describe this quasi-Hopf algebra in a closed form? Moreover, is every factorizable quasi-quantum group the modularization of a quasi-triangular quantum group from our list?
More technically:
The centralizer transfer map $A_\ell$ in Definition [\[def:matrix\_A\_l\]]{} $($and correspondingly the form $a_\ell)$ had a very general characterization, but we could only prove existence by a construction using the classification of simple Lie algebras $($and distinguishing three cases$)$. We strongly suspect that these maps exist under rather general assumptions.
Also the result Theorem [\[thm:solutionsgrpeq\]]{} from our previous article [[@LN14b]]{} has only been proven there for cyclic groups $($and by hand for ${\mathbb{Z}}_2\times {\mathbb{Z}}_2)$ although we strongly suspect it holds for every abelian group.
Acknowledgements {#acknowledgements .unnumbered}
----------------
Both authors thank Christoph Schweigert for helpful discussions and support. They also thank the referees, who gave a relevant contribution to improve the article with their comments. The first author was supported by the DAAD P.R.I.M.E program funded by the German BMBF and the EU Marie Curie Actions as well as the Graduiertenkolleg RTG 1670 at the University of Hamburg. The second author was supported by the Collaborative Research Center SFB 676 at the University of Hamburg.
[99]{}
Angiono I., Yamane H., The [$R$]{}-matrix of quantum doubles of [N]{}ichols algebras of diagonal type, [*J. Math. Phys.*](https://doi.org/10.1063/1.4907379) **56** (2015), 021702, 19 pages, [arXiv:1304.5752](https://arxiv.org/abs/1304.5752).
Bruguières A., Catégories prémodulaires, modularisations et invariants des variétés de dimension 3, [*Math. Ann.*](https://doi.org/10.1007/s002080050011) **316** (2000), 215–236.
Etingof P., Gelaki S., Nikshych D., Ostrik V., Tensor categories, [*Mathematical Surveys and Monographs*](https://doi.org/10.1090/surv/205), Vol. 205, Amer. Math. Soc., Providence, RI, 2015.
[**On the Smooth Points of $T$-stable\
Varieties in $G/B$ and the Peterson Map** ]{}
[**Abstract**]{}
Introduction {#intro}
============
Let $G$ be a semi-simple algebraic group over $k={\mathbb C}$. Fix a Borel subgroup $B$ of $G$ and a maximal torus $T\subset B$. The purpose of this paper is to investigate the singular locus of a $T$-stable subvariety $X$ of the flag variety $G/B$. More precisely, we would like to describe the set of nonsingular $T$-fixed points of $X$. This problem originates with the question of determining the connection between the singular loci of a Schubert variety $X\subset G/B$ (i.e. the closure of a $B$-orbit) in the sense of rational smoothness (cf. [@kl1; @kl2]) and the sense of algebraic geometry. It was shown, for example, in [@deod] that when $G$ is of type $A$, i.e. $G/B$ is the variety of complete flags in $k^n$, then the two singular loci are the same, in particular every rationally smooth point of $X$ is nonsingular. More recently, Dale Peterson (unpublished) extended this to Schubert varieties in $G/B$ in the full $ADE$ setting (see below). His method is to study how tangent spaces of a Schubert variety $X$ behave when deformed along $T$-invariant curves in $X$ containing a nonsingular point of $X$.
Before describing our results, we need to fix some notation. Recall that the $T$-fixed point set $(G/B)^T$ is in a one to one correspondence with the Weyl group $W$ of $(G,T)$ via $w\mapsto wB$. Hence we may simply denote $wB\in (G/B)^T$ by $w$. The Schubert variety $X(w)$ associated to $w\in W$ is by definition the Zariski closure in $G/B$ of the $B$-orbit $Bw$. Recall that $B$ defines a Coxeter system for $W$. Let $\le$ denote the associated partial order on $W$, the so called Bruhat-Chevalley order. This Coxeter system has two fundamental properties. Firstly, $x\leq y$ if and only if $X(x)\subset X(y)$. Hence $X(w)^T=\{x\leq w\}$. Note that we will usually use $[x,w]$ to denote $\{x\leq w\}$. Secondly, if $\ell(w)$ denotes the length of $w\in W$, then $\ell(w)=\dim X(w)$.
For simplicity, let $X$ denote $X(w)$. The set $E(X,x)$ of $T$-invariant curves, or briefly, $T$-curves, in $X$ containing the $T$-fixed point $x$ turns out to be of basic importance in determining the singular locus of $X$ (cf. [@cp; @spsv]). Let $\Phi\subset X(T)$ be the root system of $(G,T)$, and recall that to each ${{\alpha}}\in \Phi$, there is a one dimensional unipotent subgroup $U_{{\alpha}}$ of $G$ called the [*root subgroup*]{} associated to ${{\alpha}}$. Recall that the positive roots $\Phi^+$ can be described as those such that $U_{{\alpha}}\subset B$. Then any $C \in
E(X,x)$ has the form $\overline{U_{{\alpha}}x}$ for some ${{\alpha}}$. Moreover, $C^T=\{x,y\}$, where $y=r_{{\alpha}}x$, $r_{{\alpha}}$ denoting the reflection corresponding to ${{\alpha}}$. When $y>x$, then ${{\alpha}}< 0$ and we can write $C = \overline{U_{{\beta}}y}$ with ${{\beta}}= -{{\alpha}}>
0$, so one can translate the Zariski tangent space $T_y(X)$ to $X$ at $y$ along $C\backslash \{x\}$ via $U_{{{\beta}}}$ leaving $X$ invariant. Taking the limit gives a $T$-stable subspace $\tau_C(X,x)$ of $T_x(X)$ of dimension $\dim T_y(X)$. The key result is
[**Peterson’s Theorem**]{} [*Suppose that $X=X(w)$ is nonsingular at all $y\in W$ such that $x<y\leq w$ and that all $\tau_C(X,x)$ coincide when $C\in E(X,x)$ has the property that $X$ is nonsingular on $C\backslash \{x\}$. Then $X$ is nonsingular at $x$.*]{}
The idea of Peterson’s proof is to show that if all the $\tau_C(X,x)$ coincide, then the fibre over $x$ in the Nash blow up of $X$ at $x$ contains no $T$-curves. Since Schubert varieties are normal, it follows from Zariski’s Connectedness Theorem and Lemma \[TC\] that this fibre consists of a single point. This implies, by a result of Nobile [@no], that $X$ is nonsingular at $x$. Using this, Peterson was able to show
\[pade\] If $G$ is of type $ADE$, then every rationally smooth point of a Schubert variety $X(w)$ in $G/B$ is nonsingular.
Combining this result with the characterizations of rationally smooth Schubert varieties given in [@cp], we get several lovely descriptions of the nonsingular Schubert varieties in $G/B$ for the simply laced setting.
[(cf Theorem A of [@cp])]{} Let $G$ be semi-simple. Then a Schubert variety $X(w)$ in $G/B$ is rationally smooth if and only if any of the following equivalent conditions hold:
- the Poincaré polynomial of $X(w)$ $$P(X(w),t)=\sum b_{i}(X(w))t^i=\sum_{x\le
w}t^{2\ell(x)}$$ is symmetric;
- the order $\le$ on $[e,w]$ is rank symmetric;
- for each $x\le w$, $|E(X(w),x)|=\ell(w)$; and
- the average $a(w)$ of the length function on $[e,w]$ is $\frac{1}{2}\ell(w)$; that is, $$a(w)=\frac{1}{|[e,w]|}\sum_{x\le w} \ell(x)=\frac{1}{2}\ell(w).$$
We therefore obtain
\[RSGmodP\] If $G$ is simply laced, then a Schubert variety $X(w)$ in $G/B$ is nonsingular if and only if any of the equivalent conditions [(1)-(4)]{} hold.
A corresponding $G/P$ version will be stated in §\[G/P\].
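Conditions (1) and (4) are easy to test by machine in type $A_{n-1}$, where $W=S_n$, $\ell$ is the number of inversions, and $x\le w$ is decided by the standard tableau criterion on sorted prefixes. The sketch below is our own illustration (the chosen permutations and helper names are not from the text); it confirms that the Poincaré polynomial of $[e,w]$ is palindromic for $w=321\in S_3$, where $X(w)=G/B$ is smooth, but not for $w=3412\in S_4$, the classical singular example.

```python
from itertools import permutations

def length(p):
    # Coxeter length in S_n = number of inversions of the permutation p
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def bruhat_leq(x, w):
    # Tableau criterion: x <= w in the Bruhat-Chevalley order iff every
    # sorted prefix of x is entrywise <= the corresponding sorted prefix of w.
    return all(a <= b
               for i in range(1, len(x))
               for a, b in zip(sorted(x[:i]), sorted(w[:i])))

def poincare(w):
    # Coefficient list of P(X(w), t) in t^2: entry i counts {x <= w : l(x) = i}.
    coeffs = [0] * (length(w) + 1)
    for x in permutations(range(1, len(w) + 1)):
        if bruhat_leq(x, w):
            coeffs[length(x)] += 1
    return coeffs

print(poincare((3, 2, 1)))     # [1, 2, 2, 1] -- palindromic
print(poincare((3, 4, 1, 2)))  # [1, 3, 5, 4, 1] -- not palindromic
```

Condition (4) can be read off the same data: for $w=321$ the average length over the six elements of $[e,w]$ is $9/6=\tfrac32=\tfrac12\ell(w)$.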
We now describe some generalizations of these results for arbitrary irreducible $T$-stable subvarieties $X$ of $G/B$ proved in this paper. Put $$TE(X,x)=\sum_{C\in E(X,x)} T_x(C).$$ If $C=\overline{U_{{{\alpha}}}x}$, then $T_x(C)$ is a $T$-stable line of weight ${{\alpha}}$, so $TE(X,x)$ is a $T$-submodule of $T_x(X)$ such that $\dim TE(X,x)=|E(X,x)|$ (cf. [@cp]). In particular $\dim T_x(X)\geq
|E(X,x)|$. We will call $C\in E(X,x)$ [*good*]{} if $X$ is nonsingular along the open $T$-orbit in $C$, or, equivalently, if $C$ is not contained in the singular locus of $X$. Our generalization of Peterson’s Theorem goes as follows:
\[TE\] Suppose $\dim X\geq 2$ and $x\in X^T$. Then a necessary and sufficient condition that $X$ be nonsingular at $x$ is that there exist at least two distinct good $T$-curves $C,D\in E(X,x)$ such that $$\tau_C(X,x)=\tau_D(X,x)=TE(X,x).$$ If $X$ is Cohen-Macaulay at $x$, then $X$ is nonsingular at $x$ if and only if there exists at least one good $C\in E(X,x)$ such that $\tau_C(X,x)=TE(X,x)$.
The proof only uses the Zariski-Nagata Theorem, and hence is completely algebraic. In particular, it works over any algebraically closed field. (This improves the proof given in [@kut].) If $X$ is a Schubert variety, it is not hard to show that Theorem \[TE\] implies Peterson's Theorem, giving the first of several proofs. The Cohen-Macaulay statement is proved in Proposition \[CM\].
In order to generalize Peterson’s $ADE$ Theorem, we need to understand where the $\tau_C(X,x)$ are situated in $T_x(X)$. For this, let $\Theta_x(X)$ denote the linear span of the reduced tangent cone of $X$ at $x$. If $C\in E(X,x)$ has the form $C=\overline{U_{{\alpha}}x}$, we will call $C$ [*long*]{} or [*short*]{} according to whether ${{\alpha}}$ is long or short. If $G$ is simply laced, then, by convention, all $T$-curves will be called short. Clearly, $$TE(X,x)\subset \Theta_x(X)\subset T_x(X).$$ The next result is one of our key observations.
\[tausubsetT\] Assume $G$ has no $G_2$ factors. Then, if $C\in E(X,x)$ is good, $$\tau_C(X,x)\subset \Theta_x(X).$$ Moreover, if $C$ is short, then $\tau_C(X,x)\subset TE(X,x)$. In particular, if $G$ is simply laced and $C$ is good, then $\tau_C(X,x)\subset TE(X,x)$.
We give an example in §\[tancone\] which shows that the $G_2$ hypothesis is necessary. Peterson’s $ADE$ Theorem is now a simple consequence. Indeed, assuming $X=X(w)$, it suffices to suppose $x$ is a rationally smooth $T$-fixed point such that $X$ is smooth at every $y$ with $x<y\leq
w$. Since the singular locus of a Schubert variety has codimension at least two, $\ell(x)\leq \ell(w)-2$. Hence $x$ lies on at least two good $T$-curves (cf. Proposition 2.3). The proof now follows from Theorems \[TE\] and \[tausubsetT\], since if $X$ is rationally smooth at $x$, then $|E(X,x)|=\dim X$.
This result now gives us a complete description of the smooth points of Schubert varieties in $G/B$ as long as $G$ contains no $G_2$ factors.
\[SPSV\] Suppose $G$ contains no $G_2$ factor. Then the Schubert variety $X=X(w)$ is smooth at $x<w$ if and only if $\dim \Theta_y(X)=\dim X$ for all $y\in [x,w]$. In other words, $X$ is smooth at $x$ if and only if the reduced tangent cones of $X$ at all $y\in [x,w]$ are linear. In particular, $X$ is smooth if and only if all its reduced tangent cones are linear.
The proof is essentially the same as that of the above proof of the $ADE$ Theorem.
A natural question is whether Peterson’s $ADE$ Theorem holds for arbitrary $T$-varieties in $G/B$ if $G$ is simply laced. It turns out that the answer is in general no, but we do have the following:
If $G$ is simply laced, $X$ is rationally smooth at $x$ and $\dim X\geq 2$, then $X$ is smooth at $x$ if and only if $E(X,x)$ contains at least two good $T$-curves.
Indeed, if $X$ is rationally smooth at $x$, then by a recent result of Brion [@bri], $|E(X,x)|=\dim X$. Hence the corollary follows immediately from Theorem \[TE\].
If $X$ is a Schubert variety and $G$ is simply laced, then we know from [@cp; @car] that $\Theta_x(X)=TE(X,x)$. In fact, if $G$ is simply laced, this turns out to be true for all irreducible $T$-subvarieties of $G/B$.
\[sltc\] Assume $G$ is simply laced, and $x\in X^T$. Then every $T$-line in the reduced tangent cone to $X$ at $x$ has the form $T_x(C)$ for some $C\in E(X,x)$. That is, $$\Theta_x(X)=TE(X,x).$$
We now briefly describe the rest of the paper. First of all, in §\[PM\], we define the Peterson map in a general setting and derive its basic properties. In particular, if $X$ is a Schubert variety, we show there is a remarkable explicit formula for $\tau_C(X,x)$ for any (not necessarily good) $C\in E(X,x)$. In §\[FL\], we prove a fundamental lemma showing that the Peterson map for a good $T$-curve $C$ is completely determined by its behavior on the $T$-surfaces in $X$ containing $C$. This gives us Theorem \[tausubsetT\] and, in addition, allows us to deduce that certain weights outside $TE(X,x)$ may occur in $\Theta_x(X)$. However, the fact that there is no general description of $\Theta_x(X)$ makes it desirable to find a subspace containing $\tau_C(X,x)$, assuming $C$ is good, admitting an explicit description. We describe such a subspace in §\[PTSV\]. In the next section, we mention an algorithm for finding the singular locus of a Schubert variety in $G/B$. In §\[G/P\], we prove a lemma which extends our results to any $G/P$, with the suitable restrictions on $G$, and in the last section we mention some open problems.
A remark about the field is in order. Although we are assuming $k={\mathbb C}$, we believe our arguments are valid over any algebraically closed field. This goes hand in hand with the fact proved in [@po] that the singular locus of a Schubert variety is independent of the field of definition.
[**Acknowledgement**]{} The authors would like to thank Dale Peterson for discussions about his results. We also thank Michel Brion for some comments on rational smoothness.
The second author would like to thank the University of British Columbia, Vancouver, and the University of California, San Diego, for hospitality during the work on this paper.
Preliminaries on $T$-varieties {#prelim}
==============================
Throughout this paper, $T$ will denote an algebraic torus over $k=
\mathbb C$ with character group $X(T)$ and dual group $Y(T)$ of one parameter subgroups of $T$. $X$ will always denote an irreducible $T$-variety with finite non-empty fixed point set $X^T$ which is locally linearizable in the following sense: every point $z\in X$ has a connected affine $T$-stable neighborhood $X_z$ admitting a $T$-equivariant embedding into an affine space $V$ with a linear $T$-action. This is for example true for closed $T$-stable subsets of a normal $T$-variety ([@sumi1],[@sumi2]). For any $T$-variety $X$ and $x \in X$ we choose once and for all such a neighborhood $X_x$.
We will denote the set of weights of a $T$-module $V$ by $\Omega (V)$. If $x\in X^T$ and all elements of $\Omega (T_x(X))$ lie on one side of a hyperplane in $X(T)\otimes \mathbb Q$, then $x$ is called [*attractive*]{}. It follows immediately from the definition that if $x$ is attractive, there exists a one parameter subgroup $\lambda \in Y(T)$ such that $\langle {{\alpha}}, \lambda \rangle > 0$ for all ${{\alpha}}\in \Omega (T_x(X))$, where $\langle \cdot, \cdot \rangle : X(T) \times Y(T) \rightarrow \mathbb Z$ is the natural pairing. Equivalently, $x$ is attractive if and only if $\lim_{t \rightarrow 0}\lambda(t)y = x$ for all $y \in X_x$. It is well known that if $x \in X^T$ is attractive, there is a closed $T$-equivariant immersion $X_x \subset T_x(X)$. For example, every $T$-fixed point in $G/B$ is attractive. If $x$ is attractive and $L\subset T_x(X)$ is a $T$-stable line, we may consider the restriction to $X_x$ of a $T$-equivariant linear projection $T_x(X) \RA L$. Since $L$ is an affine line, this restriction gives rise to a $T$-eigenvector $f \in k[X_x]$. We say that $f \in k[X_x]$ [*corresponds*]{} to $L$ if $f$ is so obtained.
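As a minimal illustration of our own (not from the text): let $T={\mathbb G}_m$ act on $V={\mathbb A}^2$ by $t\cdot (v_1,v_2)=(tv_1,t^2v_2)$. Identifying $X(T)\cong {\mathbb Z}$, the weights of $T_0(V)$ are $1$ and $2$, which lie on one side of the origin, so $0$ is attractive, and $\lambda(t)=t$ works:

```latex
\langle \alpha,\lambda\rangle > 0 \ \text{ for all } \alpha\in\Omega(T_0(V))=\{1,2\},
\qquad
\lim_{t\to 0}\lambda(t)\cdot(v_1,v_2)=\lim_{t\to 0}(tv_1,t^2v_2)=(0,0).
```

For the action $t\cdot(v_1,v_2)=(tv_1,t^{-1}v_2)$, by contrast, the weights $\{1,-1\}$ straddle the origin and $0$ fails to be attractive.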
Another fact, that we will use below, is
\[AFF\] Let $X$ be affine and $x\in X^T$ attractive. If $Y$ is any affine $T$-variety, then a $T$-equivariant morphism $f: X
\rightarrow Y$ is finite if and only if $f^{-1}(f(x))$ is a finite set.
As in the introduction, $E(X,x)$ will denote the set of $T$-curves in $X$ containing the point $x\in X^T$. Also as above, a $T$-curve $C$ is called *good* if $C^o=C\backslash C^T\subset
X^*$, where $X^*$ is the set of nonsingular points in $X$. This just means that $C\cap X^*$ is nonempty. The following lemma gives a very useful fact about $E(X,x)$ (cf. [@cp]).
\[TC\] For any $x\in X^T$, $$|E(X,x)|\geq \dim_x X.$$ That is, the number of $T$-curves in $X$ through $x$ is at least $\dim_x
X$.
If the number of $T$-curves in $X$ is finite, then there is a finite graph $\G(X)$, called the [*Bruhat graph*]{} of the pair $(X,T)$, which generalizes the Bruhat graph of the Weyl group $\G(W)$ (see for example [@cp]). The vertices of $\G(X)$ are the $T$-fixed points, and two $x,y\in X^T$ are joined by an edge if and only if there exists a $T$-curve $C$ in $X$ such that $x,y\in C$. When $X$ is a $T$-variety in $G/B$, where $T$ is maximal in $B$, then $\G(X)$ is a subgraph of $\G(W)$. In particular, if $x,y\in X^T$ are joined by an edge in $\G(X)$, then there exists an $r\in R$ such that $x=ry$. If $X$ is a Schubert variety, then $\G(X)$ is a full subgraph; any edge of $\G(W)$ joining $x,y\in X^T$ is also an edge of $\G(X)$. Notice also that the Chevalley-Bruhat order gives a natural order on the vertices of $\G(X)$. In the Schubert case, Lemma \[TC\] implies
\[DI\] [(Deodhar's Inequality [@cp])]{} If $x<w$, then there exist at least $\ell(w)-\ell(x)$ reflections $r\in W$ for which $x<rx\leq w$.
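In type $A_{n-1}$ the reflections are the transpositions, and since $\G(X(w))$ is a full subgraph of $\G(W)$, $|E(X(w),x)|$ is the number of reflections $r$ with $rx\le w$ (when $rx<x$ this holds automatically). The following sketch, again our own illustration rather than part of the text, verifies Deodhar's Inequality at $x=e$ for $w=3412\in S_4$: all five reflections below $w$ satisfy $e<r\le w$, and $5\geq \ell(w)-\ell(e)=4$. At the same time $|E(X(w),e)|=5>4=\ell(w)$, exhibiting the failure of condition (3) of the rational smoothness criterion.

```python
from itertools import combinations

def bruhat_leq(x, w):
    # Tableau criterion for the Bruhat-Chevalley order on S_n.
    return all(a <= b
               for i in range(1, len(x))
               for a, b in zip(sorted(x[:i]), sorted(w[:i])))

def reflect(x, i, j):
    # Left action of the reflection r = (i j): swap the *values* i and j in x.
    return tuple(j if v == i else i if v == j else v for v in x)

def num_curves(w, x):
    # |E(X(w), x)| = number of reflections r with r x <= w.
    n = len(x)
    return sum(bruhat_leq(reflect(x, i, j), w)
               for i, j in combinations(range(1, n + 1), 2))

e = (1, 2, 3, 4)
print(num_curves((3, 4, 1, 2), e))  # 5 -- exceeds l(w) = 4
```

For a smooth example, $w=321\in S_3$ gives $|E(X(w),e)|=3=\ell(w)$, in line with condition (3).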
The Peterson Map {#PM}
================
Let $X$ be a $T$-variety, $x\in X^T$ a locally linearizable isolated fixed point, and assume $C\in E(X,x)$. In this section, we will define and study what we call the [*Peterson map*]{} $\tau_C(~,x)$. The version we consider here is slightly more general than the tangent space deformation considered by Peterson, which was only defined in the case of Schubert varieties. In the next section, we will relate these two deformations and give an explicit computation of Peterson's version.
Our Peterson map is defined on certain subspaces of the tangent space to $X$ at an arbitrary point $z\in C^o$ of a $T$-curve $C$ in $X$. We may suppose the $T$-stable neighborhood $X_x$ is embedded equivariantly into a $T$-module $V$. Let $M\subset T_z(X)$ be a $k$-subspace stable under the isotropy group $S$ of $z$, and let ${\mathbf M}^o = T M \subset T(X) \left |_{C^o}\right .$ be the $T$-stable vector bundle over $C^o$ having fibre $M$ over $z$. Define ${\mathbf M}$ to be the Zariski closure of ${\mathbf M}^o$ in $T(X)$. Then, by definition, the Peterson map assigns $\tau_C(M,x)={\mathbf M}\cap T_x(X)$ to $M$. If $M=T_z(X)$, then we will denote $\tau_C(M,x)$ by $\tau_C(X,x)$. Clearly, if $X$ is nonsingular at $x$, then $\tau_C(X,x)=T_x(X)$.
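A toy example of our own, not from the text, may help fix the definition. Take $X=\{xy=z^2\}\subset {\mathbb A}^3$ with $T={\mathbb G}_m^2$ acting with weights $(2,0)$, $(0,2)$, $(1,1)$ on the coordinates, so the defining equation is $T$-homogeneous. The only $T$-curves through $0$ are the $x$-axis $C$ and the $y$-axis $D$. Along $C^o$ the tangent plane of $X$ is constantly $\mathrm{span}\{e_1,e_3\}$, so $\tau_C(X,0)=\mathrm{span}\{e_1,e_3\}$, and similarly $\tau_D(X,0)=\mathrm{span}\{e_2,e_3\}$; both differ from $TE(X,0)=\mathrm{span}\{e_1,e_2\}$, consistent with $X$ being singular at $0$.

```python
def in_tangent_plane(p, v):
    # v lies in T_p(X) for X = {x*y - z**2 = 0} iff grad f(p) . v == 0,
    # where grad f = (y, x, -2z).
    gx, gy, gz = p[1], p[0], -2 * p[2]
    return gx * v[0] + gy * v[1] + gz * v[2] == 0

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# At any point (a, 0, 0) of C^o the tangent plane is span{e1, e3},
# independent of a, so its limit tau_C(X, 0) is span{e1, e3} as well.
p = (5, 0, 0)
print(in_tangent_plane(p, e1), in_tangent_plane(p, e3), in_tangent_plane(p, e2))
# True True False
```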
If $C$ is smooth, then an alternative description of $\tau_C(M,x)$ is as follows. By the properness of Grassmannians, the $T$-stable vector bundle ${\mathbf M}^o$ on $C^o$ extends to a vector bundle ${\mathbf M}$ on $C$ whose restriction to $C^o$ is ${\mathbf M}^o$. Then $\tau_C(M,x)={\mathbf M}_x$.
The main properties of the Peterson map are given in the next result. We assume the notation defined above is still in effect.
\[PPP\] Suppose $X$, $T$, $x\in X^T$ and $C\in
E(X,x)$ are as above, and let $M$ be an $S$-stable subspace of $T_z(X)$ of dimension $m$. Then:
- $\tau_C(M,x)$ is a $T$-stable subspace of $T_x(X)$ of dimension $m$, and, moreover, $M$ and $\tau_C(M,x)$ are isomorphic $S$-modules. If $Y \supset X$ is any ambient smooth $T$-variety, then as elements of the Grassmannian ${\mathcal
G}_m(Y)$ of $m$-planes in $T(Y)$, $\tau_C(M,x) = \lim_{t\RA 0}{\mathrm
d}\lambda(t)M$, where $\lambda$ is an arbitrary one parameter subgroup of $T$ such that $\lim_{t\RA 0 }\lambda(t)z=x$.
- If $M=M_1\oplus \dots \oplus M_t$ is the $S$-weight decomposition of $M$, then $\tau_C(M,x)= \tau_C(M_1,x)\oplus \dots
\oplus
\tau_C(M_t,x)$ is the $S$-weight decomposition of $\tau_C(M,x)$.
- If $N$ is any $T$-stable subspace of $\tau_C(X,x)$, then there exists an $S$-stable subspace $M\subset T_z(X)$ such that $\tau_C(M,x)=N$.
It follows from the definitions that $\tau_C(M,x)$ is $T$-stable. That it is a subspace of the same dimension as $M$ also follows from the properness of the Grassmannian. Moreover, it follows easily that the limit in the Grassmannian is found by closing the bundle $T(X)$ over $C^o$. Given all this, the first statement follows from the second, once we have proved that $\tau_C(M_i,x)$ and $M_i$ are isomorphic as $S$-modules. Now $S$ acts on $M_i$ by a character ${{\alpha}}_i$, hence on $TM_i \subset T(X)$, as well. With $TM_i$ being dense in $\overline{TM_i}$, it follows that $sv = {{\alpha}}_i(s)v$ for all $v \in \tau_C(M_i,x), s \in S$. It is now obvious that $\tau_C(M,x)$ decomposes as stated. Thus we have 1) and 2).
For the last statement, it is enough, of course, to consider the case where $N$ is a line having $T$-weight say ${{\alpha}}$. By 1) and 2), there is an $S$-subspace $M_1$ of $T_z(X)$, on which $S$ acts by the character ${{\alpha}}$ restricted to $S$, so that $\tau_C(M_1,x)$ contains $N$. Let $\lambda
\in
Y(T)$ be a regular one parameter subgroup of $T$ so that $\lim_{t\rightarrow
0}\lambda(t)z=x$. Thus, there is an induced surjective morphism $f:
{\mathbb
A}^1 \RA C_x \subset X_x$. Let $V$ be a $T$-module into which $X_x$ embeds equivariantly with $x = 0$. Then $${\mathbf B} = f^*(T(X_x)) ={\mathbb A}^1\times_{C_x} T(X_x)$$ is a $T$-stable subvariety of ${\mathbb A}^1\times V$, which certainly contains ${\mathbf M}^\prime = \overline{f^*(TM_1)}$, the latter being a vector bundle over ${\mathbb A}^1$. This means in particular that ${\mathbf M}^\prime$ is a trivial bundle. Moreover, it is easy to see that, over ${\mathbb G}_m \subset {\mathbb A}^1$, the map $(s,w)\mapsto(s,{\mathrm{d}}\lambda (s)w)$ is a closed $S$-equivariant immersion, $S$ acting trivially on the first factor ${\mathbb A}^1$.
Summarizing, there is a global section $\sigma$ of ${\mathbf
M}^\prime$ with $\sigma(1) \in M_1$ and with $0 \not= \sigma(0) \in N$, which, over ${\mathbb G}_m$ has the form $$\sigma(s)=(s, {\mathrm d}\lambda(s)(\sum_i s^i w_i)),$$ for suitable $w_i \in M_1 \subset V$. Let $V = \bigoplus_{{\beta}}V_{{\beta}}$ be the decomposition into $T$-weightspaces, so that $w_i = \sum_{{\beta}}v_{i,{{\beta}}}$ with $v_{i,{{\beta}}} \in V_{{\beta}}$. Now compare weights in the expansion $$\sum_i{\mathrm d}\lambda(s)s^{i}w_i = \sum_{i,{{\beta}}} s^{i + \langle {{\beta}},
\lambda \rangle}v_{i,{{\beta}}}.$$ Since $\sigma$ extends to zero and ${{\sigma}}(0)\in N\setminus \{0\}$, the term of degree zero on the right hand side occurs when $i = - \langle{{\alpha}},\lambda \rangle$. Furthermore, for every nonzero term on the right hand side, $i+\langle {{\beta}}, \lambda\rangle \geq 0$. Since $\lambda$ is regular, it follows that $\tau_C(kw_i,x)=kv_{i,{{\alpha}}}$, where $i = - \langle{{\alpha}},\lambda\rangle$. This says $\tau_C(kw_i,x)=N$, and we are done.
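When $M$ is a line, the limit in 1) can be made completely explicit, exactly as in the weight comparison above: if $\lambda(t)=\mathrm{diag}(t^{a_1},\dots,t^{a_n})$ on an ambient $T$-module and $M=kv$, then ${\mathrm d}\lambda(t)v$ has coordinates $t^{a_i}v_i$, and after rescaling by the minimal exponent occurring among the nonzero coordinates, only the coordinates attaining that minimum survive as $t\to 0$. A small sketch of our own (the names and sample vectors are not from the text):

```python
def limit_line(v, a):
    # lim_{t->0} of the line spanned by (t**a[i] * v[i])_i for
    # lambda(t) = diag(t**a[0], ..., t**a[n-1]) acting on span(v).
    # Rescaling by t**(-m), m = min exponent over nonzero coordinates,
    # kills every coordinate with a[i] > m in the limit.
    m = min(a[i] for i in range(len(v)) if v[i] != 0)
    return tuple(v[i] if (v[i] != 0 and a[i] == m) else 0 for i in range(len(v)))

print(limit_line((1, 1), (1, 2)))        # (1, 0)
print(limit_line((0, 1, 3), (1, 2, 2)))  # (0, 1, 3) -- need not be a weight line
```

The second call shows that when several coordinates tie for the minimal pairing, the limiting line need not be a coordinate axis.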
The Peterson Map for $G/B$ {#G/B}
==========================
In this section, we will explicitly compute $\tau_C(X,x)$ for a Schubert variety $X=X(w)$ in $G/B$, where $C\in
E(X,x)$ is such that $C^T=\{x,y\}$ and $y>x$. In other words, we are considering what happens when we pass from a higher vertex of the Bruhat graph along an edge to a lower vertex. Then $C$ can be expressed in the form $C=\overline{U_{{{\alpha}}}y}$ with ${{\alpha}}>0$, hence the additive group $U_{{\alpha}}$ acts transitively on $C\backslash \{x\}$. Thus any subspace $M\subset T_zX$, $z\in C^o$, is the $U_{{\alpha}}$-translate of a unique subspace of $T_yX$. In addition, the vector bundle ${\mathbf M}$ introduced in the previous section is defined and $U_{{\alpha}}$-equivariant on $C$. Therefore, $\tau_C$ can be viewed as defined on subpaces of $T_y(X)$. This is the map originally considered by Peterson.
Letting $S=\ker ({{\alpha}})$, suppose $M\subset T_y(X)$ is $S$-stable (resp. $T$-stable). Then ${\mathbf M}$ is also $S$-equivariant (resp. $T$-equivariant). Thus $\tau_C(M,x)$ is an $S$-module (resp. $T$-module), and furthermore, $\tau_C(M,x)$ is also a $U_{{\alpha}}$-module. (This does not require that $M$ be $S$-stable.) Assuming $M$ is $S$-stable, any $S$-weight space $V$ of $M$ is a direct sum of certain ${\mathbf g}_{{{\beta}}+k{{\alpha}}}$, where ${{\beta}}$ is fixed, $k\leq 0$ and $y^{-1}({{\beta}}+k{{\alpha}})<0$ (since $\Omega (T_y(G/B))=\{\g\in\Phi\mid y^{-1}(\g)<0\}$).
Now suppose $M$ is an $S$-weight space in $T_y(X)$ of dimension $\ell$. Then $M$ and $\tau_C(M,x)$ are isomorphic as $S$-modules, but possibly different when viewed as $T$-modules. However, the $T$-weights of $\tau_C(M,x)$ are not hard to determine. Indeed, since $M$ has only one $S$-weight, it follows that $\Omega (M)$ is contained in a single ${{\alpha}}$-string in $\Phi$. Now $T_y(G/B)$ is a ${\mathbf g}_{-{{\alpha}}}$-module, although $T_y(X)$ need not be one. In fact there exists a unique ${\mathbf
g}_{-{{\alpha}}}$-submodule $M^*$ of $T_y(G/B)$ such that $M^*\cong M$ as $S$-modules. That $M^*$ exists is clear. If $M$ isn’t already a ${\mathbf g}_{-{{\alpha}}}$-module, then one way to describe $M^*$ is as the unique $U_{-{{\alpha}}}$-fixed point on the $T$-curve $\overline{U_{-{{\alpha}}}M}$ in the ordinary Grassmannian $G_{\ell}(T_y(G/B))$. Clearly $M^*$ is determined by the unique ${{\beta}}\in \Phi$ lying on the ${{\alpha}}$-string containing $\Omega
(M)$ satisfying the condition that $y^{-1}({{\beta}}-\ell {{\alpha}})\not\in \Phi^-$, but $y^{-1}(\{{{\beta}}, {{\beta}}-{{\alpha}}, \dots ,{{\beta}}-(\ell -1){{\alpha}}\})\subset \Phi^-$. That is, $$\Omega (M^*)=\{{{\beta}}, {{\beta}}-{{\alpha}}, \dots ,{{\beta}}-(\ell -1){{\alpha}}\}.$$ We will call ${{\beta}}$ the [*leading weight*]{} of $M^*$.
\[PMG/B\] Assuming $M$ is as above, $\tau_C(M,x)={\mathrm{d}} \dot{r}_{{\alpha}}(M^*)$, where ${\mathrm{d}} \dot{r}_{{\alpha}}$ denotes the differential at $y$ of a representative ${\dot{r}}_{{\alpha}}\in N(T)$ of $r_{{\alpha}}$. Consequently, $\Omega (\tau_C(M,x))=r_{{{\alpha}}}\Omega (M^*)$.
Since $M^*$ is a $U_{-{{\alpha}}}$-module, $\dot{r}_{{\alpha}}(M^*)$ is a $U_{{\alpha}}$-module contained in $T_x(G/B)$ which is isomorphic to $M$ as an $S$-module. But this condition uniquely determines $\tau_C(M,x)$.
Of course we do not actually need to define $M^*$ to determine $\tau_C(M,x)$. We will use this formulation in §\[SL\].
Recall that if $X(w)$ is smooth at $x\leq w$, then $$\Omega (T_x(X(w)))=\Omega (TE(X,x))=\{\g\in \Phi\mid x^{-1}(\g)<0,~
r_\g x\leq w\}.$$ If $x<y\leq w$, then clearly $X(w)$ is smooth at $y$ also. Using the Peterson map, we can now describe how to obtain the $T$-weights of $T_x(X(w))$ by degenerating to $x$ along edges of $\G(X(w))$. Denoting $\Omega (T_x(X(w)))$ by $\Phi(x,w)$ as in [@cp] and putting $\Phi(x,w)^*=\Omega (T_x(X(w))^*)$, Proposition \[PMG/B\] gives
Let $X(w)$ be smooth at two adjacent vertices of the Bruhat graph $\G(X(w))$, say $x$ and $y=rx$, where $y>x$ and $r\in R$. Then $$\Phi(x,w)=r(\Phi(y,w)^*).$$ Consequently, if $x<z\leq w$ is another vertex of $\G(X(w))$ adjacent to $x$, then $r(\Phi(y,w)^*)=t(\Phi(z,w)^*)$, where $x=t z$ with $t\in R$.
We will see later that if $X(w)$ is smooth at $y$ but not necessarily at $x=ry<y$, then it is still true that $r(\Phi(y,w)^*)\subset \Phi(x,w),$ provided the corresponding $T$-curve $C\in
E(X,x)$ is short. Note: we are assuming (contrary to the common practice) that if $\Phi$ is simply laced, then all its elements are short. Thus all $T$-curves in the corresponding $G/B$ are by convention short.
If $C$ is long, the situation is more complicated. This is illustrated in the following example.
\[B2\] Suppose $G$ is of type $B_2$, and $w=r_{{\alpha}}r_{{\beta}}r_{{\alpha}}$, where ${{\alpha}}$ is the short simple root and ${{\beta}}$ is the long simple root. In this example, we will compute the Peterson maps and use the result to determine the singular locus of $X(w)$, which is of course already well known. Put $X=X(w)$ and $\Omega (T_x(X))=\Omega (x)$. If $x\leq w$ is a nonsingular point of $X$, then $\Omega
(x)=\Phi(x,w)$. Clearly (for example, by Peterson’s Theorem), $w$, $r_{{\alpha}}r_{{\beta}}$ and $r_{{\beta}}r_{{\alpha}}$ are nonsingular points, and one easily sees that
- $\Omega (w)=\{{{\alpha}},{{\alpha}}+{{\beta}},2{{\alpha}}+{{\beta}}\}$;
- $\Omega (r_{{\alpha}}r_{{\beta}})=\{{{\alpha}},2{{\alpha}}+{{\beta}},-({{\alpha}}+{{\beta}})\}$;
- $\Omega (r_{{\beta}}r_{{\alpha}})=\{-{{\alpha}},{{\beta}},{{\alpha}}+{{\beta}}\}$.
It remains to test whether the points $r_{{\alpha}}$ and $r_{{\beta}}$ are nonsingular. Indeed, since ${{\alpha}}$ is simple and $r_{{\alpha}}w<w$, $\dot{r}_{{\alpha}}X=X$. Moreover, if $C=\overline{U_{{\alpha}}x}$, then $\Omega (\tau_C(X,x))=r_{{\alpha}}(\Omega (r_{{\alpha}}x))$ as long as $r_{{\alpha}}x<
x\leq w$. Thus $e$ is a nonsingular point if and only if $r_{{\alpha}}$ is. Let's first compute $\Omega (\tau_C(X,r_{{\alpha}}))$ where $C=\overline{U_{{\beta}}r_{{\beta}}r_{{\alpha}}}$. It is clear that $\Omega (r_{{\beta}}r_{{\alpha}})$ is the set of weights of a ${\mathbf g}_{-{{\beta}}}$-submodule of $T_{r_{{\beta}}r_{{\alpha}}}(G/B)$, so $$\Omega (\tau_C(X,r_{{\alpha}}))=r_{{\beta}}(\Omega (r_{{\beta}}r_{{\alpha}}))=\{-({{\alpha}}+{{\beta}}),-{{\beta}},{{\alpha}}\}.$$ Next consider $\Omega (\tau_D(X,r_{{\alpha}}))$ where $D=\overline{U_{2{{\alpha}}+{{\beta}}} r_{{\alpha}}r_{{\beta}}}.$ It is again clear that $\Omega (r_{{\alpha}}r_{{\beta}})$ is the set of weights of a ${\mathbf g}_{-(2{{\alpha}}+{{\beta}})}$-submodule of $T_{r_{{\alpha}}r_{{\beta}}}(G/B)$, so $$\Omega (\tau_D(X,r_{{\alpha}}))=r_{2{{\alpha}}+{{\beta}}} (\Omega (r_{{\alpha}}r_{{\beta}}))=\{-({{\alpha}}+{{\beta}}),{{\alpha}},
-(2{{\alpha}}+{{\beta}})\}.$$ Hence $X$ is singular at $r_{{\alpha}}$. Now consider $\tau_D(X,r_{{\beta}})$ for $D=\overline{U_{{\alpha}}r_{{\alpha}}r_{{\beta}}}$. By the previous comment, $$\Omega (\tau_D(X,r_{{\beta}}))=r_{{\alpha}}(\Omega (r_{{\alpha}}r_{{\beta}}))=\{-{{\alpha}},{{\beta}},-({{\alpha}}+{{\beta}})\}.$$ It remains to compute $\Omega (\tau_C(X,r_{{\beta}}))$ for $C=\overline{U_{{{\alpha}}+{{\beta}}}r_{{\beta}}r_{{\alpha}}}$. Organizing $\Omega (r_{{\beta}}r_{{\alpha}})$ into $-({{\alpha}}+{{\beta}})$-strings gives $$\Omega (r_{{\beta}}r_{{\alpha}})=\{-{{\alpha}},{{\beta}}\}\cup \{{{\alpha}}+{{\beta}}\}.$$ Since $(r_{{\beta}}r_{{\alpha}})^{-1}(2{{\alpha}}+{{\beta}})>0$, it follows from Proposition \[PMG/B\] that $$\Omega (\tau_C(X,r_{{\beta}}))=r_{{{\alpha}}+{{\beta}}}(\{-{{\alpha}},-(2{{\alpha}}+{{\beta}}),{{\alpha}}+{{\beta}}\})=
\{-{{\alpha}},{{\beta}},-({{\alpha}}+{{\beta}})\}.$$ Thus, by Peterson's Theorem, $X$ is nonsingular at $r_{{\beta}}$. By the remark above, $\Omega (e)=r_{{\alpha}}(\Omega (r_{{\alpha}}))$, so $$\Omega (e)=\{-{{\beta}},-({{\alpha}}+{{\beta}}),
-{{\alpha}},-(2{{\alpha}}+{{\beta}})\}.$$ The upshot of this calculation is that the singular locus of $X(w)$ is $X(r_{{\alpha}})$.
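The reflection computations above can be checked mechanically in coordinates; the sketch below is our own numerical aid, not part of the argument. Realize $B_2$ in ${\mathbb R}^2$ with ${{\alpha}}=(1,0)$ short and ${{\beta}}=(-1,1)$ long, so ${{\alpha}}+{{\beta}}=(0,1)$ and $2{{\alpha}}+{{\beta}}=(1,1)$; then one verifies, e.g., that $r_{{{\alpha}}+{{\beta}}}$ carries $\{-{{\alpha}},-(2{{\alpha}}+{{\beta}}),{{\alpha}}+{{\beta}}\}$ to $\{-{{\alpha}},{{\beta}},-({{\alpha}}+{{\beta}})\}$ as claimed.

```python
from fractions import Fraction

# B2 realized in the plane: alpha = (1,0) short, beta = (-1,1) long.
alpha = (Fraction(1), Fraction(0))
beta  = (Fraction(-1), Fraction(1))

def add(u, v): return tuple(a + b for a, b in zip(u, v))
def neg(u):    return tuple(-a for a in u)
def dot(u, v): return sum(a * b for a, b in zip(u, v))

def reflect(gamma, v):
    # r_gamma(v) = v - 2 (v, gamma)/(gamma, gamma) gamma
    c = 2 * dot(v, gamma) / dot(gamma, gamma)
    return tuple(vi - c * gi for vi, gi in zip(v, gamma))

ab  = add(alpha, beta)   # alpha + beta   (short)
aab = add(alpha, ab)     # 2 alpha + beta (long)

# the last displayed computation of the example:
image = {reflect(ab, v) for v in (neg(alpha), neg(aab), ab)}
print(image == {neg(alpha), beta, neg(ab)})  # True
```

The two deformed weight sets at $r_{{\alpha}}$ can be compared the same way, confirming that they differ and hence that $X$ is singular there.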
A Criterion For Smoothness Of $T$-varieties {#SC}
===========================================
In this section we will prove a generalization of Theorem \[TE\]. Let $X$ be an irreducible $T$-variety, and let $x \in X^T$ be an attractive $T$-fixed point. Since the action of $T$ is linearizable, and since smoothness is a local property we may assume that $X=X_x$. Note that we are not assuming here that $E(X,x)$ is finite.
\[ZNL\] Let $f: X \rightarrow Y$ be a quasi-finite equivariant morphism of $T$-varieties with $Y$ nonsingular at $f(x)$. Let $Z \subset X$ be the ramification locus of $f$, i.e. the closed subvariety of points at which $f$ is not étale. Then either $Z$ equals $X$, $Z$ is empty, or $Z$ has codimension one at $x$.
Assume that $\codim_xZ \geq 2$. We have to show that $Z$ is empty. First of all, since $x$ is attractive, the image of $f$ is contained in every connected open $T$-stable affine neighborhood of $f(x)$, hence in $Y_{f(x)}$. Viewing $f$ as a map to $Y_{f(x)}$, the fibre of $f$ over $f(x)$ is finite, hence $f$ is finite. Thus, $f(X)$ is a closed subset of the unique irreducible component of $Y_{f(x)}$ through $f(x)$. Since $f$ is smooth somewhere it follows that $\dim X =
\dim_x Y_{f(x)}$, so $f(X)$ is the unique component of $Y_{f(x)}$ through $f(x)$. It follows that $f(x)$ is an attractive fixed point of $Y_{f(x)}$, and therefore $Y_{f(x)}$ is nonsingular. Passing to the normalization $\tilde X$ of $X$, we obtain an equivariant finite map $\tilde f :
\tilde X
\rightarrow Y_{f(x)}$, which is étale in codimension one, because the natural map $\tilde X \rightarrow X$ is clearly an isomorphism over $X
\setminus Z$. Thus, by the theorem of Zariski-Nagata [@Zar-Nag], $\tilde f$ is étale everywhere. Hence for some point $\tilde x \in \tilde X$ which maps to $x \in X$ we have $T_{\tilde x}(\tilde X) \cong
T_{f(x)}(Y_{f(x)})$ via ${\rm d}\tilde f$. This implies that $\tilde x$ is attractive, forcing $\tilde f$ to be an isomorphism. Thus $f$ is birational. But being finite, $f$ is also an isomorphism, so we are through.
This gives the following criterion for smoothness of attractive $T$-actions.
\[GPT\] Let $X$ be as above and let $x$ be an attractive fixed point. Suppose there is a subset $E \subset
E(X,x)$ consisting of good $T$-curves which satisfies the following conditions:
- \[firstp\] $|E(X,x) \setminus E| \leq \dim X - 2$.
- \[secondp\] $\tau_C(X,x) = \tau_D(X,x)$ for all $T$-curves $C,D \in E$.
- \[fourthp\] If $\tau(E)$ denotes the common value of $\tau_C(X,x)$ for $C\in E$, then $T_x(C) \cap \tau(E) \not = 0$ for all curves $C \in E(X,x)$.
Then $x$ is a nonsingular point of $X$.
Since $x$ is attractive we may assume that $X \subset T_x(X)$ and $x = 0$. Fix an equivariant projection $\tilde
p: T_x(X) \rightarrow \tau(E)$, and denote its restriction to $X$ by $p$. Since $T_x(C) \cap \tau(E) \not = 0$ for all curves $C \in E(X,x)$, it follows that there is no $T$-curve in $p^{-1}(0)$, so by Lemma \[TC\], $\dim p^{-1}(0)=0$. This implies $p$ is finite, since $x$ is attractive. Let $Z$ be the ramification locus of $p$. According to Lemma \[ZNL\], we are done if $\codim_x Z \geq 2$. By assumption, if $C \in
E$, then $C^o \subset X^*$. It follows that $C^o \subset Z$ if and only if $dp$ has a nontrivial kernel $L
\subset T_z(X)$ for some $z \in C^o$. But then $\tau_C(L) \subset
\ker {\mathrm d}p \cap \tau(E)$. With $p$ being the projection to $\tau(E)$, the latter is trivial, so $\tau_C(L)$ and hence $L$ both are equal to $0$. We conclude that $E\cap E(Z,x)$ is empty. Thus, by condition 1), $|E(Z,x)|
\leq \dim X -2$ forcing $\dim_xZ\leq \dim X-2,$ thanks again to Lemma \[TC\]. This ends the proof.
Note that the last condition is automatically satisfied for curves $C \in E$ since $\tau_C(C,x) \subset \tau(E)$ for such a curve. Moreover, if all curves in $E(X,x)$ are smooth and have non collinear weights, then the last condition is equivalent to saying that $\tau(E) = TE(X,x)$. This in turn implies that $E$ consists of good curves if $E \subset E(X,x)$ is a set satisfying 2) and $|E(X,x)| =
\dim X$.
We immediately conclude the following corollary, which implies the first part of Theorem \[TE\].
\[TECOR\] Suppose that either $|E(X,x)|=\dim X$ or all $C\in E(X,x)$ are nonsingular and any two distinct $C,D\in E(X,x)$ have distinct tangents. Suppose also that there exist two distinct good $T$-curves $C,D\in E(X,x)$ such that $$\tau_C(X,x)=\tau_D(X,x)=TE(X,x).$$ Then $X$ is nonsingular at $x$.
We will prove the Cohen-Macaulay assertion in Proposition \[CM\].
If $X$ is normal, one does not need to assume $x$ is attractive since the Zariski-Nagata Theorem can be directly applied.
If $X$ is a $H$-variety for some algebraic group $H$, and $(S,T)$ is an attractive slice to a $H$-orbit $Hx$ (i.e. $S \subset X$ is locally closed, affine, stable under some nontrivial torus $T \subset H_x$, such that $x$ is an isolated point of $S \cap Hx$ and the natural mapping $H
\times S \rightarrow X$ is smooth at $x$), then $T_x(Hx) \subset \tau_C(X,x)$. More precisely, one has $\tau_C(X,x) = \tau_C(S,x)\oplus T_x(Hx)$ for all $C \in E(S,x)
\subset E(X,x)$. Thus the third condition in the theorem is always satisfied if $E(X,x) = E(S,x) \cup E(Hx,x)$ and $E = E(S,x)$.
If $X$ is a Schubert variety $X(w)$, an explicit attractive slice for $X$ at any $x\leq w$ is given as follows.
Let $U$ be the maximal unipotent subgroup of $B$ and $U^-$ the opposite maximal unipotent subgroup, and suppose $x<w$. Then an attractive slice for $X(w)$ at $x$ is given by the natural multiplication map $$(U\cap xU^- x^{-1})\times \big(X(w)\cap U^- x\big)\RA X(w).$$
We can now give a proof of Peterson’s Theorem (cf. §1). If $x<w$ and $\ell(w)-\ell(x)=1$, there is nothing to prove, since Schubert varieties are nonsingular in codimension one. Letting $E$ be the set of $C\in E(X,x)$ such that $C^T\subset [x,w]$, the existence of a slice and the hypothesis of Peterson’s Theorem imply that conditions 2) and 3) of Theorem \[GPT\] hold. If $\ell(w)-\ell(x)\geq
2$, then Deodhar’s inequality (Proposition \[DI\]) implies 1) holds. Hence $X$ is nonsingular at $x$.
A Fundamental Lemma {#FL}
===================
In this section, $X$ will denote a $T$-variety. We will now prove a basic lemma which allows us to deduce good properties of the Peterson translate from good properties of the Peterson translates $\tau_C(\Sigma,x)$, where $\Sigma$ ranges over the $T$-stable surfaces containing a good $C\in
E(X,x)$.
\[FSL\] If $C \in E(X,x)$ is a good curve we have $$\tau_C(X,x)=\sum _\Sigma\tau_C(\Sigma,x)$$ where the sum ranges over all $T$-stable irreducible surfaces $\Sigma$ containing $C$.
Let $L \subset \tau_C(X,x)$ be a $T$-stable line. Then by Proposition \[PPP\] there is an $S$-line $M \subset T_z(C)$, where $S$ is the isotropy group of an arbitrary $z
\in C^o$, such that $\tau_C(M,x) = L$. As $X$ is nonsingular at $z$, there exists an $S$-stable curve $D$ satisfying $M\subset T_z(D)$. Setting $\Sigma = \overline{TD}$ we obtain a $T$-stable surface, which contains $C$, and which satisfies $L \subset \tau_C(\Sigma,x)$.
Although the lemma is almost obvious, it is a great help in the case when $X$ is a $T$-stable subvariety of $G/B$, where $G$ has no $G_2$-factors. One reason for this is
\[2D\] Suppose $G$ has no $G_2$-factors and let $\Sigma$ be an irreducible $T$-stable surface in $G/B$. Let $\sigma \in \Sigma^T$. Then $|E(\Sigma,{{\sigma}})|=2$, and either $\Sigma$ is nonsingular at $\sigma$ or the weights of the two $T$-curves to $\Sigma$ at $\sigma$ are orthogonal long roots ${{\alpha}},{{\beta}}$ in $B_2$. In this case, $\Sigma_{{{\sigma}}}$ is isomorphic to a surface of the form $z^2=xy$ where $x,y,z\in k[\Sigma_{{{\sigma}}}]$ have weights $-{{\alpha}},-{{\beta}},
-1/2({{\alpha}}+{{\beta}})$ respectively. In particular, if $G$ is simply laced, then $\Sigma$ is nonsingular.
The first claim follows easily from the fact that $\Sigma$ has a dense two dimensional $T$-orbit (cf. [@car-kur]). Let $C,D$ denote the two elements of $E(\Sigma,\sigma)$, and let ${{\alpha}},{{\beta}}$ denote their weights. For any function $f \in k[\Sigma_\sigma]$ of weight $\omega$ corresponding to a $T$-line $L$ in $T_{{\sigma}}(\Sigma)$, there is a positive integer $N$ such that $N(-\omega) \in {{\mathbb Z}}_{\geq 0}{{\alpha}}+
{{\mathbb Z}}_{\geq 0} {{\beta}}$. Note that the functions corresponding to $C$ and $D$ have weights $-{{\alpha}}$ and $-{{\beta}}$ respectively (thus, the minus sign for $\omega$). Except for the case where ${{\alpha}}$, ${{\beta}}$ and $-\omega$ are contained in a copy of $B_2 \subset \Phi$, this actually implies that $-\omega = a{{\alpha}}+ b{{\beta}}$ for suitable nonnegative integers $a,b$. Using the multiplicity freeness of the representation of $T$ on $k[\Sigma_\sigma]$, one is done in these cases. In the remaining case, it turns out that $\Sigma$ is nonsingular at ${{\sigma}}$ unless ${{\alpha}},{{\beta}}$ are orthogonal long roots in $B_2$ ([@car-kur]). Let $\g=1/2({{\alpha}}+{{\beta}})$. Then $\Sigma_{\sigma}$ is isomorphic to $z^2 = xy$ where $x,y,z\in
k[\Sigma_{{{\sigma}}}]$ correspond to $T$-lines in $T_{{\sigma}}(\Sigma)$ of weights ${{\alpha}},{{\beta}}$ and $\g$.
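The singular $B_2$ configuration of Proposition \[2D\] can be verified mechanically in coordinates. The planar realization of $B_2$ below (short roots $(\pm1,0),(0,\pm1)$, long roots $(\pm1,\pm1)$) is a standard choice made for illustration; it is not taken from the text.

```python
# Standard realization of B_2 in the plane:
# short roots (+-1, 0), (0, +-1); long roots (+-1, +-1).
# We check the configuration of Proposition [2D]: alpha, beta are
# orthogonal long roots, gamma = 1/2(alpha + beta) is a short root,
# and z^2 = x y is T-homogeneous when x, y, z have weights
# -alpha, -beta, -gamma respectively.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

alpha = (1, 1)   # long root
beta = (1, -1)   # long root, orthogonal to alpha
gamma = (1, 0)   # short root

assert dot(alpha, beta) == 0                      # orthogonality
assert dot(alpha, alpha) == dot(beta, beta) == 2  # both long
assert dot(gamma, gamma) == 1                     # gamma is short

# gamma = 1/2 (alpha + beta), checked without fractions:
assert (alpha[0] + beta[0], alpha[1] + beta[1]) == (2 * gamma[0], 2 * gamma[1])

# T-homogeneity of z^2 = x y: 2 * wt(z) == wt(x) + wt(y),
# where wt(z) = -gamma, wt(x) = -alpha, wt(y) = -beta.
assert (-2 * gamma[0], -2 * gamma[1]) == (-alpha[0] - beta[0], -alpha[1] - beta[1])
print("B_2 configuration verified")
```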
The Span Of The Tangent Cone Of A $T$-Variety In $G/B$ {#tancone}
======================================================
In this section and for the rest of this paper we will assume that $X$ is a closed irreducible $T$-stable subvariety of $G/B$ and that the underlying semi-simple group $G$ has no $G_2$ factors. Recall that all $T$-curves in $G/B$ are smooth, and two distinct $T$-curves have different weights. Moreover, if $C\in E(X,x)$, then $T_x(C) = {\mathcal T}_x(C) \subset {\mathcal
T}_x(X)$. In particular, $TE(X,x)\subset \Theta_x(X)$, the $k$-linear span of the reduced tangent cone ${\mathcal T}_x(X)$ of $X$ at $x$.
We will now study the Peterson translates $\tau_C(X,x)$ of $X$ along good $T$-curves $C$ in $X$, where $x\in C^T$. We will first show that each $\tau_C(X,x)$ is a subspace of $\Theta_x(X)$. Hence the tangent spaces at $T$-fixed points behave well under the Peterson map. On the other hand, it is an open question as to how to explicitly describe $\Theta_x(X)$ for any $x\in X^T$. The only relevant fact we know of is the following, proved in [@car].
Let $S$ be an algebraic torus over $k$ and $V$ a finite $S$-module. Suppose $Y$ is a Zariski closed $S$-stable cone in $V$, and let ${{{\mathcal H}}}(Y)$ denote the convex hull of $$\Phi(Y)=\{{{\alpha}}\in X(S) \mid V_{{\alpha}}\subset Y \}$$ in $X(S)\otimes {{\mathbb R}}$, where $V_{{\alpha}}$ denotes the ${{\alpha}}$-weight space in $V$. Also let $\Theta(Y)$ denote the $k$-linear span of $Y$ in $V$. Then $$\Omega (\Theta(Y))\subset {{{\mathcal H}}}(Y).$$
In particular, if $\Phi$ is simply laced, then $\Omega
(\Theta_x(X))=\Phi \cap
{{{\mathcal H}}}({\mathcal T}_x(X))$, so $\Theta_x(X)$ is the $k$-linear span of the set of $T$-lines in ${\mathcal T}_x(X)$. In §\[SL\], we will show that if $G$ is simply laced, then, in general, $\Theta_x(X)=TE(X,x)$. We now prove one of our main results.
\[TCINTHETA\] Suppose that $X$ is an arbitrary $T$-stable subvariety of $G/B$. Then for any $x\in X^T$ and any good curve $C \in E(X,x)$ we have $$\tau_C(X,x) \subset \Theta_x(X).$$ Moreover, if $C$ is short, then $\tau_C(X,x)\subset TE(X,x)$.
Since $\tau_C(X,x)$ is generated by $T$-invariant surfaces and since $\Theta_x(\Sigma) \subset \Theta_x(X)$ for all surfaces $\Sigma
\subset X$ which contain $x$, it is enough to show the proposition when $X$ is a surface. If $X$ is nonsingular at $x$, then $\tau_C(X,x)=T_x(X)$. Otherwise we know from Proposition \[2D\] that $X$ is a cone over $x$, and for a cone, $\Theta_x(X) = T_x(X)$. Since $X$ is nonsingular at $x$ when $C$ is short, the last statement is obvious.
The last assertion of the theorem gives us a generalization of Peterson’s $ADE$ Theorem.
\[2GC\] Let $G$ be simply laced, and suppose $X$ is rationally smooth at $x$. Then $X$ is nonsingular at $x$ if and only if there are two good $T$-curves in $E(X,x)$. Moreover, if $X$ is Cohen-Macaulay, one good $T$-curve suffices.
By a result of Brion [@bri], if $X$ is rationally smooth at $x$, then $|E(X,x)|=\dim X$. By Theorem \[TCINTHETA\], we have $\tau_C(X,x) =
TE(X,x)$ for every good $C\in E(X,x)$. Hence the result follows from Theorem \[TE\]. We will prove the last statement below.
In general, Theorem \[TCINTHETA\] does not hold if $G=G_2$. For example, consider the surface $\Sigma$ given by $z^2 = xy^3$ in ${\mathbb A}^3$. Let ${{\alpha}},
{{\beta}},
\g$ be characters of $T$, satisfying $\gamma = 2{{\alpha}}+ 3{{\beta}}$, and let $T$ act on ${\mathbb A}^3$ by $$t\cdot (x,y,z)= (t^{{\alpha}}x,t^{({{\alpha}}+2{{\beta}})}y, t^\g z).$$ Clearly $\Sigma$ is $T$-stable, and its reduced tangent cone at $0$ is by definition $\ker {\mathrm d}z$, hence is linear. The $T$-curve $C=\{x=0\}$ is good, and along $C^o$, we have $T_v(\Sigma) = \ker {\mathrm d}x$. It follows that $\tau_C(\Sigma,x) =
\ker{\mathrm d}x$, which is not a subspace of $\Theta_0(\Sigma)$. It remains to remark that $\Sigma$ is open in a $T$-stable surface in $G_2/B$, where $T$ is the usual maximal torus and ${{\alpha}}$ and ${{\beta}}$ are respectively the corresponding long and short simple roots.
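The weight bookkeeping in this $G_2$ example can also be checked mechanically. The sketch below writes $G_2$-weights as coefficient pairs $(a,b)$ standing for $a{{\alpha}}+b{{\beta}}$; the list of positive roots of $G_2$ is standard and included only for illustration.

```python
# G_2 weights as coefficient pairs (a, b) meaning a*alpha + b*beta,
# with alpha the long and beta the short simple root.
# The surface z^2 = x y^3 carries the action
# t.(x, y, z) = (t^alpha x, t^(alpha + 2 beta) y, t^gamma z),
# gamma = 2 alpha + 3 beta, so we check T-homogeneity:
# 2 * wt(z) == wt(x) + 3 * wt(y).

wt_x = (1, 0)   # alpha
wt_y = (1, 2)   # alpha + 2 beta
wt_z = (2, 3)   # gamma = 2 alpha + 3 beta

lhs = (2 * wt_z[0], 2 * wt_z[1])
rhs = (wt_x[0] + 3 * wt_y[0], wt_x[1] + 3 * wt_y[1])
assert lhs == rhs == (4, 6)       # z^2 and x y^3 have the same weight

# All three weights occur among the positive roots of G_2:
positive_roots = {(0, 1), (1, 0), (1, 1), (1, 2), (1, 3), (2, 3)}
assert {wt_x, wt_y, wt_z} <= positive_roots
print("G_2 example is T-homogeneous")
```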
We now generalize Peterson’s $ADE$ Theorem in a different direction. That is, we study which rationally smooth $T$-fixed points of a Schubert variety (or more generally, a $T$-variety in $G/B$) are nonsingular without the assumption that $G$ is simply laced. Since $\dim_x {\mathcal T}_x(X) =\dim X$, it is obvious that ${\mathcal
T}_x(X)$ is linear if and only if $\dim_k \Theta_x(X) =
\dim X$. Thus, as a consequence of Theorems \[TCINTHETA\] and \[TE\], we obtain
\[THETAMIN\] A $T$-variety $X\subset G/B$ is nonsingular at $x \in X^T$ if and only if $\Theta_x(X)$ has minimal dimension $\dim X$ and $E(X,x)$ contains at least two good curves.
Specializing to the Schubert case gives
\[TCL\] A Schubert variety $X = X(w)$ is nonsingular at $x \in X^T$ if and only if all reduced tangent cones ${\mathcal
T}_y(X)$, $x\leq y\leq w$, are linear. Consequently, the nonsingular Schubert varieties are exactly those whose tangent cones are linear at every $T$-fixed point.
The proof is similar to the proof of Peterson’s Theorem given at the end of §5 so we will omit it.
Corollary \[TCL\] gives another proof of Peterson’s $ADE$ Theorem since if $G$ is simply laced and $X=X(w)$ is rationally smooth at $x$, then its tangent cones are linear at every $T$-fixed point $y$ with $x\leq y\leq w$ [@cp].
We now prove the second assertion of Theorem \[TE\]: if $X$ is Cohen-Macaulay, one good $T$-curve $C$ such that $\tau_C(X,x)=TE(X,x)$ suffices to guarantee that $x$ is nonsingular.
\[CM\] Suppose $X$ is Cohen-Macaulay and $x \in X^T$. $X$ is nonsingular at $x$ if and only if there is a good $T$-curve $C$ in $E(X,x)$ with $\tau_C(X,x) = TE(X,x)$.
Let $x_1,\dots,x_n\in k[X_x]$ be the functions corresponding to the $T$-curves $C_1,\dots,C_n$ in $E(X_x,x)$. Since $\dim \tau_C(X,x) =
\dim X$, and since the $T$-curves are smooth and have non collinear weights, $n$ equals the dimension of $X$. We may assume that $C = C_1$.
Let ${\mathbf a} \subset k[X_x]$ be the ideal generated by $x_2,\dots,x_n$. Then ${\mathbf a}$ is contained in the ideal of $C$. Now $\tau_C(X_x,x)$ equals the span $TE(X,x)$ of the $T$-curves $C_i$. This means that the differentials ${\mathrm d}x_i$ are independent along $C^o$. As $C \cong {\mathbb A}^1$ is nonsingular, we are done if ${\mathbf
a}$ is the ideal $I(C)$ of $C$. We know that ${\mathbf a}_y =
I(C)_y$ at every stalk $k[X_x]_y$ for $y \in C^o$, since the $({\mathrm d}x_i)_y$ are independent.
As a subset of $X_x$, $C$ is equal to the support of the Cohen-Macaulay subscheme $Z =
\Spec(k[X_x]/{\mathbf a})$ of $X_x$. Under the natural restriction, a function $f$ on $X_x$, which vanishes on $C$, defines a global section of ${\mathcal O}_Z$ with support contained in $\{x\}$. It is well known that on a Cohen-Macaulay scheme, the only such section is zero. So $f$ restricts to zero, and we are done.
Suppose $X$ is Cohen-Macaulay and there exists a short good $C\in E(X,x)$. If $|E(X,x)|=\dim X$, then $X$ is nonsingular at $x$. In particular, if $X$ is rationally smooth on $C$, then it is smooth on $C$.
The only facts we have to note are, firstly, that if $C$ is short, then $\tau_C(X,x)\subset TE(X,x)$ and, secondly, if $X$ is rationally smooth at $x\in X^T$, then $|E(X,x)|=\dim X$ (see [@bri]).
Since Schubert varieties are Cohen-Macaulay, we obtain from this yet another proof of Peterson’s $ADE$ Theorem.
For locally complete intersections or normal $T$-orbit closures in $G/B$, Proposition \[CM\] puts quite strong restrictions onto the Bruhat graph $\G(X)$. In fact, recalling that $X^*$ denotes the set of nonsingular points of $X$, we see that if $X$ is Cohen-Macaulay, then every rationally smooth vertex of $\G(X)$ connected to a vertex of $\G(X^*)$ is in fact a vertex of $\G(X^*)$. Therefore we get the following
Suppose $G$ is simply laced, $X$ is Cohen-Macaulay and rationally smooth, and $\G(X^*)$ is non-trivial. Then $X$ is nonsingular as long as $\G(X)$ is connected.
In fact, if $X$ is nonsingular, then its Bruhat graph is connected. This can be shown by considering the Bialynicki-Birula decomposition of $X$ induced by a regular element of $Y(T)$. We will omit the details.
More On Peterson Translates and Schubert Varieties {#PTSV}
==================================================
The purpose of this section is to address the problem that there is in general no nice description of $\Theta_x(X)$. Hence we would like to find a more precise picture of $\tau_C(X,x)$ when $X$ is a Schubert variety in $G/B$, say $X=X(w)$, and, as usual, $G$ is not allowed any $G_2$ factors. Of course, if $G$ is simply laced or $C$ is short, we have already shown $\tau_C(X,x)\subset TE(X,x)$.
It turns out that if $C$ is long, there is a $T$-subspace of $\Theta_x(X)$, depending only on $TE(X,x)$ and the isotropy group $B_x$ of $x$ in $B$, which contains most of $\tau_C(X,x)$, and the part that fails to lie in this subspace is easy to describe. Let ${\mathbf g}(x)$ denote the Lie algebra of $B_x$, $U({\mathbf g}(x))$ its universal enveloping algebra, and define ${\mathbb T}_x(X)$ to be the ${\mathbf g}(x)$-submodule $${\mathbb T}_x(X)=U({\mathbf g}(x))TE(X,x)\subset \Theta_x(X).$$ We will show that if $C\in E(X,x)$ is both good and long and $C^T\subset
]x,w]$, then ${\mathbb T}_x(X)$ almost contains $\tau_C(X,x)$. In fact, taking $x=r_{{\alpha}}$ in Example \[B2\] shows that in general $\tau_C(X,x)\not
\subset {\mathbb T}_x(X).$ Consequently, $\Theta_x(X)$ is not in general equal to ${\mathbb T}_x(X).$
\[BbbT\] Assume $C\in E(X,x)$ is long, say $C=\overline{U_{-\mu} x}$ where $\mu$ is positive and long, and suppose $X$ is nonsingular at $y=r_\mu x$. Assume ${\mathbf g}_{\g}\subset
\tau_C(X,x)$, but ${\mathbf g}_\g
\not
\subset {\mathbb T}_x(X)$, and put ${{\delta}}=\g+\mu$. Then:
- there exists a long positive root $\phi$ orthogonal to $\mu$ such that ${\mathbf g}_{-\phi}\subset TE(X,x)$ and $$\g=-1/2(\phi+\mu);$$
- ${\mathbf g}_{-\phi}\not \subset T_{y}(X)$;
- if ${{\delta}}>0$, then $x^{-1}({{\delta}})<0$ and $${\mathbf g}_\g \oplus {\mathbf g}_{{{\delta}}}\oplus {\mathbf g}_{-\mu}
\subset \tau_C(X,x),\quad {\mathbf g}_\g \oplus {\mathbf
g}_{{{\delta}}}\oplus {\mathbf g}_\mu \subset T_{y}(X);$$
- on the other hand, if ${{\delta}}<0$, then $x^{-1}({{\delta}})>0$ and $${\mathbf g}_\g \oplus {\mathbf g}_{-{{\delta}}}\oplus {\mathbf g}_{-\mu}
\subset \tau_C(X,x), \quad {\mathbf g}_{-\g} \oplus {\mathbf
g}_{{{\delta}}}\oplus {\mathbf g}_\mu \subset T_{y}(X);$$
- In particular, if $D=\overline{U_{-\phi}x}$, then $\tau_C(X,x)\neq\tau_D(X,x)$.
The existence of a long root $\phi$ satisfying all the conditions in part (1) except possibly positivity follows from the Fundamental Lemma (\[FSL\]), Proposition \[2D\] and the fact that ${\mathbf g}_{-\mu} \subset TE(X,x)$. To see $\phi$ is positive, suppose otherwise. It is then clear that ${{\delta}}\in \Phi^+$. If $x^{-1}({{\delta}})>0$ also, then ${\mathbf g}_\g \subset {\mathbb T}_x(X)$ since $\g=-\mu+{{\delta}}$, contradicting the assumption. Hence $x^{-1}({{\delta}})<0$. But as $\tau_C(X,x)$ is a ${\mathbf g}_\mu$-module, it follows immediately that ${\mathbf g}_\g \oplus {\mathbf g}_{{\delta}}\subset \tau_C(X,x)$. Since $\mu$ is long, Proposition \[PMG/B\] implies $${\mathbf g}_\g\oplus {\mathbf g}_{{{\delta}}}\subset T_y(X).$$ Moreover, since $x^{-1}(\phi)=y^{-1}(\phi)>0$, it also follows that $ {\mathbf g}_{-\phi} \subset T_y(X)$, so $\mu,~{{\delta}},~-\phi$ constitute a complete $\g$-string occurring in $\Omega (T_y(X))$. Since $X$ is nonsingular at $y$ and $\g,y^{-1}(\g)<0$, we get the inequality $y<r_\g y\leq w$. Thus $X$ is nonsingular at $r_\g y$. Letting $E$ be the good $T$-curve in $X$ such that $E^T=\{y,r_\g y \}$, we have $\tau_E(X,y)=T_y(X)$, so the string $\mu,~{{\delta}},~-\phi$ also has to occur in $\Omega (T_{r_\g y}(X))$. In particular, ${\mathbf g}_{-\phi}
\subset TE(X, r_\g y)=T_{r_\g y}(X)$, and hence $r_\phi r_\g
y\leq w$. But this means $$r_\g x=r_\g r_\mu y=r_\g r_\mu r_\g r_\g y=r_\phi r_\g y\leq w,$$ so ${\mathbf g}_\g \subset TE(X,x)$. This is a contradiction, so $\phi>0$.
We next prove (2). Recall $y=r_\mu x$ and suppose to the contrary that ${\mathbf g}_{-\phi} \subset
T_{r_\mu x}(X)$. If ${{\delta}}>0$, we can argue exactly as above, so we are reduced to assuming ${{\delta}}<0$. If $x^{-1}({{\delta}})<0$, then ${\mathbf g}_\g \subset {\mathbb T}_x(X)$ due to the fact that $\g=-(\phi+{{\delta}})$. Thus $x^{-1}({{\delta}})>0$, whence $y^{-1}(\g)>0$ and ${\mathbf g}_{-\g}\subset T_y(X)$ since $-\g>0$. Moreover, ${\mathbf g}_{{\delta}}\subset T_y(X)$ since ${\mathbf g}_\g
\subset
\tau_C(X,x)$. Hence, we have $${\mathbf g}_\mu \oplus {\mathbf g}_{-\phi} \oplus {\mathbf g}_{{\delta}}\oplus {\mathbf
g}_{-\g}
\subset T_y(X).$$ Now ${{\delta}}$ and $-\g$ form a $\phi$-string in $\Omega (T_y(X))$, and since $y<r_\phi y
\leq w$, it follows as above that ${\mathbf g}_{{\delta}}\subset T_{r_\phi
y}(X)$. Therefore, $r_{{\delta}}r_\phi y\leq w$. But $r_\g x=r_{{\delta}}r_\phi y$, so we have a contradiction. Hence, ${\mathbf g}_{-\phi}\not \subset T_{r_\mu x}(X)$. To prove (3), note that, as usual, $x^{-1}({{\delta}})<0$, so ${\mathbf g}_{{\delta}}\subset
\tau_C(X,x)$ by the ${\mathbf g}_\mu$-module property. To obtain (4), note that if ${{\delta}}<0$, then $x^{-1}({{\delta}})>0$, so $y^{-1}(-\g)<0$. As $\g<0$, this implies that ${\mathbf g}_{-\g}\subset T_y(X)$, so in fact ${\mathbf g}_{-\g} \oplus {\mathbf g}_{{\delta}}\oplus {\mathbf g}_\mu \subset
T_y(X)$. The proof of (5) is clear, so the proof is now finished.
Now fix $C$ and $\mu\in \Phi^+$ as above and let $I_\mu \subset \Phi$ consist of all negative $\gamma$ such that:
- $\gamma=-1/2(\mu + \phi)$, where $\phi$ satisfies conditions (1) and (2) of Proposition \[BbbT\],
- $\delta =\mu +\gamma\in \Phi$, and
- $\delta$ satisfies conditions (4) and (5).
Put $V_C=\bigoplus_{\g\in I_\mu}{\mathbf g}_\g$. Notice that $V_C\subset T_x(G/B)$. Proposition \[BbbT\] thus gives the following:
Assuming the previous hypotheses, we have $$\tau_C(X,x)\subset {\mathbb T}_x(X)+V_C\subset \Theta_x(X).$$
The Simply Laced Case {#SL}
=====================
The crucial point in the proof of Peterson’s $ADE$ Theorem is the fact that every $T$-stable line in the span of the tangent cone of $X$ comes from a $T$-curve in $X$. It turns out that this is true for any closed $T$-variety $X\subset G/B$ as long as $G$ is simply laced. We now prove this fact.
\[TC=TL\] Suppose $G$ has no $G_2$-factors. Let $L
\subset \Theta_x(X)$ be a $T$-stable line with weight $\omega$. Then $$\omega = \frac{1}{2}({{\alpha}}+ {{\beta}})$$ where ${{\alpha}}$ and ${{\beta}}$ are the weights of suitable $T$-curves $C$ and $D$, respectively. If, moreover, $G$ is simply laced, then ${{\alpha}}= {{\beta}}= \omega$, hence $L$ is the tangent line of a $T$-curve $C \in E(X,x)$.
We will prove the following equivalent “dual” statement: if $\omega$ is the weight of a function corresponding to a $T$-stable line $L \subset
\Theta_x(X)$, then there are ${{\alpha}}$ and ${{\beta}}$ with $\omega = 1/2({{\alpha}}+ {{\beta}})$, where ${{\alpha}}$ and ${{\beta}}$ are the weights of functions corresponding to $T$-curves $C$ and $D$, respectively.
Let $z \in k[X_x]$ be the $T$-eigenfunction corresponding to $L$, and let $$x_1,x_2,\dots,x_n \in k[X_x]$$ be those corresponding to the $T$-curves $C_1,C_2, \dots, C_n$ through $x$. Consider the unique linear projections $$\tilde x_i:T_x(X) \RA T_x(C_i),\quad
\tilde z:T_x(X) \RA L$$ which restrict respectively to $x_i, z \in k[X_x]$.
Since the (restriction of the) projection $X_x \RA \bigoplus_i
T_x(C_i)$ has a finite fibre over $0$, $k[X_x]$ is a finite $k[x_1,x_2, \dots , x_n]$-module. In particular $z \in k[X_x]$ is integral over $k[x_1,\dots,x_n]$. We obtain a relation $$\label{INTEQ}
z^ N = p_{N-1} z^{N-1} + p_{N-2}z^{N-2} + \dots + p_1 z
+ p_0$$ where $N$ is a suitable integer and the $p_i\in k[x_1,\dots,x_n]$. Without loss of generality we may assume that every summand on the right hand side is a $T$-eigenvector with weight $N \omega$. Let $P_i \in k[\tilde x_1, \dots, \tilde x_n]$ be polynomials restricting to $p_i$, having the same weight $(N-i)\omega$ as $p_i$. Then every monomial $m$ of $P_i$ has this weight too. If for all $i$ every such monomial $m$ has degree $\deg m > N - i$, then $p_i z^{N- i}$ is an element of ${\mathbf m}^{N + 1}_x$, where ${\mathbf m}_x$ is the maximal ideal of $x$ in $k[X_x]$. This means that $\tilde z$ vanishes on the tangent cone of $X_x$, so $L \not \subset \Theta_x(X)$, which is a contradiction.
Thus, there is an $i$ and a monomial $m$ of $P_i$, such that $\deg
m \leq M = N - i$. Let $m = c \tilde x_1^{d_1} \tilde x_2 ^{d_2}
\dots \tilde x_n^{d_n}$, with integers $d_j$ and a nonzero $c \in
k$. So $\sum_j d_j \leq N$. Let ${{\alpha}}_j$ be the weight of $\tilde
x_j$. Then we have $$M \omega = \sum d_j {{\alpha}}_j$$ After choosing a new index, if necessary, we may assume that $d_j
\not = 0$ for all $j$. Let $F$ be a nondegenerate bilinear form on $X(T)\otimes {\mathbb Q}$ which induces the length function on $\Phi$. We have to consider two cases. First suppose that $\omega$ is a long root, with length say $L$. Then $F({{\alpha}}_j, \omega) \leq L^2$ with equality if and only if ${{\alpha}}_j = \omega$. Thus, $M L^2 = \sum d_j
F({{\alpha}}_j,\omega) \leq M \max_{j} F({{\alpha}}_j, \omega) \leq M L^2$ and so there is a $j$ with ${{\alpha}}_j = \omega$ and we are done, since this implies $\tilde
z = \tilde x_j$. Note that this case actually covers the situation in which all roots have the same length, in particular the case that $G$ is simply laced.
Now suppose $\omega$ is short, having length $l$. In this case $F({{\alpha}}_j, \omega) \leq l^2$. Since $M l^2 = MF(\omega,\omega) = \sum_j
d_j
F({{\alpha}}_j, \omega)$ and since $\sum d_j \leq M$, it follows that all ${{\alpha}}_j$ satisfy $F({{\alpha}}_j, \omega) = l^2$. If there is a $j$ such that ${{\alpha}}_j = \omega$, then, as above, we are done. Otherwise for each $j$, ${{\alpha}}_j$ is long, and ${{\alpha}}_j$ and $\omega$ are contained in a copy $B(j)
\subset \Phi$ of $B_2$. There is a long root ${{\beta}}_j \in B(j)$ with ${{\alpha}}_j + {{\beta}}_j = 2\omega$. We have to show that there are $j_0$ and $j_1$ so that ${{\beta}}_{j_0} = {{\alpha}}_{j_1}$. Fix $j_0 = 1$ and let ${{\alpha}}=
{{\alpha}}_1$, ${{\beta}}= {{\beta}}_1$. Then $F({{\alpha}},{{\beta}}) = 0$. This gives us the result: $M l^2 = MF(\omega, {{\beta}}) = 0 + \sum_{j>1}
F({{\alpha}}_j, {{\beta}})$. Now if all $F({{\alpha}}_j,{{\beta}})$ are less or equal $l^2$, this last equation cannot hold, since $\sum_{j>1}d_j
< M$. We conclude that there is a $j_1$ so that $F({{\alpha}}_{j_1}, {{\beta}}) =
L^2$, hence ${{\alpha}}_{j_1} = {{\beta}}$, and we are through. The statement for $G$ simply laced follows from this, since in this case all roots have the same length, hence are long.
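The key step in the short-weight case, producing an orthogonal long root ${{\beta}}$ with ${{\alpha}}+ {{\beta}}= 2\omega$, can be illustrated concretely. The planar realization of $B_2$ below is again a standard choice made only for illustration.

```python
# Short-weight case of Theorem [TC=TL], illustrated in a standard
# planar realization of B_2 (short roots (+-1, 0), (0, +-1);
# long roots (+-1, +-1)).  For a short weight omega and a long root a
# with F(a, omega) = l^2 = |omega|^2, the root b = 2*omega - a is
# again long, orthogonal to a, and a + b = 2*omega.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

omega = (1, 0)                                  # short, l^2 = 1
long_roots = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

hits = 0
for a in long_roots:
    if dot(a, omega) == dot(omega, omega):      # F(a, omega) = l^2
        b = (2 * omega[0] - a[0], 2 * omega[1] - a[1])
        assert b in long_roots                  # b is again a long root
        assert dot(a, b) == 0                   # a and b are orthogonal
        assert (a[0] + b[0], a[1] + b[1]) == (2 * omega[0], 2 * omega[1])
        hits += 1

assert hits == 2                                # exactly (1,1) and (1,-1) qualify
print("B_2 string check passed")
```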
For completeness, we state the following corollary.
Suppose $G$ is simply laced and that $X$ is a $T$-variety in $G/B$ such that $\dim X\ge 2$. Then $X$ is smooth at $x\in X^T$ if and only if $|E(X,x)|=\dim X$ and there are at least two good $T$-curves at $x$.
Theorem \[TC=TL\] implies in particular that the linear spans of the tangent cones of two $T$-varieties behave nicely under intersections, and this allows us to deduce a somewhat surprising fact about the intersection of the tangent spaces of two $T$-varieties at a common nonsingular point.
\[INT\] Suppose $G$ is simply laced and that $X$ and $Y$ are $T$-varieties in $G/B$. Suppose also that $x\in
X^T\cap Y^T$. Then $$\Theta_x(X\cap Y)=\Theta_x(X)\cap \Theta_x(Y).$$ Furthermore, if both $X$ and $Y$ are nonsingular at $x$, then $$T_x(X\cap Y)=T_x(X)\cap T_x(Y).$$ In particular, if $|E(X\cap Y,x)|=\dim X\cap Y$, then $X\cap Y$ is nonsingular at $x$.
The first assertion is a consequence of Theorem \[TC=TL\] and the fact that $E(X,x)\cap
E(Y,x)=E(X\cap Y, x)$. For the second, use the fact that if a variety $Z$ is nonsingular at $z$, then $T_z(Z)=\Theta_z(Z)$. Thus $$\begin{aligned}
T_x(X)\cap T_x(Y)&=&\Theta_x(X)\cap \Theta_x(Y)\\
&=&\Theta_x(X\cap Y)\\
&\subset & T_x(X\cap Y)\\
&\subset & T_x(X)\cap T_x(Y)\end{aligned}$$ The final assertion follows from the fact that if $X$ and $Y$ are both nonsingular at $x$, then $T_x(X)\cap T_x(Y)=TE(X\cap Y,x)$.
For example, it follows that in the simply laced setting, the intersection of a Schubert variety $X(w)$ and a dual Schubert variety $Y(v)=\overline{B^-v}$ is nonsingular at each $x$ with $v\leq
x \leq
w$ as long as each of the constituents is nonsingular at $x$.
The previous corollary was stated in [@cp] (cf. Theorem H and Corollary H) for so called shifted Schubert varieties, that is, any subvariety of $G/B$ of the form $X(y,w)=yX(w)$, where $y,w\in W$. One can in fact say a little more for shifted Schubert varieties in type $A$, because of a result of Lakshmibai and Seshadri [@ls] saying that if $G$ is of type $A$, then $\Theta_x(X)=T_x(X)$ for every shifted Schubert variety $X$. Namely, $$T_x(X)\cap T_x(Y)=T_x(X\cap Y)$$ for any two shifted Schubert varieties $X$ and $Y$ meeting at $x\in W$.
Singular Loci of Schubert Varieties {#SLSV}
===================================
In this section we give an algorithm for computing the singular locus $X^{\times}$ of a Schubert variety $X=X(w)$, assuming $G$ has no $G_2$ factors. Obviously $X^{\times}$ is a union of Schubert varieties, so we only have to compute the maximal elements of $X^{\times T}$. Schubert varieties $X=X(w)$ being Cohen-Macaulay, we know $x<w$ is a nonsingular point as long as $E(X,x)$ contains a short good $T$-curve and $|E(X,x)|=\dim X$. Hence maximal elements $x$ of $X^{\times}$ either have the property that $|E(X,x)|>\dim X$ or every good $T$-curve at $x$ is long.
On the other hand, we can use Proposition \[PMG/B\], which gives a criterion for deciding when $\tau_C(X,x)=\tau_D(X,x)$ that does not depend on knowing whether either or both of $C,D\in E(X,x)$ are good. Suppose $C=\overline{U_{{\alpha}}x}$ and $D=\overline{U_{{\beta}}z}$ where ${{\alpha}},{{\beta}}>0$. Let $y=r_{{\alpha}}x$ and $z=r_{{\beta}}x$, so $y,z\in [x,w]$. By Proposition \[PMG/B\], $\tau_C(X,x)=\tau_D(X,x)$ if and only if $r_{{\alpha}}\Omega (T_{y}(X)^*)=r_{{\beta}}\Omega (T_{z}(X)^*)$, or, equivalently, $\Omega (T_{y}(X)^*)=r_{{\alpha}}r_{{\beta}}\Omega (T_{z}(X)^*)$, where $T_{y}(X)^*$ is the ${\mathbf
g}_{-{{\alpha}}}$-submodule of $T_y(G/B)$ defined in §\[G/B\]. Note that we are working in $T_y(G/B)$ instead of $T_x(X)$. Now ${\mathrm d}\dot{r_{{\alpha}}}{\mathrm d}\dot{r_{{\beta}}}(T_{z}(X)^*)$ is a ${\mathbf
g}_{r_{{\alpha}}({{\beta}})}$-module, and consequently, this implies that $T_{y}(X)^*$ is a module for the subalgebra of ${\mathbf g}$ generated by ${\mathbf g}_{-{{\alpha}}}$ and ${\mathbf g}_{r_{{\alpha}}({{\beta}})}$.
\[C=D\] Assuming the notation is as above, $\tau_C(X,x)=\tau_D(X,x)$ if and only if $T_y(X)^*$ is a ${\mathbf
g}_{r_{{\alpha}}({{\beta}})}$-submodule of $T_y(G/B)$ whose leading weights are the $r_{{{\alpha}}}r_{{{\beta}}}(\gamma)$, where $\gamma$ runs through the set of leading weights for the ${\mathbf g}_{-{{\beta}}}$-module $T_z(X)^*$. Moreover, if $C_1, C_2, \dots C_k\in E(X,x)$ where each $C_i^T=
\{x,y_i\}$ with $y_i=t_ix>x$ and if all $\tau_{C_i}(X,x)$ coincide, then every $T_{y_i}(X)^*$ is a module for the subalgebra ${\mathbf m}_i$ of ${\mathbf g}$ generated by $${\mathbf g}_{t_i({{\alpha}}_1)}\oplus {\mathbf g}_{t_i({{\alpha}}_2)}\oplus \cdots
\oplus
{\mathbf g}_{t_i({{\alpha}}_{k})}.$$
The last assertion is a consequence of the Jacobi identity. Note that in this result, there is no assumption that the $T$-curves be good.
The algorithm for determining $X^{\times}$ is now clear. Suppose one knows that $y\in X^{*T}$, and assume $x=r_{{\alpha}}y<y$ where ${{\alpha}}>0$. Clearly if
$${\mathrm d}\dot{r_{{\alpha}}}(\Omega (T_y(X)^*))\neq \{\g \mid
x^{-1}(\g)<0,r_\g x\leq
w\},$$ then $x\in X^{\times}$. If equality holds, then it suffices to apply Proposition \[C=D\] to any good $D\in E(X,x)$. Thus the algorithm requires checking whether $X$ is nonsingular at any $z\in X^{*T}$ with $z>x, z\neq y$ and $sz=x$ for some $s\in R$.
Generalizations to $G/P$ {#G/P}
========================
As usual, assume $G$ is semi-simple and has no $G_2$ factors, and suppose $P$ is a parabolic subgroup of $G$ containing $B$. In this section, we will indicate which results extend to $T$-varieties in $G/P$. Let $\pi:G/B\RA G/P$ be the natural projection. The extensions to $G/P$ are based on the following lemma.
\[GmodP\] Let $Y\subset G/P$ be closed and $T$-stable, and put $X=\pi^{-1}(Y)$. Then:
- the projection $\pi:X\RA Y$ is a smooth morphism, hence $X^*=\pi^{-1}(Y^*)$;
- for all $x\in X^T$, ${\rm d}\pi_x :T_x(X)\RA T_y(Y)$ is surjective and $${\rm d}\pi_x(\Theta_x(X))=\Theta_y(Y),$$ where $y=\pi(x)$;
- $\pi(E(X,x))=E(Y,y)$; and
- if $C\in E(X,x)$ is good and $\pi(C)$ is a curve, then $\pi(C)\in E(Y,y)$ is good and $${\rm d}\pi_x(\tau_C(X,x))=\tau_{\pi(C)}(Y,y).$$
The first statement (1) is standard. Moreover, ${\rm d}\pi_x$ is surjective for all $x\in X$ and ${\rm d}\pi_x$ maps the schematic tangent cone of $X$ at $x$ onto that of $Y$ at $y$. Consequently, it is also a surjection of the associated reduced varieties. Since ${\rm d}\pi_x$ is linear, (2) is established. (3) is an immediate consequence of the fact that $\pi(E(G/B,x))=E(G/P,y)$. Part (4) follows from the existence of a local $T$-equivariant section of $\pi$, the smoothness of $\pi$ and Lemma \[FSL\].
\[NN\] Assume $G$ has no $G_2$ factors, and suppose $Y$ is any $T$-variety in $G/P$. If $\dim Y \ge 2$, then $Y$ is smooth at the $T$-fixed point $y$ if and only if $E(Y,y)$ contains two good $T$-curves and the reduced tangent cone to $Y$ at $y$ is linear.
Apply the previous lemma and Theorem \[THETAMIN\] to $X=\pi^{-1}(Y)$ at any $x\in \pi^{-1}(y)^T$, which, by the Borel Fixed Point Theorem, is non-empty since $y\in Y^T$.
If $G$ is simply laced, there is more.
Assume $G$ is simply laced. Then for any $T$-variety $Y$ in $G/P$ and $y\in Y^T$, $$\Theta_y(Y)=TE(Y,y).$$ In particular, if $\dim Y\ge 2$, then $Y$ is smooth at $y$ if and only if $|E(Y,y)|=\dim Y$ and $y$ lies on at least two good $T$-curves.
We also have
\[RSG/P\] If $G$ is simply laced, then every rationally smooth $T$-fixed point of a Schubert variety $Y$ in $G/P$ is nonsingular.
Let $y\in Y^T$ be a rationally smooth $T$-fixed point of $Y$. Using the relative order, we may without loss of generality assume that if $z\in Y^T$ and $z>y$, then $Y$ is nonsingular at $z$. By the relative version of Deodhar’s Inequality and the fact that the singular locus of $Y$ has codimension at least two (as $Y$ is normal), there are at least two good $T$-curves in $E(Y,y)$. Since $|E(Y,y)|=\dim Y$ ([@bri]), the proof is done.
Finally, we state a $G/P$ analog of Corollary \[RSGmodP\].
If $G$ is simply laced, a Schubert variety in $G/P$ is nonsingular if and only if the Poincaré polynomial of $Y$ is symmetric if and only if $|E(Y,y)|=\dim Y$ for every $y\in Y^T$.
A Remark and Two Problems {#QAR}
==========================
Although we have not yet given an explicit example, it is definitely not true that in the simply laced setting, every rationally smooth $T$-variety in $G/B$ is nonsingular. In fact, there are $T$-orbit closures in types $D_n$ if $n>4$ and in $E_6, E_7, E_8$ which are rationally smooth but non-normal, hence singular. For more information, see [@car-kur; @mo]. A final comment is that one of the most basic open problems about Schubert varieties in our context is to describe the $T$-lines in the linear span of the tangent cone at a $T$-fixed point in the non-simply laced setting. Once this is settled, we will have a complete picture of the singular loci of all Schubert varieties. Another unsolved problem is to identify all the $T$-lines in the tangent space at a $T$-fixed point. There are results in this direction in papers of Lakshmibai and Seshadri [@ls], Lakshmibai [@la] and Polo [@po]. The natural conjecture that these tangent spaces are spanned by Peterson translates, in light of Theorem 1.3, seems to be true only for type $A$.
[l]{} James B. Carrell\
Department of Mathematics\
University of British Columbia\
Vancouver, Canada V6T 1Z2\
E-mail. [email protected][a]{}\
\
Jochen Kuttler\
Department of Mathematics\
University of California at San Diego\
LaJolla, CA 92093\
and\
Mathematisches Institut\
Universität Basel\
Rheinsprung 21\
CH-4051 Basel\
Switzerland\
E-mail. [email protected]\
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'Combining the recent scanning tunneling microscopy (STM) and angle-resolved photoemission spectroscopy (ARPES) measurements, we construct a tight-binding model suitable for describing the band structure of monolayer FeSe grown on SrTiO$_{3}$. Then we propose a possible pairing function, which can well describe the gap anisotropy observed by ARPES and has a hidden sign-changing characteristic. At last, as a test of this pairing function we further study the nonmagnetic impurity-induced bound states, to be verified by future STM experiments.'
author:
- 'Yi Gao,$^{1}$ Yan Yu,$^{1}$ Tao Zhou,$^{2}$ Huaixiang Huang,$^{3}$ and Qiang-Hua Wang $^{4,5}$'
title: 'Hidden sign-changing $s$-wave superconductivity in monolayer FeSe'
---
=1
Recently, one monolayer (1ML) FeSe thin film grown on SrTiO$_{3}$ has attracted much attention due to its intriguing interfacial properties and high superconducting (SC) transition temperature T$_{c}$ [@xueqk1; @zhouxj1; @zhouxj2; @fengdl1; @fengdl2; @fengdl3; @xueqk2; @shenzx1; @jiajf; @chenxh; @hoffman]. On the one hand, its properties are drastically different from bulk FeSe. For example, compared to bulk FeSe whose T$_{c}$ is below 10K [@wumk], T$_{c}$ in 1ML FeSe can exceed 50K. Meanwhile, the SC gap in 1ML FeSe can be as large as 10-20meV, in contrast to its value in bulk FeSe (below 3.5meV) [@matsuda]. Furthermore, the SC gap structure in bulk FeSe shows a nodal behavior [@xueqk3] whereas a full gap opens in 1ML FeSe.
On the other hand, 1ML FeSe is quite different from most other iron-based superconductors as well. Detailed investigation of the electronic structure by angle-resolved photoemission spectroscopy (ARPES) shows that, in 1ML FeSe, there are only electron pockets around the $M$ points of the 2Fe/cell Brillouin zone (BZ), while the usual hole pockets around $\Gamma$ in most iron-based superconductors sink and are located at about 80meV below the Fermi energy, leaving no pockets around $\Gamma$ [@fengdl1; @fengdl2; @shenzx1; @zhouxj1; @zhouxj2]. In this case, 1ML FeSe is about $10\%$ electron doped and the Fermi momentum $k_{F}/\pi\approx0.25$ [@zhouxj1].
Up to now, various theories have been constructed to account for the SC mechanism and pairing symmetry in 1ML FeSe. In most iron-based superconductors with hole pockets around $\Gamma$, it has been widely accepted that the pairing symmetry is $s_{\pm}$-wave. The pairing order parameter $\Delta_{\mathbf{k}}$ changes sign between the $\Gamma$ hole pockets and the $M$ electron pockets and can be qualitatively described as $\Delta_{\mathbf{k}}\sim \cos k_{x}+\cos k_{y}$ (or $\cos k_{x}\cos k_{y}$ defined in the 1Fe/cell BZ), corresponding to the next-nearest-neighbor (NNN) pairing between the Fe atoms. However for 1ML FeSe, the situation is completely different. At the beginning, it was suggested that, if the SC mechanism is due to the spin fluctuation (resulting from the electron-electron correlation), then since there are no hole pockets, the pairing symmetry should be nodeless $d$-wave [@maier; @leedh]. Later it was found that, with the electron-phonon interaction between 1ML FeSe and the SrTiO$_{3}$ substrate, the pairing symmetry may change from $d$-wave to $s$-wave and this $s$-wave symmetry can be thought of as the usual $s_{\pm}$ symmetry restricted to the exposed electron pockets [@wangqh].
Experimentally, by measuring the quasiparticle interference in the presence and absence of a magnetic field, Ref. [@fengdl3] rules out any sign change of the pairing order parameter on the Fermi surfaces and excludes the $d$-wave pairing symmetry. The latest high-resolution ARPES found that there are two electron pockets $\delta_{1}$ and $\delta_{2}$ around $M$. The SC gap on the outer pocket $\delta_{2}$ is slightly larger than that on the inner one $\delta_{1}$. The gap is anisotropic with its maxima located along the $\Gamma-M$ line and minima located along the $X-M$ line [@shenzx2]. The authors show that the usual $s_{\pm}$ symmetry (even if restricted to the exposed electron pockets) is not consistent with their data since the gap minima and maxima are located at the wrong positions. They further inferred that mixing of different gap functions with the same symmetry may explain the observed gap anisotropy.
Combining the above mentioned scanning tunneling microscopy (STM) and ARPES measurements [@fengdl3; @shenzx2], in this work, first we construct a tight-binding model suitable for describing the band structure of 1ML FeSe, then we propose a possible pairing function, which can well describe the gap anisotropy observed by ARPES. At last, based on this pairing function we further study the nonmagnetic impurity-induced in-gap bound states as a test of our pairing function, which can be verified by future STM experiments.
In most iron-based superconductors, the Fe atoms form a two-dimensional lattice, with the Se/As atoms located below and above the Fe plane at exactly the same distance. Therefore the glide mirror symmetry \[$z\leftrightarrow-z$ reflection with respect to the Fe plane followed by a translation to a nearest-neighbor (NN) Fe\] is present; in this case people can take only the Fe atoms into account and work in the 1Fe/cell BZ. Then by using a folding scheme to fold the band into the 2Fe/cell BZ, people can compare the calculated band structure to that observed by ARPES. However for 1ML FeSe grown on SrTiO$_{3}$, the glide mirror symmetry is explicitly broken since the up and down Se atoms reside in completely different environments and their distances to the Fe plane may differ. Thus the folding scheme does not work anymore and any tight-binding model must take this into account and must be built in the 2Fe/cell BZ in the first place. Besides, we are mostly interested in the STM and ARPES experiments, which mainly probe the surface properties of materials, where the glide mirror symmetry is also broken. Therefore we follow the idea proposed in Ref. [@zhangdg] and build a phenomenological model in the 2Fe/cell BZ to fit the basic characteristics of the ARPES-measured band structure. Previously this idea has been successfully applied to explain the vortex states observed in STM experiments [@gaoy; @wenhh].
In the two-dimensional Fe lattice, owing to the breaking of the glide mirror symmetry, each unit cell contains two inequivalent sublattices $A$ and $B$. The coordinate of the sublattice $A$ in the unit cell $(i,j)$ is $\mathbf{R}_{ij}=(i,j)$ while that for the sublattice $B$ is $\mathbf{R}_{ij}+\mathbf{d}$, with $\mathbf{d}$ being $(0.5,0.5)$. The Hamiltonian can be written as $$\begin{aligned}
\label{h}
H&=&\sum_{\mathbf{k}}\psi_{\mathbf{k}}^{\dag}A_{\mathbf{k}}\psi_{\mathbf{k}},\nonumber\\
\psi_{\mathbf{k}}^{\dag}&=&(c_{\mathbf{k}A1\uparrow}^{\dag},c_{\mathbf{k}A2\uparrow}^{\dag},c_{\mathbf{k}B1\uparrow}^{\dag},c_{\mathbf{k}B2\uparrow}^{\dag},\nonumber\\
&&c_{-\mathbf{k}A1\downarrow},c_{-\mathbf{k}A2\downarrow},c_{-\mathbf{k}B1\downarrow},c_{-\mathbf{k}B2\downarrow}),\nonumber\\
A_{\mathbf{k}}&=&\begin{pmatrix}
M_{\mathbf{k}}&\Delta_{\mathbf{k}}\\\Delta_{\mathbf{k}}^{\dag}&-M_{-\mathbf{k}}^{T}
\end{pmatrix},\nonumber\\
M_{\mathbf{k}}&=&\begin{pmatrix}
\epsilon_{A,\mathbf{k}}&\epsilon_{xy,\mathbf{k}}&\epsilon_{T,\mathbf{k}}&0\\
\epsilon_{xy,\mathbf{k}}&\epsilon_{A,\mathbf{k}}&0&\epsilon_{T,\mathbf{k}}\\
\epsilon_{T,\mathbf{k}}&0&\epsilon_{B,\mathbf{k}}&\epsilon_{xy,\mathbf{k}}\\
0&\epsilon_{T,\mathbf{k}}&\epsilon_{xy,\mathbf{k}}&\epsilon_{B,\mathbf{k}}
\end{pmatrix},\end{aligned}$$ where $c_{\mathbf{k}A1\uparrow}^{\dag}/c_{\mathbf{k}A2\uparrow}^{\dag}$ creates a spin up electron with momentum $\mathbf{k}$ on the $d_{xz}/d_{yz}$ orbital of the sublattice $A$. $\epsilon_{A,\mathbf{k}}=-2(t_{2}\cos k_{x}+t_{3}\cos k_{y})-\mu$, $\epsilon_{B,\mathbf{k}}=-2(t_{2}\cos k_{y}+t_{3}\cos k_{x})-\mu$, $\epsilon_{xy,\mathbf{k}}=-2t_{4}(\cos k_{x}+\cos k_{y})$ and $\epsilon_{T,\mathbf{k}}=-4t_{1}\cos(k_{x}/2)\cos(k_{y}/2)$. The breaking of the glide mirror symmetry is manifested as $t_{2}\neq t_{3}$ since they are the NNN hoppings mediated by the up and down Se. In addition, $M_{\mathbf{k}}$ and $\Delta_{\mathbf{k}}$ are the tight-binding and pairing parts of the system, respectively. Throughout this work, the momentum $\mathbf{k}$ is defined in the 2Fe/cell BZ. In the following we set $t_{1-4}=1.6,0.4,-2,0.04$ and $\mu=-1.9$ to fit the band structure measured by ARPES. Under this set of parameters, the average electron occupation number is $n\approx2.1$, leading the system to be about $10\%$ electron doped. The calculated band structure and Fermi surfaces are shown in Fig. \[band\]. As we can see, the $\Gamma$ hole pockets sink below the Fermi energy while two electron pockets $\delta_{1}$ and $\delta_{2}$ exist around $M$ with their sizes similar to the ARPES data ($k_{F}/\pi\approx0.25$). Therefore, both the electron number and the Fermi surface topology are consistent with the ARPES measurements [@zhouxj1; @shenzx2].
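As a sanity check on the parameter set quoted above, the normal-state block $M_{\mathbf{k}}$ can be diagonalized directly. The short NumPy sketch below is not part of the original paper; it simply re-implements $M_{\mathbf{k}}$ (with $\epsilon_{xy,\mathbf{k}}$ read as $-2t_{4}(\cos k_{x}+\cos k_{y})$, an assumption on our part) and confirms that the bands at $M$ lie below the Fermi energy while crossings occur away from $M$, i.e. electron pockets form around $M$:

```python
import numpy as np

# Tight-binding parameters as quoted in the text
t1, t2, t3, t4, mu = 1.6, 0.4, -2.0, 0.04, -1.9

def M_k(kx, ky):
    """Normal-state 4x4 tight-binding block of the Hamiltonian in the 2Fe/cell BZ."""
    eA = -2.0 * (t2 * np.cos(kx) + t3 * np.cos(ky)) - mu
    eB = -2.0 * (t2 * np.cos(ky) + t3 * np.cos(kx)) - mu
    exy = -2.0 * t4 * (np.cos(kx) + np.cos(ky))   # assumed symmetric form
    eT = -4.0 * t1 * np.cos(kx / 2.0) * np.cos(ky / 2.0)
    return np.array([[eA, exy, eT, 0.0],
                     [exy, eA, 0.0, eT],
                     [eT, 0.0, eB, exy],
                     [0.0, eT, exy, eB]])

def bands(kx, ky):
    """Band energies (relative to E_F) at a given momentum."""
    return np.linalg.eigvalsh(M_k(kx, ky))

E_Gamma = bands(0.0, 0.0)    # highest occupied bands at Gamma sit below E_F
E_M = bands(np.pi, np.pi)    # local band bottom at M below E_F
```

Moving from $M$ towards $\Gamma$ or $X$, the lowest bands cross $E_F$, so the Fermi surface consists of electron pockets centred at $M$, in line with the discussion above.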
![\[band\] (color online) (a) Calculated band structure along the high-symmetry directions in the 2Fe/cell BZ. The energy is defined with respect to the Fermi energy (the black dotted line). (b) The Fermi surfaces in the first quadrant of the 2Fe/cell BZ.](fig1.pdf){width="1\linewidth"}
Then we come to the pairing function $\Delta_{\mathbf{k}}$. From Ref. [@fengdl3] we know that $\Delta_{\mathbf{k}}$ should have generally an $s$-wave symmetry. Ref. [@shenzx2] shows that the SC gap is larger on $\delta_{2}$ than it is on $\delta_{1}$ while on both $\delta_{1}$ and $\delta_{2}$, the gap maxima are located along the $\Gamma-M$ line \[at $\theta=\pi/4$ where $\theta$ is defined in Fig. \[band\](b)\] and the minima are located along the $X-M$ line. Combining all these above, we propose that $\Delta_{\mathbf{k}}$ can be written as $$\begin{aligned}
\label{dk}
\Delta_{\mathbf{k}}&=&\begin{pmatrix}
\Delta_{0}&0&\Delta_{1\mathbf{k}}&0\\
0&\Delta_{0}&0&\Delta_{1\mathbf{k}}\\
\Delta_{1\mathbf{k}}&0&\Delta_{0}&0\\
0&\Delta_{1\mathbf{k}}&0&\Delta_{0}
\end{pmatrix},\end{aligned}$$ where $\Delta_{0}=-0.1$ and $\Delta_{1\mathbf{k}}=0.5\cos(k_{x}/2)\cos(k_{y}/2)$. Here $\Delta_{0}$ is momentum-independent and originates from the on-site intraorbital pairing, with the pairing symmetry being conventional $s$-wave. On the other hand, $\Delta_{1\mathbf{k}}$ is due to the NN intraorbital pairing (inter-sublattice) and its symmetry is also $s$-wave. Generally speaking, if the pairing mechanism is due to the spin fluctuation, then since the $\Gamma$ hole pockets are absent, the electrons can only be scattered between the electron pockets and the low-energy effective interaction can well be described by a $J_{1}-J_{2}$ model with the NN interaction $J_{1}$ being dominant. In this case, $\Delta_{1\mathbf{k}}$ should have the form factor $\sin(k_{x}/2)\sin(k_{y}/2)$ and the symmetry should be $d$-wave [@leedh]. However, there are both experimental and theoretical evidences suggesting that the electron-phonon interaction between 1ML FeSe and the SrTiO$_{3}$ substrate plays a vital role in boosting the SC gap magnitude and T$_{c}$ [@shenzx1; @leedh2]. The electron-phonon interaction may produce an effective on-site pairing interaction and this interaction may suppress the $\sin(k_{x}/2)\sin(k_{y}/2)$ component in $\Delta_{1\mathbf{k}}$ and finally change $\Delta_{1\mathbf{k}}$ into $\cos(k_{x}/2)\cos(k_{y}/2)$. Meanwhile it results in the on-site pairing $\Delta_{0}$. The transition of the NN pairing symmetry induced by the onsite pairing has also been found in other systems [@gaoy2]. In Fig. \[gap&dos\](a) we plot the magnitude of the SC gap on $\delta_{1}$ and $\delta_{2}$. We can see that the pocket $\delta_{2}$ has a slightly larger gap magnitude than $\delta_{1}$ while on both $\delta_{1}$ and $\delta_{2}$, the gap maxima are at $\theta=\pi/4$ and the minima are located along the $X-M$ line. 
Furthermore, the gap minima on these two pockets are equal to each other since along $X-M$, we have $\Delta_{1\mathbf{k}}=0$ ($k_{x}=\pi$ or $k_{y}=\pi$). All the characteristics of the magnitude and distribution of the SC gaps agree quite well with the ARPES measurement [@shenzx2]. In addition, this pairing symmetry is different from most iron-based superconductors since their $\Delta_{1\mathbf{k}}\sim(\cos k_{x}+\cos k_{y})$, which results from the NNN pairing (intra-sublattice). However the gap distribution of the NNN pairing is not consistent with the ARPES data (see Fig. 4 of Ref. [@shenzx2] and Fig. 4 of Ref. [@wangqh2]).
A closer inspection of Eq. (\[dk\]) shows that the phase difference between $\Delta_{0}$ and $\Delta_{1\mathbf{k}}$ is $\pi$, that is, if we change $\Delta_{0}$ into $0.1$, then the gap distribution along $\delta_{1}$ and $\delta_{2}$ will not match the ARPES data. The $\pi$ phase difference leads to the following consequences. Since $\delta_{1}$ and $\delta_{2}$ are close to $M$ where $(k_{x},k_{y})=(\pi,\pi)$, on these two pockets $|\Delta_{1\mathbf{k}}|$ is tiny and $|\Delta_{1\mathbf{k}}|\ll|\Delta_{0}|$, therefore on $\delta_{1}$ and $\delta_{2}$, the sign of the SC gap follows that of $\Delta_{0}$ ($-$). However on the bands close to $\Gamma$ where $(k_{x},k_{y})=(0,0)$, we have $|\Delta_{1\mathbf{k}}|\gg|\Delta_{0}|$ and the sign of the SC gap on those bands follows the sign of $\Delta_{1\mathbf{k}}$ there ($+$). Since the bands close to $\Gamma$ are below the Fermi energy, we denote this pairing symmetry as a hidden sign-changing $s$-wave symmetry $s^{*}$. This pairing symmetry is fully gapped and its density of states (DOS) can be found in Fig. \[gap&dos\](b), which shows a $U$-shaped profile near $\omega=0$.
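The dominance argument above is easy to check numerically. In the sketch below (illustrative only; the point $(3\pi/4,3\pi/4)$ is merely a representative momentum near the $M$-centred pockets, consistent with $k_{F}/\pi\approx0.25$, not a value taken from the paper), $\Delta_{1\mathbf{k}}$ is indeed small compared with $|\Delta_{0}|$ near $M$ but dominates at $\Gamma$, and the two channels carry opposite signs — the $\pi$ phase difference:

```python
import numpy as np

Delta0 = -0.1                                   # on-site component, sign (-)
def Delta1(kx, ky):                             # NN component, sign (+) inside the BZ
    return 0.5 * np.cos(kx / 2.0) * np.cos(ky / 2.0)

near_M = Delta1(0.75 * np.pi, 0.75 * np.pi)     # on the electron pockets: tiny
at_Gamma = Delta1(0.0, 0.0)                     # near Gamma: dominant
```

So the gap sign on the Fermi pockets is set by $\Delta_{0}$, while on the sunken bands near $\Gamma$ it is set by $\Delta_{1\mathbf{k}}$ — the "hidden" sign change.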
![\[gap&dos\] (color online) (a) The magnitude of the SC gap along $\delta_{1}$ (red) and $\delta_{2}$ (black). (b) The DOS in the normal state (black solid), the $s^{*}$-wave pairing state (red dashed) and the conventional $s$-wave pairing state (blue dotted).](fig2.pdf){width="1\linewidth"}
In the following, we study the in-gap bound states around a nonmagnetic impurity, in order to distinguish the $s^{*}$-wave symmetry from the conventional $s$-wave symmetry where there is no sign change. When a single nonmagnetic impurity (behaving as a potential scatterer) is placed at the sublattice $A$ in the unit cell $(0,0)$, the impurity Hamiltonian can be written as $$\begin{aligned}
H_{imp}&=&V_{s}\sum_{\alpha=1}^{2}\sum_{\sigma=\uparrow,\downarrow}c_{00A\alpha\sigma}^{\dag}c_{00A\alpha\sigma}\nonumber\\
&=&\frac{V_{s}}{N}\sum_{\alpha=1}^{2}\sum_{\sigma=\uparrow,\downarrow}\sum_{\mathbf{k},\mathbf{k}^{'}}c_{\mathbf{k}A\alpha\sigma}^{\dag}c_{\mathbf{k}^{'}A\alpha\sigma},\end{aligned}$$ where $N$ is the number of the unit cells and $V_{s}$ is the scattering strength of the nonmagnetic impurity. Following the standard $T$-matrix procedure [@zhujx], we define the Green’s function matrix as $g(\mathbf{k},\mathbf{k}^{'},\tau)=-\langle T_{\tau}\psi_{\mathbf{k}}(\tau)\psi_{\mathbf{k}^{'}}^{\dag}(0)\rangle$ and $$\begin{aligned}
\label{gw}
g^{r/a}(\mathbf{k},\mathbf{k}^{'},\omega)&=&\delta_{\mathbf{k}\mathbf{k}^{'}}g_{0}^{r/a}(\mathbf{k},\omega)\nonumber\\
&+&g_{0}^{r/a}(\mathbf{k},\omega)T^{r/a}(\omega)g_{0}^{r/a}(\mathbf{k}^{'},\omega).\end{aligned}$$ Here $r$ and $a$ refer to the retarded and advanced Green’s function, respectively and $$\begin{aligned}
\label{g0}
g_{0}^{r/a}(\mathbf{k},\omega)&=&[(\omega\pm i0^{+})I-A_{\mathbf{k}}]^{-1},\nonumber\\
T^{r/a}(\omega)&=&[I-\frac{U}{N}\sum_{\mathbf{q}}g_{0}^{r/a}(\mathbf{q},\omega)]^{-1}\frac{U}{N},\end{aligned}$$ where $I$ is an $8\times8$ unit matrix and the nonzero elements of the matrix $U$ are $U_{11}=U_{22}=-U_{55}=-U_{66}=V_{s}$. The experimentally measured local density of states (LDOS) at the sublattice $A$ is expressed as $$\begin{aligned}
\label{roura}
\rho_{A}(\mathbf{R}_{ij},\omega)&=&-\frac{1}{\pi}\sum_{\alpha=1}^{2}\sum_{\sigma=\uparrow,\downarrow}{\rm Im}\langle\langle c_{ijA\alpha\sigma}|c_{ijA\alpha\sigma}^{\dag}\rangle\rangle_{\omega+i0^{+}}\nonumber\\
&=&-\frac{1}{\pi N}\sum_{\alpha=1}^{2}\sum_{\mathbf{k},\mathbf{k}^{'}}{\rm Im}\Big{\{}[g_{\alpha\alpha}^{r}(\mathbf{k},\mathbf{k}^{'},\omega)\nonumber\\
&-&g_{\alpha+4\alpha+4}^{a}(\mathbf{k},\mathbf{k}^{'},-\omega)]e^{-i(\mathbf{k}-\mathbf{k}^{'})\cdot\mathbf{R}_{ij}}\Big{\}},\end{aligned}$$ and similar expressions can be derived for the sublattice $B$. In fully gapped superconductors, the poles of $T^{r/a}(\omega)$ below the SC gap determine the location of the impurity-induced in-gap bound states [@zhujx], which should show up when $p(\omega)=\det[I-\frac{U}{N}\sum_{\mathbf{q}}g_{0}^{r/a}(\mathbf{q},\omega)]=0$. In Fig. \[location1\](a), we plot $\omega_{0}$ as a function of $V_{s}$ where $p(\omega_{0})$ is the minimum of $p(\omega)$ when $\omega$ is between the two SC coherence peaks shown in Fig. \[gap&dos\](b). We found that $p(\omega_{0})=p(-\omega_{0})$, suggesting that the in-gap bound states, if they exist, will always appear in pairs and their locations will be symmetric with respect to $\omega=0$. So in Fig. \[location1\], we show only the result at $\omega_{0}\geq0$. From Fig. \[location1\](b) we can see that from $V_{s}=3$ to $7$, $p(\omega_{0})\approx0$, so in-gap bound states should show up. In Fig. \[bound1\] we take $V_{s}=5$ as an example. Indeed, on the impurity site, there are two impurity-induced in-gap states whose locations are exactly where $p(\omega)$ reaches its minima. The intensities of these two states far exceed those of the SC coherence peaks and they are located at about half of the SC gap. Similar behaviors exist for $V_{s}=3$ to $7$, with the locations of these in-gap states being away from the gap edge. Therefore these in-gap states should easily be observed in STM experiments.
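For readers who wish to reproduce the bound-state condition, the sketch below assembles the full $8\times8$ Bogoliubov–de Gennes matrix $A_{\mathbf{k}}$ and evaluates $p(\omega)=\det[I-\frac{U}{N}\sum_{\mathbf{q}}g_{0}(\mathbf{q},\omega)]$ on a deliberately coarse $\mathbf{q}$-grid with a finite broadening $\eta$. Both the grid size and $\eta$ are illustrative assumptions of ours (reproducing the figures in the text would require far denser sampling), and $\epsilon_{xy,\mathbf{k}}$ is again read as $-2t_{4}(\cos k_{x}+\cos k_{y})$:

```python
import numpy as np

t1, t2, t3, t4, mu = 1.6, 0.4, -2.0, 0.04, -1.9
D0, Vs, eta = -0.1, 5.0, 0.05          # eta: artificial broadening (assumed)

def M_k(kx, ky):
    eA = -2.0 * (t2 * np.cos(kx) + t3 * np.cos(ky)) - mu
    eB = -2.0 * (t2 * np.cos(ky) + t3 * np.cos(kx)) - mu
    exy = -2.0 * t4 * (np.cos(kx) + np.cos(ky))   # epsilon_xy, assumed symmetric form
    eT = -4.0 * t1 * np.cos(kx / 2.0) * np.cos(ky / 2.0)
    return np.array([[eA, exy, eT, 0.0], [exy, eA, 0.0, eT],
                     [eT, 0.0, eB, exy], [0.0, eT, exy, eB]], dtype=complex)

def Delta_k(kx, ky):
    # D0 on the diagonal, D1 on the sublattice-swap (orbital-diagonal) entries of Eq. (dk)
    D1 = 0.5 * np.cos(kx / 2.0) * np.cos(ky / 2.0)
    swap = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2))
    return D0 * np.eye(4, dtype=complex) + D1 * swap

def A_k(kx, ky):
    """Full BdG block [[M_k, Delta_k], [Delta_k^dag, -M_{-k}^T]]."""
    M, D = M_k(kx, ky), Delta_k(kx, ky)
    return np.block([[M, D], [D.conj().T, -M_k(-kx, -ky).T]])

U = np.zeros((8, 8))
U[0, 0] = U[1, 1] = Vs          # U_11 = U_22 = Vs
U[4, 4] = U[5, 5] = -Vs         # U_55 = U_66 = -Vs

def p(omega, n=24):
    """p(omega) = det[I - (U/N) sum_q g0(q, omega)]; zeros signal bound states."""
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    S = np.zeros((8, 8), dtype=complex)
    for kx in ks:
        for ky in ks:
            S += np.linalg.inv((omega + 1j * eta) * np.eye(8) - A_k(kx, ky))
    return np.linalg.det(np.eye(8) - U @ S / n**2)
```

Scanning `p(omega)` over the sub-gap window and locating its minima mirrors the procedure used for Fig. \[location1\].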
![\[location1\] For the $s^{*}$-wave pairing symmetry. (a) $\omega_{0}$ as a function of $V_{s}$. The gray dotted line denotes the location of the SC coherence peaks. (b) $p(\omega_{0})$ as a function of $V_{s}$. See the definition of $\omega_{0}$ and $p(\omega_{0})$ in the text.](fig3.pdf){width="1\linewidth"}
![\[bound1\] For the $s^{*}$-wave pairing symmetry. (a) $p(\omega)$ at $V_{s}=5$. (b) The LDOS at the impurity site. The gray dotted lines in both (a) and (b) denote the location of the SC coherence peaks.](fig4.pdf){width="1\linewidth"}
In contrast, if we take $\Delta_{1\mathbf{k}}=0$ and $\Delta_{0}=-0.15$ in Eq. (\[dk\]), then the pairing symmetry is conventional $s$-wave and the DOS is shown in Fig. \[gap&dos\](b), which is very similar to the $s^{*}$ pairing case, except for a higher intensity of the SC coherence peaks. However in this case, after repeating the above calculation we found that $\omega_{0}$ is always located at the gap edges, so there are no in-gap bound states. This is shown in Fig. \[bound2\], where we take $V_{s}=4$ as an example (for $V_{s}=1$ to $9$, the behavior is similar). We can see that although the intensity of the SC coherence peaks is greatly enhanced, there are no in-gap bound states, in sharp contrast to the $s^{*}$ pairing case.
![\[bound2\] The same as Fig. \[bound1\], but for the conventional $s$-wave pairing symmetry and at $V_{s}=4$.](fig5.pdf){width="1\linewidth"}
In summary, we construct a tight-binding model suitable for describing the band structure of 1ML FeSe in the absence of the glide mirror symmetry. Then we propose a possible pairing function that can well describe the gap anisotropy observed by ARPES and based on this pairing function we further study the nonmagnetic impurity-induced bound states. The pairing function we proposed has a hidden sign-changing characteristic and clear in-gap bound states can be induced by a nonmagnetic impurity, while in the conventional $s$-wave pairing case, no in-gap bound states exist. Therefore with the help of the STM experiments, the $s^{*}$-wave pairing can be clearly distinguished from the conventional $s$-wave pairing symmetry.
This work is supported by NSFC (Grant No. 11374005) and NSF of Shanghai (Grant No. 13ZR1415400). QHW is supported by NSFC (under grant No.11574134).
[99]{} Q. Y. Wang *et al.*, Chin. Phys. Lett. **29**, 037402 (2012).
D. F. Liu *et al.*, Nat. Commun. **3**, 931 (2012).
S. L. He *et al.*, Nat. Mater. **12**, 605 (2013).
S. Y. Tan *et al.*, Nat. Mater. **12**, 634 (2013).
W. H. Zhang *et al.*, Chin. Phys. Lett. **31**, 017401 (2014).
R. Peng *et al.*, Nat. Commun. **5**, 6044 (2014).
J. J. Lee *et al.*, Nature **515**, 245-248 (2014).
J. F. Ge *et al.*, Nat. Mater. **14**, 285 (2015).
Q. Fan *et al.*, Nat. Phys. **11**, 946 (2015).
D. Huang *et al.*, Phys. Rev. Lett. **115**, 017002 (2015).
B. Lei *et al.*, Phys. Rev. Lett. **116**, 077002 (2016).
F. C. Hsu *et al.*, Proc. Natl. Acad. Sci. U.S.A. **105** 14262 (2008).
S. Kasahara *et al.*, Proc. Natl. Acad. Sci. U.S.A. **111** 16309 (2014).
C. L. Song *et al.*, Science **332** 1410 (2011).
T. A. Maier *et al.*, Phys. Rev. B **83**, 100515 (2011).
F. Wang *et al.*, Europhys. Lett. **93**, 57003 (2011).
Y. Y. Xiang *et al.*, Phys. Rev. B **86**, 134508 (2012).
Y. Zhang *et al.*, arXiv:1512:06322.
D. Zhang, Phys. Rev. Lett. **103**, 186402 (2009).
L. Shan *et al.*, Nat. Phys. **7**, 325 (2011).
Y. Gao *et al.*, Phys. Rev. Lett. **106**, 027004 (2011).
Z. X. Li *et al.*, arXiv:1512.06179.
Y. Gao, arXiv:1304.2102.
Y. Y. Xiang *et al.*, Phys. Rev. B **88**, 104516 (2013).
A. V. Balatsky, I. Vekhter, and J. X. Zhu, Rev. Mod. Phys. **78**, 373 (2006).
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'This short document gives a contrived example of how to format the authors’ information for [*IJCAI–PRICAI–20 Proceedings*]{} using LaTeX.'
author:
- 'First Author$^1$[^1]'
- Second Author$^2$
- |
Third Author$^{2,3}$Fourth Author$^4$\
$^1$First Affiliation\
$^2$Second Affiliation\
$^3$Third Affiliation\
$^4$Fourth Affiliation\
{first, second}@example.com, [email protected], [email protected]
title: 'IJCAI–PRICAI–20 Example on typesetting multiple authors'
---
Introduction
============
This short document gives a contrived example of how to format the authors’ information for [*IJCAI–PRICAI–20 Proceedings*]{}.
Author names
============
Each author name must be followed by:
- A newline [\\\\]{} command for the last author.
- An [\\And]{} command for the second to last author.
- An [\\and]{} command for the other authors.
Affiliations
============
After all authors, start the affiliations section by using the [\\affiliations]{} command. Each affiliation must be terminated by a newline [\\\\]{} command. Make sure that you include the newline on the last affiliation too.
Mapping authors to affiliations
===============================
In some scenarios, the affiliation of each author is clear without any further indication (*e.g.*, all authors share the same affiliation, or every author has a single, different affiliation). In these situations you don’t need to do anything special.
In more complex scenarios you will have to clearly indicate the affiliation(s) for each author. This is done by using numeric math superscripts [\${\^$i,j, \ldots$}\$]{}. You must use numbers, not symbols, because those are reserved for footnotes in this section (should you need them). Check the authors’ definition in this example for reference.
Emails
======
This section is optional, and can be omitted entirely if you prefer. If you want to include e-mails, you should either include all authors’ e-mails or just the contact author(s)’ ones.
Start the e-mails section with the [\\emails]{} command. After that, write all emails you want to include separated by a comma and a space, following the same order used for the authors (*i.e.*, the first e-mail should correspond to the first author, the second e-mail to the second author and so on).
You may “contract” consecutive e-mails on the same domain as shown in this example (write the users’ part within curly brackets, followed by the domain name). Only e-mails of the exact same domain may be contracted. For instance, you cannot contract “[email protected]” and “[email protected]” because the domains are different.
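Putting the rules above together, a complete (contrived) author block might look as follows. This is a sketch assembled from the conventions stated in this document — the command names [\\and]{}, [\\And]{}, [\\affiliations]{} and [\\emails]{} are those described here, and the exact markup should be checked against the official IJCAI–PRICAI–20 style file and example:

```latex
% Contrived sketch; assumes the IJCAI-PRICAI-20 style file, which
% provides the \affiliations and \emails commands.
\author{
First Author$^1$\and
Second Author$^2$\and
Third Author$^{2,3}$\And
Fourth Author$^4$\\
\affiliations
$^1$First Affiliation\\
$^2$Second Affiliation\\
$^3$Third Affiliation\\
$^4$Fourth Affiliation\\
\emails
{first, second}@example.com,
[email protected],
[email protected]
}
```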
[^1]: Contact Author
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'In this study, the numerical solutions of reaction-diffusion systems are investigated via the trigonometric quintic B-spline finite element collocation method. These equations appear in various disciplines in order to describe certain physical facts, such as pattern formation, autocatalytic chemical reactions and population dynamics. The Schnakenberg, Gray-Scott and Brusselator models are special cases of reaction-diffusion systems considered as numerical examples in this paper. For numerical purposes, Crank-Nicolson formulae are used for the time discretization and the resulting system is linearized by Taylor expansion. In the finite element method, a uniform partition of the solution domain is constructed for the space discretization. Over the mentioned mesh, the Dirac delta function and trigonometric quintic B-spline functions are chosen as the weight function and the basis functions, respectively. Thus, the reaction-diffusion system turns into an algebraic system which can be represented by a matrix equation so that the coefficients are block matrices containing a certain number of non-zero elements in each row. The method is tested on different problems. To illustrate the accuracy, error norms are calculated in the linear problem whereas the relative error is given in the other, nonlinear problems. Subject to the character of the nonlinear problems, the occurring spatial patterns are formed by the trajectories of the dependent variables. The degree of the base polynomial allows the method to be used in high-order differential equation solutions. The algorithm produces accurate results even when the time increment is larger. Therefore, the proposed trigonometric quintic B-spline collocation method is an effective method which produces acceptable results for the solutions of reaction-diffusion systems.'
author:
- |
Aysun Tok Onarcan$^{1}$,Nihat Adar$^{2}$, İdiris Dag$^{2}$\
Informatics Department$^{1}$,\
Computer Engineering Department$^{2}$,\
Eskisehir Osmangazi University, 26480, Eskisehir, Turkey
title: 'Numerical Solutions of Reaction-Diffusion Equation Systems with Trigonometric Quintic B-spline Collocation Algorithm'
---
Introduction
============
The reaction-diffusion (RD) system is used to model chemical exchange reactions, the transport of ground water in an aquifer, and pattern formation in the study of biology, chemistry and ecology. The RD system exhibits very rich dynamic behavior including periodic and quasi-periodic solutions. Theoretical studies have been developed to describe such dynamic behaviors. Most reaction-diffusion systems include a nonlinear reaction term, making them difficult to solve analytically. Attempts have been made to look for numerical solutions to reveal more dynamic behaviors of the RD system.
Spline functions of various degrees are employed to construct numerical methods for solving differential equations of a certain order, since the resulting matrix system is always diagonally banded and can be solved easily, and approximate solutions having an accuracy of a degree less than the degree of the spline functions can be set up. High-order continuously differentiable approximate solutions can be produced by using high-order spline functions as solutions of the differential equations. B-splines are defined as a basis of the spline space [@b21]. Polynomial B-splines are extensively used for finding numerical solutions of differential equations, function approximation and computer-aided design. The numerical procedure based on the B-spline collocation method has been increasingly applied to nonlinear evolution equations in various fields of science. However, applications of trigonometric B-spline collocation methods to nonlinear evolution problems are few in comparison with collocation methods based on polynomial B-spline functions. Numerical methods for solving certain types of ordinary differential equations with quadratic and cubic trigonometric B-splines are given by A. Nikolis [@b1; @b8]. Linear two-point boundary value problems of order two are solved using the trigonometric cubic B-spline (TCB) interpolation method [@b16]. Another numerical method employing the TCB is set up to solve a class of linear two-point singular boundary value problems in the study [@b18]. Recently, a collocation finite difference scheme based on the TCB has been developed for the numerical solution of a one-dimensional hyperbolic equation (wave equation) with a nonlocal conservation condition [@b19]. A new two-time-level implicit technique based on the TCB is proposed for the approximate solution of a nonclassical diffusion problem with a nonlocal boundary condition in the study [@b20].
A new three-time-level implicit approach, based on the TCB, is presented for the approximate solution of the generalized nonlinear Klein-Gordon equation with Dirichlet boundary conditions [@kgo]. Some research in the literature [@b15] has established spline-based numerical approaches for solving reaction-diffusion equation systems, but without the trigonometric B-spline, to our knowledge. In this paper, trigonometric quintic B-splines (TQB) are used to establish a collocation method, with the suggested numerical method being applied to find numerical solutions of a reaction-diffusion equation system. As a result, the present method makes it possible to approximate solutions as well as derivatives up to order four at each point of the problem domain.
When reaction-diffusion systems are studied, it can be seen that different species interact with each other; in chemical reactions, for example, two different chemical substances generate new substances. Differential equation systems are used to model such events, which involve more than one dependent variable. One-dimensional time-dependent reaction-diffusion equation systems can be defined as follows:
$$\begin{tabular}{l}
$\dfrac{\partial U}{\partial t}=D_{u}\dfrac{\partial ^{2}U}{\partial x^{2}}%
+F(U,V)$ \\
$\dfrac{\partial V}{\partial t}=D_{v}\dfrac{\partial ^{2}V}{\partial x^{2}}%
+G(U,V)$%
\end{tabular}
\label{r3}$$
where $U=U(x,t)$, $V=V(x,t)$, $\Omega \subset R^{2}$ is the problem domain, $D_{u}$ and $D_{v}$ are the diffusion coefficients of $U$ and $V$ respectively, and $F$ and $G$ are the growth and interaction functions that represent the reactions of the system; $F$ and $G$ are always nonlinear functions. A general one-dimensional reaction-diffusion equation system, which includes all models mentioned in this paper, is expressed as:
$$\begin{tabular}{l}
$\dfrac{\partial U}{\partial t}=a_{1}\dfrac{\partial ^{2}U}{\partial x^{2}}%
+b_{1}U+c_{1}V+d_{1}U^{2}V+e_{1}UV+m_{1}UV^{2}+n_{1}$ \\
\\
$\dfrac{\partial V}{\partial t}=a_{2}\dfrac{\partial ^{2}V}{\partial x^{2}}%
+b_{2}U+c_{2}V+d_{2}U^{2}V+e_{2}UV+m_{2}UV^{2}+n_{2}$%
\end{tabular}
\label{r1}$$
The solution region of the problem, $(-\infty ,\infty )$, is restricted to $(x_{0},x_{N})$ for computational purposes. In this case, either the homogeneous Dirichlet boundary conditions $$\begin{tabular}{l}
$U(x_{0},t)=U(x_{N},t)=0,$ \\
$V(x_{0},t)=V(x_{N},t)=0,$%
\end{tabular}
\label{r2}$$or the homogeneous Neumann boundary conditions$$\begin{tabular}{l}
$U_{x}(x_{0},t)=U_{x}(x_{N},t)=0,$ \\
$V_{x}(x_{0},t)=V_{x}(x_{N},t)=0$%
\end{tabular}
\label{r4}$$will be used. Appropriate coefficients of the system (\[r1\]) for each test problem will be selected depending on the characteristics of each model in the following sections; they are documented in Table 1:
$$\begin{tabular}{|l|}
\hline
Table 1: The coefficient regulations for model system \\ \hline
\begin{tabular}{lllllllllllllll}
Test Problem & $a_{1}$ & $a_{2}$ & $b_{1}$ & $b_{2}$ & $c_{1}$ & $c_{2}$ & $%
d_{1}$ & $d_{2}$ & $e_{1}$ & $e_{2}$ & $m_{1}$ & $m_{2}$ & $n_{1}$ & $n_{2}$
\\ \hline
Linear & $d$ & $d$ & $-a$ & $0$ & $1$ & $-b$ & $0$ & $0$ & $0$ & $0$ & $0$ &
$0$ & $0$ & $0$ \\
Brusselator & $\varepsilon _{1}$ & $\varepsilon _{2}$ & $-(B+1)$ & $B$ & $0$
& $0$ & $1$ & $-1$ & $0$ & $0$ & $0$ & $0$ & $A$ & $0$ \\
Schnakenberg & $1$ & $d$ & $-\gamma $ & $0$ & $0$ & $0$ & $\gamma $ & $%
-\gamma $ & $0$ & $0$ & $0$ & $0$ & $\gamma a$ & $\gamma b$ \\
Gray-Scott & $\varepsilon _{1}$ & $\varepsilon _{2}$ & $-f$ & $0$ & $0$ & $%
-(f+k)$ & $0$ & $0$ & $0$ & $0$ & $-1$ & $1$ & $f$ & $0$%
\end{tabular}
\\ \hline
\end{tabular}%$$
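As a quick sanity check (a sketch only, not part of the algorithm of this paper; the helper names below are illustrative), the coefficient rows of Table 1 can be substituted into the general reaction terms of (\[r1\]); the "Linear" row, for instance, should reproduce the reaction terms of the linear test problem solved later.

```python
# Sketch: general reaction terms of the model system (r1); the coefficient
# tuples below follow the rows of Table 1.
def reaction_F(U, V, b1, c1, d1, e1, m1, n1):
    # F(U,V) = b1*U + c1*V + d1*U^2*V + e1*U*V + m1*U*V^2 + n1
    return b1*U + c1*V + d1*U**2*V + e1*U*V + m1*U*V**2 + n1

def reaction_G(U, V, b2, c2, d2, e2, m2, n2):
    return b2*U + c2*V + d2*U**2*V + e2*U*V + m2*U*V**2 + n2

# "Linear" row of Table 1 with a = 0.1, b = 0.01:
a, b = 0.1, 0.01
F_lin = lambda U, V: reaction_F(U, V, -a, 1, 0, 0, 0, 0)     # -a*U + V
G_lin = lambda U, V: reaction_G(U, V, 0, -b, 0, 0, 0, 0)     # -b*V

# "Brusselator" row of Table 1 with A = 1, B = 3.4:
A, B = 1.0, 3.4
F_br = lambda U, V: reaction_F(U, V, -(B+1), 0, 1, 0, 0, A)  # A + U^2*V - (B+1)*U
G_br = lambda U, V: reaction_G(U, V, B, 0, -1, 0, 0, 0)      # B*U - U^2*V
```

Evaluating these lambdas at sample values confirms that each Table 1 row recovers the reaction terms of the corresponding model.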
The **Trigonometric Quintic B-spline Collocation Method**
=========================================================
Consider that the solution domain of the differential problem, $[a=x_{0},b=x_{N}]$, is partitioned into a mesh of uniform length $h=x_{m+1}-x_{m}$ by the knots $x_{m}$, $m=0,\ldots ,N.$ On this partition, together with the additional knots $x_{-2},x_{-1},x_{N+1},x_{N+2}$ outside the problem domain, the trigonometric quintic B-spline basis functions $T_{m}^{5}(x)$ at the knots are given by
$$T_{m}^{5}(x)=\frac{1}{\Theta }\left \{
\begin{tabular}{ll}
$p^{5}(x_{m-3}),$ & $x\in \left[ x_{m-3},x_{m-2}\right] $ \\ \hline
$-p^{4}(x_{m-3})p(x_{m-1})-p^{3}(x_{m-3})p(x_{m})p(x_{m-3})$ & \\
$-p^{2}(x_{m-3})p(x_{m+1})p^{2}(x_{m-2})-p(x_{m-3})p(x_{m+2})p^{3}(x_{m-2})$
& \\
$-p(x_{m+3})p^{4}(x_{m-2}),$ & $x\in \left[ x_{m-2},x_{m-1}\right] $ \\
\hline
$p^{3}(x_{m-3})p^{2}(x_{m})+p^{2}(x_{m-3})p(x_{m+1})p(x_{m-2})p(x_{m})$ &
\\
$%
+p^{2}(x_{m-3})p^{2}(x_{m+1})p(x_{m-1})+p(x_{m+3})p(x_{m+2})p^{2}(x_{m-2})p(x_{m})
$ & \\
$%
+p(x_{m-3})p(x_{m+2})p(x_{m-2})p(x_{m+1})p(x_{m-1})+p(x_{m-3})p^{2}(x_{m+2})p^{2}(x_{m-1})
$ & \\
$%
+p(x_{m+3})p^{3}(x_{m-2})p(x_{m})+p(x_{m+3})p^{2}(x_{m-2})p(x_{m+1})p(x_{m-1})
$ & \\
$+p(x_{m+3})p(x_{m-2})p(x_{m+2})p^{2}(x_{m-1})+p^{2}(x_{m+3})p^{3}(x_{m-1}),$
& $x\in \left[ x_{m-1},x_{m}\right] $ \\ \hline
$-p^{2}(x_{m-3})p^{3}(x_{m+1})-p(x_{m-3})p(x_{m+2})p(x_{m-2})p^{2}(x_{m+1})$
& \\
$%
-p(x_{m-3})p^{2}(x_{m+2})p(x_{m-1})p(x_{m+1})-p(x_{m-3})p^{3}(x_{m+2})p(x_{m})
$ & \\
$%
-p(x_{m+3})p^{2}(x_{m-2})p^{2}(x_{m})-p(x_{m+3})p(x_{m-2})p(x_{m+2})p(x_{m-1})p(x_{m+1})
$ & \\
$-p(x_{m+3})p(x_{m-2})p^{2}(x_{m+2})p(x_{m})-p^{2}(x_{m+3})p^{2}(x_{m-3})$ &
\\
$-p^{2}(x_{m+3})p(x_{m-1})p(x_{m+2})p(x_{m})-p^{3}(x_{m+3})p^{2}(x_{m}),$ & $%
x\in \left[ x_{m},x_{m+1}\right] $ \\ \hline
$%
p(x_{m-3})p^{4}(x_{m+2})+p(x_{m+3})p(x_{m-2})p^{3}(x_{m+2})+p^{2}(x_{m+3})p(x_{m-1})p^{2}(x_{m+2})
$ & \\
$+p^{3}(x_{m+3})p(x_{m})p(x_{m+2})+p^{4}(x_{m+3})p(x_{m+1}),$ & $x\in \left[
x_{m+1},x_{m+2}\right] $ \\ \hline
$-p^{5}(x_{m+3}),$ & $x\in \left[ x_{m+2},x_{m+3}\right] $ \\ \hline
$0,$ & otherwise%
\end{tabular}%
\right. \label{r12}$$
where $p(x_{m})$, $\Theta $ and $m$ are:$$\begin{tabular}{l}
$p(x_{m})=\sin (\frac{x-x_{m}}{2}),$ \\
$\Theta =\sin (\frac{5h}{2})\sin (2h)\sin (\frac{3h}{2})\sin (h)\sin (\frac{h%
}{2}),$ \\
$m=0(1)N$%
\end{tabular}%$$
Each function $T_{m}^{5}(x)$ and its principal derivatives vanish outside the region $\left[ x_{m-3},x_{m+3}\right] $. The set of B-splines $T_{m}^{5}(x)$, $m=-2,...,N+2$, forms a basis for the trigonometric spline space. Approximate solutions $U_{N}(x,t)$ and $V_{N}(x,t)$ to the unknown solutions $U(x,t)$ and $V(x,t)$ can be assumed to be of the forms
$$\begin{tabular}{ll}
$U_{N}(x,t)=\underset{i=-2}{\overset{N+2}{\dsum }}T_{i}^{5}(x)\delta _{i}(t)$
& $V_{N}(x,t)=\underset{i=-2}{\overset{N+2}{\dsum }}T_{i}^{5}(x)\gamma
_{i}(t)$%
\end{tabular}
\label{r14}$$
where $\delta _{i}$ and $\gamma _{i}$ are time dependent parameters to be determined from the collocation points $x_{i},$ $i=0,...,N$ with boundary and initial conditions.
The trigonometric quintic B-spline functions $T_{m}^{5}(x)$ are zero outside the interval $[x_{m-3},x_{m+3}]$, and each $T_{m}^{5}(x)$ covers six consecutive elements of $[x_{m-3},x_{m+3}]$, so that each finite element $[x_{m},x_{m+1}]$ is covered by the six trigonometric quintic B-splines $T_{m-2}^{5},T_{m-1}^{5},T_{m}^{5},T_{m+1}^{5},T_{m+2}^{5},$ and $T_{m+3}^{5}$. In this case, the approximation (\[r14\]) over this element is given as:
$$\begin{array}{c}
U_{N}(x,t)=\underset{i=m-2}{\overset{m+3}{\dsum }}T_{i}^{5}(x)\delta
_{i}=T_{m-2}^{5}(x)\delta _{m-2}+T_{m-1}^{5}(x)\delta
_{m-1}+T_{m}^{5}(x)\delta _{m}+T_{m+1}^{5}(x)\delta _{m+1} \\
\multicolumn{1}{r}{+T_{m+2}^{5}(x)\delta _{m+2}+T_{m+3}^{5}(x)\delta _{m+3}}
\\
V_{N}(x,t)=\underset{i=m-2}{\overset{m+3}{\dsum }}T_{i}^{5}(x)\gamma
_{i}=T_{m-2}^{5}(x)\gamma _{m-2}+T_{m-1}^{5}(x)\gamma
_{m-1}+T_{m}^{5}(x)\gamma _{m}+T_{m+1}^{5}(x)\gamma _{m+1} \\
\multicolumn{1}{r}{+T_{m+2}^{5}(x)\gamma _{m+2}+T_{m+3}^{5}(x)\gamma _{m+3}}%
\end{array}
\label{r15}$$
In these numerical approaches, the approximate solutions at the knots can be written in terms of the time parametes using $T_{m}^{5}(x)$ and Eq.(\[r14\]). After this, by also making necessary calculations, we can write $%
T_{m}^{5}(x)$ functions for $U_{m}$ and $V_{m}$ and its first, second,third and fourth derivatives at the knots $x_{m}$ are given in terms of parameters by the following relationships.
$$\begin{tabular}{l}
\begin{tabular}{l}
$U_{m}=\alpha _{1}\delta _{m-2}+\alpha _{2}\delta _{m-1}+\alpha _{3}\delta
_{m}+\alpha _{2}\delta _{m+1}+\alpha _{1}\delta _{m+2}$ \\
$U_{m}^{\prime }=-\alpha _{4}\delta _{m-2}-\alpha _{5}\delta _{m-1}+\alpha
_{5}\delta _{m+1}+\alpha _{4}\delta _{m+2}$ \\
$U_{m}^{\prime \prime }=\alpha _{6}\delta _{m-2}+\alpha _{7}\delta
_{m-1}+\alpha _{8}\delta _{m}+\alpha _{7}\delta _{m+1}+\alpha _{6}\delta
_{m+2}$ \\
$U_{m}^{\prime \prime \prime }=-\alpha _{9}\delta _{m-2}+\alpha _{10}\delta
_{m-1}-\alpha _{10}\delta _{m+1}+\alpha _{9}\delta _{m+2}$ \\
$U_{m}^{\prime \prime \prime \prime }=\alpha _{11}\delta _{m-2}+\alpha
_{12}\delta _{m-1}+\alpha _{13}\delta _{m}+\alpha _{12}\delta _{m+1}+\alpha
_{11}\delta _{m+2}$%
\end{tabular}
\\
\begin{tabular}{l}
$V_{m}=\alpha _{1}\gamma _{m-2}+\alpha _{2}\gamma _{m-1}+\alpha _{3}\gamma
_{m}+\alpha _{2}\gamma _{m+1}+\alpha _{1}\gamma _{m+2}$ \\
$V_{m}^{\prime }=-\alpha _{4}\gamma _{m-2}-\alpha _{5}\gamma _{m-1}+\alpha
_{5}\gamma _{m+1}+\alpha _{4}\gamma _{m+2}$ \\
$V_{m}^{\prime \prime }=\alpha _{6}\gamma _{m-2}+\alpha _{7}\gamma
_{m-1}+\alpha _{8}\gamma _{m}+\alpha _{7}\gamma _{m+1}+\alpha _{6}\gamma
_{m+2}$ \\
$V_{m}^{\prime \prime \prime }=-\alpha _{9}\gamma _{m-2}+\alpha _{10}\gamma
_{m-1}-\alpha _{10}\gamma _{m+1}+\alpha _{9}\gamma _{m+2}$ \\
$V_{m}^{\prime \prime \prime \prime }=\alpha _{11}\gamma _{m-2}+\alpha
_{12}\gamma _{m-1}+\alpha _{13}\gamma _{m}+\alpha _{12}\gamma _{m+1}+\alpha
_{11}\gamma _{m+2}$%
\end{tabular}%
\end{tabular}
\label{r16}$$
where the coefficients are:
$$\begin{array}{l}
\alpha _{1}=\dfrac{\sin ^{5}(\frac{h}{2})}{\Theta } \\
\\
\alpha _{2}=\dfrac{2\sin ^{5}(\frac{h}{2})\cos (\frac{h}{2})(16\cos ^{2}(%
\frac{h}{2})-3)}{\Theta } \\
\\
\alpha _{3}=\dfrac{2(1+48\cos {}^{4}(\frac{h}{2})-16\cos ^{2}(\frac{h}{2}%
)\sin {}^{5}(\frac{h}{2}))}{\Theta } \\
\\
\alpha _{4}=\dfrac{\frac{5}{2}\sin ^{4}(\frac{h}{2})\cos (\frac{h}{2})}{%
\Theta } \\
\\
\alpha _{5}=\dfrac{5\sin ^{4}(\frac{h}{2})\cos ^{2}(\frac{h}{2})(8\cos ^{2}(%
\frac{h}{2})-3)}{\Theta } \\
\\
\alpha _{6}=\dfrac{\frac{5}{4}\sin ^{3}(\frac{h}{2})(5\cos ^{2}(\frac{h}{2}%
)-1)}{\Theta } \\
\\
\alpha _{7}=\dfrac{\frac{5}{2}\sin ^{3}(\frac{h}{2})\cos (\frac{h}{2})(-15\cos ^{2}(\frac{h}{2})+3+16\cos ^{4}(\frac{h}{2}))}{\Theta } \\
\\
\alpha _{8}=\dfrac{-\frac{5}{2}\sin ^{3}(\frac{h}{2})(16\cos ^{6}(\frac{h}{2}%
)-5\cos ^{6}(\frac{h}{2})+1)}{\Theta } \\
\\
\alpha _{9}=\dfrac{\frac{5}{8}\sin ^{2}(\frac{h}{2})\cos (\frac{h}{2}%
)(25\cos ^{2}(\frac{h}{2})-13)}{\Theta } \\
\\
\alpha _{10}=\dfrac{-\frac{5}{4}\sin ^{2}(\frac{h}{2})\cos ^{2}(\frac{h}{2})(8\cos ^{4}(\frac{h}{2})-35\cos ^{2}(\frac{h}{2})+15)}{\Theta } \\
\alpha _{11}=\dfrac{\frac{5}{16}(125\cos ^{4}(\frac{h}{2})-114\cos ^{2}(\frac{h}{2})+13)\sin (\frac{h}{2})}{\Theta } \\
\\
\alpha _{12}=\dfrac{-\frac{5}{8}\sin (\frac{h}{2})\cos (\frac{h}{2})(176\cos
^{6}(\frac{h}{2})-137\cos ^{4}(\frac{h}{2})-6\cos ^{2}(\frac{h}{2})+15)}{%
\Theta } \\
\\
\alpha _{13}=\dfrac{\frac{5}{8}(92\cos ^{6}(\frac{h}{2})-117\cos ^{4}(\frac{h%
}{2})+62\cos ^{2}(\frac{h}{2})-13)(-1+4\cos ^{2}(\frac{h}{2})\sin (\frac{h}{2%
}))}{\Theta }%
\end{array}%$$
The Crank–Nicolson scheme $$\begin{tabular}{ll}
$U_{t}=\dfrac{U^{n+1}-U^{n}}{\Delta t},$ & $U=\dfrac{U^{n+1}+U^{n}}{2}$ \\
$V_{t}=\dfrac{V^{n+1}-V^{n}}{\Delta t},$ & $V=\dfrac{V^{n+1}+V^{n}}{2}$%
\end{tabular}
\label{r17}$$is used to discretize the time variables of the unknowns $U$ and $V$ and their derivatives, to obtain the time-integrated reaction-diffusion equation system:
$$\begin{tabular}{r}
$\dfrac{U^{n+1}-U^{n}}{\Delta t}-a_{1}\dfrac{U_{xx}^{n+1}+U_{xx}^{n}}{2}%
-b_{1}\dfrac{U^{n+1}+U^{n}}{2}-c_{1}\dfrac{V^{n+1}+V^{n}}{2}-d_{1}\dfrac{%
(U^{2}V)^{n+1}+(U^{2}V)^{n}}{2}$ \\
$-e_{1}\dfrac{(UV)^{n+1}+(UV)^{n}}{2}-m_{1}\dfrac{(UV^{2})^{n+1}+(UV^{2})^{n}%
}{2}-n_{1}=0$ \\
$\dfrac{V^{n+1}-V^{n}}{\Delta t}-a_{2}\dfrac{V_{xx}^{n+1}+V_{xx}^{n}}{2}%
-b_{2}\dfrac{U^{n+1}+U^{n}}{2}-c_{2}\dfrac{V^{n+1}+V^{n}}{2}-d_{2}\dfrac{%
(U^{2}V)^{n+1}+(U^{2}V)^{n}}{2}$ \\
$-e_{2}\dfrac{(UV)^{n+1}+(UV)^{n}}{2}-m_{2}\dfrac{(UV^{2})^{n+1}+(UV^{2})^{n}%
}{2}-n_{2}=0$%
\end{tabular}
\label{r18}$$
where $U^{n+1}=U(x,t_{n+1})$ and $V^{n+1}=V(x,t_{n+1})$ are the solutions of the equations at the $(n+1)$th time level. Here $t_{n+1}=t_{n}+\Delta t$, $\Delta t$ is the time step, and superscripts denote the $n$th level, $t_{n}=n\Delta t.$
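The accuracy of the Crank–Nicolson rule (\[r17\]) can be illustrated on the scalar test equation $u^{\prime}=\lambda u$ (a minimal sketch; the scalar setting and the step sizes below are chosen only for illustration): averaging the unknown over the two time levels yields second-order accuracy in $\Delta t$, so halving the step should reduce the error by roughly a factor of four.

```python
import math

# Crank-Nicolson step for u' = lam*u:
# (u_next - u)/dt = lam*(u_next + u)/2  =>  closed form below.
def cn_step(u, lam, dt):
    return u * (1 + lam*dt/2) / (1 - lam*dt/2)

lam, T = -2.0, 1.0
errors = []
for n_steps in (20, 40):
    dt = T / n_steps
    u = 1.0
    for _ in range(n_steps):
        u = cn_step(u, lam, dt)
    errors.append(abs(u - math.exp(lam*T)))

# Halving dt reduces the error by about a factor of 4 (second order).
ratio = errors[0] / errors[1]
```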
The nonlinear terms $(U^{2}V)^{n+1},$ $(UV^{2})^{n+1}$ and $(UV)^{n+1}$ in equation (\[r18\]) are linearized by using the following forms (\[r19\]).
$$\begin{tabular}{l}
$%
(U^{2}V)^{n+1}=U^{n+1}U^{n}V^{n}+U^{n}U^{n+1}V^{n}+U^{n}U^{n}V^{n+1}-2U^{n}U^{n}V^{n}
$ \\
$%
(UV^{2})^{n+1}=U^{n+1}V^{n}V^{n}+U^{n}V^{n+1}V^{n}+U^{n}V^{n}V^{n+1}-2U^{n}V^{n}V^{n}
$ \\
$(UV)^{n+1}=U^{n+1}V^{n}+U^{n}V^{n+1}-U^{n}V^{n}$%
\end{tabular}
\label{r19}$$
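The forms (\[r19\]) are first-order linearizations about the time level $n$; as a sketch (the variable names below are illustrative), one can check numerically that the linearization error of, say, the $(U^{2}V)^{n+1}$ form shrinks quadratically as the time increments shrink.

```python
def lin_u2v(Un, Vn, Unew, Vnew):
    # (U^2 V)^{n+1} ~ U^{n+1}U^nV^n + U^nU^{n+1}V^n + U^nU^nV^{n+1} - 2U^nU^nV^n
    return Unew*Un*Vn + Un*Unew*Vn + Un*Un*Vnew - 2*Un*Un*Vn

# Perturb the time level n values by increments of size eps and compare
# the linearized product with the exact one.
Un, Vn, a, b = 0.7, 1.3, 0.4, -0.9
errs = []
for eps in (1e-2, 1e-3):
    Unew, Vnew = Un + eps*a, Vn + eps*b
    exact = Unew**2 * Vnew
    errs.append(abs(exact - lin_u2v(Un, Vn, Unew, Vnew)))
```

Shrinking the increments by a factor of ten shrinks the error by roughly one hundred, confirming the second-order (in the increments) nature of the linearization.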
When we substitute (\[r19\]) in (\[r18\]), the linearized general model equation system takes the form as shown below,
$$\begin{aligned}
-\dfrac{a_{1}}{2}U_{xx}^{n+1}+\beta _{m1}U^{n+1}+\beta _{m2}V^{n+1} &=&%
\dfrac{a_{1}}{2}U_{xx}^{n}+\beta _{m3}U^{n}+\beta _{m4}V^{n}+n_{1}
\label{r20} \\
-\dfrac{a_{2}}{2}V_{xx}^{n+1}+\beta _{m5}U^{n+1}+\beta _{m6}V^{n+1} &=&%
\dfrac{a_{2}}{2}V_{xx}^{n}+\beta _{m7}U^{n}+\beta _{m8}V^{n}+n_{2} \notag\end{aligned}$$
where$$\begin{tabular}{l}
$\beta _{m1}=\dfrac{1}{\Delta t}-\dfrac{b_{1}}{2}-d_{1}U^{n}V^{n}-\dfrac{%
e_{1}}{2}V^{n}-\dfrac{m_{1}}{2}(V^{n})^{2}$ \\
$\beta _{m2}=-\dfrac{c_{1}}{2}-\dfrac{d_{1}}{2}%
(U^{n})^{2}-\dfrac{e_{1}}{2}U^{n}-m_{1}U^{n}V^{n}$ \\
$\beta _{m3}=\dfrac{1}{\Delta t}+\dfrac{b_{1}}{2}-\dfrac{m_{1}}{2}(V^{n})^{2}$ \\
$\beta _{m4}=\dfrac{c_{1}}{2}-\dfrac{d_{1}}{2}(U^{n})^{2}$ \\
$\beta _{m5}=-\dfrac{b_{2}}{2}-d_{2}U^{n}V^{n}-\dfrac{e_{2}}{2}V^{n}-\dfrac{%
m_{2}}{2}(V^{n})^{2}$ \\
$\beta _{m6}=\dfrac{1}{\Delta t}-\dfrac{c_{2}}{2}-\dfrac{d_{2}}{2}%
(U^{n})^{2}-\dfrac{e_{2}}{2}U^{n}-m_{2}U^{n}V^{n}$ \\
$\beta _{m7}=\dfrac{b_{2}}{2}-\dfrac{m_{2}}{2}(V^{n})^{2}$ \\
$\beta _{m8}=\dfrac{1}{\Delta t}+\dfrac{c_{2}}{2}-\dfrac{d_{2}}{2}%
(U^{n})^{2}.$%
\end{tabular}%$$
To fully discretize the model system (\[r1\]) in space, we substitute the approximations (\[r16\]) into (\[r20\]), yielding the fully-discretized equations.
$$\begin{aligned}
&&%
\begin{tabular}{l}
$\nu _{m1}\delta _{m-2}^{n+1}+\nu _{m2}\gamma _{m-2}^{n+1}+\nu _{m3}\delta
_{m-1}^{n+1}+\nu _{m4}\gamma _{m-1}^{n+1}+\nu _{m5}\delta _{m}^{n+1}+\nu
_{m6}\gamma _{m}^{n+1}+$ \\
$\nu _{m7}\delta _{m+1}^{n+1}+\nu _{m8}\gamma _{m+1}^{n+1}+\nu _{m9}\delta
_{m+2}^{n+1}+\nu _{m10}\gamma _{m+2}^{n+1}=$ \\
$\nu _{m11}\delta _{m-2}^{n}+\nu _{m12}\gamma _{m-2}^{n}+\nu _{m13}\delta
_{m-1}^{n}+\nu _{m14}\gamma _{m-1}^{n}+\nu _{m15}\delta _{m}^{n}+\nu
_{m16}\gamma _{m}^{n}+$ \\
$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nu _{m17}\delta
_{m+1}^{n}+\nu _{m18}\gamma _{m+1}^{n}+\nu _{m19}\delta _{m+2}^{n}+\nu
_{m20}\gamma _{m+2}^{n}+n_{1}$%
\end{tabular}
\label{r21} \\
&&%
\begin{tabular}{l}
$\nu _{m21}\delta _{m-2}^{n+1}+\nu _{m22}\gamma _{m-2}^{n+1}+\nu
_{m23}\delta _{m-1}^{n+1}+\nu _{m24}\gamma _{m-1}^{n+1}+\nu _{m25}\delta
_{m}^{n+1}+\nu _{m26}\gamma _{m}^{n+1}+$ \\
$\nu _{m27}\delta _{m+1}^{n+1}+\nu _{m28}\gamma _{m+1}^{n+1}+\nu
_{m29}\delta _{m+2}^{n+1}+\nu _{m30}\gamma _{m+2}^{n+1}=$ \\
$\nu _{m31}\delta _{m-2}^{n}+\nu _{m32}\gamma _{m-2}^{n}+\nu _{m33}\delta
_{m-1}^{n}+\nu _{m34}\gamma _{m-1}^{n}+\nu _{m35}\delta _{m}^{n}+\nu
_{m36}\gamma _{m}^{n}+$ \\
$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nu
_{m37}\delta _{m+1}^{n}+\nu _{m38}\gamma _{m+1}^{n}+\nu _{m39}\delta
_{m+2}^{n}+\nu _{m40}\gamma _{m+2}^{n}+n_{2}$%
\end{tabular}
\notag\end{aligned}$$
where the $\nu _{m}$ coefficients are:
$$\begin{tabular}{llll}
$\nu _{m1}=\beta _{m1}\alpha _{1}-\dfrac{a_{1}}{2}\alpha _{6}$ & $\nu _{m21}=\beta _{m5}\alpha _{1}$ & $\nu _{m11}=\beta _{m3}\alpha _{1}+\dfrac{a_{1}}{2}\alpha _{6}$ & $\nu _{m31}=\beta _{m7}\alpha _{1}$ \\
$\nu _{m2}=\beta _{m2}\alpha _{1}$ & $\nu _{m22}=\beta _{m6}\alpha _{1}-\dfrac{a_{2}}{2}\alpha _{6}$ & $\nu _{m12}=\beta _{m4}\alpha _{1}$ & $\nu _{m32}=\beta _{m8}\alpha _{1}+\dfrac{a_{2}}{2}\alpha _{6}$ \\
$\nu _{m3}=\beta _{m1}\alpha _{2}-\dfrac{a_{1}}{2}\alpha _{7}$ & $\nu _{m23}=\beta _{m5}\alpha _{2}$ & $\nu _{m13}=\beta _{m3}\alpha _{2}+\dfrac{a_{1}}{2}\alpha _{7}$ & $\nu _{m33}=\beta _{m7}\alpha _{2}$ \\
$\nu _{m4}=\beta _{m2}\alpha _{2}$ & $\nu _{m24}=\beta _{m6}\alpha _{2}-\dfrac{a_{2}}{2}\alpha _{7}$ & $\nu _{m14}=\beta _{m4}\alpha _{2}$ & $\nu _{m34}=\beta _{m8}\alpha _{2}+\dfrac{a_{2}}{2}\alpha _{7}$ \\
$\nu _{m5}=\beta _{m1}\alpha _{3}-\dfrac{a_{1}}{2}\alpha _{8}$ & $\nu _{m25}=\beta _{m5}\alpha _{3}$ & $\nu _{m15}=\beta _{m3}\alpha _{3}+\dfrac{a_{1}}{2}\alpha _{8}$ & $\nu _{m35}=\beta _{m7}\alpha _{3}$ \\
$\nu _{m6}=\beta _{m2}\alpha _{3}$ & $\nu _{m26}=\beta _{m6}\alpha _{3}-\dfrac{a_{2}}{2}\alpha _{8}$ & $\nu _{m16}=\beta _{m4}\alpha _{3}$ & $\nu _{m36}=\beta _{m8}\alpha _{3}+\dfrac{a_{2}}{2}\alpha _{8}$ \\
$\nu _{m7}=\beta _{m1}\alpha _{2}-\dfrac{a_{1}}{2}\alpha _{7}$ & $\nu _{m27}=\beta _{m5}\alpha _{2}$ & $\nu _{m17}=\beta _{m3}\alpha _{2}+\dfrac{a_{1}}{2}\alpha _{7}$ & $\nu _{m37}=\beta _{m7}\alpha _{2}$ \\
$\nu _{m8}=\beta _{m2}\alpha _{2}$ & $\nu _{m28}=\beta _{m6}\alpha _{2}-\dfrac{a_{2}}{2}\alpha _{7}$ & $\nu _{m18}=\beta _{m4}\alpha _{2}$ & $\nu _{m38}=\beta _{m8}\alpha _{2}+\dfrac{a_{2}}{2}\alpha _{7}$ \\
$\nu _{m9}=\beta _{m1}\alpha _{1}-\dfrac{a_{1}}{2}\alpha _{6}$ & $\nu _{m29}=\beta _{m5}\alpha _{1}$ & $\nu _{m19}=\beta _{m3}\alpha _{1}+\dfrac{a_{1}}{2}\alpha _{6}$ & $\nu _{m39}=\beta _{m7}\alpha _{1}$ \\
$\nu _{m10}=\beta _{m2}\alpha _{1}$ & $\nu _{m30}=\beta _{m6}\alpha _{1}-\dfrac{a_{2}}{2}\alpha _{6}$ & $\nu _{m20}=\beta _{m4}\alpha _{1}$ & $\nu _{m40}=\beta _{m8}\alpha _{1}+\dfrac{a_{2}}{2}\alpha _{6}$%
\end{tabular}
\label{r22}$$
The system (\[r21\]) can be converted into the following matrix system:
$$A\mathbf{x}^{n+1}=B\mathbf{x}^{n}+F \label{r23}$$
$$\begin{tabular}{l}
$A=\left[
\begin{array}{cccccccccccccc}
\nu _{m1} & \nu _{m2} & \nu _{m3} & \nu _{m4} & \nu _{m5} & \nu _{m6} & \nu
_{m7} & \nu _{m8} & \nu _{m9} & \nu _{m10} & & & & \\
\nu _{m21} & \nu _{m22} & \nu _{m23} & \nu _{m24} & \nu _{m25} & \nu _{m26}
& \nu _{m27} & \nu _{m28} & \nu _{m29} & \nu _{m30} & & & & \\
& & \nu _{m1} & \nu _{m2} & \nu _{m3} & \nu _{m4} & \nu _{m5} & \nu _{m6} &
\nu _{m7} & \nu _{m8} & \nu _{m9} & \nu _{m10} & & \\
& & \nu _{m21} & \nu _{m22} & \nu _{m23} & \nu _{m24} & \nu _{m25} & \nu
_{m26} & \nu _{m27} & \nu _{m28} & \nu _{m29} & \nu _{m30} & & \\
& & & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\
& & & & \nu _{m1} & \nu _{m2} & \nu _{m3} & \nu _{m4} & \nu _{m5} & \nu
_{m6} & \nu _{m7} & \nu _{m8} & \nu _{m9} & \nu _{m10} \\
& & & & \nu _{m21} & \nu _{m22} & \nu _{m23} & \nu _{m24} & \nu _{m25} &
\nu _{m26} & \nu _{m27} & \nu _{m28} & \nu _{m29} & \nu _{m30}%
\end{array}%
\right] $ \\
\\
$B=\left[
\begin{array}{cccccccccccccc}
\nu _{m11} & \nu _{m12} & \nu _{m13} & \nu _{m14} & \nu _{m15} & \nu _{m16}
& \nu _{m17} & \nu _{m18} & \nu _{m19} & \nu _{m20} & & & & \\
\nu _{m31} & \nu _{m32} & \nu _{m33} & \nu _{m34} & \nu _{m35} & \nu _{m36}
& \nu _{m37} & \nu _{m38} & \nu _{m39} & \nu _{m40} & & & & \\
& & \nu _{m11} & \nu _{m12} & \nu _{m13} & \nu _{m14} & \nu _{m15} & \nu
_{m16} & \nu _{m17} & \nu _{m18} & \nu _{m19} & \nu _{m20} & & \\
& & \nu _{m31} & \nu _{m32} & \nu _{m33} & \nu _{m34} & \nu _{m35} & \nu
_{m36} & \nu _{m37} & \nu _{m38} & \nu _{m39} & \nu _{m40} & & \\
& & & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\
& & & & \nu _{m11} & \nu _{m12} & \nu _{m13} & \nu _{m14} & \nu _{m15} &
\nu _{m16} & \nu _{m17} & \nu _{m18} & \nu _{m19} & \nu _{m20} \\
& & & & \nu _{m31} & \nu _{m32} & \nu _{m33} & \nu _{m34} & \nu _{m35} &
\nu _{m36} & \nu _{m37} & \nu _{m38} & \nu _{m39} & \nu _{m40}%
\end{array}%
\right] $%
\end{tabular}
\label{r24}$$
The system (\[r23\]) consists of $2N+2$ linear equations in $2N+10$ unknown parameters, with $\mathbf{x}^{n+1},\mathbf{x}^{n}$ and $F$ being the vectors shown below:
$$\begin{aligned}
\mathbf{x}^{n+1} &=&[\delta _{-2}^{n+1},\gamma _{-2}^{n+1},\delta
_{-1}^{n+1},\gamma _{-1}^{n+1},\delta _{0}^{n+1},\gamma _{0}^{n+1},\ldots ,\delta
_{N+1}^{n+1},\gamma _{N+1}^{n+1},\delta _{N+2}^{n+1},\gamma _{N+2}^{n+1}]^{T}
\\
\mathbf{x}^{n} &=&[\delta _{-2}^{n},\gamma _{-2}^{n},\delta _{-1}^{n},\gamma
_{-1}^{n},\delta _{0}^{n},\gamma _{0}^{n},\ldots ,\delta _{N+1}^{n},\gamma
_{N+1}^{n},\delta _{N+2}^{n},\gamma _{N+2}^{n}]^{T} \\
F &=&[n_{1},n_{2},n_{1},n_{2},\ldots ,n_{1},n_{2}]^{T}\end{aligned}$$
To obtain a unique solution, eight additional constraints are needed. Imposing the Dirichlet or the Neumann boundary conditions at $m=0$ and $m=N$ leads to new relationships that eliminate the parameters
$\delta _{-2},$ $\delta _{-1},\delta _{N+1},\delta _{N+2},\gamma
_{-2},\gamma _{-1},\gamma _{N+1},\gamma _{N+2}$ from the system (\[r23\]). When these parameters are eliminated, the resulting $(2N+2)\times (2N+2)$ matrix system can be solved by the Gauss elimination algorithm.
The initial parameters $\mathbf{x}^{0}=(\delta _{-2}^{0},\gamma
_{-2}^{0},\delta _{-1}^{0},\gamma _{-1}^{0},\delta _{0}^{0},\gamma
_{0}^{0},\ldots ,\delta _{N+1}^{0},\gamma _{N+1}^{0},\delta _{N+2}^{0},\gamma
_{N+2}^{0})$ must be found to start the iteration process, using both the initial and the boundary conditions. The recurrence relationship (\[r23\]) gives the time evolution of the vector $\mathbf{x}^{n}$. Thus, the nodal values of $U_{N}(x,t)$ and $V_{N}(x,t)$ can be computed via the equations (\[r16\]) at the knots.
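This time evolution amounts to repeatedly solving a banded linear system. A minimal sketch with a toy $2\times 2$ system (the matrices below are illustrative placeholders, not the actual collocation matrices of this paper) is:

```python
def gauss_solve(A, rhs):
    # Plain Gaussian elimination with partial pivoting (small dense systems).
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def evolve(A, B, F, x0, steps):
    # x^{n+1} solves A x^{n+1} = B x^n + F, as in (r23).
    x = x0[:]
    for _ in range(steps):
        rhs = [sum(Bij * xj for Bij, xj in zip(Brow, x)) + Fi
               for Brow, Fi in zip(B, F)]
        x = gauss_solve(A, rhs)
    return x

# Toy placeholder matrices: each step halves x componentwise.
A = [[2.0, 0.0], [0.0, 2.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
F = [0.0, 0.0]
x = evolve(A, B, F, [4.0, 8.0], 2)   # two steps: divide by 4
```

In practice the banded structure of the collocation matrices would be exploited instead of dense elimination, but the recurrence is the same.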
Results of the Numerical Solutions
----------------------------------
In this section, we compare the efficiency and accuracy of the suggested method on the given reaction-diffusion equation system models. The obtained results for each model are compared with [@b15] and [@b3]. The accuracy of the schemes is measured in terms of the following discrete error norms:
$L_{2}=|U-U_{N}|_{2}=\sqrt{h\sum_{j=0}^{N}(U_{j}-(U_{N})_{j}^{n})^{2}}$ and $%
L_{\infty }=|U-U_{N}|_{\infty }=\underset{j}{\max }|U_{j}-(U_{N})_{j}^{n}|$.
The relative error $=\sqrt{\dfrac{\sum_{j=0}^{N}|U_{j}^{n+1}-U_{j}^{n}|^{2}}{%
\sum_{j=0}^{N}|U_{j}^{n+1}|^{2}}}$ is used to measure the errors of solutions of the reaction-diffusion systems that do not have an analytic solution.
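In code, the discrete error norms above can be sketched as follows (a pure-Python placeholder; `h` is the mesh size and `U`, `UN` hold the exact and numerical nodal values):

```python
import math

def l2_norm(U, UN, h):
    # L2 = sqrt( h * sum_j (U_j - (U_N)_j)^2 )
    return math.sqrt(h * sum((u - un) ** 2 for u, un in zip(U, UN)))

def linf_norm(U, UN):
    # L_inf = max_j |U_j - (U_N)_j|
    return max(abs(u - un) for u, un in zip(U, UN))

U  = [1.0, 2.0, 3.0]
UN = [1.1, 1.8, 3.0]
e2, einf = l2_norm(U, UN, 0.5), linf_norm(U, UN)
```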
### Linear Problem
It was stated that the terms $F(U,V)$ and $G(U,V)$ in the system (\[r3\]) are always nonlinear. However, error norms cannot be calculated for the nonlinear systems because analytical solutions are not available. The following linear problem has therefore been solved to examine the error norms and test the method:
$$\begin{tabular}{l}
$\dfrac{\partial U}{\partial t}=d\dfrac{\partial ^{2}U}{\partial x^{2}}-aU+V$
\\
$\dfrac{\partial V}{\partial t}=d\dfrac{\partial ^{2}V}{\partial x^{2}}-bV.$%
\end{tabular}
\label{r6}$$
The given equation system described above is a linear reaction-diffusion system, which has analytical solutions given as:
$$\begin{tabular}{l}
$U(x,t)=(e^{-(a+d)t}+e^{-(b+d)t})\cos (x),$ \\
$V(x,t)=(a-b)(e^{-(b+d)t})\cos (x).$%
\end{tabular}
\label{r7}$$
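As a sketch, one can verify numerically that (\[r7\]) solves (\[r6\]): substituting the exact functions into the PDEs and approximating the derivatives by central differences should leave only a small residual (the evaluation point and step size below are illustrative).

```python
import math

a, b, d = 0.1, 0.01, 1.0
U = lambda x, t: (math.exp(-(a + d) * t) + math.exp(-(b + d) * t)) * math.cos(x)
V = lambda x, t: (a - b) * math.exp(-(b + d) * t) * math.cos(x)

def residuals(x, t, h=1e-4):
    # Central-difference approximations of the derivatives in (r6).
    Ut  = (U(x, t + h) - U(x, t - h)) / (2 * h)
    Uxx = (U(x + h, t) - 2 * U(x, t) + U(x - h, t)) / h**2
    Vt  = (V(x, t + h) - V(x, t - h)) / (2 * h)
    Vxx = (V(x + h, t) - 2 * V(x, t) + V(x - h, t)) / h**2
    rU = Ut - (d * Uxx - a * U(x, t) + V(x, t))
    rV = Vt - (d * Vxx - b * V(x, t))
    return abs(rU), abs(rV)

rU, rV = residuals(0.7, 0.5)
```

Both residuals are of the size of the finite-difference truncation error, confirming the exactness of (\[r7\]).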
Solutions of the reaction-diffusion system (\[r6\]) are obtained in this section. Three different cases of the coefficients in the system (\[r6\]) were considered in the numerical computations. The initial conditions of the system are obtained by setting $t=0$ in the exact solutions (\[r7\]). When the solution region is selected as the interval $(0,\dfrac{\pi }{2})$, the boundary conditions are described as:
$$\begin{tabular}{ll}
$U_{x}(0,t)=0$ & $U(\pi /2,t)=0,$ \\
$V_{x}(0,t)=0$ & $V(\pi /2,t)=0.$%
\end{tabular}
\label{r27}$$
In the numerical calculations, the program is run up to time $t=1$ for various $N$ and $\Delta t$, and the reaction and diffusion mechanisms are examined for different selections of the constants $a,b,$ and $d.$ The resulting error values $L_{2}$ and $L_{\infty }$ are presented in the tables.
Firstly, the coefficients of the equation system (\[r6\]) are chosen as $a=0.1,$ $b=0.01$ and $d=1$, which is a diffusion dominated case. The boundary and initial conditions are chosen to coincide with those of the polynomial quintic B-spline collocation method (PQBCM) [@b15]. The program is run up to $t=1$, and the obtained results for $U,$ in terms of the $L_{2}$ and $L_{\infty }$ norms, are given in Table 3.

In Table 3, the $L_{2}$ and $L_{\infty }$ error norms are calculated for both $U$ and $V,$ for $N=512$ and various $\Delta t$; the results of [@b15] and [@b3] are also given in the same table. When Table 3 is examined, it is seen that the results obtained for the function $V$ are more accurate than those obtained for the function $U$. When we compare the results, the proposed method has better accuracy than the other references under the same conditions.
$$\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{10}{|l|}{Table 3: Error norms $L_{2}$ and $L_{\infty }$ for
diffusion dominant case for $a=0.1,$ $b=0.01,$ $d=1$} \\ \hline
\multicolumn{6}{|l}{TQB} & \multicolumn{4}{|l|}{Polynomial quintic B-spline,%
\cite{b15}} \\ \hline
$N$ & $\Delta t$ & $U$ & & $V$ & & $U$ & & $V$ & \\ \hline
& & $L_{2}\times 10^{4}$ & $L_{\infty }\times 10^{4}$ & $L_{2}\times 10^{6}$
& $L_{\infty }\times 10^{6}$ & $L_{2}\times 10^{4}$ & $L_{\infty }\times
10^{4}$ & $L_{2}\times 10^{6}$ & $L_{\infty }\times 10^{6}$ \\ \hline
$512$ & $0.005$ & \multicolumn{1}{|r|}{$0,008090$} & \multicolumn{1}{|r|}{$%
0,009120$} & \multicolumn{1}{|r|}{$0,029344$} & \multicolumn{1}{|r|}{$%
0,033079$} & $0,015123$ & $0,017048$ & $0,062416$ & $0,070361$ \\ \hline
& $0.01$ & \multicolumn{1}{|r|}{$0,053460$} & \multicolumn{1}{|r|}{$0,060265$%
} & \multicolumn{1}{|r|}{$0,216594$} & \multicolumn{1}{|r|}{$0,244162$} & $%
0,060493$ & $0,068193$ & $0,249667$ & $0,281444$ \\ \hline
& $0.02$ & \multicolumn{1}{|r|}{$0,234949$} & \multicolumn{1}{|r|}{$0,264853$%
} & \multicolumn{1}{|r|}{$0,965627$} & \multicolumn{1}{|r|}{$1,088530$} & $%
0,241983$ & $0,272782$ & $0,998702$ & $1,125815$ \\ \hline
& $0.04$ & \multicolumn{1}{|r|}{$0,961033$} & \multicolumn{1}{|r|}{$1,083353$%
} & \multicolumn{1}{|r|}{$3,962253$} & \multicolumn{1}{|r|}{$4,466566$} & $%
0,968068$ & $1,091283$ & $3,995334$ & $4,503855$ \\ \hline
\multicolumn{10}{|l|}{CN-MG method \cite{b3}} \\ \hline
$512$ & $0.005$ & & $0.0116$ & & & & & & \\ \hline
& $0.01$ & & $0.0627$ & & & & & & \\ \hline
& $0.02$ & & $0.267$ & & & & & & \\ \hline
& $0.04$ & & $1.09$ & & & & & & \\ \hline
\end{tabular}%$$
Secondly, the constants of the equation system (\[r6\]) are selected as $a=2,b=1,d=0.001$, which is a reaction dominated case. The program is run up to $t=1,$ and the obtained results in terms of the $L_{2}$ and $L_{\infty }$ norms are given in Table 4.

In Table 4, the $L_{2}$ and $L_{\infty }$ error norms are calculated for both $U$ and $V,$ for $N=512$ and various $\Delta t$, and the results of [@b15] and [@b3] are given in the same table.
$$\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{10}{|l|}{Table 4: Error norms $L_{2}$ and $L_{\infty }$ for
reaction dominated case for $a=2,$ $b=1,d=0.001$} \\ \hline
\multicolumn{6}{|l}{TQB} & \multicolumn{4}{|l|}{Polynomial quintic B-spline,%
\cite{b15}} \\ \hline
$N$ & $\Delta t$ & $U$ & & $V$ & & $U$ & & $V$ & \\ \hline
& & $L_{2}\times 10^{4}$ & $L_{\infty }\times 10^{4}$ & $L_{2}\times 10^{5}$
& $L_{\infty }\times 10^{5}$ & $L_{2}\times 10^{4}$ & $L_{\infty }\times
10^{4}$ & $L_{2}\times 10^{3}$ & $L_{\infty }\times 10^{3}$ \\ \hline
$512$ & $0.005$ & \multicolumn{1}{|r|}{$0,026827$} & \multicolumn{1}{|r|}{$%
0,030241$} & \multicolumn{1}{|r|}{$0,068087$} & \multicolumn{1}{|r|}{$%
0,076753$} & \multicolumn{1}{|r|}{$0,026832$} & \multicolumn{1}{|r|}{$%
0,030247$} & \multicolumn{1}{|r|}{$0,068124$} & \multicolumn{1}{|r|}{$%
0,076795$} \\ \hline
& $0.01$ & \multicolumn{1}{|r|}{$0,107324$} & \multicolumn{1}{|r|}{$0,120984$%
} & \multicolumn{1}{|r|}{$0,272462$} & \multicolumn{1}{|r|}{$0,307141$} &
\multicolumn{1}{|r|}{$0,107329$} & \multicolumn{1}{|r|}{$0,120989$} &
\multicolumn{1}{|r|}{$0,272499$} & \multicolumn{1}{|r|}{$0,307183$} \\ \hline
& $0.02$ & \multicolumn{1}{|r|}{$0,429339$} & \multicolumn{1}{|r|}{$0,483984$%
} & \multicolumn{1}{|r|}{$1,089996$} & \multicolumn{1}{|r|}{$1,228729$} &
\multicolumn{1}{|r|}{$0,429344$} & \multicolumn{1}{|r|}{$0,483990$} &
\multicolumn{1}{|r|}{$1,090033$} & \multicolumn{1}{|r|}{$1,228771$} \\ \hline
& $0.04$ & \multicolumn{1}{|r|}{$1,717837$} & \multicolumn{1}{|r|}{$1,936481$%
} & \multicolumn{1}{|r|}{$4,360663$} & \multicolumn{1}{|r|}{$4,915683$} &
\multicolumn{1}{|r|}{$1,717842$} & \multicolumn{1}{|r|}{$1,936487$} &
\multicolumn{1}{|r|}{$4,360700$} & \multicolumn{1}{|r|}{$4,915725$} \\ \hline
\multicolumn{10}{|l|}{CN-MG method \cite{b3}} \\ \hline
$512$ & $0.005$ & & $0.0302$ & & & & & & \\ \hline
& $0.01$ & & $0.121$ & & & & & & \\ \hline
& $0.02$ & & $0.484$ & & & & & & \\ \hline
& $0.04$ & & $1.94$ & & & & & & \\ \hline
\end{tabular}%$$
Finally, we obtain a numerical solution of the reaction-diffusion equation for $a=100,b=1,d=0.001$, which is a reaction dominated case with stiff reaction.

In Table 5, the $L_{2}$ and $L_{\infty }$ error norms are calculated for both $U$ and $V,$ for $N=512$ and various $\Delta t.$
$$\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{10}{|l|}{Table 5: Error norms $L_{2}$ and $L_{\infty }$ for
reaction dominated case with stiff reaction} \\ \hline
\multicolumn{10}{|l|}{for $a=100,$ $b=1,d=0.001$} \\ \hline
\multicolumn{6}{|l}{TQB} & \multicolumn{4}{|l|}{Polynomial quintic B-spline,%
\cite{b15}} \\ \hline
$N$ & $\Delta t$ & $U$ & & $V$ & & $U$ & & $V$ & \\ \hline
& & $L_{2}\times 10^{5}$ & $L_{\infty }\times 10^{5}$ & $L_{2}\times 10^{3}$
& $L_{\infty }\times 10^{3}$ & $L_{2}\times 10^{5}$ & $L_{\infty }\times
10^{5}$ & $L_{2}\times 10^{3}$ & $L_{\infty }\times 10^{3}$ \\ \hline
$512$ & $0.005$ & \multicolumn{1}{|r|}{$0,068087$} & \multicolumn{1}{|r|}{$%
0,076753$} & \multicolumn{1}{|r|}{$0,067406$} & \multicolumn{1}{|r|}{$%
0,075986$} & \multicolumn{1}{|r|}{$0,068124$} & \multicolumn{1}{|r|}{$%
0,076795$} & \multicolumn{1}{|r|}{$0,067443$} & \multicolumn{1}{|r|}{$%
0,076027$} \\ \hline
& $0.01$ & \multicolumn{1}{|r|}{$0,272462$} & \multicolumn{1}{|r|}{$0,307141$%
} & \multicolumn{1}{|r|}{$0,269738$} & \multicolumn{1}{|r|}{$0,30407$} &
\multicolumn{1}{|r|}{$0,272499$} & \multicolumn{1}{|r|}{$0,307183$} &
\multicolumn{1}{|r|}{$0,269774$} & \multicolumn{1}{|r|}{$0,304111$} \\ \hline
& $0.02$ & \multicolumn{1}{|r|}{$1,089996$} & \multicolumn{1}{|r|}{$1,228729$%
} & \multicolumn{1}{|r|}{$1,079096$} & \multicolumn{1}{|r|}{$1,216442$} &
\multicolumn{1}{|r|}{$1,090033$} & \multicolumn{1}{|r|}{$1,228771$} &
\multicolumn{1}{|r|}{$1,079133$} & \multicolumn{1}{|r|}{$1,216484$} \\ \hline
& $0.04$ & \multicolumn{1}{|r|}{$4,360663$} & \multicolumn{1}{|r|}{$4,915684$%
} & \multicolumn{1}{|r|}{$4,317057$} & \multicolumn{1}{|r|}{$4,866527$} &
\multicolumn{1}{|r|}{$4,360700$} & \multicolumn{1}{|r|}{$4,915725$} &
\multicolumn{1}{|r|}{$4,317093$} & \multicolumn{1}{|r|}{$4,866568$} \\ \hline
\multicolumn{10}{|l|}{CN-MG method \cite{b3}} \\ \hline
$512$ & $0.005$ & & & & $0.0760$ & & & & \\ \hline
& $0.01$ & & & & $0.304$ & & & & \\ \hline
& $0.02$ & & & & $1.22$ & & & & \\ \hline
& $0.04$ & & & & $4.87$ & & & & \\ \hline
\end{tabular}%$$
### Nonlinear Problem (Brusselator Model)
The Brusselator model is a general nonlinear reaction-diffusion system that models oscillations in chemical reactions. The system was first presented by Prigogine and Lefever [@b6], describing two-variable autocatalytic reactions. It is one of the simplest reaction-diffusion equations exhibiting Turing instability, and large-scale studies have been conducted on this model, with the system being investigated both analytically and numerically. The general reaction-diffusion equation system for this model is given as:
$$\begin{tabular}{l}
$\dfrac{\partial U}{\partial t}=\varepsilon _{1}\dfrac{\partial ^{2}U}{%
\partial x^{2}}+A+U^{2}V-(B+1)U$ \\
$\dfrac{\partial V}{\partial t}=\varepsilon _{2}\dfrac{\partial ^{2}V}{%
\partial x^{2}}+BU-U^{2}V$%
\end{tabular}
\label{r9}$$
where $\varepsilon _{i},i=1,2$ are diffusion constants, $x$ is the spatial coordinate, and $U,V$ are functions of $x$ and $t$ representing concentrations. The initial conditions are selected similarly to the reference [@b9]:
$$\begin{tabular}{ll}
$U(x,0)=0.5,$ & $V(x,0)=1+5x$%
\end{tabular}
\label{r28}$$
and the additional boundary conditions $$\begin{tabular}{ll}
$U_{xx}(x_{0},t)=0$ & $U_{xx}(x_{N},t)=0,$ \\
$V_{xx}(x_{0},t)=0$ & $V_{xx}(x_{N},t)=0.$%
\end{tabular}%$$In the equation system (\[r9\]), the coefficients are taken as $\varepsilon _{1}=\varepsilon _{2}=10^{-4},$ $A=1,$ $B=3.4.$ The solutions are obtained in the region $x\in \left[ 0,1\right] $; the programme is run until time $t=15$, with $N=200$ split points for the space discretization and a time step of $\Delta t=0.01$ for the time discretization. The solutions obtained with these selections are given in Fig. 1 and Fig. 2, which show the density changes of the functions. When the wave motion is examined, we observe that both $U$ and $V$ exhibit periodic wave motion under these conditions.
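To illustrate the qualitative behavior reported below, the system (\[r9\]) can be integrated with a simple explicit finite-difference scheme. This is only a reference sketch in Python with the same parameters and initial data, not the TQB collocation method of the paper:

```python
import numpy as np

# Explicit finite-difference sketch of the Brusselator system (r9).
eps1 = eps2 = 1e-4
A, B = 1.0, 3.4
N, dt, T = 200, 0.01, 15.0
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]

U = np.full_like(x, 0.5)        # U(x,0) = 0.5
V = 1.0 + 5.0 * x               # V(x,0) = 1 + 5x

def lap(w):
    """Second derivative; U_xx = V_xx = 0 at the ends, as in the paper."""
    l = np.zeros_like(w)
    l[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
    return l

for _ in range(int(round(T / dt))):
    dU = eps1 * lap(U) + A + U**2 * V - (B + 1.0) * U
    dV = eps2 * lap(V) + B * U - U**2 * V
    U, V = U + dt * dU, V + dt * dV
```

With these parameters the scheme is stable ($\varepsilon \Delta t/\Delta x^2 = 0.04$), and the pointwise values oscillate in roughly the same band as the collocation results of Table 6.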
$$\begin{array}{c}
\FRAME{itbpF}{3.0242in}{2.3999in}{0in}{}{}{figure1.png}{\special{language
"Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display
"USEDEF";valid_file "F";width 3.0242in;height 2.3999in;depth
0in;original-width 5.073in;original-height 4.0205in;cropleft "0";croptop
"1";cropright "1";cropbottom "0";filename 'Figure1.png';file-properties
"XNPEU";}} \\
\text{Figure 1: Periodic wave motion for U} \\
\text{for }N=200\text{ }\Delta t=0.01%
\end{array}%
\begin{array}{c}
\FRAME{itbpF}{3.0441in}{2.4042in}{0.0623in}{}{}{figure2.png}{\special%
{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio
TRUE;display "USEDEF";valid_file "F";width 3.0441in;height 2.4042in;depth
0.0623in;original-width 5.0315in;original-height 3.9686in;cropleft
"0";croptop "1";cropright "1";cropbottom "0";filename
'Figure2.png';file-properties "XNPEU";}} \\
\text{Figure 2: Periodic wave motion for V} \\
\text{for }N=200\text{ }\Delta t=0.01%
\end{array}%$$
The density values for the periodic motion are given in Table 6. We observe that the wave has a period of about $7.8$, whereas a period of $7.7$ is found when the polynomial quintic B-spline collocation algorithm [@b15] is implemented.
$$\begin{tabular}{|llllllll|}
\hline
\multicolumn{8}{|l|}{Table 6: Density values for periodic motion for TQB} \\
\hline
$Density$ & \multicolumn{1}{|l}{$t$} & \multicolumn{1}{|l}{$x=0.0$} &
\multicolumn{1}{|l}{$x=0.2$} & \multicolumn{1}{|l}{$x=0.4$} &
\multicolumn{1}{|l}{$x=0.6$} & \multicolumn{1}{|l}{$x=0.8$} &
\multicolumn{1}{|l|}{$x=1.0$} \\ \hline
$U$ & \multicolumn{1}{|l}{$3$} & \multicolumn{1}{|r}{0,284595} &
\multicolumn{1}{|r}{0,317799} & \multicolumn{1}{|r}{0,377380} &
\multicolumn{1}{|r}{0,604709} & \multicolumn{1}{|r}{1,623703} &
\multicolumn{1}{|r|}{0,691906} \\ \hline
& \multicolumn{1}{|l}{$10.8$} & \multicolumn{1}{|r}{0,344555} &
\multicolumn{1}{|r}{0,321243} & \multicolumn{1}{|r}{0,376194} &
\multicolumn{1}{|r}{0,605486} & \multicolumn{1}{|r}{1,715194} &
\multicolumn{1}{|r|}{0,716792} \\ \hline
& \multicolumn{1}{|l}{$6$} & \multicolumn{1}{|r}{0,400865} &
\multicolumn{1}{|r}{0,687572} & \multicolumn{1}{|r}{2,884364} &
\multicolumn{1}{|r}{0,549937} & \multicolumn{1}{|r}{0,323697} &
\multicolumn{1}{|r|}{0,348838} \\ \hline
& \multicolumn{1}{|l}{$13.8$} & \multicolumn{1}{|r}{0,398971} &
\multicolumn{1}{|r}{0,680057} & \multicolumn{1}{|r}{2,911740} &
\multicolumn{1}{|r}{0,533798} & \multicolumn{1}{|r}{0,322405} &
\multicolumn{1}{|r|}{0,347582} \\ \hline
& & \multicolumn{1}{r}{} & \multicolumn{1}{r}{} & \multicolumn{1}{r}{} &
\multicolumn{1}{r}{} & \multicolumn{1}{r}{} & \multicolumn{1}{r|}{} \\ \hline
$V$ & \multicolumn{1}{|l}{$3$} & \multicolumn{1}{|r}{3,363723} &
\multicolumn{1}{|r}{4,250910} & \multicolumn{1}{|r}{5,066610} &
\multicolumn{1}{|r}{5,546754} & \multicolumn{1}{|r}{1,650507} &
\multicolumn{1}{|r|}{2,507119} \\ \hline
& \multicolumn{1}{|l}{$10.8$} & \multicolumn{1}{|r}{3,309473} &
\multicolumn{1}{|r}{4,240150} & \multicolumn{1}{|r}{5,062313} &
\multicolumn{1}{|r}{5,651837} & \multicolumn{1}{|r}{1,591938} &
\multicolumn{1}{|r|}{2,473710} \\ \hline
& \multicolumn{1}{|l}{$6$} & \multicolumn{1}{|r}{5,258678} &
\multicolumn{1}{|r}{5,632343} & \multicolumn{1}{|r}{1,073700} &
\multicolumn{1}{|r}{2,739517} & \multicolumn{1}{|r}{4,300681} &
\multicolumn{1}{|r|}{4,755329} \\ \hline
& \multicolumn{1}{|l}{$13.8$} & \multicolumn{1}{|r}{5,241915} &
\multicolumn{1}{|r}{5,634312} & \multicolumn{1}{|r}{1,065232} &
\multicolumn{1}{|r}{2,769906} & \multicolumn{1}{|r}{4,269058} &
\multicolumn{1}{|r|}{4,737755} \\ \hline
\end{tabular}%$$
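Note that the time pairs in Table 6 are separated by exactly the reported period: $10.8-3=13.8-6=7.8$. A quick check (values copied from Table 6, decimal commas converted to points) confirms that the $U$ profiles nearly repeat after one period:

```python
# U rows of Table 6 at times one period (~7.8) apart.
U_t3    = [0.284595, 0.317799, 0.377380, 0.604709, 1.623703, 0.691906]
U_t10_8 = [0.344555, 0.321243, 0.376194, 0.605486, 1.715194, 0.716792]
U_t6    = [0.400865, 0.687572, 2.884364, 0.549937, 0.323697, 0.348838]
U_t13_8 = [0.398971, 0.680057, 2.911740, 0.533798, 0.322405, 0.347582]

drift1 = max(abs(a - b) for a, b in zip(U_t3, U_t10_8))
drift2 = max(abs(a - b) for a, b in zip(U_t6, U_t13_8))
```

The repetition is only approximate (maximum pointwise drift below 0.1) because 7.8 is itself an estimate of the period.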
$$\begin{tabular}{|llllllll|}
\hline
\multicolumn{8}{|l|}{Table 7: Density values for periodic motion for quintic
B-spline \cite{b15}} \\ \hline
Density & \multicolumn{1}{|l}{$t$} & \multicolumn{1}{|l}{$x=0.0$} &
\multicolumn{1}{|l}{$x=0.2$} & \multicolumn{1}{|l}{$x=0.4$} &
\multicolumn{1}{|l}{$x=0.6$} & \multicolumn{1}{|l}{$x=0.8$} &
\multicolumn{1}{|l|}{$x=1.0$} \\ \hline
$U$ & \multicolumn{1}{|l}{$3$} & \multicolumn{1}{|l}{0,284657} &
\multicolumn{1}{|l}{0,317966} & \multicolumn{1}{|l}{0,377959} &
\multicolumn{1}{|l}{0,612881} & \multicolumn{1}{|l}{1,519483} &
\multicolumn{1}{|l|}{0,648434} \\ \hline
& \multicolumn{1}{|l}{$10.7$} & \multicolumn{1}{|l}{0,347747} &
\multicolumn{1}{|l}{0,321168} & \multicolumn{1}{|l}{0,376204} &
\multicolumn{1}{|l}{0,611218} & \multicolumn{1}{|l}{1,626310} &
\multicolumn{1}{|l|}{0,680742} \\ \hline
& \multicolumn{1}{|l}{$6$} & \multicolumn{1}{|l}{0,401741} &
\multicolumn{1}{|l}{0,706734} & \multicolumn{1}{|l}{2,716642} &
\multicolumn{1}{|l}{0,510302} & \multicolumn{1}{|l}{0,326204} &
\multicolumn{1}{|l|}{0,352411} \\ \hline
& \multicolumn{1}{|l}{$13.7$} & \multicolumn{1}{|l}{0,398904} &
\multicolumn{1}{|l}{0,691408} & \multicolumn{1}{|l}{2,769059} &
\multicolumn{1}{|l}{0,500480} & \multicolumn{1}{|l}{0,324523} &
\multicolumn{1}{|l|}{0,350579} \\ \hline
& & & & & & & \\ \hline
$V$ & \multicolumn{1}{|l}{$3$} & \multicolumn{1}{|l}{3,363896} &
\multicolumn{1}{|l}{4,251219} & \multicolumn{1}{|l}{5,066734} &
\multicolumn{1}{|l}{5,537413} & \multicolumn{1}{|l}{1,732740} &
\multicolumn{1}{|l|}{2,580615} \\ \hline
& \multicolumn{1}{|l}{$10.7$} & \multicolumn{1}{|l}{3,299664} &
\multicolumn{1}{|l}{4,233913} & \multicolumn{1}{|l}{5,056668} &
\multicolumn{1}{|l}{5,637796} & \multicolumn{1}{|l}{1,659946} &
\multicolumn{1}{|l|}{2,534846} \\ \hline
& \multicolumn{1}{|l}{$6$} & \multicolumn{1}{|l}{5,257254} &
\multicolumn{1}{|l}{5,606791} & \multicolumn{1}{|l}{1,137215} &
\multicolumn{1}{|l}{2,825295} & \multicolumn{1}{|l}{4,355469} &
\multicolumn{1}{|l|}{4,798749} \\ \hline
& \multicolumn{1}{|l}{$13.7$} & \multicolumn{1}{|l}{5,234725} &
\multicolumn{1}{|l}{5,613815} & \multicolumn{1}{|l}{1,119445} &
\multicolumn{1}{|l}{2,846165} & \multicolumn{1}{|l}{4,317357} &
\multicolumn{1}{|l|}{4,774541} \\ \hline
\end{tabular}%$$
### **Nonlinear Problem (Schnakenberg Model)**
The Schnakenberg model is a well-known reaction-diffusion model which is a simplified version of the Brusselator model and a relatively easy system for modeling the reaction-diffusion mechanism. There are many studies on this model in the literature. It was first modeled by Schnakenberg [@b4] and is given as:
$$\begin{tabular}{l}
$\dfrac{\partial U}{\partial t}=\dfrac{\partial ^{2}U}{\partial x^{2}}%
+\gamma (a-U+U^{2}V)$ \\
$\dfrac{\partial V}{\partial t}=d\dfrac{\partial ^{2}V}{\partial x^{2}}%
+\gamma (b-U^{2}V)$%
\end{tabular}
\label{r10}$$
where $U$ and $V$ denote the concentrations of the activator and the inhibitor respectively, $d$ is the diffusion coefficient, and $\gamma $, $a$ and $b$ are rate constants of the biochemical reactions. The oscillation problem is taken into account for the Schnakenberg model. Accordingly, the parameters for system (\[r10\]) are selected as $a=0.126779$, $b=0.792366$, $d=10$ and $\gamma =10^{4}.$ The problem's initial conditions:
$$\begin{aligned}
U(x,0) &=&0.919145+0.001\underset{j=1}{\overset{25}{\sum }}\frac{\cos (2\pi
jx)}{j} \label{r30} \\
V(x,0) &=&0.937903+0.001\underset{j=1}{\overset{25}{\sum }}\frac{\cos (2\pi
jx)}{j} \notag\end{aligned}$$
are given on the interval $[-1,1]$. The left and right boundary conditions and the additional boundary conditions are:$$\begin{tabular}{ll}
$U_{x}(x_{0},t)=0$ & $U_{x}(x_{N},t)=0,$ \\
$V_{x}(x_{0},t)=0$ & $V_{x}(x_{N},t)=0.$%
\end{tabular}%$$$$\begin{tabular}{ll}
$U_{xxx}(x_{0},t)=0$ & $U_{xxx}(x_{N},t)=0,$ \\
$V_{xxx}(x_{0},t)=0$ & $V_{xxx}(x_{N},t)=0.$%
\end{tabular}%$$Computations are performed up to $t=2.5$ for the space/time combinations given in Table 8. The obtained relative error values are given in Table 8 together with the results of the quintic B-spline collocation method [@b15].
$$\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{Table 8: Relative error values for $N=100$ at $t=2.5$}
\\ \hline
$\Delta t$ & Nu. of steps & $U$ & $U$\cite{b15} & $V$ & $V$\cite{b15} \\
\hline
$5\times 10^{-6}$ & 500000 & $0$ & $5.7160\times 10^{-14}$ & $5.4418\times
10^{-17}$ & $5.4564\times 10^{-14}$ \\ \hline
$5\times 10^{-5}$ & 50000 & $6.2202\times 10^{-17}$ & $1.5653\times 10^{-10}$
& $1.6794\times 10^{-16}$ & $1.1105\times 10^{-10}$ \\ \hline
$1\times 10^{-4}$ & 25000 & $1.7593\times 10^{-16}$ & $9.8744\times 10^{-10}$
& $2.4423\times 10^{-16}$ & $8.8599\times 10^{-10}$ \\ \hline
$1.20\times 10^{-4}$ & 20833 & $1.5668\times 10^{-16}$ & $1.5055\times
10^{-09}$ & $2.2996\times 10^{-16}$ & $1.3790\times 10^{-09}$ \\ \hline
$1.32\times 10^{-4}$ & 18939 & $1.4610\times 10^{-16}$ & $1.0564\times
10^{-01}$ & $2.9664\times 10^{-16}$ & $1.0301\times 10^{-01}$ \\ \hline
$1\times 10^{-3}$ & 2500 & $2.5895\times 10^{-14}$ & - & $2.0341\times
10^{-14}$ & - \\ \hline
$2\times 10^{-3}$ & 1250 & $5.4591\times 10^{-09}$ & - & $3.9448\times
10^{-09}$ & - \\ \hline
$5\times 10^{-3}$ & 500 & $5.4960\times 10^{-06}$ & - & $4.7003\times
10^{-06}$ & - \\ \hline
\end{tabular}%$$
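As a consistency check, the Fourier-sum initial data (\[r30\]) already satisfies the homogeneous Neumann conditions exactly, since each term $\cos(2\pi jx)$ has vanishing derivative at $x=\pm 1$ (and likewise for the third-derivative conditions). A short Python verification sketch:

```python
import math

def u0(x):
    # Initial profile U(x,0) from (r30); V(x,0) uses the same Fourier
    # sum with a different constant, so this check covers both.
    return 0.919145 + 0.001 * sum(math.cos(2 * math.pi * j * x) / j
                                  for j in range(1, 26))

def u0_x(x):
    # Exact first derivative of the Fourier sum.
    return -0.001 * sum(2 * math.pi * math.sin(2 * math.pi * j * x)
                        for j in range(1, 26))
```

Evaluating `u0_x` at $x=\pm 1$ returns zero up to floating-point rounding, confirming compatibility between the initial and boundary data.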
As can be seen from Table 8, the algorithm produces accurate results even when the time increment is larger. Figure 3 was drawn to show the oscillation movements for the values $\Delta t=5\times 10^{-5}$, $N=100$ and $N=200$. It is shown in Fig. 3 that the functions $U$ and $V$ make 9 oscillations for both $N=200$ and $N=100.$ This result, together with the references [@b10] and [@b11], shows that a finer mesh is necessary for accurate solutions.
$$\begin{tabular}{l}
\FRAME{itbpF}{6.3771in}{2.8297in}{0in}{}{}{figure3.png}{\special{language
"Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display
"USEDEF";valid_file "F";width 6.3771in;height 2.8297in;depth
0in;original-width 10.6458in;original-height 4.708in;cropleft "0";croptop
"1";cropright "1";cropbottom "0";filename 'Figure3.png';file-properties
"XNPEU";}} \\
Fig. 3: The oscillation movement for $N=100$ and $N=200$ in the moment $%
t=2.5 $%
\end{tabular}%$$
### **Nonlinear Problem (Gray-Scott Model)**
The Gray-Scott model is a reaction-diffusion system which models the formation, by a few chemical species, of certain spatial patterns that exist in nature. It was put forward by Gray and Scott [@b2] and the reaction-diffusion system is given as:
$$\begin{tabular}{l}
$\dfrac{\partial U}{\partial t}=\varepsilon _{1}\dfrac{\partial ^{2}U}{%
\partial x^{2}}-U^{2}V+f(1-U)$ \\
$\dfrac{\partial V}{\partial t}=\varepsilon _{2}\dfrac{\partial ^{2}V}{%
\partial x^{2}}+U^{2}V-(f+k)V$%
\end{tabular}
\label{r11}$$
In this section, the numerical method was tested with repetitive spot patterns on the Gray-Scott model. The parameters for system (\[r11\]) were chosen as in reference [@b22]$$\begin{tabular}{llll}
$\varepsilon _{1}=1,$ & $\varepsilon _{2}=0.01,$ & $a=9$ & $b=0.4$%
\end{tabular}%$$With these parameters, the initial conditions of system (\[r11\]) were taken as
$$\begin{tabular}{l}
$U(x,0)=1-\frac{1}{2}\sin ^{100}(\pi \frac{(x-L)}{2L})$ \\
$V(x,0)=\frac{1}{4}\sin ^{100}(\pi \frac{(x-L)}{2L})$%
\end{tabular}
\label{r31}$$
and the solutions were investigated in the interval $[-L,L]$ with $L=50$. For the space discretization $N=400$ split points were selected and for the time discretization a time step of $\Delta t=0.2$. The Dirichlet boundary conditions
$$\begin{tabular}{l}
$U(x_{0},t)=U(x_{N},t)=1,$ \\
$V(x_{0},t)=V(x_{N},t)=0$%
\end{tabular}%$$
together with the additional Neumann boundary conditions
$$\begin{tabular}{l}
$U_{x}(x_{0},t)=U_{x}(x_{N},t)=0,$ \\
$V_{x}(x_{0},t)=V_{x}(x_{N},t)=0$%
\end{tabular}%$$
are used. Numerical computations were made until $t=100$ and $t=500$ so that repetitive patterns were obtained. Under these initial conditions, two pulses were first created and separated from each other; as time evolved, each pulse then split into two again to form four pulses by time $t=1000,$ as shown in Fig. 5. This self-replicating process goes on to cover the spatial domain. These splitting movements of the functions $U$ and $V$ in time and space are presented in Figs. 4-5.
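The $\sin^{100}$ initial data (\[r31\]) is a narrow pulse centered at $x=0$ that satisfies the Dirichlet conditions at $x=\pm L$ exactly; a quick numerical check (Python sketch):

```python
import math

L = 50.0

def bump(x):
    # sin^100(pi (x - L) / (2 L)) from (r31): equals 1 at x = 0 and
    # vanishes at x = +/- L.
    return math.sin(math.pi * (x - L) / (2.0 * L)) ** 100

def U0(x):
    return 1.0 - 0.5 * bump(x)

def V0(x):
    return 0.25 * bump(x)
```

The pulse is strongly localized: already at $x=10$ the bump has decayed below $0.01$, which is why the subsequent dynamics starts from an essentially single localized perturbation.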
$$\begin{tabular}{l}
\begin{tabular}{l}
\FRAME{itbpF}{6.7205in}{2.802in}{0in}{}{}{figure4.png}{\special{language
"Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display
"USEDEF";valid_file "F";width 6.7205in;height 2.802in;depth
0in;original-width 10.5308in;original-height 4.3751in;cropleft "0";croptop
"1";cropright "1";cropbottom "0";filename 'Figure4.png';file-properties
"XNPEU";}}%
\end{tabular}
\\
Figure 4: The splitting process of repetitive spot pattern of waves for $%
t=100$ and $t=500$%
\end{tabular}%$$
$$\begin{tabular}{l}
\begin{tabular}{l}
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \FRAME{itbpF}{3.1946in}{2.4621in}{0in%
}{}{}{figure5.png}{\special{language "Scientific Word";type
"GRAPHIC";display "USEDEF";valid_file "F";width 3.1946in;height
2.4621in;depth 0in;original-width 5.8332in;original-height 4.3751in;cropleft
"0";croptop "1";cropright "1";cropbottom "0";filename
'Figure5.png';file-properties "XNPEU";}}%
\end{tabular}
\\
Figure 5: The splitting process of repetitive spot pattern of waves for $%
t=1000$%
\end{tabular}%$$
The intensity changes of the functions $U$ and $V$ in time and space are presented in Fig. 6 and Fig. 7, respectively. These spatial patterns, which are known as repetitive spot patterns, initially start with the splitting movement of two waves and appear to cover the whole domain by branching over time.
$$\begin{tabular}{l}
\FRAME{itbpF}{3.5587in}{3.2344in}{0in}{}{}{figure6.png}{\special{language
"Scientific Word";type "GRAPHIC";display "USEDEF";valid_file "F";width
3.5587in;height 3.2344in;depth 0in;original-width 7.0136in;original-height
4.875in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename
'Figure6.png';file-properties "XNPEU";}} \\
Fig. 6: Repetitive spot pattern of waves of the function U%
\end{tabular}%$$
$$\begin{tabular}{l}
\FRAME{itbpF}{3.6729in}{3.1021in}{0in}{}{}{figure7.png}{\special{language
"Scientific Word";type "GRAPHIC";display "USEDEF";valid_file "F";width
3.6729in;height 3.1021in;depth 0in;original-width 7.542in;original-height
5.4864in;cropleft "0";croptop "1";cropright "1";cropbottom "0";filename
'Figure7.png';file-properties "XNPEU";}} \\
Fig. 7: Repetitive spot pattern of waves of the function V%
\end{tabular}%$$
Discussion
----------
The proposed algorithm has been used for calculating numerical solutions of reaction-diffusion equation systems. Solutions of linear and nonlinear RD systems are shown on models of certain chemical problems: the Brusselator, Schnakenberg and Gray-Scott models are simulated satisfactorily by use of the suggested algorithm. The proposed TQB algorithm is an alternative to the more usual polynomial quintic B-spline collocation method (PQBCM). The results of the suggested algorithm are documented together with those obtained with the PQBCM and the Crank-Nicolson multigrid solver method (CN-MG) for the test problem. It can be seen from Tables 3-5 that the accuracy of the algorithm is almost the same as that of the PQBCM and better than that of the CN-MG. Solutions of the nonlinear problems, which in general have no analytical solutions, are given graphically. The model solutions are represented fairly and can be compared with the equivalent graphs given in the studies [@b9; @b10; @b11; @b15]. Use of trigonometric B-splines having continuity of order four allows us to obtain approximate functions of order four. Therefore, differential equations of order four can be solved numerically by using the trigonometric B-spline functions to obtain solutions with continuity of order four. Consequently, the TQB collocation method produces fairly acceptable results for numerical solutions of reaction-diffusion systems, and it is also recommended for finding solutions of other partial differential equations.
**Competing Interest**
The authors declare that they have no competing interests.
**Authors’ contributions**
AOT carried out the algorithm implementation and conducted test studies, participated in the sequence alignment and drafted the manuscript. NA conceived of the study, participated in developing algorithm for Numeric Solutions of trigonometric quintic B-splines and helped to draft the manuscript. ID participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.
**Acknowledgement**
The paper was presented at the International Conference on Applied Mathematics and Analysis (ICAMA2016) in Ankara.
Nikolis A., Numerical solutions of ordinary differential equations with quadratic trigonometric splines, Applied Mathematics E-Notes, 4(1995), 142-149.
Gray P. and Scott S.K., Autocatalytic reactions in the isothermal, continuous stirred tank reactor: oscillations and instabilities in the system A+2B $\rightarrow$ 3B, B $\rightarrow$ C, Chem. Eng. Sci., 39(1984), 1087-1097.
Chou C., Zhang Y.,Zhao R.and Nie Q., Numerical methods for stiff reaction-diffusion systems, Discrete and Continuous Dynamical Systems-Series B, 7(2007), 515-525.
Schnakenberg, J.Simple chemical reaction systems with limit cycle behavior, J.Theoret. Biol., 81(1979), 389-400.
Prigogine, I. and Lefever, R. Symmetry breaking instabilities in dissipative systems, J. Chem. Phys., 48(1968), 1695-1700.
Nikolis, A. and Seimenis, I. Solving dynamical systems with cubic trigonometric splines, Applied Mathematics E-notes, 5(2005), 116-123.
Zegeling, P.A and Kok, H.P. Adaptive moving mesh computations for reaction-diffusion systems, Journal of Computational and Applied Mathematics, 168(2004), 519-528.
Madzvamuse, A. Wathen, A.J. and Maini, P.K. A moving grid finite element method applied to a biological pattern generator, Journal of Computational Physics, 190(2003), 478-500.
Ruuth, S.J. Implicit-explicit methods for reaction-diffusion problems in pattern formation, Journal of Mathematical Biology, 34(1995), 148-176.
Sahin, A: Numerical solutions of the reaction-diffusion equations with B-spline finite element method. Dissertation, Eskişehir Osmangazi University, Eskişehir, Turkey (2009).
Hamid, N.N.A. Majid, A.A. and Ismail, A.I.M. Cubic trigonometric B-Spline applied to linear two-point boundary value problems of order,World Academy of Science, Engineering and Technology, 70(2010), 798-803.
Gupta, Y.and Kumar, M. A computer based numerical method for singular boundary value problems, International Journal of Computer Applications,30(1)(2011), 21-25,.
Abbas, M. Majid, A. A. Ismail, A. I. M and Rashid, A. The application of the cubic trigonometric B-spline to the numerical solution of the hyperbolic problems, Applied Mathematica and Computation, 239(2014) 74-88.
Abbas, M. Majid, A. A. Ismail, A. I. M and Rashid, A. Numerical method using cubic trigonometric B-spline tecnique for nonclassical diffusion problems, Abstract and applied analysis, 2014(2014),1-12.
Zin, S. M. Abbas, M. Majid, A.A and Ismail, A.I. M. A new trigonometric spline approach to numerical solution of generalized nonlinear Klien-Gordon equation, PLOS one, 9(5)(2014),1-9
Schoenberg, I. J. On trigonometric spline interpolation, J. Math. Mech.13(5)(1964), 795–825.
Craster, R.V. and Sassi, R. Spectral algorithms for reaction-diffusion equations,Technical Report. Note del Polo, No. 99 (2006).
---
abstract: 'A characteristic spectrum of relic gravitational radiation is produced by a period of “stringy inflation" in the early universe. This spectrum is unusual, because the energy-density rises rapidly with frequency. We show that correlation experiments with the two gravitational wave detectors being built for the Laser Interferometric Gravitational Observatory (LIGO) could detect this relic radiation, for certain ranges of the parameters that characterize the underlying string cosmology model.'
address:
- |
Department of Physics\
University of Wisconsin - Milwaukee\
PO Box 413\
Milwaukee, WI 53211, USA\
email: [email protected]
- |
Department of Physics\
Ben-Gurion University\
Beer-Sheva 84105, Israel\
email: [email protected]
author:
- Bruce Allen
- Ram Brustein
title: Detecting relic gravitational radiation from string cosmology with LIGO
---
Preprint Numbers: WISC-MILW-96-TH-34,BGU-PH-96/09
INTRODUCTION
============
Because the gravitational interaction is so weak, a stochastic background of gravitational radiation (the graviton background) decouples from the matter in the universe at very early times. For this reason, the stochastic background of gravitational radiation, which is in principle observable at the present time, carries with it a picture of the state of the universe at very early times, when energy densities and temperatures were very large.
The most interesting features of string theory are associated with its behavior at very high energies, near the Planck scale. Such high energies are unobtainable in present-day laboratories, and are unlikely to be reached for quite some time. They were, however, available during the very early history of the universe, so string theory can be probed by the predictions which it makes about that epoch.
Recent work has shown how the early universe might behave, if superstring theories are a correct description of nature [@1; @2]. One of the robust predictions of this “string cosmology" is that our present-day universe would contain a stochastic background of gravitational radiation [@bggv; @gg], with a spectrum which is quite different than that predicted by many other early-universe cosmological models [@grishchuk; @turner; @myreview]. In particular, the spectrum of gravitational waves predicted by string cosmology has rising amplitude with increasing frequency. This means that the radiation might have large enough amplitude to be observable by ground-based gravity-wave detectors, which operate at frequencies above $\approx 10
\> \rm Hz$. This also allows the spectrum to be consistent with observational bounds arising at $10^{-18} \> \rm Hz$ from observations of the Cosmic Background Radiation and at $10^{-8} \> \rm Hz$ from observation of millisecond pulsar timing residuals.
In this short paper, we examine the spectrum of radiation produced by string cosmology, and determine the region of parameter space for which this radiation would be observable by the (initial and advanced versions of the) LIGO detectors [@science92].
Stochastic background in string cosmology {#s:first}
=========================================
In models of string cosmology [@bggv], the universe passes through two early inflationary stages. The first of these is called the “dilaton-driven" period and the second is the “string" phase. Each of these stages produces stochastic gravitational radiation; the contribution of the dilaton-driven phase is currently better understood than that of the string phase.
In order to describe the background of gravitational radiation, it is conventional to use a spectral function $\Omega_{\rm GW}(f)$ which is determined by the energy density of the stochastic gravitational waves. This function of frequency is defined by $$\label{e:defomegagw}
\Omega_{\rm GW}(f) = {1 \over \rho_{\rm critical}} {d \rho_{\rm GW} \over d \ln f}.$$ Here $d \rho_{\rm GW}$ is the (present-day) energy density in stochastic gravitational waves in the frequency range $d \ln f$, and $\rho_{\rm critical}$ is the critical energy-density required to just close the universe. This is given by $$\label{e:crit}
\rho_{\rm critical} = {3 c^2 H_0^2 \over 8 \pi G} \approx 1.6 \times 10^{-8} \> {\rm h}_{100}^2 \>\> {\rm ergs/cm^3},$$ where the Hubble expansion rate $H_0$ is the rate at which our universe is currently expanding, $$\label{e:hubble}
H_0 = {\rm h}_{100} \> 100 \> {\rm km \> s^{-1} \> Mpc^{-1}} = 3.2 \times 10^{-18} \> {\rm h}_{100} \> {\rm sec^{-1}}.$$ Because $H_0$ is not known exactly, it is defined in terms of a dimensionless parameter ${\rm h}_{100}$ which is believed to lie in the range $1/2 < {\rm h}_{100} < 1$.
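For concreteness, the quoted numerical value of the critical energy density can be checked directly in CGS units (the physical constants below are standard values, not taken from the paper):

```python
import math

# CGS constants (standard values, assumed here)
c = 2.99792458e10      # speed of light, cm / s
G = 6.674e-8           # Newton's constant, cm^3 g^-1 s^-2
h100 = 1.0
H0 = h100 * 3.2e-18    # Hubble rate in s^-1, i.e. h100 * 100 km/s/Mpc

# Critical energy density 3 c^2 H0^2 / (8 pi G) in erg / cm^3
rho_crit = 3.0 * c**2 * H0**2 / (8.0 * math.pi * G)
```

For ${\rm h}_{100}=1$ this evaluates to about $1.6\times 10^{-8}\ {\rm ergs/cm^3}$, matching the value quoted in the text.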
The spectrum of gravitational radiation produced in the dilaton-driven and string phase was discussed in [@bggv]. In the simplest model, which we will use in this paper, it depends upon four parameters. The first pair of these are the frequency ${f_{\rm S}}$ and the fractional energy density $\Omega^{\rm S}_{\rm GW} $ produced at the end of the dilaton-driven phase. The second pair of parameters are the maximal frequency $f_1$ above which gravitational radiation is not produced and the maximum fractional energy density $ \Omega^{\rm max}_{GW} $, which occurs at that frequency. This is illustrated in Fig. \[f:spec\].
An approximate form for the spectrum is [@rb] $$\label{e:approx}
\Omega_{\rm GW}(f)=
\begin{cases}
\Omega^{\rm S}_{\rm GW} \left( f / f_{\rm S} \right)^{3} & f < f_{\rm S} \\
\Omega^{\rm S}_{\rm GW} \left( f / f_{\rm S} \right)^{\beta} & f_{\rm S} < f < f_1 \\
0 & f > f_1
\end{cases}$$ where $$\beta = {\log \left[ \Omega^{\rm max}_{\rm GW} / \Omega^{\rm S}_{\rm GW} \right] \over \log \left[ f_1 / f_{\rm S} \right]}$$ is the logarithmic slope of the spectrum produced in the string phase.
If we assume that there is no late entropy production and make reasonable choices for the number of effective degrees of freedom, then two of the four parameters may be determined in terms of the Hubble parameter $H_{\rm r}$ at the onset of radiation domination immediately following the string phase of expansion [@bgv], $$f_1 = 1.3 \times 10^{10} \> {\rm Hz} \left( { H_{\rm r} \over 5 \times 10^{17} \> {\rm GeV}} \right)^{1/2}$$ and $$\Omega_{\rm GW}^{\rm max} = 1 \times 10^{-7} \> {\rm h}_{100}^{-2} \left( { H_{\rm r} \over 5 \times 10^{17} \> {\rm GeV}} \right)^{2}.$$ More complicated models and spectra were discussed in [@mg; @maggiore; @occ].
The ratios $ \left( \Omega^{\rm S}_{\rm GW}/\Omega_{\rm GW}^{\rm max}
\right)$ and $ \left( f_{\rm S}/f_1\right)$ are determined by the basic physical parameters of string cosmology models, the values of the Hubble parameter and the string coupling parameter at the end of the dilaton-driven phase and the onset of the string phase [@bggv; @rb; @v96].
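The four parameters $(f_{\rm S}, \Omega^{\rm S}_{\rm GW}, f_1, \Omega^{\rm max}_{\rm GW})$ fully specify the approximate spectrum. A minimal Python sketch, assuming a cubic dilaton-driven branch below $f_{\rm S}$ (as in [@bggv; @rb]) and a single power law joining $(f_{\rm S}, \Omega^{\rm S}_{\rm GW})$ to $(f_1, \Omega^{\rm max}_{\rm GW})$:

```python
import math

def omega_gw(f, fS, OmS, f1, Omax):
    """Two-branch approximation to the string-cosmology spectrum.

    f^3 dilaton branch below fS; power law of logarithmic slope
    beta = log(Omax/OmS) / log(f1/fS) between fS and f1; no radiation
    above f1. A sketch of the four-parameter spectrum, not a model fit.
    """
    if f <= 0.0 or f > f1:
        return 0.0
    if f < fS:
        return OmS * (f / fS) ** 3
    beta = math.log(Omax / OmS) / math.log(f1 / fS)
    return OmS * (f / fS) ** beta
```

By construction the function passes through $\Omega^{\rm S}_{\rm GW}$ at $f_{\rm S}$ and $\Omega^{\rm max}_{\rm GW}$ at $f_1$, and rises with frequency whenever $\Omega^{\rm max}_{\rm GW} > \Omega^{\rm S}_{\rm GW}$.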
Detecting a stochastic background {#s:second}
=================================
A number of authors [@mich; @chris; @flan; @myreview] have shown how one can use a network of two or more gravitational wave antennae to detect a stochastic background of gravitational radiation. The basic idea is to correlate the signals from separated detectors, and to search for a correlated strain produced by the gravitational-wave background, which is buried in the intrinsic instrumental noise. It has been shown by these authors that after correlating signals for time $T$ (we take $T=10^7 \> {\rm sec} = 3 \> {\rm months}$) the ratio of “Signal" to “Noise" (squared) is given by an integral over frequency $f$: $$\label{e:sovern}
\left( {S \over N} \right)^2 = {9 H_0^4 \over 50 \pi^4} \> T \int_0^\infty df \> {\gamma^2(f) \> \Omega_{\rm GW}^2(f) \over f^6 \> P_1(f) \> P_2(f)}.$$
In order to detect a stochastic background with $90\%$ confidence the ratio $S \over N$ needs to be at least $1.65$. In this equation, several different functions appear, which we now define. The instrument noise in the detectors is described by the one-sided noise power spectral densities $P_i(f)$. The LIGO project is building two identical detectors, one in Hanford Washington and one in Livingston Louisiana, which we will refer to as the “initial" detectors. After several years of operation, these detectors will be upgraded to so-called “advanced" detectors. Since the two detectors are identical in design, $P_1(f)=P_2(f)$. The design goals for the detectors specify these functions [@science92]. They are shown in Fig. \[f:noise\]. The next quantity which appears is the overlap reduction function $\gamma(f)$. This function is determined by the relative locations and orientations of the arms of the two detectors, and is identical for both the initial and advanced LIGO detectors. For the pair of LIGO detectors $$\gamma(f) = -0.124842 \> j_{0}(x) - 2.90014 \> {j_{1}(x) \over x} + 3.00837 \> {j_{2}(x) \over x^2},$$ where the $j_i$ are spherical Bessel functions. The dimensionless frequency variable is $x=2 \pi f \tau$ with $\tau=10.00 \> \rm msec$ being the light-travel-time between the two LIGO detector sites. This function is shown in Fig. \[f:overlap\].
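The overlap reduction function is straightforward to evaluate using closed-form spherical Bessel functions. The sketch below assumes the $j_1$ and $j_2$ terms carry $1/x$ and $1/x^2$ weights; this choice reproduces the known zero-frequency value $\gamma(0)\approx -0.89$ for the Hanford-Livingston pair:

```python
import math

tau = 0.01  # light-travel time between the LIGO sites, in seconds

# Closed-form spherical Bessel functions j0, j1, j2
def j0(x): return math.sin(x) / x
def j1(x): return math.sin(x) / x**2 - math.cos(x) / x
def j2(x): return (3.0 / x**3 - 1.0 / x) * math.sin(x) - 3.0 * math.cos(x) / x**2

def gamma(f):
    """Overlap reduction function for the Hanford-Livingston pair."""
    x = 2.0 * math.pi * f * tau
    if x < 1e-4:
        # Small-argument limits: j0 -> 1, j1/x -> 1/3, j2/x^2 -> 1/15
        return -0.124842 - 2.90014 / 3.0 + 3.00837 / 15.0
    return (-0.124842 * j0(x)
            - 2.90014 * j1(x) / x
            + 3.00837 * j2(x) / x**2)
```

As expected, $|\gamma(f)|$ falls off well below unity above a few hundred Hz, which is why the separation between the two sites suppresses sensitivity to high-frequency backgrounds.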
Equation (\[e:sovern\]) allows us to assess the detectability (using initial or advanced LIGO) of any particular stochastic background $\Omega_{\rm GW}(f)$.
Detecting a string cosmology stochastic background {#s:third}
==================================================
Making use of the prediction from string cosmology, we may use equation (\[e:sovern\]) to assess the detectability of this stochastic background. For any given set of parameters we may numerically evaluate the signal to noise ratio $S/N$; if this value is greater than $1.65$ then with at least 90% confidence, the background can be detected by LIGO. The regions of detectability in parameter space are shown in Fig. \[f:init\] for the initial LIGO detectors, and in Fig. \[f:adva\] for the advanced LIGO detectors. For these figures we have assumed ${\rm h}_{100}=0.65$ and $H_{\rm r}=5 \times 10^{17} \>
{\rm GeV}$. The observable regions for different values of these parameters can be obtained by simple scaling of the presented results.
At the moment, the most restrictive observational constraint on the spectral parameters comes from the standard model of big-bang nucleosynthesis (NS) [@ns1]. This restricts the total energy density in gravitons to less than that of approximately one massless degree of freedom in thermal equilibrium. This bound implies that $$\label{nucleo}
\int \Omega_{\rm GW}(f) \> d \ln f = {1 \over 3} \Omega_{\rm GW}^{\rm S} + {\Omega_{\rm GW}^{\rm max} - \Omega_{\rm GW}^{\rm S} \over \beta} < 0.7 \times 10^{-5} \> {\rm h}^{-2}_{100},$$ where we have assumed an allowed $N_\nu=4$ at NS, and have substituted in the spectrum (\[e:approx\]). This bound is shown on Figs. \[f:init\],\[f:adva\]. We also show the weaker “Dilaton Only" bound, assuming NO stochastic background is produced during the (more poorly-understood) string phase of expansion: $$\Omega_{\rm GW}^{\rm S} < 2.1 \times 10^{-5} \> {\rm h}^{-2}_{100}.$$ This is obtained by setting $f_1=f_{\rm S}$ in the previous equation, i.e. assuming that $\Omega_{\rm GW}$ vanishes for ${f_{\rm S}} <f < f_1$. We note that if the “Dilaton + String" spectrum is correct, then the NS bounds rule out any hopes of observation by initial LIGO. On the other hand, in the “Dilaton Only" case, a detectable background is not ruled out by NS bounds; it would be observable if the spectral peak falls into the detection bandpass between 50 and 200 Hz (figure 7 of [@myreview]).
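The factor of 3 between the two quoted bounds follows from the cubic dilaton branch: integrating $\Omega^{\rm S}_{\rm GW}(f/f_{\rm S})^3$ over $d\ln f$ up to $f_{\rm S}$ gives exactly $\Omega^{\rm S}_{\rm GW}/3$. A numerical sanity check (Python sketch, assuming the cubic form):

```python
import math

# Integrate Om_S * (f/fS)^3 d(ln f) up to fS by the midpoint rule in
# u = ln f; the exact answer is Om_S / 3, so the NS limit 0.7e-5 h^-2
# on the integral gives Om_S < 2.1e-5 h^-2 in the "Dilaton Only" case.
fS, OmS = 100.0, 1.0
n = 20000
lo, hi = math.log(fS * 1e-6), math.log(fS)  # lower cutoff is negligible
du = (hi - lo) / n
total = 0.0
for i in range(n):
    f = math.exp(lo + (i + 0.5) * du)
    total += OmS * (f / fS) ** 3 * du
```

Here `total` recovers $\Omega^{\rm S}_{\rm GW}/3$ to high accuracy, and $3 \times 0.7 \times 10^{-5} = 2.1 \times 10^{-5}$, consistent with the text.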
Because the function $\left(\frac{\gamma(f)}{P(f)}\right)^2$ decays rapidly at high and low frequencies, the asymptotic behavior of the 90% confidence contours for the advanced LIGO detector in the high and low $f_{\rm S}$ regions can be estimated analytically as follows $$(\Omega_{\rm GW}^{\rm S})_{90\%} = 7.4 \times 10^{-10} \> {\rm h}_{100}^{-2} \> (f_{\rm S}/100 \> {\rm Hz})^{3}, \qquad f_{\rm S} \gg 100 \> {\rm Hz},$$ $$(\Omega_{\rm GW}^{\rm S})_{90\%} = 2.6 \times 10^{-11} \> {\rm h}_{100}^{-2} \> (f_{\rm S}/20 \> {\rm Hz})^{0.4}, \qquad f_{\rm S} \ll 20 \> {\rm Hz}.$$ Similar estimates may be obtained for the initial LIGO detector.
Conclusion
==========
In this short paper, we have shown how data from the the initial and advanced versions of LIGO will be able to constrain string cosmology models of the early universe. In principle, this might also constrain the fundamental parameters of superstring theory. The initial LIGO is sensitive only to a narrow region of parameter space and is only marginally above the required sensitivity, while the advanced LIGO detector has far better detection possibilities. The simultaneous operation of other types of gravitational wave detectors which operate at higher frequencies, such as bar and resonant designs, ought to provide additional increase in sensitivity and therefore further constrain the parameter space.
This work has been partially supported by the National Science Foundation grant PHY95-07740 and by the Israel Science Foundation administered by the Israel Academy of Sciences and Humanities.
G. Veneziano, Phys. Lett. [**B265**]{} (1991) 287.

M. Gasperini and G. Veneziano, Astropart. Phys. [**1**]{} (1993) 317; Mod. Phys. Lett. [**A8**]{} (1993) 3701.

R. Brustein, M. Gasperini, M. Giovannini and G. Veneziano, Phys. Lett. [**B361**]{} (1995) 45.

M. Gasperini and M. Giovannini, Phys. Rev. [**D47**]{} (1993) 1519.

L.P. Grishchuk, Sov. Phys. JETP [**40**]{} (1975) 409.

M. S. Turner, [*Detectability of inflation produced gravitational waves*]{}, preprint Fermilab-pub-96-169-A (astro-ph/9607066).

B. Allen, [*The stochastic gravity-wave background: sources and detection*]{}, in Proceedings of the Les Houches School on Astrophysical Sources of Gravitational Radiation, Springer-Verlag, 1996.

A. Abramovici, et al., Science [**256**]{} (1992) 325.

R. Brustein, [*Spectrum of cosmic gravitational wave background*]{}, preprint BGU-PH-96-08 (hep-th/9604159).

R. Brustein, M. Gasperini and G. Veneziano, [*Peak and endpoint of the relic graviton background in string cosmology*]{}, preprint CERN-TH-96-37 (hep-th/9604084).

M. Gasperini, [*Relic gravitons from the pre-big-bang: what we know and what we do not know*]{}, preprint CERN-TH-96-186 (hep-th/9607146).

A. Buonanno, M. Maggiore and C. Ungarelli, [*Spectrum of relic gravitational waves in string cosmology*]{}, preprint IFUP-TH-25-96 (gr-qc/9605072).

M. Galluccio, M. Litterio and F. Occhionero, [*Graviton spectra in string cosmology*]{}, Rome preprint (gr-qc/9608007).

G. Veneziano, [*String cosmology and relic gravitational radiation*]{}, preprint CERN-TH-96-37 (hep-th/9606119).

P. Michelson, Mon. Not. Roy. Astron. Soc. [**227**]{} (1987) 933.

N. Christensen, Phys. Rev. [**D46**]{} (1992) 5250.

E. Flanagan, Phys. Rev. [**D48**]{} (1993) 2389. $\quad$ Note that the second term on the right hand side of equation (b6) should read $-10 j_1(\alpha)$ rather than $-2 j_1(\alpha)$, and that the sliding delay function shown in Figure 2 on page 2394 is incorrect.

V. F. Schwartzmann, JETP Lett. [**9**]{} (1969) 184; T. Walker et al., Ap. J. [**376**]{} (1991) 51.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'About half of nearby galaxies have a central surface brightness $\ge$1 magnitude below that of the sky. The overall properties of these low-surface-brightness galaxies (LSBGs) remain understudied, and in particular we know very little about their massive black hole population. This gap must be closed to determine the frequency of massive black holes at $z=0$ as well as to understand their role in regulating galaxy evolution. Here we investigate the incidence and intensity of nuclear, accretion-powered X-ray emission in a sample of 32 nearby LSBGs with the *Chandra X-ray Observatory*. A nuclear X-ray source is detected in 4 galaxies (12.5%). Based on an X-ray binary contamination assessment technique developed for normal galaxies, we conclude that the detected X-ray nuclei indicate low-level accretion from massive black holes. The active fraction is consistent with that expected from the stellar mass distribution of the LSBGs, but not their total baryonic mass, when using a scaling relation from an unbiased X-ray survey of normal galaxies. This suggests that their black holes co-evolved with their stellar population. In addition, the apparent agreement nearly doubles the number of galaxies available within $\sim$100 Mpc for which a measurement of nuclear activity can efficiently constrain the frequency of black holes as a function of stellar mass. We conclude by discussing the feasibility of measuring this occupation fraction to a few percent precision below $\lesssim 10^{10} M_{\odot}$ with high-resolution, wide-field X-ray missions currently under consideration.'
author:
- 'Edmund J. Hodges-Kluck'
- Elena Gallo
- Anil Seth
- Jenny Greene
- Vivienne Baldassare
title: 'Nuclear X-ray Activity in Low-Surface-Brightness Galaxies: Prospects for Constraining the Local Black Hole Occupation Fraction with a Chandra Successor Mission'
---
Introduction {#section.intro}
============
A census of massive black holes (MBHs) in the nuclei of local galaxies is an important quantity for several reasons. First, it provides the present-day boundary condition (the “fossil record”) on models for the formation and growth of MBHs [@volonteri12], and on behavior during galaxy mergers. Second, to the extent that MBHs co-evolve with their host galaxies [@kormendy13], it probes the importance of “feedback” in regulating galaxy growth. Third, the presence of an MBH is relevant to understanding stellar and gas dynamics in galactic nuclei even without feedback. Fourth, it is relevant to source rates from gravitational wave observatories and other probes of physics in strong gravity.
The local frequency of nuclear MBHs can be defined in terms of the occupation fraction ([$f_{\text{occ}}$]{}) which is the fraction of galaxies with nuclear MBHs regardless of their activity. In practice, [$f_{\text{occ}}$]{} cannot be reliably measured because of the limitations of different methods. Direct dynamical measurements (using stars or gas) are the gold standard, but existing samples are very biased relative to the galaxy population [@vandenbosch2015]. Meanwhile, the “active” fraction ([$f_{\text{active}}$]{}) provides only a lower limit to [$f_{\text{occ}}$]{}, and can be defined in different ways (e.g., through optical line ratios, broad optical lines, X-ray activity, bolometric luminosity, etc.).
Despite their limitations, statistical analyses with these methods have led to the conclusion that [$f_{\text{occ}}$]{}$\approx 1$ for large galaxies ($\log M_* \gtrsim 10$). On the other hand, most galaxies are smaller than this, and here [$f_{\text{occ}}$]{} is poorly known. This is largely because their MBHs are less massive [@gultekin09; @kormendy13], making them hard to detect dynamically, although recent measurements suggest a high [$f_{\text{occ}}$]{} [but see @nguyen2018; @nguyen2019]. Detecting accretion in these objects is challenging due to the presence of star formation and nuclear star clusters (NSCs), which are increasingly common in smaller galaxies [@seth08]. Using nuclear X-ray sources to trace MBHs, @miller15 found that [$f_{\text{occ}}$]{}$\ge$27% over a mass range $8 < \log{M_*} < 11.5$, not ruling out 100% even for small galaxies. Meanwhile, using spatially resolved optical spectroscopy to account for the contribution of starlight to the diagnostic line ratios, @trump15 argued that [$f_{\text{occ}}$]{} among low-mass galaxies is 10% of that among the higher mass ones. The apparent inconsistency of these approaches indicates that more work is necessary to understand systematic effects and obtain a reliable [$f_{\text{occ}}$]{} below $\log{M_*} \approx 10$.
An additional, potentially complicating, factor is that many small galaxies have a surface brightness fainter than that of the night sky [low surface brightness galaxies, or LSBGs; @impey97; @vollmer13]. These galaxies may make up about half of nearby galaxies by number, but they are under-represented in catalogs and almost completely unexplored with regard to their MBH population.
LSBGs include galaxies of all types and with a large range of masses, but differ from their “normal” counterparts in a few ways. Notably, they tend to have very large gas fractions [up to 95%; @schombert01] and mass-to-light ratios, as well as low star-formation rates. LSBGs are also numerous, accounting for $\sim$50% of nearby galaxies [@mcgaugh96; @bothun97; @dalcanton97; @minchin04; @haberzettl07], and this makes them important for measurements of [$f_{\text{occ}}$]{}. The formation of LSBGs remains an open and important question, but of particular importance here is that there appears to be no reason why they could not host MBHs at a similar rate as normal galaxies of the same dynamical mass, and their relatively slow evolution and lack of neighbors [@galaz11] may make them especially useful to distinguish between the “light” ($10^2-10^3 M_{\odot}$ Population III remnants) and “heavy” ($10^4-10^6 M_{\odot}$ direct-collapse black holes) MBH seed hypotheses [@volonteri12]. There are few studies of MBHs in LSBGs, but there are hints that they tend to fall below the $M-\sigma$ relation, even in well developed bulges [@ramya11; @subramanian16]. They are particularly under-studied in the X-rays; only a handful have been observed, and these were selected based on optical activity [@das09]. The majority of the work to identify AGNs in LSBGs has been done with optical line ratios [@schombert98; @mei09; @galaz11].
Yet X-rays are important. High-resolution X-rays are sensitive probes of very low level accretion onto MBHs and relatively insensitive to dust absorption. The traditional cutoff for “activity” is at $L_{\text{bol}}/L_{\text{Edd}} > 10^{-3}$, with “low luminosity” AGNs at $L_{\text{bol}}/L_{\text{Edd}} > 10^{-5}$, but X-rays can probe down to $L_{\text{bol}}/L_{\text{Edd}} < 10^{-9}$ in local, massive systems. The main contaminating source of nuclear X-rays is from low- and high-mass X-ray binaries (XRBs), but a corrected [$f_{\text{active}}$]{} remains one of the best ways to search for nuclear MBHs. This formed the basis of the *Chandra* X-ray Observatory **A**GN **MU**ltiwavelength **S**urvey of **E**arly-type galaxies programs [AMUSE; @gallo08; @miller12], as well as several subsequent studies that expand to late-type galaxies [@foord17; @she17; @lee19]. One important result from these works is that there appears to be a simple relationship between $L_X$ and $M_*$ with some intrinsic scatter. The number of X-ray detected galaxies can then be compared to the number expected from this relation to constrain [$f_{\text{occ}}$]{} [a framework developed by @miller15].
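The Eddington-scaled thresholds quoted above are straightforward to evaluate; a minimal sketch follows (the $1.26\times10^{38}$ erg s$^{-1}$ $M_\odot^{-1}$ coefficient is the standard Eddington value for solar-composition gas, and the example mass and luminosity are illustrative, not taken from this paper):

```python
def eddington_luminosity(m_bh_msun):
    """Eddington luminosity in erg/s for a black hole mass in solar masses
    (standard coefficient for solar-composition gas)."""
    return 1.26e38 * m_bh_msun

def eddington_ratio(l_bol, m_bh_msun):
    """Bolometric luminosity as a fraction of the Eddington luminosity."""
    return l_bol / eddington_luminosity(m_bh_msun)

# Illustrative numbers: a 1e6 Msun MBH radiating at L_bol = 1e39 erg/s
ratio = eddington_ratio(1e39, 1e6)
# ratio ~ 8e-6: below the 1e-5 "low luminosity" cutoff quoted above,
# yet still well within reach of X-ray accretion searches
```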
Thus, both to determine the X-ray nuclear properties of LSBGs, which have barely been studied, and to assess their potential for constraining [$f_{\text{occ}}$]{} and studying MBHs in an unbiased sample, we present a [*Chandra*]{} survey of the nuclear activity in 32 LSBGs. The immediate scientific goal is to characterize the nuclear activity in LSBGs, as existing samples are highly biased [e.g., @vandenbosch2015], and it is timely to assess their utility as targets for the high-resolution X-ray mission concepts currently under study.
The remainder of this paper is organized as follows: Section \[section.sample\] describes the sample, Section \[section.data\] describes the observations and source detection method, and Section \[section.xrbs\] assesses the likelihood of contamination by X-ray binaries (XRBs). Section \[section.results\] presents the main result and discusses [$f_{\text{active}}$]{} in the context of other X-ray and LSBG studies. We argue that LSBGs are useful probes of [$f_{\text{occ}}$]{} and present an observing strategy that includes them in Section \[section.lynx\]. We close by summarizing our findings in Section \[section.summary\].
The distances adopted in this paper are based on the recessional velocity from the HyperLeda database [@makarov14] corrected for Virgo infall, with a Hubble constant of 69.8 km s$^{-1}$ Mpc$^{-1}$.
[lllrrrrrrrr]{} Galaxy & Type & Optical class & R.A. (deg) & Decl. (deg) & $d$ (Mpc) & $\log M_{\text{HI}}/M_{\odot}$ & $\mu_0(g)$ (mag arcsec$^{-2}$) & $M_g$ (mag) & $g-r$ & $\log M_*/M_{\odot}$\
LSBC F570-04 & Sa & N & 168.23874 & 18.762 & 8.6 & ... & 23.0$\pm$0.2 & -13.4$\pm$0.1 & 0.63$\pm$0.07 & 7.9$\pm$0.1\
LSBC F574-08 & S0 & N & 188.15065 & 18.023 & 14.1 & ... & 21.2$\pm$0.1 & -15.6$\pm$0.1 & 0.61$\pm$0.04 & 8.7$\pm$0.1\
LSBC F574-07 & S0 & ... & 189.87597 & 18.368 & 14.1 & ... & 23.6$\pm$0.3 & -14.3$\pm$0.1 & 0.58$\pm$0.08 & 8.2$\pm$0.1\
LSBC F574-09 & S0 & N & 190.5857 & 17.510 & 14.2 & ... & 21.1$\pm$0.2 & -15.3$\pm$0.1 & 0.61$\pm$0.05 & 8.6$\pm$0.1\
IC 3605 & Sd/Irr& N & 189.5873 & 19.541 & 14.3 & 8.55 & 22.0$\pm$0.1 & -15.1$\pm$0.1 & 0.19$\pm$0.06 & 7.9$\pm$0.1\
UGC 08839 & Im & H[ii]{} & 208.85398 & 17.795 & 17.6 & 10.02 & 23.8$\pm$0.3 & -16.2$\pm$0.1 & 0.25$\pm$0.04 & 8.4$\pm$0.1\
UGC 05675 & Sm & N & 157.12501 & 19.562 & 18.8 & 9.51 & 23.7$\pm$0.3 & -15.5$\pm$0.1 & 0.22$\pm$0.06 & 8.1$\pm$0.1\
UGC 05629 & Sm & N & 156.05453 & 21.050 & 21.6 & 10.41 & 23.8$\pm$0.3 & -16.2$\pm$0.1 & 0.47$\pm$0.05 & 8.8$\pm$0.1\
LSBC F750-04 & Sa & ... & 356.08417 & 10.118 & 23.8 & 8.39 & 23.0$\pm$0.2 & -15.5$\pm$0.1 & 0.39$\pm$0.08 & 8.3$\pm$0.1\
LSBC F570-06 & S0 & N & 169.40918 & 17.818 & 24.8 & ... & 22.3$\pm$0.1 & -16.8$\pm$0.1 & 0.67$\pm$0.04 & 9.3$\pm$0.1\
UGC 06151 & Sm & ... & 166.48456 & 19.826 & 24.8 & 8.79 & 22.2$\pm$0.2 & -17.2$\pm$0.1 & 0.41$\pm$0.04 & 9.0$\pm$0.1\
LSBC F544-01 & Sb & ... & 30.33708 & 19.981 & 35.4 & 9.03 & 23.8$\pm$0.3 & -16.2$\pm$0.1 & 0.33$\pm$0.09 & 8.5$\pm$0.1\
LSBC F612-01 & Sm & H[ii]{} & 22.56423 & 14.678 & 36.8 & 9.00 & 23.7$\pm$0.3 & -16.0$\pm$0.1 & 0.34$\pm$0.09 & 8.5$\pm$0.1\
UGC 09024 & S? & H[ii]{} & 211.66891 & 22.070 & 38.8 & 9.35 & 20.8$\pm$0.1 & -18.1$\pm$0.1 & 0.32$\pm$0.04 & 9.3$\pm$0.1\
LSBC F743-01 & Sd & ... & 319.68917 & 8.367 & 38.8 & 9.00 & 23.2$\pm$0.2 & -16.6$\pm$0.1 & 0.36$\pm$0.08 & 8.7$\pm$0.1\
LSBC F576-01 & Sc & H[ii]{} & 198.422 & 22.626 & 51.7 & 9.08 & 21.6$\pm$0.1 & -18.1$\pm$0.1 & 0.54$\pm$0.05 & 9.7$\pm$0.1\
LSBC F583-04 & Sc & N & 238.03887 & 18.798 & 57.4 & 8.90 & 23.9$\pm$0.3 & -17.5$\pm$0.1 & 0.46$\pm$0.07 & 9.2$\pm$0.1\
UGC 05005 & Im & H[ii]{} & 141.12242 & 22.275 & 57.8 & 10.98 & 23.7$\pm$0.3 & -18.3$\pm$0.1 & 0.25$\pm$0.05 & 9.2$\pm$0.1\
UGC 1230 & Sm & H[ii]{} & 26.38542 & 25.521 & 57.8 & 9.70 & 23.6$\pm$0.3 & -18.4$\pm$0.1 & 0.40$\pm$0.06 & 9.5$\pm$0.1\
UGC 04669 & Sm & H[ii]{} & 133.77864 & 18.935 & 61.2 & 9.31 & 21.9$\pm$0.1 & -19.0$\pm$0.1 & 0.22$\pm$0.05 & 9.5$\pm$0.1\
UGC 05750 & SBd & H[ii]{} & 158.93802 & 20.990 & 63.2 & 10.93 & 22.5$\pm$0.2 & -18.1$\pm$0.1 & 0.23$\pm$0.07 & 9.1$\pm$0.1\
UGC 4422 & SBc & AGN & 126.9251 & 21.479 & 64.6 & 9.91 & 19.8$\pm$0.1 & -21.4$\pm$0.1 & 0.57$\pm$0.01 & 11.0$\pm$0.1\
UGC 09927 & S0 & AGN & 234.11572 & 22.500 & 67.9 & ... & 19.11$\pm$0.04 & -19.9$\pm$0.1 & 0.81$\pm$0.03 & 10.7$\pm$0.1\
UGC 10017 & Im & N & 236.39031 & 21.420 & 69.1 & 10.74 & 23.5$\pm$0.3 & -18.1$\pm$0.1 & 0.36$\pm$0.07 & 9.3$\pm$0.1\
UGC 10015 & Sd & H[ii]{} & 236.41345 & 21.020 & 69.6 & 10.73 & 19.69$\pm$0.05 & -18.8$\pm$0.1 & 0.21$\pm$0.06 & 9.4$\pm$0.1\
UGC 3059 & Sd & AGN & 67.42687 & 3.682 & 69.6 & 10.00 & 22.4$\pm$0.2 & -21.2$\pm$0.1 & 0.23$\pm$0.05 & 10.3$\pm$0.1\
UGC 416 & Sd & H[ii]{} & 9.88753 & 3.933 & 70.2 & 9.93 & 22.4$\pm$0.2 & -18.5$\pm$0.1 & 0.45$\pm$0.06 & 9.6$\pm$0.1\
UGC 11578 & Sd & H[ii]{} & 307.6785 & 9.190 & 70.6 & 9.98 & 22.3$\pm$0.2 & -19.2$\pm$0.1 & 0.33$\pm$0.04 & 9.7$\pm$0.1\
UGC 12845 & Sd & AGN & 358.9245 & 31.900 & 74.3 & 9.90 & 22.0$\pm$0.2 & -20.1$\pm$0.1 & 0.41$\pm$0.03 & 10.2$\pm$0.1\
UGC 11754 & Scd & H[ii]{} & 322.38125 & 27.321 & 74.5 & 9.90 & 20.1$\pm$0.1 & -19.4$\pm$0.1 & 0.47$\pm$0.03 & 10.3$\pm$0.1\
LSBC F570-05 & S0 & H[ii]{} & 171.3237 & 17.808 & 74.5 & 9.61 & 20.5$\pm$0.1 & -19.5$\pm$0.1 & 0.67$\pm$0.04 & 10.4$\pm$0.1\
UGC 1455 & Sbc & AGN & 29.7000 & 24.892 & 76.5 & 9.97 & 19.36$\pm$0.04 & -21.1$\pm$0.1 & 0.82$\pm$0.02 & 11.2$\pm$0.1
Sample {#section.sample}
======
Parent Sample
-------------
We start with the @schombert92 LSBG catalog, which was produced by searching the Palomar Sky Survey plates in the 3850-5500Å band for galaxies fainter than the night sky. The advantage of using the @schombert92 sample is that most of the galaxies have cataloged H [i]{} masses, which is important considering the tendency of LSBGs to have larger gas fractions than normal galaxies. However, the sample may be unrepresentative in a few ways. First, it does not include a strict cutoff in surface brightness and includes galaxies with “normal” central surface brightness but substantial, extended, LSB features. Second, the galaxies are almost all within $z<0.05$. @rosenbaum09 found that LSBGs selected from the SDSS within this range tend to be dwarfs, whereas those at larger redshifts are luminous disks due to selection bias. Thus, we compared the @schombert92 galaxies to more recent samples drawn from deeper exposures.
There is no single definition of an LSBG. The most common definition is an object whose central surface brightness $\mu_0 > 22$ or 23 mag arcsec$^{-2}$ [@impey01]. For example, @rosenbaum09 and @galaz11 selected LSBGs with $\mu_0 > 22.5$ mag arcsec$^{-2}$ from the Sloan Digital Sky Survey [SDSS; @sdss12]. Other authors, such as @greco2018, define LSBGs based on their average surface brightness $\bar{\mu}$, which includes nucleated galaxies with a “normal” $\mu_0$ but very low surface brightness disks [@bothun87; @sprayberry95]. A variant on this approach is to define LSBGs based on the $\mu_0$ from a model profile after excluding the nuclear star cluster or active nucleus [e.g., @graham2003].
Compared to these samples, the @schombert92 galaxies are closer to Earth and tend toward the brighter end of the LSBG distribution, but are otherwise representative. Most, but not all, of these galaxies are regular dwarfs; this is the population of most interest for [$f_{\text{occ}}$]{}, and a key LSBG population for understanding the formation of LSB disks. It is also a good sample for an X-ray survey limited by the expected X-ray binary luminosity, since LSBGs are selected on a broad observational, rather than physical, criterion.
Working Sample
--------------
We selected a subsample in order to compare [$f_{\text{active}}$]{} among LSBGs to normal galaxies in the AMUSE surveys. We adopted four criteria. First, we restricted the distance to $d<75$ Mpc to limit the exposure time required to achieve the same 0.3–10 keV $L_X \sim 10^{38}-10^{39}$ erg s$^{-1}$ sensitivity as the AMUSE surveys. 159 galaxies in the @schombert92 catalog meet this criterion, allowing for a 0.1 dex uncertainty in the distance. Second, we excluded galaxies without a well defined center in order to identify nuclear sources (about 35% of systems). Third, we excluded “normal” galaxies with minor LSB features included in the @schombert92 catalog, but allowed nucleated and bulge-dominated galaxies with $\mu_0(g) < 22.5$ mag arcsec$^{-2}$ as long as the average surface brightness within $D_{25}$ exceeded 23 mag arcsec$^{-2}$.
Finally, we excluded galaxies with a total baryonic mass $\log (M_*+M_{\text{HI}}) < 7.5$ for consistency with the AMUSE survey. Here we use the total baryonic mass instead of $M_*$ because LSBGs tend to have high gas fractions whereas the gas fractions are very low for AMUSE galaxies, which are all early-type galaxies. The basis for the AMUSE restriction was concern that high-mass XRB (HMXB) contamination in late-type galaxies will be more severe than low-mass XRB (LMXB) contamination in early-type galaxies. However, LSBGs tend to have low SFR, and we show in Section \[section.xrbs\] that the potential for HMXB contamination is small. This also allows us to test whether the $L_X/M_*$ correlation found by @miller12 [@miller15] applies to LSBGs, or whether the correlation is instead between $L_X$ and the total baryonic mass. However, as far as we know no galaxy was included that would not also meet a $\log M_* > 7.5$ threshold. After making these cuts, 83 galaxies remained.
To measure the surface brightness and the stellar mass we used $g$ and $r$ band optical data. We took the H [i]{} masses from the @huchtmeier89 and @courtois09 catalogs. The main source of optical data was the SDSS [@sdss12], but in several cases no SDSS data were available and we instead used Pan-STARRS 1 DR2 [@Chambers2016]. We downloaded the calibrated galaxy images in $g$ and $r$ and fitted them with 2D Sérsic profiles using the [Sersic2D]{} model from the astropy v4.0.1 Python library, after masking surrounding point sources and obvious foreground or background objects coincident with the galaxy. The integrated $g$ band magnitudes and central surface brightness values are reported in Table \[table.sample\].
About 30% of systems from the @schombert92 sample that meet our distance and identifiable center criteria have $\mu_0(g) < 22.5$ mag arcsec$^{-2}$ for a single profile. Most are disky galaxies with a nuclear star cluster or other bright nuclear emission, and when allowing a second profile component for a nuclear point source the fits are improved and $\mu_0$ for the extended component typically falls within the LSBG regime. However, some galaxies have a bright bulge surrounded by an extensive LSB disk or halo. In this case, adding a second profile component leads to one disky Sérsic component ($n<2$) and one spheroidal component ($n \sim 4$). We excluded galaxies where $\bar{\mu}(g)$ over $D_{25}$ is lower than 23 mag arcsec$^{-2}$. Several galaxies in the remaining sample are also included in the sample of @graham2003, who excised the central regions of nucleated sources to measure $\mu_0$, and our measurements are consistent with theirs.
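For a Sérsic profile the central and effective surface brightnesses are linked through the standard relation $I(0) = I_e\,e^{b_n}$, which is how a fitted profile translates into the $\mu_0$ used for the LSBG criterion. A sketch of that conversion follows (the $b_n$ expansion is the standard Ciotti & Bertin approximation, not a formula quoted in this paper):

```python
import math

def b_n(n):
    """Approximate Sersic b_n (Ciotti & Bertin expansion), valid for n >~ 0.36."""
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n**2)

def mu_central(mu_e, n):
    """Central surface brightness mu_0 from the effective surface brightness mu_e.
    I(0) = I_e * exp(b_n)  =>  mu_0 = mu_e - 2.5 * b_n / ln(10)."""
    return mu_e - 2.5 * b_n(n) / math.log(10.0)

# For an exponential disk (n = 1), mu_0 is ~1.82 mag brighter than mu_e,
# so a disk with mu_e(g) = 24.3 has mu_0(g) ~ 22.5 -- right at the threshold.
```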
We then used the integrated magnitude to estimate the stellar mass regardless of nuclear activity, following @bell03 to calculate the mass-to-light ratio as $\log(M/L) = 1.519(g-r)-0.499$ for each $g$ band absolute magnitude. We adopt 5.11 as the absolute $g$ band magnitude of the Sun. The $g$ magnitudes, $g-r$ colors, and stellar masses of the galaxies are listed in Table \[table.sample\]. The statistical uncertainties in the measured magnitudes are small, so the uncertainty in $M_*$ comes primarily from uncertainty in the distances. We adopt a uniform 0.1 dex uncertainty for the distances throughout this paper, which are based on redshifts corrected for the Virgo infall. We do not include uncertainty from scatter in the $M/L$ relation.
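The mass estimate can be sketched directly from the quantities tabulated in Table \[table.sample\], using the @bell03 coefficients quoted above:

```python
M_G_SUN = 5.11  # adopted absolute g-band magnitude of the Sun

def log_stellar_mass(abs_g, g_minus_r):
    """log10(M*/Msun) from the g-band absolute magnitude and g-r color,
    using log(M/L_g) = 1.519*(g-r) - 0.499 (Bell et al. 2003)."""
    log_lg = 0.4 * (M_G_SUN - abs_g)      # log10(L_g / L_g,sun)
    log_ml = 1.519 * g_minus_r - 0.499    # log10(M/L_g)
    return log_lg + log_ml

# Check against Table [table.sample]: LSBC F570-04 has M_g = -13.4
# and g-r = 0.63, which gives log M* ~ 7.9, matching the tabulated value.
```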
Because it contains nearby, relatively bright LSBGs, the @schombert92 catalog is already biased towards bright dwarf galaxies. The additional 75 Mpc distance cut does not materially change this. However, the criterion that each galaxy have a well defined center does bias the sample towards nucleated and spheroidal galaxies and against irregular galaxies. The mass cut also tends to exclude irregular galaxies and nearby dwarf ellipticals. On the other hand, and by design, this sample is well suited to the AMUSE-Field sample, which contains many normal dwarf galaxies with a similar mass range and is exclusively spheroidal.
After applying the mass cut at $\log M/M_{\odot} > 7.5$, a sample of 83 galaxies remained. We were awarded observing time on the [*Chandra*]{} X-ray Observatory for 27 of these galaxies, which were selected based on the most efficient observing plan and [*Chandra*]{} constraints. An additional five have existing [*Chandra*]{} data. The [*Chandra*]{} observation IDs and exposure times are summarized in Table \[table.obs\].
The working sample includes 26 late-type galaxies and 6 early-type galaxies. 7/32 galaxies have $\log M_* > 10$, with the rest clustered around $\log M_* \sim 9.0$. The two-sided Kolmogorov-Smirnov (K-S) test indicates that the 32-galaxy sample has a mass distribution that is consistent with being drawn from the 83-galaxy sample ($p=0.26$). The K-S test also shows that the $M_* + M_{\text{HI}}$ distribution is consistent with being drawn from the AMUSE-Field $M_*$ distribution ($p=0.21$), but the $M_*$ distribution alone is not ($p=0.01$). Figure \[figure.mstar\_compare\] shows these distributions. The gas fractions for most of the late-type galaxies are large, as expected for LSBGs.
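A stdlib-only sketch of the two-sided two-sample K-S comparison used here follows (the asymptotic p-value is a standard approximation; in practice one would call a library routine such as `scipy.stats.ks_2samp`):

```python
import math

def ks_2samp(a, b):
    """Two-sided two-sample K-S statistic and asymptotic p-value.
    Adequate for sample sizes like the 32- and 83-galaxy mass lists here."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        while i < na and a[i] <= x:
            i += 1
        while j < nb and b[j] <= x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    en = math.sqrt(na * nb / (na + nb))
    lam = (en + 0.12 + 0.11 / en) * d   # small-sample correction
    if lam < 0.2:                       # series converges poorly; Q ~ 1 here
        return d, 1.0
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * (lam * k) ** 2)
                  for k in range(1, 101))
    return d, max(0.0, min(1.0, p))

# Identical samples give D = 0 and p = 1: the hypothesis that one
# is drawn from the other's parent distribution is not rejected.
d, p = ks_2samp([7.9, 8.7, 8.2, 8.6, 9.3], [7.9, 8.7, 8.2, 8.6, 9.3])
```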
The purpose of the AMUSE survey was to provide a view of nuclear activity unbiased by optical classification, but to compare our working sample to other LSBGs we investigated their optical nuclear properties. 22 of the 32 galaxies have SDSS spectra, of which 11 show clear emission lines that allow us to diagnose optical activity. Based on the pipeline line fluxes, only one (UGC 4422) has optical line ratios consistent with an AGN, but an additional four galaxies without SDSS spectra are candidate AGNs based on the @schombert98 analysis, bringing the number of candidate AGNs to 5/32 (16%). All of the AGN candidates are weak [@schombert98], with $L < 10^{40}$ erg s$^{-1}$. Meanwhile, 13/32 galaxies (41%) have emission-line ratios consistent with star formation.
To summarize, our working sample consists of 32 galaxies with X-ray observations. These galaxies tend to be nearby, brighter dwarf galaxies but also include some larger disk galaxies, and there are several AGN candidates. Compared to the larger LSBG population within $z<0.05$, these galaxies are more likely to be nucleated and tend to be more luminous than average [@greco2018]. We return to the peculiarities of this sample when interpreting our results below.
![The distribution of $M_*$ for the previously studied AMUSE-Field sample [black histogram; @miller12] and the LSBG sample (red histogram), along with the distribution of $M_*+M_{\text{HI}}$ for the LSBG sample (blue dashed histogram). By design, the $M_*+M_{\text{HI}}$ sample is consistent with being drawn from the AMUSE-Field $M_*$ population (the latter includes only early-type galaxies with little gas).[]{data-label="figure.mstar_compare"}](figures/mstar_compare.eps){width="1.0\linewidth"}
Observations and Source Detection {#section.data}
=================================
The new observations were obtained in the [*Chandra*]{} Cycle 19 (2018) using the Advanced CCD Imaging Spectrometer (ACIS) camera. We centered each galaxy at the nominal aimpoint on the ACIS-S3 detector, which is back-illuminated and more sensitive to soft photons[^1]. The archival observations also used the ACIS-S3 detector. Observation information is listed in Table \[table.obs\].
[llcr]{} Galaxy & Obs. ID & Obs. date & Exposure (ks)\
LSBC F570-04 & 21006 & 2018-06-10 & 3.29\
LSBC F574-08 & 21008 & 2018-06-25 & 3.25\
LSBC F574-07 & 21009 & 2018-05-10 & 3.61\
LSBC F574-09 & 21012 & 2018-04-14 & 5.87\
IC 3605 & 21016 & 2018-04-03 & 7.35\
UGC 08839 & 21010 & 2018-04-03 & 3.44\
UGC 05675 & 21011 & 2018-03-21 & 4.79\
UGC 05629 & 21013 & 2018-07-02 & 6.07\
LSBC F750-04 & 21017 & 2018-08-26 & 8.93\
LSBC F570-06 & 21014 & 2018-11-25 & 6.75\
UGC 06151 & 21015 & 2018-03-21 & 6.9\
LSBC F544-01 & 21019 & 2018-11-14 & 6.47\
LSBC F612-01 & 21020 & 2018-09-24 & 7.06\
UGC 09024 & 21018 & 2018-04-04 & 6.37\
LSBC F743-01 & 21021 & 2018-09-02 & 10.32\
LSBC F576-01 & 21022 & 2018-08-13 & 12.49\
LSBC F583-04 & 21023 & 2018-05-24 & 14.88\
UGC 05005 & 21024 & 2018-06-19 & 5.77\
UGC 1230 & 21025 & 2018-11-14 & 5.69\
UGC 04669 & 21026 & 2018-05-23 & 6.66\
UGC 05750 & 7766 & 2006-12-27 & 2.9\
UGC 4422 & 21027 & 2018-03-21 & 6.71\
UGC 09927 & 21028 & 2018-05-06 & 7.56\
UGC 10017 & 21029 & 2018-05-17 & 7.86\
UGC 10015 & 21030 & 2018-05-07 & 7.75\
UGC 3059 & 7765 & 2007-01-01 & 3.3\
UGC 416 & 21033 & 2018-09-09 & 11.21\
UGC 11578 & 21031 & 2018-08-05 & 8.36\
UGC 12845 & 7768 & 2007-02-18 & 3.25\
UGC 11754 & 7767 & 2007-06-08 & 4.16\
LSBC F570-05 & 21032 & 2018-06-28 & 9.53\
UGC 1455 & 21032 & 2018-06-28 & 9.53
The data were processed and analyzed using the [*Chandra*]{} Interactive Analysis of Observations (CIAO) v4.10 software[^2]. We downloaded the primary and secondary data products and performed the standard recommended processing using the [chandra\_repro]{} script, which filters out events with bad grades, identifies bad pixels, identifies good time intervals, and produces an analysis-ready level=2 events file. Most of the observations are very short, and none are significantly affected by particle background flares.
The ACIS effective collecting area below 1 keV has degraded due to the build-up of molecular contamination on the filter window[^3], and the decline has been particularly steep in the past few years. To optimize the sensitivity, the Cycle 7 data sets (obtained in 2006-2007) were filtered to $0.3-8$ keV, while data sets from the past few years were filtered to $0.8-7$ keV. In Cycles 19 and 20, 90% of the $0.3-8$ keV source X-ray events (counts) from a power-law spectrum with $\Gamma=1.5-2.5$ will fall in this bandpass (assuming no pileup and modest Galactic absorption), whereas only 50% of the background will.
Source detection was performed using the CIAO Mexican-Hat wavelet [wavdetect]{} script [@freeman02]. We used wavelet radii of 1, 2, 4, and 8 pixels, with an input map of the [*Chandra*]{} psf for the ACIS-S3 chip constructed at $E=1.5$ keV for each observation. The other parameters were left as default. The source list was visually inspected to identify false detections (such as chip edges) and poorly separated sources. The filtered source list was then used with the CIAO [wcs\_match]{} tool with the USNO-B1.0 catalog [@monet03] to align the images. In several cases, there were insufficient matches and we did not apply a correction. However, the typical correction is smaller than 1 arcsec, so we treat the astrometry as reliable for all exposures.
Nuclear X-ray sources were identified as those sources for which the X-ray centroid error circle contains the position of the optical or IR nucleus, which also has some uncertainty. To estimate the uncertainty we used the centroid uncertainty from the best-fitting optical Sérsic profiles, which is generally a fraction of an arcsecond. This procedure finds three nuclear sources.
A second way to identify nuclear X-ray sources is to determine whether the number of counts in an $r=2$ arcsec aperture centered on the optical nucleus is higher than expected from the background. The half-power diameter of [*Chandra*]{} with ACIS-S is about 0.8 arcsec, so events are concentrated within this region. However, roughly half of events are distributed between $r=0.4-2$ arcsec, so a true (but faint) source may not be identified by [wavdetect]{}. With prior knowledge of where to look and a robust measurement of the background, such sources can be identified by comparison to the background rate. For most of the snapshot exposures, just three counts per aperture is sufficient to detect a source. The $0.8-7$ keV background rates expected in an $r=2$ arcsec aperture (based on a large region of blank sky) range from $1.5-2\times 10^{-5}$ counts s$^{-1}$. The exposure times range from 3-11 ks, for which we expect an average of $0.05-0.2$ counts per aperture. Taking these as the averages in Poisson distributions, the odds of seeing three counts by random chance is less than 0.1%. Since the nucleus positions are known, this is unaffected by the “look elsewhere” effect (although we note that other clusters of 3-4 counts detected with [wavdetect]{} often do have catalog counterparts). However, an excess of counts does not necessarily imply a point source centered at the nucleus or a single point source. This procedure finds four nuclear sources, including the three found with [wavdetect]{}.
Three of the detected sources have 3-4 counts, including the one not found with [wavdetect]{} (in UGC 9927). These are marginally detected in the sense that an integer number of counts must be detected and 2 counts is not significant. However, we estimated the likelihood of measuring 3 or more background counts in the nuclear apertures for our sample of 32 galaxies by simulating $10^8$ sets of observations with the average background in each aperture taken as the mean of a Poisson distribution. The odds of $N \ge 1$, 2, or 3 false positives are $P(N\ge 1) = 9\times 10^{-3}$, $P(N \ge 2) = 3\times 10^{-5}$, and $P(N \ge 3) = 2\times 10^{-7}$.
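The background expectations above can be checked analytically rather than by Monte Carlo; a sketch follows (the background rate and exposure are the upper ends of the quoted ranges, so the sample-wide estimate here is deliberately conservative relative to the simulated $9\times10^{-3}$, which used each galaxy's actual aperture mean):

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i)
                     for i in range(k))

# Single aperture: background of 2e-5 counts/s over a 10 ks exposure
mu = 2e-5 * 1e4                 # 0.2 expected counts in the r = 2" aperture
p3 = poisson_sf(3, mu)          # chance of >= 3 counts: ~1e-3, i.e. < 0.1%

# Whole sample: probability of at least one >= 3-count false positive
# among 32 independent apertures, all at this (maximal) background
p_any = 1.0 - (1.0 - p3) ** 32  # a few percent, an upper bound on the risk
```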
The detected fraction depends on the energy bandpass, since the background is higher in the standard $0.3-8$ keV bandpass. In this case, neither source with 3 counts is significant. In addition, [*Chandra*]{} ray-tracing simulations demonstrate that the concentration of events within the $r<2$ arcsec aperture is not a reliable way to distinguish sources and background, so apart from the small likelihood that the marginally detected sources are background fluctuations the spatial information is not useful. On balance, we conclude that the detected sources are astrophysical, and that at most one is a false positive. Additional observations would decisively settle the matter.
We converted the count rates and upper limits to $0.3-10$ keV luminosities by assuming a power-law spectrum with photon index $\Gamma=2$ and photoelectric absorption only from the Galaxy, using the Leiden-Argentine-Bonn survey[^4] [@kalberla05]. We ignore intrinsic absorption, but this will only lead to a small error as these are mostly face-on or early-type galaxies, for which we expect $N_{\text{H}} < 10^{21}$ cm$^{-2}$. At this column density, almost all absorption occurs below 0.8 keV where the ACIS-S effective area is very small. The number of counts in the detection cell and the $0.3-10$ keV luminosities or upper limits for each galaxy are given in Table \[table.luminosities\].
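The rate-to-luminosity conversion is a two-step calculation: count rate to unabsorbed flux via an energy conversion factor for the assumed spectral model (from e.g. PIMMS), then $L = 4\pi d^2 F$. A sketch follows; the ECF value below is a placeholder for illustration only, since the true factor depends on the cycle-specific ACIS response:

```python
import math

MPC_CM = 3.0857e24  # centimeters per megaparsec

def luminosity(count_rate, ecf, d_mpc):
    """0.3-10 keV luminosity (erg/s) from an ACIS count rate.
    ecf converts counts/s to erg/cm^2/s for the assumed Gamma = 2
    power law with Galactic absorption."""
    flux = count_rate * ecf           # erg / cm^2 / s
    d_cm = d_mpc * MPC_CM
    return 4.0 * math.pi * d_cm**2 * flux

# Placeholder ECF of 1e-11 (erg/cm^2/s per count/s): 3 counts in a 10 ks
# exposure at d = 70 Mpc then corresponds to L_X of order 1e39 erg/s,
# comparable to the upper limits for the most distant sample galaxies.
lx = luminosity(3 / 1e4, 1e-11, 70.0)
```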
This approach may not account for obscured AGNs. For example, sources with $N_{\text{H}} > 10^{23}$ cm$^{-2}$ but $L_X < 10^{42}$ erg s$^{-1}$ would not be detected. It is generally held that low luminosity AGNs (like the optical AGN candidates in our sample) lack such an obscuring torus, and anything as bright as $10^{42}$ erg s$^{-1}$ would be a bright infrared source. Since none of the galaxies are included in the infrared AllWISE AGN catalog [@Secrest2015], we have not missed any very obscured, luminous AGNs. On the other hand, high resolution infrared observations [e.g., @asmus2011] find some evidence for obscuring tori even in low luminosity systems, so we cannot rule this out. Such sources are unlikely to be found by increasing the X-ray sensitivity because their weak X-ray flux will be drowned out by the larger signal from X-ray binaries.
[lrccccrrr]{} LSBC F570-04 & 7.9 & 0.072$\pm$0.006 & 0.06$\pm$0.02 & 35.6$\pm$0.1 & 35.0$\pm$0.2 & 0 & $<37.6$ & $-2.68$\
LSBC F574-08 & 8.7 & 0.113$\pm$0.004 & 0.32$\pm$0.05 & 36.7$\pm$0.1 & 35.6$\pm$0.1 & 0 & $<38.0$ & $-2.02$\
LSBC F574-07 & 8.2 & 0.05$\pm$0.01 & 0.09$\pm$0.02 & 35.8$\pm$0.2 & 35.2$\pm$0.2 & 0 & $<38.0$ & $-2.86$\
LSBC F574-09 & 8.6 & 0.10$\pm$0.03 & 0.8$\pm$0.2 & 36.3$\pm$0.1 & 36.1$\pm$0.2 & 0 & $<38.1$ & $-2.46$\
IC 3605 & 7.9 & 0.06$\pm$0.02 & 1.35$\pm$0.09 & 35.7$\pm$0.2 & 36.3$\pm$0.1 & 0 & $<38.0$ & $-2.93$\
UGC 08839 & 8.4 & 0.014$\pm$0.004 & 0.9$\pm$0.2 & 35.6$\pm$0.1 & 36.2$\pm$0.2 & 0 & $<38.2$ & $-3.25$\
UGC 05675 & 8.1 & 0.021$\pm$0.007 & 0.36$\pm$0.09 & 35.6$\pm$0.1 & 35.8$\pm$0.2 & 0 & $<38.4$ & $-3.52$\
UGC 05629 & 8.8 & 0.023$\pm$0.007 & 0.4$\pm$0.1 & 36.1$\pm$0.1 & 35.6$\pm$0.2 & 0 & $<38.4$ & $-3.05$\
LSBC F750-04 & 8.3 & 0.073$\pm$0.008 & 0.85$\pm$0.04 & 36.1$\pm$0.1 & 36.1$\pm$0.1 & 0 & $<38.3$ & $-2.88$\
LSBC F570-06 & 9.3 & 0.056$\pm$0.004 & 0.42$\pm$0.04 & 37.0$\pm$0.1 & 35.8$\pm$0.1 & 2 & $<38.5$ & $-2.28$\
UGC 06151 & 9.0 & 0.016$\pm$0.003 & 1.75$\pm$0.07 & 36.2$\pm$0.1 & 36.4$\pm$0.2 & 0 & $<38.5$ & $-3.05$\
LSBC F544-01 & 8.5 & 0.06$\pm$0.01 & 7.2$\pm$0.6 & 36.2$\pm$0.1 & 37.0$\pm$0.2 & 0 & $<38.8$ & $-2.60$\
LSBC F612-01 & 8.5 & 0.062$\pm$0.008 & 2.5$\pm$0.1 & 36.2$\pm$0.1 & 36.6$\pm$0.1 & 0 & $<38.8$ & $-3.02$\
UGC 09024 & 9.3 & 0.077$\pm$0.003 & 14.4$\pm$0.2 & 37.1$\pm$0.1 & 37.3$\pm$0.1 & 0 & $<38.9$ & $-2.37$\
LSBC F743-01 & 8.7 & 0.075$\pm$0.005 & 4.2$\pm$0.1 & 36.5$\pm$0.1 & 36.8$\pm$0.1 & 0 & $<38.7$ & $-2.64$\
LSBC F576-01 & 9.7 & 0.117$\pm$0.004 & 11.1$\pm$0.3 & 37.6$\pm$0.1 & 37.1$\pm$0.1 & 0 & $<38.9$ & $-2.41$\
LSBC F583-04 & 9.2 & 0.04$\pm$0.01 & 27$\pm$4 & 36.7$\pm$0.2 & 36.5$\pm$0.2 & 0 & $<38.9$ & $-3.13$\
UGC 05005 & 9.2 & 0.020$\pm$0.006 & 4.4$\pm$0.1 & 36.6$\pm$0.1 & 36.8$\pm$0.1 & 0 & $<39.3$ & $-3.22$\
UGC 1230 & 9.5 & 0.019$\pm$0.005 & 4.2$\pm$0.2 & 37.0$\pm$0.1 & 36.7$\pm$0.1 & 0 & $<39.3$ & $-3.33$\
UGC 04669 & 9.5 & 0.038$\pm$0.007 & 11$\pm$2 & 37.4$\pm$0.1 & 37.1$\pm$0.2 & 4$^{+3}_{-1}$ & $39.6^{+0.2}_{-0.1}$ & $-3.18$\
UGC 05750 & 9.1 & 0.06$\pm$0.01 & 21$\pm$3 & 36.9$\pm$0.2 & 37.5$\pm$0.1 & 1 & $<39.6$ & $-2.79$\
UGC 4422 & 11.0 & 0.027$\pm$0.003 & 71$\pm$2 & 38.4$\pm$0.1 & 38.0$\pm$0.1 & 0 & $<39.3$ & $-1.94$\
UGC 09927 & 10.7 & 0.05$\pm$0.02 & 12$\pm$3 & 38.3$\pm$0.2 & 37.2$\pm$0.2 & 3$^{+3}_{-1}$ & $39.5^{+0.3}_{-0.2}$ & $-3.09$\
UGC 10017 & 9.3 & 0.04$\pm$0.01 & 5.2$\pm$0.8 & 36.9$\pm$0.2 & 36.8$\pm$0.2 & 0 & $<39.3$ & $-3.24$\
UGC 10015 & 9.4 & 0.040$\pm$0.007 & 12$\pm$3 & 37.9$\pm$0.1 & 37.2$\pm$0.2 & 1 & $<39.3$ & $-2.71$\
UGC 3059 & 10.3 & 0.032$\pm$0.008 & 3.0$\pm$0.6 & 36.6$\pm$0.1 & 36.6$\pm$0.2 & 0 & $<39.6$ & $-3.77$\
UGC 416 & 9.6 & 0.08$\pm$0.01 & 20.3$\pm$0.5 & 37.5$\pm$0.1 & 37.5$\pm$0.1 & 0 & $<39.2$ & $-2.44$\
UGC 11578 & 9.7 & 0.038$\pm$0.008 & 11$\pm$2 & 37.6$\pm$0.1 & 37.2$\pm$0.1 & 3$^{+3}_{-1}$ & $39.5^{+0.3}_{-0.2}$ & $-2.96$\
UGC 12845 & 10.2 & 0.024$\pm$0.005 & 11$\pm$1 & 37.9$\pm$0.1 & 37.2$\pm$0.2 & 0 & $<39.6$ & $-3.16$\
UGC 11754 & 10.3 & 0.026$\pm$0.005 & 12$\pm$1 & 37.8$\pm$0.1 & 37.2$\pm$0.2 & 1 & $<39.5$ & $-3.13$\
LSBC F570-05 & 10.4 & 0.053$\pm$0.003 & 36$\pm$8 & 38.5$\pm$0.1 & 37.7$\pm$0.2 & 1 & $<39.3$ & $-2.19$\
UGC 1455 & 11.2 & 0.057$\pm$0.003 & 11$\pm$1 & 38.9$\pm$0.1 & 37.2$\pm$0.1 & 10$^{+4}_{-1}$ & $40.4\pm0.1$ & $-4.52$
X-ray Binary Contamination {#section.xrbs}
==========================
X-rays are excellent at identifying very low levels of nuclear MBH activity, but X-rays alone do not distinguish between weakly accreting MBHs and near-Eddington stellar-mass compact objects. A deep radio survey could do so, as stellar-mass objects are much weaker radio sources than MBHs [@merloni03], but the necessary radio data do not yet exist. X-ray detections would also remain important counterparts in such a survey, since the radio band has its own contaminants (e.g., from star formation). Instead, we adopt a statistical approach based on @foord17 and @lee19 to assess the likely XRB contamination in the sample.
XRB population studies in the Local Group and nearby galaxies have shown that the total luminosities of LMXBs and HMXBs in a galaxy correlate strongly with the stellar mass and star-formation rate (SFR), respectively [@gilfanov04; @lehmer10; @mineo12; @lehmer16]. Since HMXBs cannot move far from star-forming regions in their lifetimes and LMXBs appear to be well distributed [however, see @peacock16], we can assume that the same correlations apply just to the nucleus. These correlations depend on the metallicity, which we take to be near-Solar. Then, from tracers of the stellar mass and SFR, we can estimate the total nuclear [$L_{\text{LMXB}}$]{} and [$L_{\text{HMXB}}$]{} that could be confused with an accreting MBH.
LMXBs and HMXBs are Poisson distributed and each follow an apparently universal X-ray luminosity function (XLF), which can be represented by a broken power law [@gilfanov04; @mineo12]. Thus, the average XRB luminosities from the scaling relations can be converted into probability distributions from which we can determine the likelihood of detecting a total nuclear [$L_{\text{LMXB}}$]{} or [$L_{\text{HMXB}}$]{} at or above a given luminosity $P_{\text{XRB}}(L>L_0)$. In this case, $L_0$ could either be the observational sensitivity or the luminosity of a detected source. As the most likely non-XRB possibility is an accreting MBH, $P_{\text{MBH}} = 1 - P_{\text{XRB}}$ for any source. It is also useful to estimate the likelihood of detecting $N$ XRBs in the sample, which is calculated jointly from each $P_{\text{XRB}}(L>L_{\text{sens}})$ in the sample.
We implement this scheme using the @lehmer10 expression for the 2-10 keV XRB luminosities: $$\begin{aligned}
L_{\text{LMXB}} &=& (9.05\pm0.37)\times 10^{28} \text{ erg s}^{-1} \times M_* \\
L_{\text{HMXB}} &=& (1.62\pm0.22)\times 10^{39} \text{ erg s}^{-1}\times \text{SFR}\end{aligned}$$ where $M_*$ and SFR are in units of $M_{\odot}$ and $M_{\odot}$ yr$^{-1}$, respectively. We adopt the @gilfanov04 XLF for the LMXBs: $$\begin{aligned}
dN/dL &= K_1 L^{-\alpha_1} & (L<10^{37}) \\
&= K_2 L^{-\alpha_2} & (10^{37} < L < 10^{38.5}) \\
&= K_3 L^{-\alpha_3} & (L>10^{38.5}) \end{aligned}$$ where $\alpha_1 = 1.0$, $\alpha_2 = 1.9$, and $\alpha_3 = 5.0$. The coefficients $K_1$, $K_2$, and $K_3$ are determined from $M_*$ such that [$L_{\text{LMXB}}$]{} is consistent with the @lehmer10 relation. The coefficients are slightly different in other studies [e.g., @gilfanov04], but this has little impact on our results. The HMXBs follow a two-zone XLF in which $\alpha = 1.6$ between $10^{35}$ and $10^{40}$ erg s$^{-1}$, and $\alpha \sim 3$ above $10^{40}$ erg s$^{-1}$ [@mineo12]. The XLF slope changes somewhat when accounting for supersoft sources [@sazonov17], but as we are insensitive to these sources the @mineo12 values are sufficient.
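A broken power-law XLF of this form can be sampled by inverse-CDF transforms within each segment, with segment weights fixed by continuity of $dN/dL$ at the breaks. The sketch below uses the LMXB slopes quoted above; the low- and high-luminosity cutoffs ($10^{35}$ and $10^{39}$ erg s$^{-1}$) are assumptions for illustration, not values from the text:

```python
import math
import random

# LMXB XLF segments: (L_lo, L_hi, alpha) for dN/dL ∝ L^-alpha.
# The outer cutoffs of 1e35 and 1e39 erg/s are illustrative assumptions.
SEGMENTS = [(1e35, 1e37, 1.0),
            (1e37, 10 ** 38.5, 1.9),
            (10 ** 38.5, 1e39, 5.0)]

def mean_l_lmxb(m_star):
    """Lehmer et al. (2010): total expected LMXB luminosity (erg/s)."""
    return 9.05e28 * m_star

def _segment_weights():
    """Relative number of sources per segment, with dN/dL continuous."""
    weights, scale = [], 1.0
    prev_alpha = SEGMENTS[0][2]
    for lo, hi, a in SEGMENTS:
        scale *= lo ** (a - prev_alpha)   # match densities at the break
        prev_alpha = a
        if a == 1.0:
            w = math.log(hi / lo)
        else:
            w = (lo ** (1.0 - a) - hi ** (1.0 - a)) / (a - 1.0)
        weights.append(scale * w)
    return weights

def sample_lmxb_luminosity(rng=random):
    """Draw one LMXB luminosity (erg/s) by inverse-CDF sampling."""
    weights = _segment_weights()
    u = rng.random() * sum(weights)
    for (lo, hi, a), w in zip(SEGMENTS, weights):
        if u > w:
            u -= w
            continue
        v = u / w                         # uniform within this segment
        if a == 1.0:
            return lo * (hi / lo) ** v
        return (lo ** (1 - a) + v * (hi ** (1 - a) - lo ** (1 - a))) ** (1.0 / (1 - a))
```

With these slopes most draws fall in the faintest segment, as expected for an XLF dominated by low-luminosity sources.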
We estimate the projected nuclear stellar mass from a nuclear aperture whose size is determined by the $r=2$ arcsec X-ray detection cell (or centroid error circle in the case of a detection). We include a small aperture correction and do not correct for any potential AGN, since at the low implied luminosities it is unclear whether most of the optical light comes from the AGN or a nuclear star cluster. The nuclear mass is estimated by calculating the fraction of light in this aperture and multiplying by the total stellar mass, assuming a uniform mass-to-light ratio. The nuclear mass fractions are given in Table \[table.luminosities\].
We estimate the nuclear SFR from GALEX [@morrissey05] NUV ($\lambda$2300Å) images in the same way, using the @kennicutt98 relation, $\text{SFR} = 1.4\times 10^{-28} L_{\nu,\text{UV}}$ $M_{\odot}$ yr$^{-1}$, where $L_{\nu,\text{UV}}$ is in erg s$^{-1}$ Hz$^{-1}$. At 5.5 arcsec, the NUV PSF is considerably larger than the [*Chandra*]{} (0.8 arcsec HPD) or SDSS (1.3 arcsec) PSF, so the aperture correction is more important. We correct for Galactic extinction using the $E(B-V)$ value from NED, but not for unknown intrinsic extinction. The nuclear SFR values are listed in Table \[table.luminosities\]; the quoted uncertainty is statistical only, assuming no scatter in the @kennicutt98 relation and no uncertainty in the distance.
The nuclear $M_*$ and SFR, through the XRB scaling relations and XLF, yield the expected average number of nuclear XRBs per galaxy $\langle N_{\text{LMXB}} \rangle$ and $\langle N_{\text{HMXB}} \rangle$ (without mass matching). We then estimate the likelihood of detecting XRBs in a given galaxy by drawing $10^6$ Poisson deviates with $\langle N_{\text{LMXB}} \rangle$ and $\langle N_{\text{HMXB}} \rangle$ to simulate the range of possible numbers of XRBs. We randomly assign each XRB a luminosity by sampling the XLF, then sum the XRB luminosities to obtain a distribution of total nuclear $L_{\text{XRB}} = $[$L_{\text{LMXB}}$]{}$+$[$L_{\text{HMXB}}$]{}. We then calculate the likelihood of detecting nuclear X-rays from the XRBs, $P_{\text{XRB}}(L>L_X)$. Here $L_X$ refers either to the detected luminosity or the sensitivity in the event of a non-detection. These simulations take into account the uncertainty in the mass, SFR, and X-ray sensitivity or luminosity, which are dominated by uncertainty in the distance. We adopted a uniform 0.1 dex for this uncertainty. $P_{\text{XRB}}$ ranges from $10^{-4}$ to 0.02 for the galaxies in the sample (Table \[table.luminosities\]). The ranges for LMXBs or HMXBs alone are similar for the total sample, but differ from galaxy to galaxy.
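A simplified version of this Monte Carlo can be sketched as follows. A single power-law XLF (slope 1.9 between assumed cutoffs of $10^{36}$ and $10^{39}$ erg s$^{-1}$) stands in for the broken laws above, and the uncertainties in mass, SFR, and distance are omitted; the nuclear mass fraction and SFR in the example call are hypothetical:

```python
import math
import random

rng = random.Random(42)

L_LO, L_HI, ALPHA = 1e36, 1e39, 1.9   # single power-law XLF (simplification)

def sample_xlf():
    """Inverse-CDF draw from dN/dL ∝ L^-ALPHA on [L_LO, L_HI]."""
    u = rng.random()
    a = 1.0 - ALPHA
    return (L_LO ** a + u * (L_HI ** a - L_LO ** a)) ** (1.0 / a)

def mean_xlf():
    """Analytic mean luminosity of a single source drawn from the XLF."""
    num = (L_HI ** (2 - ALPHA) - L_LO ** (2 - ALPHA)) / (2 - ALPHA)
    den = (L_HI ** (1 - ALPHA) - L_LO ** (1 - ALPHA)) / (1 - ALPHA)
    return num / den

def poisson(mu):
    """Knuth's algorithm; adequate for the small means relevant here."""
    l, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def p_xrb(m_star_nuc, sfr_nuc, l_thresh, n_trials=20000):
    """Monte Carlo estimate of P(total nuclear L_XRB > l_thresh)."""
    l_tot = 9.05e28 * m_star_nuc + 1.62e39 * sfr_nuc  # Lehmer+10 relations
    n_mean = l_tot / mean_xlf()                       # expected XRB count
    hits = 0
    for _ in range(n_trials):
        total = sum(sample_xlf() for _ in range(poisson(n_mean)))
        if total > l_thresh:
            hits += 1
    return hits / n_trials

# e.g., 5% of a 10^9 Msun galaxy in the aperture, SFR_nuc = 0.01 Msun/yr
p = p_xrb(1e9 * 0.05, 0.01, 1e38)
```

The full calculation in the text additionally samples the broken XLFs and the uncertainties in mass, SFR, and distance.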
The odds that $N\ge 1$ galaxies in our sample have detectable nuclear XRB emission are 0.071. The odds are 0.033 for LMXBs and 0.041 for HMXBs, individually. For HMXBs, any detectable emission is likely to be a single luminous ($L_X > 5 \times 10^{38}$ erg s$^{-1}$) source, whereas for LMXBs a detection would imply multiple sources with $L_X \sim 10^{38}$ erg s$^{-1}$, which would not necessarily appear point-like. A 7% chance is not negligible, so we consider the impact of our assumptions.
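The sample-wide odds follow from the independent per-galaxy probabilities through a Poisson-binomial distribution, which can be computed by dynamic programming. The per-galaxy values below are hypothetical placeholders spanning the quoted $10^{-4}$ to 0.02 range, not the measured ones from the table:

```python
def detection_number_pmf(probs):
    """Poisson-binomial PMF for the number of galaxies with detectable
    nuclear XRB emission, given independent per-galaxy probabilities."""
    pmf = [1.0]
    for p in probs:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1.0 - p)      # this galaxy not contaminated
            new[k + 1] += q * p          # this galaxy contaminated
        pmf = new
    return pmf

# hypothetical per-galaxy P_XRB values spanning the quoted range
probs = [1e-4, 5e-4, 0.002, 0.005, 0.01, 0.02] * 5
pmf = detection_number_pmf(probs)
p_ge_1 = 1.0 - pmf[0]                    # odds of at least one contaminant
```

`pmf[k]` gives the probability of exactly $k$ contaminated galaxies in the sample.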
We assumed that LMXBs follow the starlight rather than globular clusters. If not, then a nuclear star cluster may produce more LMXBs than expected from its luminosity and [$L_{\text{LMXB}}$]{} would be underestimated. We have no way to assess this, but note that the X-ray detected fraction in nucleated galaxies is not particularly high [@foord17]. Secondly, we assume solar metallicity. [$L_{\text{HMXB}}$]{} is higher for low metallicities, and we may have underestimated [$L_{\text{HMXB}}$]{} by a factor of two [@douna15]. On the other hand, [$L_{\text{LMXB}}$]{} is lower at low metallicities by a similar factor [@kim13], so the net effect is minor for this sample. Thirdly, the FUV band is a better indicator of SFR, as early-type galaxies with almost no star formation can be bright in the NUV, which tends to overestimate [$L_{\text{HMXB}}$]{}. Unfortunately, FUV data are not available for all galaxies in the sample. However, the @kennicutt98 relation is valid over a broad UV band, so this is likely a minor effect. Finally, the aperture correction for GALEX is large because its PSF is much larger than the nuclear aperture based on the [*Chandra*]{} data. This increases the uncertainty in [$L_{\text{HMXB}}$]{}.
Another potential issue is uncertainties in the XLF slopes. The normalizations are fairly well constrained [e.g., @lehmer16], but there are signs that the XLF is not universal [@lehmer19]. For the @gilfanov04 or @mineo12 XLFs, most of the total luminosity is contained in the most luminous binaries. Since the odds of finding a luminous XRB in the nucleus are small, the more luminosity is contained in luminous sources the smaller the chance of contamination. Hence, steeper XLFs at the luminous end can actually increase $P_{\text{XRB}}$. There is not unlimited freedom here, since the XLF appears *close* to universal. For the LMXBs, we adopted uncertainties of $\Delta \alpha_1 = 0.5$, $\Delta \alpha_2 = 0.2$, and $\Delta \alpha_3 = 1$ based on @gilfanov04 and @lehmer19, whereas for the HMXBs we adopted uncertainties of $\Delta \alpha_1 = 0.25$ and $\Delta \alpha_2 = 0.5$ based on @mineo12 and @lehmer19. We repeated the $P_{\text{XRB}}$ calculation by randomly (uniformly) varying the slopes within these envelopes over 1000 trials, which leads to a range of $0.01 < P_{\text{XRB}} < 0.13$ for the whole sample. Thus, it is likely that the uncertainties in distance, $M_*$, and SFR are more significant and also that all of the nuclear sources reported here are MBHs.
The most likely number of *individually* detected XRBs above $L_{\text{sens}}$ in the whole sample, when considering entire galaxies (i.e., within $D_{25}$), is 3. The number of off-nuclear X-ray sources detected in this region in our sample is 4, which further supports the identification of the nuclear X-ray sources with MBHs. In Section \[section.lynx\] we discuss XRB contamination for higher sensitivity surveys.
Nuclear Activity in LSBGs {#section.results}
=========================
A nuclear X-ray source is detected in 4/32 galaxies ([$f_{\text{active}}$]{}$=12.5$%), or conservatively 3/32 ([$f_{\text{active}}$]{}$=9.4$%), based on the discussion in Section \[section.data\]. This active fraction is significantly lower than [$f_{\text{active}}$]{} reported in AMUSE-Virgo [[$f_{\text{active}}$]{}$=68$%; @gallo08], AMUSE-Field [[$f_{\text{active}}$]{}$=$45%; @miller12], or the Fornax cluster [[$f_{\text{active}}$]{}$=$27%; @lee19]. One possible reason is that the galaxies in our sample tend to have smaller $M_*$ (all of the detected sources in our sample occur in galaxies with $\log M_* > 9$), which is supported by the [$f_{\text{active}}$]{}$=11.2$% measured in low-mass nucleated galaxies by @foord17. Since the total baryonic mass is consistent between the LSBG and AMUSE-Field samples, perhaps the relationship between $M_*$ and $L_X$ found by @miller15 is indeed peculiar to stellar mass.
There are too few LSBG sources to independently determine a relationship between $L_X$ and any galaxy property, but we can test this hypothesis by comparing the measured X-ray luminosities and upper limits in our sample to the AMUSE-Field sample, using either $M_*$ or $M_* + M_{\text{HI}}$. Figure \[figure.lx\_mstar\] plots the detected LSBGs and upper limits on top of the AMUSE-Field results for both masses, and it is clear that there are too many undetected sources for the sensitivity if the LSBGs obey the best-fit AMUSE-Field relation, $$\begin{split}
\log L_X = & 38.4 - (0.04\pm0.12)+ \\
& (0.71\pm0.10)\times(\log M_{\text{gal}} - 9.8) \\
& \pm (0.73\pm0.09),
\end{split}$$ where the last term is the intrinsic scatter, and $L_X$ depends on total baryonic mass. We can further use this relation to calculate the expected number of detected MBHs in the LSBG sample for either $M_{\text{gal}} = M_*$ or total baryonic mass. Figure \[figure.field\_prediction\] shows the distributions of expected number of detected MBHs for a sample of the same size and with the same mass, distance, and sensitivity distribution as ours. The distributions account for the scatter in the AMUSE-Field relations and uncertainty in the masses and distances. Notably, if LSBGs follow the AMUSE-Field relation but the MBH luminosity is a function of total baryonic mass, there is only a 2.9% chance of detecting four or fewer MBHs. On the other hand, there is a 22% chance of detecting exactly four MBHs if the AMUSE-Field relation is instead particular to $M_*$.
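Under the stated assumptions (every galaxy hosts an MBH and $\log L_X$ scatters as a Gaussian about the relation), the expected number of detections is a sum of tail probabilities. A minimal sketch, with hypothetical masses and sensitivities and with the slope and intercept uncertainties neglected:

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_detections(log_masses, log_sens, scatter=0.73):
    """Expected number of detected MBHs if every galaxy hosts one and
    log L_X follows the AMUSE-Field relation with Gaussian scatter."""
    n = 0.0
    for log_m, sens in zip(log_masses, log_sens):
        mu = 38.4 - 0.04 + 0.71 * (log_m - 9.8)     # mean log L_X
        n += 1.0 - norm_cdf((sens - mu) / scatter)  # P(log L_X > sens)
    return n

# illustrative only: five log M = 9 galaxies, 10^38.5 erg/s sensitivity
n_exp = expected_detections([9.0] * 5, [38.5] * 5)
```

The distributions in Figure \[figure.field\_prediction\] additionally propagate the scatter, slope, mass, and distance uncertainties.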
Of course, this does not prove that LSBGs follow the relation; a larger sample is needed to independently test this. Indeed, since all the detections occur in galaxies closer to $L_*$ than dwarfs, it is not clear whether the dwarf galaxies that make up a large proportion of nearby LSBGs differ from the more luminous galaxies that make up most of the more distant LSBGs. Nevertheless, if the AMUSE assumption that $L_X$ and mass are related in the same way at all masses is true, we can conclude that LSBGs follow this relation only if $L_X$ is related to the stellar mass rather than the total baryonic or dynamical mass.
We do not distinguish dependence on the total stellar mass or bulge mass. Prior studies of AGNs in LSBGs found that [$f_{\text{active}}$]{} increases with bulge luminosity [@mei09; @galaz11]. The bulge contribution to the stellar mass in our sample varies strongly, but the four detected X-ray sources inhabit more massive galaxies whose bulges tend to be more massive relative to smaller galaxies (the one MBH candidate in an S0 galaxy, UGC 9927, is the least compelling detected source). A much larger X-ray study is needed to determine if [$f_{\text{active}}$]{} is correlated better with $M_*$ or $M_{*,\text{bulge}}$.
Two of the four X-ray detected nuclei (UGC 9927 and UGC 1455) are categorized as AGN by @schombert98, albeit with low luminosities. This is consistent with the X-ray luminosities, all of which are below $10^{41}$ erg s$^{-1}$. Three other galaxies in the sample (UGC 3059, UGC 4422, and UGC 12845) are also $L_*$ galaxies classified as AGN by @schombert98 but are not detected in the X-rays.
![*Left*: $L_X$ plotted as a function of $M_*$ for the AMUSE-Field sample (black) and LSBG sample (red). Open circles are upper limits and filled circles are detected sources. The best-fit linear relation from @miller12 is plotted as a dashed line, with the 1$\sigma$ scatter in dotted lines. *Right*: The same as at left, except $L_X$ is plotted as a function of $M_* + M_{\text{HI}}$ for the LSBG sample (blue). The sensitivities for the LSBG sample were chosen based on $M_* + M_{\text{HI}}$. []{data-label="figure.lx_mstar"}](figures/lx_mstar_plot.eps){width="1.0\linewidth"}
![The histograms show the expected number of X-ray detected MBHs for our sample, including the measured [*Chandra*]{} sensitivities, if the $L_X - M_*$ relation from @miller12 is correct (solid black line) and if the relation is instead $L_X - (M_*+M_{\text{HI}})$ (dashed black line). The observed number from the LSBG sample is shaded red. The width of each distribution is caused primarily by intrinsic scatter and secondarily by uncertainty in the slope, $M_*$, and $d$.[]{data-label="figure.field_prediction"}](figures/field_prediction.eps){width="1.0\linewidth"}
Early studies of AGNs in giant spiral LSBGs found [$f_{\text{active}}$]{}$\sim$50% [e.g., @schombert98], but larger surveys including more galaxy types found a much lower [$f_{\text{active}}$]{}$\sim$5% [@impey01; @galaz11]. These surveys also find that LSBGs have lower [$f_{\text{active}}$]{} than normal (high surface-brightness) galaxies over a similar mass (or absolute magnitude) range, which @galaz11 suggest is due to the low-density LSBG environments preventing the formation of bars or other instabilities that can fuel an AGN. These studies are based on optical emission-line diagnostics, which for our sample yield [$f_{\text{active}}$]{}$\sim$15% (5/32); this higher value likely arises because the sample is biased towards brighter dwarf galaxies [especially compared to @galaz11] and includes some massive spirals. Our shallow X-ray survey finds [$f_{\text{active}}$]{}$\sim$10%, and two of the four detected sources are in nuclei classified as star-forming. None of the detected systems would be classified as *bona fide* X-ray AGNs.
Instead, the comparison with the AMUSE-Field sample indicates that weakly accreting MBHs in LSBGs are at least consistent with the high-surface-brightness galaxies of the same stellar mass. If LSBGs indeed show that there is a correlation between $L_X$ and stellar mass, but not baryonic or dynamical mass, this bears on black hole–galaxy co-evolution. In particular, we suggest that the inability of LSBGs to concentrate gas in the inner part of the galaxy is important to understanding their MBH growth. Although our sample is limited to relatively massive, isolated LSBGs, such a mechanism for limiting MBH growth would be relevant to most LSBGs.
LSBGs and [$f_{\text{occ}}$]{} {#section.lynx}
==============================
Nuclear X-ray activity in LSBGs is consistent with that in normal galaxies of the same stellar mass, although a deeper, larger survey is needed to firmly establish the relationship between $L_X$ and $M_*$ in these systems. This makes LSBGs important to measuring [$f_{\text{occ}}$]{} through the X-ray detection of weakly accreting MBHs, especially in the $\log M_*/M_{\odot} < 10$ regime where the heavy- and light-seed theories make different predictions. We emphasize that measuring [$f_{\text{occ}}$]{} is valuable regardless of its ability to constrain formation theories (for which merger histories will also be important) because it is a probe of the total MBH mass density and anchors theories for co-evolution of MBHs with their host galaxies.
In this section, we describe the logic behind an X-ray survey that could constrain [$f_{\text{occ}}$]{} to 1-5% precision with a future wide-field, high resolution X-ray camera [expanding on ideas explored in the Astro2020 Decadal Survey white paper by @gallo19], or to $\sim$15% with [*Chandra*]{}. Then, we briefly explore how a survey could be constructed, including the expectation of many serendipitous LSBGs.
Framework
---------
For a given $M_*$, [$f_{\text{occ}}$]{}, and sensitivity, the relation between the mean X-ray luminosity, $\bar{L}_X$, and $M_*$ predicts the measured [$f_{\text{active}}$]{}. For example, at a sensitivity 1$\sigma$ above $\bar{L}_X$, i.e., $\log L_{X,\text{thresh}} = \log \bar{L}_X + 1\sigma$, one would expect [$f_{\text{active}}$]{}$=0.16$ at full occupation. Thus, measuring a lower-than-expected [$f_{\text{active}}$]{} would indicate [$f_{\text{occ}}$]{}$<$1. In this case, one would need to detect zero sources in a sample of 26 galaxies to rule out [$f_{\text{occ}}$]{}$=1$ at 99% confidence. At a worse sensitivity of $\log L_{X,\text{thresh}} = \log \bar{L}_X + 2\sigma$, 200 galaxies are needed to draw the same conclusion. In general, the number depends on the cumulative distribution function. For galaxies covering a range in $M_*$ ($8 < \log M_* < 12$), one can simultaneously constrain the $L_X/M_*$ slope(s), scatter, and the most likely [$f_{\text{occ}}$]{} at each mass from the measured $L_X$ values and [$f_{\text{active}}$]{}. Using this approach with 194 early-type galaxies observed with *Chandra*, @miller15 estimate [$f_{\text{occ}}$]{}$>0.20$ below $M_*\gtrsim 10^{10} M_{\odot}$ (95% credible interval).
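The quoted sample sizes follow from binomial arithmetic on the Gaussian tail fraction; a short check of the 26- and 200-galaxy numbers:

```python
import math

def active_fraction(k_sigma):
    """Detected fraction at full occupation when the threshold sits
    k_sigma above the mean of the Gaussian log L_X distribution."""
    return 0.5 * math.erfc(k_sigma / math.sqrt(2.0))

def p_zero_detections(k_sigma, n_gal):
    """Chance of detecting nothing in n_gal galaxies if f_occ = 1."""
    return (1.0 - active_fraction(k_sigma)) ** n_gal

p1 = p_zero_detections(1.0, 26)    # ~0.011: ~99% confidence with 26 galaxies
p2 = p_zero_detections(2.0, 200)   # ~0.010: 200 galaxies needed at 2 sigma
```

Both probabilities land at the 1% level, matching the text.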
As a first step, we determined the number of galaxies needed to measure [$f_{\text{occ}}$]{} to a precision of about 5% assuming a power-law $L_X/M_*$ relation. We used a realistic mass distribution from @blanton09 among bins 0.5 dex wide in $M_*$ from $8 < \log M_*/M_{\odot} < 10$, the $L_X/M_*$ slope of 0.8 from @miller15, and a uniform $L_{X,\text{thresh}}=1$ or $2\times 10^{38}$ erg s$^{-1}$. The input [$f_{\text{occ}}$]{} is a function of mass, ranging from 20% at $\log M_*/M_{\odot} = 8$ to 100% at $\log M_* = 10$, again following @miller15. Figure \[figure.focc\_prediction\] shows the simulated posterior distributions for [$f_{\text{occ}}$]{} and the $L_X/M_*$ slope for either 1,000 or 10,000 galaxies. With 10,000 galaxies, [$f_{\text{occ}}$]{} is measured in these bins to 1-5% precision.
![Predictions for the constraints (posterior distributions) on [$f_{\text{occ}}$]{} and $L_X/M_*$ slope for 1,000 and 10,000 galaxies (top and bottom rows), with sensitivity thresholds of $10^{38.3}$ and $10^{38}$ erg s$^{-1}$, assuming a realistic mass distribution between $8 < \log M_*/M_{\odot} < 10$ in bins of 0.5 dex (blue, green, red, and black regions). The input $L_X/M_*$ and [$f_{\text{occ}}$]{} in each mass bin for these simulations are from @miller15.[]{data-label="figure.focc_prediction"}](figures/occufracall.eps){width="1\linewidth"}
Fewer galaxies are needed when using a mass-dependent sensitivity (e.g., if $L_{X,\text{thresh}}-\bar{L}_X$ is constant). For a uniform $P_{\text{XRB}}$, the number of galaxies needed to overcome XRB contamination is proportional to $P_{\text{XRB}}$, since the inferred [$f_{\text{active}}$]{} will depend on $1-P_{\text{XRB}}$, and it is inversely proportional to the detected fraction. So, $$N_{\text{gal}}^{\text{need}} \propto P_{\text{XRB}}\times \biggl[1-\text{CDF}(L_{X,\text{thresh}},\bar{L}_X)\biggr]^{-1}$$ where CDF is the normal cumulative distribution function for the case of Gaussian scatter. The feasibility of a tight [$f_{\text{occ}}$]{} measurement depends on minimizing $N_{\text{gal}}^{\text{need}}$. As we show below, both high sensitivity and high angular resolution over a wide field are important.
Future X-ray Missions
---------------------
There are two mission concepts relevant to this work: *Lynx* [@gaskin18] and the Advanced X-ray Imaging Satellite [AXIS; @mushotzky18]. The *Lynx* High Definition X-ray Imager (HDXI) has an effective collecting area of 20,000 cm$^2$ at 1 keV with a half-power diameter of $<1$ arcsec across the $23\times23$ arcmin field of view. AXIS is a similar instrument with 7,000 cm$^2$ effective area at 1 keV and $<$1 arcsec HPD across the $24\times24$ arcmin field of view. The high resolution is essential for two reasons. First, it enables the detection and centroiding of very weak, background-limited sources. Secondly, high resolution reduces confusion with individual luminous XRBs and reduces contamination by resolving out most of the luminosity.
XRB Contamination
-----------------
Whereas this study adopted a sensitivity threshold greater than $10^{38}$ erg s$^{-1}$ to limit XRB contamination, similar snapshot exposures with the HDXI would achieve a sensitivity of $L_{X,\text{sens}} \sim 3\times 10^{36}$ erg s$^{-1}$. This will lead to far more nuclear “sources” that are the sum of unresolved, lower luminosity XRBs, so we performed *Lynx* and AXIS simulations to determine the impact, and how $P_{\text{XRB}}$ depends on distance $d$, resolution $\theta$, exposure time $t_{\text{exp}}$, and other factors.
We simulated galaxies with $8 < \log M_*/M_{\odot} < 10$ in bins of 0.5 dex, with 10,000 galaxies per bin. We assumed that each galaxy is described by an exponential disk with a core radius $r_c = 2$ kpc that is independent of mass. We used the methods from Section \[section.xrbs\] to populate each galaxy with XRBs, which involves drawing a number of XRBs per galaxy and assigning positions and luminosities for each one. LMXB positions were randomly distributed weighted by the surface brightness, whereas HMXBs were randomly distributed within a 1 kpc radius for SFR ranging from $10^{-5}$ to 1 $M_{\odot}$ yr$^{-1}$ (i.e., star formation outside of 1 kpc of the nucleus was ignored as these HMXBs will not be a problem). The XRBs were randomly assigned luminosities weighted by the XLF.
We simulated HDXI and AXIS observations using the *simx* software[^5], with the 2018 HDXI[^6] and AXIS[^7] responses. We assumed an absorbed power law spectrum for each XRB, with $\Gamma=1.8$ and $N_{\text{H}} = 2\times 10^{20}$ cm$^{-2}$ (Galactic absorption). We then projected the galaxies to $d$ and selected $t_{\text{exp}}$ and $\theta$, assuming a circular Gaussian PSF where $\theta$ is the on-axis half-power diameter. The PSF distortion with off-axis angle can be described by a second Gaussian term. We consider the effect of PSF blurring below.
Sources are detected using [wavdetect]{}, and for each XRB we compute the centroid error circle $\sigma = \sigma_{\text{telescope}}/\sqrt{N}$, where $N$ is the number of counts. We assume an optical galaxy centroid error of $\sigma=0.05$ arcsec, and reject any detected, non-nuclear XRBs. The accuracy of the centroid positions is insensitive to $\theta$. However, there is frequently a “glow” of X-rays from unresolved XRBs around the nucleus and from the wings of resolved XRBs. This glow is not uniformly distributed, but can be consistent with a weak nuclear point source and certainly impacts the centroid error circle. The proportion of galaxies with at least 5 counts within the nuclear aperture (using the 95% encircled energy radius) from this glow is approximately linear in $\theta$. We compute $P_{\text{XRB}}$ by including the glow in the measured centroid error circle and counting galaxies as contaminated where there are at least 5 counts in the nuclear aperture from the glow.
Figure \[figure.p\_xrb\] shows the dependence of $P_{\text{XRB}}$ on $d$ for the cases of $\log M_*/M_{\odot} = 8.5$ and $9.5$, which represent the mass range of interest. This example uses an exposure time of 50 ks and the *Lynx* spectral response (effective area as a function of energy), scaled to a collecting area of 1 m$^2$ at 1 keV. We computed $P_{\text{XRB}}$ over the range of parameters (assuming that SFR is proportional to mass, but not distributed in the same way) and find $$P_{\text{XRB}} \propto \theta \cdot t_{\text{exp}}^{-1/2} \cdot M_* \cdot d \cdot L_{X,\text{thresh}}^{-\beta}$$ where $\beta \approx 1$ for the XLFs that we used. The dependence on $t_{\text{exp}}$ comes from resolving and rejecting more of the glow, while the dependence on $d$ is from the nuclear aperture covering a larger physical area in the galaxy. If $L_{X,\text{thresh}} \equiv L_{X,\text{sens}}$, then $P_{\text{XRB}} \propto t_{\text{exp}}^{\beta-1/2} d^{-1}$.
![The likelihood of detecting one or more nuclear XRBs (or enough counts from a diffuse “glow” to register as a detection) as a function of distance for $10^{9.5} M_{\odot}$ (red) and $10^{8.5} M_{\odot}$ (blue) galaxies and a *Lynx* or AXIS-like mission. The different lines correspond to $L_{X,\text{thresh}}$ values of $10^{37}$ (dotted), $10^{37.5}$ (dashed), and $10^{38}$ erg s$^{-1}$ (solid) and extend out to the distance to which such a source could be detected. See text for description of the simulation method.[]{data-label="figure.p_xrb"}](figures/pxrbplot.eps){width="1\linewidth"}
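The final reduction can be checked numerically: substituting a background-free sensitivity limit $L_{X,\text{sens}} \propto d^2/t_{\text{exp}}$ into the proportionality above recovers $t_{\text{exp}}^{\beta-1/2} d^{1-2\beta}$, i.e., $t_{\text{exp}}^{1/2} d^{-1}$ for $\beta=1$. The normalization below is arbitrary:

```python
def p_xrb_scaling(theta, t_exp, m_star, d, l_thresh, beta=1.0):
    """The quoted proportionality, with an arbitrary normalization."""
    return theta * t_exp ** -0.5 * m_star * d * l_thresh ** -beta

def sens_limited(theta, t_exp, m_star, d, beta=1.0):
    """Tie the threshold to a sensitivity limit L_sens ∝ d^2 / t_exp."""
    return p_xrb_scaling(theta, t_exp, m_star, d, d ** 2 / t_exp, beta)

base = sens_limited(1.0, 1.0, 1.0, 1.0)
assert abs(sens_limited(1.0, 4.0, 1.0, 1.0) / base - 2.0) < 1e-9  # t^(1/2)
assert abs(sens_limited(1.0, 1.0, 1.0, 2.0) / base - 0.5) < 1e-9  # d^(-1)
```

Quadrupling the exposure doubles $P_{\text{XRB}}$ at fixed sensitivity scaling, while doubling the distance halves it, as stated.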
Sensitivity and Number of Galaxies
----------------------------------
We may now determine the best $L_{X,\text{thresh}}$ at each $M_*$ to optimize $N_{\text{gal}}^{\text{need}}$. Figure \[figure.ngal\_focc\] shows $N_{\text{gal}}^{\text{need}}$ as a function of [$f_{\text{active}}$]{} and $P_{\text{XRB}}$. Specifically, $N_{\text{gal}}^{\text{need}}$ is defined in Figure \[figure.ngal\_focc\] based on achieving $\pm$5% precision on [$f_{\text{occ}}$]{}, for a 68.3% confidence interval. At a given $M_*$, [$f_{\text{active}}$]{} must exceed 0.3 in order to keep $N_{\text{gal}}^{\text{need}}$ below 1,000. This approach can be generalized to measuring [$f_{\text{occ}}$]{} in bins of $M_*$ 0.5 dex wide or to a continuous function [@miller15]. We use the former case to sketch the sensitivity requirements.
![The number of galaxies needed to measure a 68.3% confidence interval equivalent to $\pm$5% precision, as a function of [$f_{\text{active}}$]{} and $P_{\text{XRB}}$. To reduce $N_{\text{gal}}^{\text{need}}$ below about 1,000 galaxies in a given mass bin requires [$f_{\text{active}}$]{}$\gtrsim 0.3$, which will require higher sensitivity at lower $M_*$.[]{data-label="figure.ngal_focc"}](figures/lsb_ngal.eps){width="1\linewidth"}
In a given mass bin, both [$f_{\text{active}}$]{} and $P_{\text{XRB}}$ increase with sensitivity. The increase is linear for $P_{\text{XRB}}$ but not for [$f_{\text{active}}$]{}; for [$f_{\text{active}}$]{}$\approx 0.25$, $(L_{X,\text{sens}}-\bar{L}_X)/\sigma \approx 0.5$, or well within the core of the distribution. Figure \[figure.lx\_mstar\] shows that [*Chandra*]{} achieved this for a sensitivity of $\sim 3\times 10^{38}$ erg s$^{-1}$ at $\log M_*/M_{\odot} > 9.5$. A reasonable approximation to the optimal sensitivity is $L_{X,\text{sens}} \approx \bar{L}_X$ ([$f_{\text{active}}$]{}$=0.5$ if [$f_{\text{occ}}$]{}$=1$). This approximation reflects the fact that $N_{\text{gal}}^{\text{need}}$ decreases sharply as the sensitivity probes the core of the Gaussian distribution but sees diminishing returns beyond it. Meanwhile, $P_{\text{XRB}}$ is also a function of mass, so the $L_{X,\text{sens}} \approx \bar{L}_X$ criterion applies in each mass bin.
For three mass bins $\log M_*/M_{\odot} = 8.0-8.5$, $8.5-9.0$, and $9.0-9.5$, the AMUSE-Field $L_X/M_*$ relation predicts optimal sensitivities of $L_{X,\text{sens}} \approx 2$, 3, and $16\times 10^{37}$ erg s$^{-1}$, respectively. At $\log M_*/M_{\odot} \le 9.5$, $P_{\text{XRB}} \le 0.1$ using the prior analysis (Figure \[figure.p\_xrb\]). These considerations lead to a conservative estimate of $N_{\text{gal}}^{\text{need}} \sim 1000$ in each bin, or about 3000 galaxies overall at the low-mass end.
We can infer that there are enough targets within 100 Mpc, where short HDXI snapshots suffice. @dobrycheva13 argue that there are about 37,000 galaxies in the SDSS in this volume, which covers 35% of the sky. The luminosity function for a flux-limited sample, $\Phi(L) \propto (L/L*)^{-\alpha+3/2} e^{-L/L*}$ with $\alpha=-1.07$, implies that only about 10% of the detected galaxies are at $8 < \log M_*/M_{\odot} < 9$ [@schechter76; @binggeli88]. However, intrinsically there are more of these galaxies than the more massive ones, and the $\sim$10,000 detected in the SDSS in this range imply up to a factor of 3–10 more, depending on the slope of the luminosity function at $L \ll L*$ [$\alpha < -1.3$; @blanton05; @liu08].
Many of these will be LSBGs by definition, considering the SDSS sensitivity, which only make up 1.6% of the SDSS spectroscopic sample [@galaz11]. In the next decade, the Vera Rubin Observatory [VRO; @LSST09] will survey more than 20,000 square degrees down to $>27.5$ mag, so we expect at least 10,000 galaxies per mass bin. Although many will be unsuitable for observations (due to obscuration by the Galactic plane, proximity to bright sources, or morphology), there will easily be 1,000 candidate targets per bin. One challenge is that the photometric redshifts may not cleanly identify LSBGs within 100 Mpc [@greco2018], so some spectroscopic follow-up will be necessary.
Strategy
--------
Observing 3,000 galaxies through pointed observations would require $\sim$100 Ms of HDXI time, or three years. Here we investigate the potential for serendipitous sources to reduce the dedicated observing burden to measure [$f_{\text{occ}}$]{}. For the sake of argument, we assume two years of HDXI observations in a five-year mission with 75% observing efficiency (with the rest of the time allocated to the *Lynx* grating and microcalorimeter instruments). This amounts to 47 Ms. We further assume that the HDXI time is divided among *long* (150 ks), *medium* (50 ks), and *short* (10 ks) exposures with no field overlap, with allocations of 20%, 40%, and 40%, respectively.
This would cover 10.5 deg$^2$, 63 deg$^2$, and 315 deg$^2$ for the long, medium, and short exposures. The sensitivities lead to distance limits, and thus to limiting volumes. At $8 < \log M_*/M_{\odot} < 8.5$, the limiting distances are 25 Mpc, 50 Mpc, and 100 Mpc for the short, medium, and long exposures. For $8.5 < \log M_*/M_{\odot} < 9.0$ they are 40 Mpc, 90 Mpc, and $>$100 Mpc, and for $\log M_*/M_{\odot} > 9$ they are all $>$100 Mpc. Assuming that the fields are observed at random, a few hundred galaxies could be observed in the two higher-mass bins but only a few tens of galaxies in the low-mass bin. This is the most conservative estimate because it wrongly assumes a *uniform* distribution, whereas a [*Chandra*]{}-like observing plan will target denser regions.
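The quoted sky coverages follow from simple bookkeeping. The sketch below reproduces them assuming an effective field of view of $\approx$0.168 deg$^2$ per pointing, a value we infer from the quoted areas themselves rather than from any official HDXI specification.

```python
total_s = 47e6                      # 47 Ms of HDXI time over the mission
fov_deg2 = 0.168                    # assumed field of view (inferred, not official)

plans = {                           # exposure [s], fraction of total time
    "long":   (150e3, 0.20),
    "medium": ( 50e3, 0.40),
    "short":  ( 10e3, 0.40),
}

areas = {}
for name, (exp, frac) in plans.items():
    n_fields = total_s * frac / exp
    areas[name] = n_fields * fov_deg2
    print(f"{name:6s}: {n_fields:6.0f} fields, {areas[name]:6.1f} deg^2")
```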
#### Cluster Outskirts
Galaxy clusters contain hundreds to thousands of galaxies and are of particular interest for X-ray observations. The cores of nearby clusters (Virgo, Fornax, Coma, and Perseus) have been well observed with [*Chandra*]{}, largely to study the intracluster medium (ICM). Future observations of the Perseus or Coma cores will be less useful for measuring [$f_{\text{occ}}$]{} because the ICM is so bright that reasonable exposures at HPD$=0.4$ arcsec will not be sensitive enough for galaxies with $\log M_*/M_{\odot} \lesssim 9.5$. In addition, [$f_{\text{active}}$]{} is lower in the Virgo core than in the field [@miller12], which we expect to be an even stronger effect in the larger Perseus and Coma clusters. However, cluster outskirts remain under-studied and are a key area of interest for *Lynx* and AXIS. AXIS is particularly interesting because of its planned low-Earth orbit [@mushotzky18], which reduces the particle background and enables a cleaner study of accreting ICM at the outskirts. Tiled observations at the outskirts would likely capture a few thousand galaxies where the ICM surface brightness is low. These would be sensitive probes of [$f_{\text{occ}}$]{} at $\log M_*/M_{\odot} \gtrsim 8.5$. LSBGs are an important part of this sample, as they make up a disproportionately large fraction of galaxies in clusters (likely due to ram-pressure stripping of gas).
#### Deep Fields
@miller15 considered the role of deep fields; the 4 Ms [*Chandra*]{} Deep Field-South probes AGNs in sub-$L_*$ galaxies in a cosmological volume, so assuming a uniform Eddington ratio distribution [@aird12], they showed that the distribution of X-ray detections probes [$f_{\text{occ}}$]{}. However, this is most effective above $\log M_*/M_{\odot} \ge 10$. *Lynx* and AXIS would create fields of equivalent depth in exposures of a few hundred ks, which would result in tens of such fields in the first few years of either mission. The main benefit to measuring [$f_{\text{occ}}$]{} at $\log M_*/M_{\odot} <10$ from the more distant objects is that the slope and scatter in the $L_X/M_*$ relation would be very tightly constrained, and possibly as a function of galaxy type or cosmological distance.
#### Normal Galaxies
Massive galaxies ($L>L^*$) are frequently targets of X-ray observations to study their hot gas, compact objects, or transient phenomena such as supernovae. However, dwarfs are clustered around more massive galaxies in the field [@binggeli90], and based on their relative frequency we would expect each HDXI or AXIS field with a massive galaxy to have a number of dwarfs. Often, these will be unsuitable targets due to morphology or background, but especially within 100 Mpc galaxy observations will be important for building up a sample of $\log M_*/M_{\odot} < 9$ targets. [*Chandra*]{} has observed numerous galaxies within this horizon, and we speculate that HDXI observations of these same galaxies would include at least 4,000 lower mass galaxies in fields with suitable sensitivity.
It is worth noting that these observations would also allow the detection of X-rays from MBHs in stripped dwarf nuclei (frequently former nuclear star clusters), such as in the ultra-compact dwarf M60-UCD1 [@strader2013; @seth2014]. A significant fraction of local MBHs (up to 1/3) may be located in such systems [@voggel2019], and for relatively nearby galaxies they can be easily identified via VRO and Wide Field Infrared Survey Telescope (WFIRST) colors [using methods developed by @munoz2014]. We expect several around each galaxy relevant for the [$f_{\text{occ}}$]{} measurement [for the Milky Way, about 6 have been found; @kruijssen2019], so a serendipitous sample of $\sim$1000 is easily feasible during the HDXI lifetime.
#### Targeted Survey
There will almost certainly be enough serendipitous sources at $\log M_*/M_{\odot} > 8.5$ to constrain [$f_{\text{occ}}$]{} to 5% precision, and so a major component of the program is “free,” requiring only that one waits several years. However, at the lowest masses it is much less certain that enough galaxies will be observed because the sensitivity of the typical field only captures systems within $d<25$ Mpc. There will not likely be enough deeper observations to make up for this limit by measuring [$f_{\text{active}}$]{} at a lower sensitivity.
This motivates a snapshot survey of very nearby dwarf galaxies, many of which will be LSBGs. We estimate that 200-400 targets are required, with exposure times between 5-15 ks. This leads to a maximum total exposure time of $\sim$3 Ms. A dedicated survey of the Virgo cluster would significantly reduce the total time, since many of the nearby dwarf galaxies will be found in and around the cluster. If fields are selected to contain an average of two good candidates, the total observing burden is reduced to $\sim$1.5 Ms, which is a large program but a modest investment for measuring [$f_{\text{occ}}$]{}.
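Taking mid-range values within the quoted intervals (300 targets at 10 ks each; our choice for illustration) reproduces the budget arithmetic:

```python
n_targets = 300                    # mid-range of the 200-400 target estimate
exp_ks = 10.0                      # mid-range of the 5-15 ks snapshot exposures

total_ms = n_targets * exp_ks / 1000.0     # one good candidate per pointing
paired_ms = total_ms / 2                   # fields chosen with ~2 candidates each

print(f"one target per field : {total_ms:.1f} Ms")    # ~3 Ms
print(f"two targets per field: {paired_ms:.1f} Ms")   # ~1.5 Ms
```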
Summary {#section.summary}
=======
We searched for nuclear X-ray sources in 32 nearby LSBGs with [*Chandra*]{} and found 3-4, which we judge as very likely to be MBHs. This leads to [$f_{\text{active}}$]{}$=0.09-0.12$, which is consistent with the expectation from the best-fitting $L_X/M_*$ correlation from the AMUSE-Field study [@miller12], which used [*Chandra*]{} images of high surface brightness, early-type galaxies with almost no gas. However, [$f_{\text{active}}$]{} is inconsistent with the same relation if $M_*$ is replaced by the total baryonic mass, which is important since LSBGs have large gas fractions.
This result suggests that weak nuclear activity in nearby LSBGs with regular morphology is similar to that in normal galaxies of the same stellar mass, and thus that MBH growth is somehow tied to stellar, rather than baryonic or dynamical, mass. One explanation could be that isolated LSBGs are inefficient at concentrating gas that would lead both to star formation and MBH growth. However, the sample size is too small to independently measure any relationship between $L_X$ and $M_*$ (or total baryonic mass) in LSBGs, and a deeper, more extensive X-ray survey is needed to do this. Such a survey would also be able to answer whether the nuclear activity is better correlated with bulge luminosity, as argued by @galaz11 for LSBGs, or total stellar mass. Nevertheless, our result supports a scenario in which MBHs co-evolve with the stellar component, rather than forming prior to it or in a way that correlates with halo mass.
The agreement with the AMUSE-Field $L_X/M_*$ correlation also suggests that LSBGs can be used to constrain the local [$f_{\text{occ}}$]{} of MBHs, albeit with too few detected sources to independently measure an $L_X/M_*$ relationship. LSBGs provide many relatively isolated targets with $\log M_* < 9$, where [$f_{\text{occ}}$]{} predictions differ between heavy- and light-seed theories of MBH formation. A dedicated program, spaced over about five years, with a new, high resolution, wide-field X-ray camera such as *Lynx* or AXIS would enable a measurement of [$f_{\text{occ}}$]{} to a precision of several percent, thereby providing a strong local boundary condition on all MBH formation and evolution models, and extending studies of black-hole feedback to the low-mass end of the luminosity function.
The authors thank the reviewer for a careful and thoughtful review that substantially improved this manuscript.
Support for this work was provided by the National Aeronautics and Space Administration through Chandra Special Project SP8-19003X.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr).
Abell, P. A., Allison, J., Anderson, S. F., [et al.]{} 2009.
, J., [Coil]{}, A. L., [Moustakas]{}, J., [et al.]{} 2012, , 746, 90,
, S., [Albareti]{}, F. D., [Allende Prieto]{}, C., [et al.]{} 2015, , 219, 12,
, D., [Gandhi]{}, P., [Smette]{}, A., [H[ö]{}nig]{}, S. F., & [Duschl]{}, W. J. 2011, , 536, A36,
, T. R. 2004, , 608, 957,
, E. F., [McIntosh]{}, D. H., [Katz]{}, N., & [Weinberg]{}, M. D. 2003, , 149, 289,
, B., [Sandage]{}, A., & [Tammann]{}, G. A. 1988, , 26, 509,
, B., [Tarenghi]{}, M., & [Sandage]{}, A. 1990, , 228, 42
, M. R., [Lupton]{}, R. H., [Schlegel]{}, D. J., [et al.]{} 2005, , 631, 208,
, M. R., & [Moustakas]{}, J. 2009, , 47, 159,
, G., [Impey]{}, C., & [McGaugh]{}, S. 1997, , 109, 745,
, G. D., [Impey]{}, C. D., [Malin]{}, D. F., & [Mould]{}, J. R. 1987, , 94, 23,
, K. C., [Magnier]{}, E. A., [Metcalfe]{}, N., [et al.]{} 2016, arXiv e-prints, arXiv:1612.05560.
, H. M., [Tully]{}, R. B., [Fisher]{}, J. R., [et al.]{} 2009, , 138, 1938,
, J. J., [Spergel]{}, D. N., [Gunn]{}, J. E., [Schmidt]{}, M., & [Schneider]{}, D. P. 1997, , 114, 635,
, M., [Reynolds]{}, C. S., [Vogel]{}, S. N., [McGaugh]{}, S. S., & [Kantharia]{}, N. G. 2009, , 693, 1300,
, D. V. 2013, Odessa Astronomical Publications, 26, 187
, V. M., [Pellizza]{}, L. J., [Mirabel]{}, I. F., & [Pedrosa]{}, S. E. 2015, , 579, A44,
, A., [Gallo]{}, E., [Hodges-Kluck]{}, E., [et al.]{} 2017, , 841, 51,
, P. E., [Kashyap]{}, V., [Rosner]{}, R., & [Lamb]{}, D. Q. 2002, , 138, 185,
, G., [Herrera-Camus]{}, R., [Garcia-Lambas]{}, D., & [Padilla]{}, N. 2011, , 728, 74,
, E., [Treu]{}, T., [Jacob]{}, J., [et al.]{} 2008, , 680, 154,
, E., [Hodges-Kluck]{}, E., [Treu]{}, T., [et al.]{} 2019, arXiv e-prints.
, J. A., [Dominguez]{}, A., [Gelmis]{}, K., [et al.]{} 2018, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10699, 106990N,
, N. 1986, , 303, 336,
, M. 2004, , 349, 146,
, A. W. 2003, , 125, 3398,
, J. P., [Greene]{}, J. E., [Strauss]{}, M. A., [et al.]{} 2018, , 857, 104,
, K., [Richstone]{}, D. O., [Gebhardt]{}, K., [et al.]{} 2009, , 698, 198,
, L., [Bomans]{}, D. J., & [Dettmar]{}, R.-J. 2007, , 471, 787,
, W. K., & [Richter]{}, O.-G. 1989, [A General Catalog of HI Observations of Galaxies. The Reference Catalog.]{}, 350
, C., & [Bothun]{}, G. 1997, , 35, 267,
, C., [Burkholder]{}, V., & [Sprayberry]{}, D. 2001, , 122, 2341,
, P. M. W., [Burton]{}, W. B., [Hartmann]{}, D., [et al.]{} 2005, , 440, 775,
, Jr., R. C. 1998, , 498, 541,
, L. J., [Groves]{}, B., [Kauffmann]{}, G., & [Heckman]{}, T. 2006, , 372, 961,
, D. W., [Fabbiano]{}, G., [Ivanova]{}, N., [et al.]{} 2013, , 764, 98,
, J., & [Ho]{}, L. C. 2013, , 51, 511,
, J. M. D., [Pfeffer]{}, J. L., [Reina-Campos]{}, M., [Crain]{}, R. A., & [Bastian]{}, N. 2019, , 486, 3180,
, N., [Gallo]{}, E., [Hodges-Kluck]{}, E., [et al.]{} 2019, arXiv e-prints.
, B. D., [Alexander]{}, D. M., [Bauer]{}, F. E., [et al.]{} 2010, , 724, 559,
, B. D., [Basu-Zych]{}, A. R., [Mineo]{}, S., [et al.]{} 2016, , 825, 7,
, B. D., [Eufrasio]{}, R. T., [Tzanavaris]{}, P., [et al.]{} 2019, , 243, 3,
, C. T., [Capak]{}, P., [Mobasher]{}, B., [et al.]{} 2008, , 672, 198,
, D., [Prugniel]{}, P., [Terekhova]{}, N., [Courtois]{}, H., & [Vauglin]{}, I. 2014, , 570, A13,
, S. S. 1996, , 280, 337,
, L., [Yuan]{}, W.-M., & [Dong]{}, X.-B. 2009, Research in Astronomy and Astrophysics, 9, 269,
, A., [Heinz]{}, S., & [di Matteo]{}, T. 2003, , 345, 1057,
, B., [Gallo]{}, E., [Treu]{}, T., & [Woo]{}, J.-H. 2012, , 747, 57,
, B. P., [Gallo]{}, E., [Greene]{}, J. E., [et al.]{} 2015, , 799, 98,
, R. F., [Disney]{}, M. J., [Parker]{}, Q. A., [et al.]{} 2004, , 355, 1303,
, S., [Gilfanov]{}, M., & [Sunyaev]{}, R. 2012, , 419, 2095,
, D. G., [Levine]{}, S. E., [Canzian]{}, B., [et al.]{} 2003, , 125, 984,
, P., [Schiminovich]{}, D., [Barlow]{}, T. A., [et al.]{} 2005, , 619, L7,
, R. P., [Puzia]{}, T. H., [Lan[ç]{}on]{}, A., [et al.]{} 2014, , 210, 4,
, R. 2018, ArXiv e-prints.
, D. D., [Seth]{}, A. C., [Neumayer]{}, N., [et al.]{} 2018, , 858, 118,
, D. D., [den Brok]{}, M., [Seth]{}, A. C., [et al.]{} 2019, arXiv e-prints, arXiv:1902.03813.
, M. B., & [Zepf]{}, S. E. 2016, , 818, 33,
, S., [Prabhu]{}, T. P., & [Das]{}, M. 2011, , 418, 789,
, S. D., [Krusch]{}, E., [Bomans]{}, D. J., & [Dettmar]{}, R. J. 2009, , 504, 807,
, S., & [Khabibullin]{}, I. 2017, , 468, 2249,
, P. 1976, , 203, 297,
, J. 1998, , 116, 1650,
, J. M., [Bothun]{}, G. D., [Schneider]{}, S. E., & [McGaugh]{}, S. S. 1992, , 103, 1107,
, J. M., [McGaugh]{}, S. S., & [Eder]{}, J. A. 2001, , 121, 2420,
, N. J., [Dudik]{}, R. P., [Dorland]{}, B. N., [et al.]{} 2015, , 221, 12,
, A., [Ag[ü]{}eros]{}, M., [Lee]{}, D., & [Basu-Zych]{}, A. 2008, , 678, 116,
, A. C., [van den Bosch]{}, R., [Mieske]{}, S., [et al.]{} 2014, , 513, 398,
, R., [Ho]{}, L. C., & [Feng]{}, H. 2017, , 842, 131,
, D., [Impey]{}, C. D., [Bothun]{}, G. D., & [Irwin]{}, M. J. 1995, , 109, 558,
, J., [Seth]{}, A. C., [Forbes]{}, D. A., [et al.]{} 2013, , 775, L6,
, S., [Ramya]{}, S., [Das]{}, M., [et al.]{} 2016, , 455, 3148,
, J. R., [Sun]{}, M., [Zeimann]{}, G. R., [et al.]{} 2015, , 811, 26,
, R. C. E., [Gebhardt]{}, K., [G[ü]{}ltekin]{}, K., [Y[i]{}ld[i]{}r[i]{}m]{}, A., & [Walsh]{}, J. L. 2015, , 218, 10,
, K. T., [Seth]{}, A. C., [Baumgardt]{}, H., [et al.]{} 2019, , 871, 159,
, B., [Perret]{}, B., [Petremand]{}, M., [et al.]{} 2013, , 145, 36,
, M. 2012, Science, 337, 544,
[^1]: http://cxc.harvard.edu/proposer/POG/
[^2]: http://cxc.harvard.edu/ciao/
[^3]: http://cxc.harvard.edu/proposer/POG/
[^4]: available at https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl
[^5]: https://hea-www.harvard.edu/simx/
[^6]: http://hea-www.cfa.harvard.edu/ jzuhone/soxs/responses.html
[^7]: http://axis.astro.umd.edu/
---
abstract: 'We address the question of finding the most effective convex decompositions into boundary elements (so-called boundariness) for sets of quantum states, observables and channels. First we show that in general convex sets the boundariness essentially coincides with the question of the most distinguishable element, thus, providing an operational meaning for this concept. Unexpectedly, we discovered that for any interior point of the set of channels the optimal decomposition necessarily contains a unitary channel. In other words, for any given channel the best distinguishable one is some unitary channel. Further, we prove that boundariness is sub-multiplicative under composition of systems and explicitly evaluate its maximal value that is attained only for the most mixed elements of the considered convex structures.'
author:
- Zbigniew Puchała
- Anna Jenčová
- Michal Sedlák
- Mário Ziman
title: 'Exploring boundaries of quantum convex structures: special role of unitary processes'
---
Introduction
============
Convexity, rooted in the very concept of probability, is one of the unavoidable mathematical features of our description of physical systems. Operationally, it originates in our ability to switch randomly between different physical devices of the same type. As a result, all elementary quantum structures and most quantum properties are “dressed in convex clothes”. For example, the sets of states, observables and processes are all convex, and it is of foundational interest to understand the similarities and identify the differences of their convex structures.
For any convex set, we may introduce the concept of an interior point in a natural way as a point that can be connected to any other point by a line segment containing it in its interior. We will use this concept to define mixedness and boundariness as measures evaluating how much the element is not extremal, or how much the element is not a boundary point, respectively. More precisely, mixedness will be determined via the highest weight occurring in decompositions into extremal points and boundariness will be determined via the highest weight occurring in decompositions into boundary points. In both cases, these numbers tell us how much randomness is needed to create the given element. Since we focus on sets of quantum devices related to finite dimensional Hilbert spaces, we will work in finite dimensional setting, but note that similar definitions can be introduced also in infinite dimensions, although some of the facts used below are no longer true.
If the given convex set is also compact, it can be viewed as a base of a closed pointed convex cone and we may consider the corresponding base norm in the generated vector space (see e.g. [@rockafellar]). Note that the related distance between points of the base can be determined solely from the convex structure of the base (see for instance recent works [@reeb_etal2011; @errka]). As it is well known for quantum states [@Helstrom; @Holevo] and as has been recently proved for other quantum devices [@jencova2013], this distance is closely related to the minimum error discrimination problem.
It was proved in Ref. [@haapasalo2014] that for the sets of quantum states and observables, boundariness and the base norm distance are closely related. More precisely, the largest distance of a given interior point $y$ from another point of the base is given in terms of boundariness of $y$. In the present paper, we show that this is true for any base of the positive cone in a finite dimensional ordered vector space. In particular, for sets of quantum devices, this property singles out a subset of extremal elements that are best distinguishable from interior points. Exploiting these results, we will point out an interesting difference between the convex sets of states and channels, and also provide an unexpected operational characterization of unitary channels.
This paper is organized as follows. In Section II we will provide readers with basic elements of convex analysis and quantum theory relevant for the rest of the paper. The concept of boundariness will be introduced in Section III, where various equivalent definitions will be stated and also its operational meaning will be discussed. In Section IV we will investigate the boundariness for the case of quantum channels. In particular, we will prove a conjecture stated in Ref. [@haapasalo2014]. In Section V we will address the question of boundariness for composition of systems and Section VI is devoted to identification of elements for which boundariness achieves its maximal value. Last Section VII summarizes our results.
Quantum convex cone structures
==============================
Suppose $V$ is a real finite-dimensional vector space and $C\subset V$ is a closed convex cone. We assume that $C$ is *pointed*, i.e. $C\cap -C=\{0\}$, and *generating*, i.e. $V=C-C$. Then $(V,C)$ becomes a partially ordered vector space, with $C$ the cone of positive elements. Let $V^*$ be the dual space with duality $\<\cdot,\cdot\>$, then we may introduce a partial order in $V^*$ as well, with the dual cone of positive functionals $C^*=\{f\in V^*, \<f,z\>\ge 0,\
\forall z\in C\}$. Note that $C^*$ is again pointed and generating, and $C^{**}=C$.
Interior points $z\in int(C)$ of the cone $C$ are characterized by the property that for each $v\in V$ there is some $t>0$ such that $tz-v\in C$, that is, the interior points of $C$ are precisely the *order units* in $(V,C)$. Alternatively, the following lemma gives a well known characterization of boundary points of $C$ as elements contained in some supporting hyperplane of $C$, see Ref. [@rockafellar Section 11] for more details.
An element $z\in C$ is a boundary point, $z\in \partial C$, if and only if there exists a nonzero element $f\in C^*$ such that $\<f,z\>=0$. Clearly, then also $f\in \partial C^*$.
A *base* of $C$ is a compact convex subset $B\subset C$ such that for every nonzero $z\in C$, there is a unique constant $t>0$ and an element $b\in B$ such that $z=tb$. The *relative interior* $ri(B)$ is defined as the interior of $B$ with respect to the relative topology in the smallest affine subspace containing $B$. Note that we have $ri(B)=B\cap int(C)$, so that the boundary points $z\in \partial B=B\setminus ri(B)$ can be characterized as in the previous lemma.
There is a one-to-one correspondence between bases $B\subset C$ and order units in the dual space $e\in int(C^*)$, such that $B=\{ z\in C, \<e,z\>=1\}$ is a base of $C$ if and only if $e$ is an order unit. The order unit $e$ determines the *order unit norm* in $(V^*,C^*)$ as $$\|f\|_e=\inf\{\lambda>0, \lambda e\pm f\in C^*\},\quad f\in V^*.$$ Its dual is the *base norm* $\|\cdot\|_B$ in $(V,C)$. In particular, we obtain the following expression for the corresponding distance of elements of $B$: $$\begin{aligned}
\label{eq:base}
\|x-y\|_B=2\sup_{g,e-g\in C^*}\<g,x-y\>,\qquad x,y\in B\end{aligned}$$
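For the cone of positive operators with the trace as order unit, the base is the set of density operators and formula (\[eq:base\]) reduces to the familiar trace distance: the optimal $g$ is the projection onto the positive part of $x-y$. The following is a quick numerical sanity check of this fact (illustrative code of ours, not part of the derivation):

```python
import numpy as np

def random_state(d, rng):
    # Ginibre construction: A A^dag normalized to unit trace.
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = a @ a.conj().T
    return m / np.trace(m).real

rng = np.random.default_rng(0)
d = 4
x, y = random_state(d, rng), random_state(d, rng)

delta = x - y
w, v = np.linalg.eigh(delta)
trace_norm = np.sum(np.abs(w))            # ||x - y||_1

# Optimal effect g: projector onto the positive eigenspace of x - y.
pos = v[:, w > 0]
g = pos @ pos.conj().T
base_norm = 2 * np.trace(g @ delta).real  # right-hand side of (eq:base)

print(trace_norm, base_norm)
```

Since ${\rm tr}(x-y)=0$, the sum of the positive eigenvalues equals half the sum of absolute eigenvalues, which is exactly why the supremum yields the trace norm.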
We will now describe the basic convex sets (see Ref.[@heinosaari12]) of quantum states, channels and measurements (observables). Let us stress that each of these sets is a compact convex subset in a finite dimensional vector space and as such forms a base of the positive cone of some partially ordered vector space, so that these sets fit into the framework introduced above.
Let us denote by ${\mathcal{H}}_d$ the $d$-dimensional Hilbert space associated with the studied physical system. Then ${\mathcal{S}}({\mathcal{H}}_d)$ stands for the set of all density operators (positive linear operators of unit trace) representing the set of quantum states.
Observables are identified with positive-operator valued measures (POVMs) being determined by a collection of effects $E_1,\dots,E_m$ ($O\leq E_j\leq I$) normalized as $\sum_j E_j=I$. Each effect $E_j$ defines a different measurement outcome. In particular, if the system is prepared in a state $\varrho$, then $p_j={\mathrm{tr}[\varrho E_j]}$ is the probability of the registration of the $j$th outcome.
Quantum channels are modeled by completely positive trace-preserving linear maps, i.e. by transformations $\varrho\mapsto \sum_l A_l\varrho A_l^\dagger$ for any collection of operators $\{A_l\}_l$ satisfying the normalization $\sum_l A_l^\dagger A_l=I$. Define the one-dimensional projection operator $\Psi_+=\frac{1}{d}\sum_{j,k}\ket{jj}\bra{kk}$ on ${\mathcal{H}}_d\otimes{\mathcal{H}}_d$, where the vectors $\ket{j}$ form a complete orthonormal basis on ${\mathcal{H}}_d$. Due to the Choi-Jamiolkowski isomorphism [@Jamiolkowski72; @Choi75], the set of quantum channels of a finite-dimensional quantum system is mathematically closely related to the set of density operators (states) of a composite system. In particular, a channel ${\mathcal{E}}$ is associated with a density operator $$J_{\mathcal{E}}=({\mathcal{E}}\otimes{\mathcal{I}})[\Psi_+]\in{\mathcal{S}}({\mathcal{H}}_d\otimes{\mathcal{H}}_d)$$ and the normalization condition ${\rm tr}_1 J_{\mathcal{E}}=\frac{1}{d}I$ is the only difference between the mathematical representations of states and channels. In other words, only a special (convex) subset of density operators on ${\mathcal{H}}_d\otimes{\mathcal{H}}_d$ can be identified with quantum channels on $d$-dimensional quantum systems.
Boundariness
============
For any element of a compact convex subset $\sZ\subset V$ with boundary $\partial \sZ$ and a set of extremal elements $ext(\sZ)$ we may introduce the concepts of *mixedness* and *boundariness* evaluating the “distance” of the element from extremal and boundary points, respectively. For any convex decomposition $y=\sum_j \pi_j x_j$, where $0\leq\pi_j\leq 1$ and $\sum_j\pi_j=1$, we define its maximal weight $w_y(\{\pi_j,x_j\}_j)=\max_j
\pi_j$. Using this quantity, we may express the mixedness of $y\in \sZ$ as follows $$m(y)=1-\sup_{x_j\in ext(\sZ)} w_y(\{\pi_j,x_j\}_j)\,,$$ where supremum is taken over all convex decompositions of $y$ into extremal elements. In a similar way we may define the boundariness [@haapasalo2014] of $y$ as $$\begin{aligned}
\label{eq:defbyn}
b(y)=1-\sup_{x_j\in\partial \sZ} w_y(\{\pi_j,x_j\}_j)\,,\end{aligned}$$ where supremum is taken over all decompositions into boundary elements. By definition $m(y)\geq b(y)$, since the convex decompositions in (\[eq:defbyn\]) are less restrictive.
Let us prove that the above formula is equivalent to the original definition [@haapasalo2014] of boundariness. We recall that for any element $y\in \sZ$, the [*weight function*]{} $t_y:\sZ \to [0,1]$ assigns for every $x\in \sZ$ the supremum of possible weights of the point $x$ in convex decompositions of $y$, i.e. $$t_y(x)=\sup\Big\{0\leq t < 1\,\Big|\,z=\frac{y-tx}{1-t}\in \sZ\Big\}\,.$$ Thanks to compactness of $\sZ$, the supremum is really attained and there exists some $z\in\sZ$ such that $y=t x+(1-t)z$, where $t=t_y(x)$. Note that we must have $z\in \partial \sZ$ and, in fact, for an interior point $y$, $t=t_y(x)$ is equivalent to $z\in \partial \sZ$. Let us consider a convex decomposition $y=\sum_j \pi_j x_j$, $x_j \in \partial \sZ$ and denote by $k$ the index for which $\pi_k=\max_j \pi_j\neq 1$ (the case $\max_j \pi_j=1$ is trivial and $b(y)=0$ in both definitions). If we define $\overline{x}_k=\sum_{j\neq k}
\frac{\pi_j}{1-\pi_k}x_j$ then $y=\pi_k x_k + (1-\pi_k)\overline{x}_k$, where $\overline{x}_k\in \sZ$. Either $\overline{x}_k \in \partial\sZ$ and we managed to rewrite $y$ as a two term convex combination of elements from boundary or $\overline{x}_k \in \sZ\setminus\partial\sZ$, which implies $\pi_k<t_y(x_k)$ and there exists $w\in\partial\sZ$ such that a better two term decomposition $y=t
x_k+(1-t)w$ with $t>\pi_k$ exists. This shows that definition (\[eq:defbyn\]) is equivalent to $$\begin{aligned}
b(y)&=1-\sup_{x,z\in \partial \sZ}\{s|y=(1-s)x+s z\} \nonumber \\
&=\inf_{x\in \partial \sZ}t_y(x)\,. \nonumber\end{aligned}$$ Finally, we obtain the original definition [@haapasalo2014] $$\begin{aligned}
\label{eq:defbdorig}
b(y)=\inf_{x\in \sZ}t_y(x),\end{aligned}$$ because the infimum is always determined by elements $x\in ext(\sZ)$ as we discussed in Ref. [@haapasalo2014 Proposition 1].
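As a concrete illustration of the two equivalent definitions (a worked example of our own), consider a qubit state diagonal in the orthonormal basis $\{\ket{0},\ket{1}\}$:

```latex
% Worked example (ours): y = diag(0.7, 0.3) on H_2.
y = 0.7\,\ket{0}\!\bra{0} + 0.3\,\ket{1}\!\bra{1}.
% Both terms are pure, hence boundary, states, and no boundary decomposition
% of y carries a weight larger than 0.7, so definition (eq:defbyn) gives
b(y) = 1 - 0.7 = 0.3.
% Equivalently, the infimum in (eq:defbdorig) is attained at x = \ket{1}\bra{1}:
t_y\big(\ket{1}\!\bra{1}\big) = 0.3,
\qquad
z = \frac{y - 0.3\,\ket{1}\!\bra{1}}{0.7}
  = \ket{0}\!\bra{0}\in\partial\mathcal{S}(\mathcal{H}_2).
```

In general, for density operators the infimum is attained on the eigenprojection belonging to the smallest eigenvalue, so here $b(y)$ coincides with $\lambda_{\min}(y)$.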
Having established the cone picture of quantum structures, it is useful to see how boundariness can be defined using this language.
\[lemma:bdd\_dual\] Let $f\in C^*$. If $\|f\|_e=1$, then $e-f\in \partial C^*$.
Suppose $\|f\|_e=1$, then $e-f\in C^*$. If $e-f\in int(C^*)$, then there is some $t>0$ such that $e-f\pm tf\in C^*$. But then $(1+t)^{-1}e-f\in C^*$, so that $\|f\|_e\le (1+t)^{-1}<1$.
We now find an equivalent expression for boundariness.
\[prop:evalbd1\] $b(y)=\min\{ \<f,y\>,\ f\in C^*, \|f\|_e=1\}$.
Let us denote the minimum on the right hand side by $\tilde b(y)$. Let $x\in
\sZ$ and $y=tx+(1-t)z$, with $t=t_y(x)$. Then $z\in \partial \sZ$, so that there is some nonzero $f\in C^*$ such that $\<f,z\>=0$. Put $\tilde f=
\|f\|_e^{-1}f$, then $\tilde f\in C^*$, $\|\tilde f\|_e=1$ and we have $$\tilde b(y)\le \<\tilde f,y\>=t_y(x)\<\tilde f,x\>\le t_y(x).$$ Since this holds for all $x\in \sZ$, we obtain $\tilde b(y)\le b(y)$.
For the converse, let $f\in C^*$, $\|f\|_e=1$, then $e-f\in \partial C^*$. Hence there is some element $x\in \sZ$, such that $\<e-f,x\>=0$. Let $s=t_y(x)$, then $y=sx+(1-s)z$ for some $z\in \partial \sZ$. We have $$\<f,y\>=1-\<e-f,y\>=1-(1-s)\<e-f,z\>\ge s=t_y(x)\ge b(y),$$ hence $\tilde b(y)\ge b(y)$.
Let $x,y\in \sZ$ and take $z\in \partial \sZ$ such that $y=sx+(1-s)z$, where $s=t_y(x)$. Then $$\begin{aligned}
\label{eq:upbound1}
\|x-y\|_B&=\|x-sx-(1-s)z\|_B \nonumber \\
&=(1-s)\|x-z\|_B\le 2(1-b(y))\end{aligned}$$ constitutes the upper bound derived in [@haapasalo2014].
\[prop:attained\] Let $y\in ri(\sZ)$ and let $x\in\sZ$. The following are equivalent.
1. $\|y-x\|_B=2(1-b(y))$
2. $t_y(x)=b(y)$
3. There is some $f\in C^*$, with $\|f\|_e=1$ and $\<f,y\>=b(y)$, such that $\<f,x\>=1$.
Suppose (i) and let $y=sx+(1-s)z$ with $s=t_y(x)$. Then $$2(1-b(y))=\|x-y\|_B=(1-s)\|x-z\|_B.$$ Since both $(1-s)\le 1-b(y)$ and $\|x-z\|_B\le 2$, the equality implies that $t_y(x)=s=b(y)$.
Suppose (ii), then $y=b(y)x+(1-b(y))z$ for some $z\in \partial \sZ$. There is some nonzero $f\in C^*$ such that $\<f,z\>=0$ and we may clearly suppose that $\|f\|_e=1$. By Proposition \[prop:evalbd1\], $b(y)\le \<f,y\>=b(y)\<f,x\>\le
b(y)$. Since $y$ is an interior point, $b(y)>0$, so that we must have $\<f,y\>=b(y)$ and $\<f,x\>=1$.
Finally, suppose (iii), then using inequalities (\[eq:base\]),(\[eq:upbound1\]), $$\begin{aligned}
2(1-b(y))&\ge \|x-y\|_B\ge 2\<e-f,y-x\>=2\<e-f,y\>\\&=2(1-b(y)).\end{aligned}$$
We now resolve the conjecture of the tightness of the upper bound (\[eq:upbound1\]) by showing that it can be always saturated.
\[thm:tightness\] For any $y\in \sZ$, there exists some $x_0\in ext(\sZ)$, such that $$\|y-x_0\|_B=\sup_{x\in B}\|y-x\|_B=2(1-b(y)).$$
Note first that since $x\mapsto \|y-x\|_B$ is a convex function, the supremum over $\sZ$ is attained at some $x_0\in ext(\sZ)$. It is therefore enough to prove that equality in (\[eq:upbound1\]) holds for some $x\in \sZ$. If $y$ is an interior point, then by Proposition \[prop:attained\], the equality is attained for any $x$ such that $t_y(x)=b(y)$, and we know from the results in [@haapasalo2014] that this is achieved in $B$. If $y\in \partial \sZ$, then there exists some $f\in C^*$, $\|f\|_e=1$ such that $\<f,y\>=0$ and since $e-f\in \partial C^*$, there is some $x\in \sZ$ such that $\<e-f,x\>=0$. Then $$2\ge \|y-x\|_B\ge 2\<e-f,y-x\>=2=2(1-b(y)).$$
Boundariness for quantum channels
=================================
In Ref. [@haapasalo2014] it was shown that the inequality (\[eq:upbound1\]) is saturated for states and observables, however, the case of channels remained open. Theorem \[thm:tightness\] shows that this saturation holds also in this remaining case. In particular, for any interior point $Y\in\cal{Q}$, where $\cal{Q}$ is either the set of quantum states, or channels, or observables, the identity $$||X-Y||_B=2(1-b(Y))\,,$$ holds for a suitable $X\in ext(\cal{Q})$. In what follows we make the slightly stronger and surprising observation that, in the case of channels, $X$ needs to be a unitary channel. We will prove a theorem indicating that unitary channels are special from the perspective of boundariness and minimum-error discrimination.
\[lem:optsfd\] Let $D$ be a positive operator on ${\mathcal{H}}_d\otimes{\mathcal{H}}_d$ and define $$\label{eqn:def-S}
\mathcal{R} = \left\{\ket{y}\in{\mathcal{H}}_d\otimes{\mathcal{H}}_d:
{\rm tr}_1\ket{y}\bra{y}\leq \frac{1}{d}I\right\}.$$ Denote by $\ket{y_D}\in\mathcal{R}$ a vector which maximizes the overlap with $D$, i.e. $\bra{y_D} D \ket{y_D} = \max_{\ket{y} \in \mathcal{R}} \bra{y} D
\ket{y}$. Then $\ket{y_D}$ is a unit vector, hence it is maximally entangled.
Let us note that $\ket{y}\in\mathcal{R}$ is normalized to one if and only if $\ket{y}$ is maximally entangled, i.e. ${\rm tr}_1\ket{y}\bra{y}=\frac{1}{d}I$. Suppose $\ket{y_D}$ has the following Schmidt decomposition $\ket{y_D}=\sum_j
\sqrt{\mu_j} \ket{e_j}\ket{f_j}$ and assume that for some $k$ we have $\mu_k < 1/d$, thus it is not normalized. Then $$\begin{aligned}
\bra{y_D} D \ket{y_D}
=&\, \mu_k \bra{e_k f_k} D \ket{e_k f_k} +
\sum_{j,l\neq k} \sqrt{\mu_j \mu_l} \bra{e_j f_j} D \ket{e_l f_l} \nonumber \\
& + 2 \sqrt{\mu_k} \sum_{j\neq k} \sqrt{\mu_j} Re \bra{e_k f_k} D \ket{e_j f_j}\,. \nonumber\end{aligned}$$
In what follows we will construct a vector from $\mathcal{R}$ which has a greater overlap with $D$. First, we introduce the vector $\ket{\tilde{e}_k}$, which differs from $\ket{e_k}$ only by a sign: $$\ket{\tilde{e}_k} = \mathrm{sgn}_+\left(\sum_{j\neq k} \sqrt{\mu_j} Re \bra{e_k f_k} D \ket{e_j f_j}\right) \ket{e_k},$$ where $\mathrm{sgn}_+(x)$ equals $1$ for non-negative $x$ and $-1$ for negative $x$. Using this vector we write $$\begin{split}
&
\mu_k \bra{e_k f_k} D \ket{e_k f_k} +
2 \sqrt{\mu_k} \sum_{j=1, j\neq k}^d \sqrt{\mu_j} Re \bra{e_k f_k} D \ket{e_j f_j}\\
&\leq
\mu_k \bra{e_k f_k} D \ket{e_k f_k} +
2 \sqrt{\mu_k} \left| \sum_{j=1, j\neq k}^d \sqrt{\mu_j} Re \bra{e_k f_k} D \ket{e_j f_j} \right|\\
&=
\mu_k \bra{\tilde{e}_k f_k} D \ket{\tilde{e}_k f_k} +
2 \sqrt{\mu_k} \sum_{j=1, j\neq k}^d \sqrt{\mu_j} Re \bra{\tilde{e}_k f_k} D \ket{e_j f_j}.
\end{split}$$ In the last line above, $\mu_k$ is multiplied by strictly positive factor ($D$ is a positive matrix) and $\sqrt{\mu_k}$ is multiplied by a non-negative factor, so we will (strictly) increase the value of the products if we replace $\mu_k$ with $\frac1d$. Finally we obtain $$\begin{split}
\bra{y_D} D \ket{y_D} < \bra{\tilde{y}} D \ket{\tilde{y}},
\end{split}$$ for $\ket{\tilde{y}} = \sum_{i=1,i\neq k}^d \sqrt{\mu_i} \ket{e_i f_i} +
\sqrt{\frac1d} \ket{\tilde{e}_k f_k}$. Since $\ket{\tilde{y}} \in \mathcal{R}$, we obtain a contradiction.
\[thbfore\] Suppose ${\mathcal{F}}$ is an interior element of the set of channels $\cal{Q}$. Then $$\label{eq:bforf}
b({\mathcal{F}}) = \left[{\max_{{\mathcal{U}}} \lambda_1(J^{-1}_{\mathcal{F}}J_{\mathcal{U}})}\right]^{-1}
= \frac{d}{\max_{U} \bb U |J^{-1}_{\mathcal{F}}|U\kk} \,,$$ where the optimization runs over all unitary channels ${\mathcal{U}}:\rho\mapsto U\rho
U^\dagger$ and $|U\kk=(U\otimes I) \sum_j \ket{jj}$. Moreover, if ${\mathcal{F}}=b({\mathcal{F}})\,\mathcal{E} + (1-b({\mathcal{F}}))\, \mathcal{G}$ for some $\mathcal{E}\in
\cal{Q}$, $\mathcal G\in \partial \cal{Q}$, then $\mathcal{E}$ must be a unitary channel.
Let us denote by $J_{\mathcal{E}}, J_{\mathcal{F}}$ the Choi-Jamiolkowski operators of the channels ${\mathcal{E}}$ and ${\mathcal{F}}$, respectively. We assume ${\mathcal{F}}$ is an interior element; thus $J_{{\mathcal{F}}}$ is invertible. Then $t_{\mathcal{F}}({\mathcal{E}})=\sup\{0\leq t<1 : J_{\mathcal{F}}-tJ_{\mathcal{E}}\geq 0\}$. It follows that for any such $t$ and all $\ket{x}$, $\bra{x}J_{{\mathcal{F}}}\ket{x} \geq t \bra{x} J_{{\mathcal{E}}}
\ket{x}$. Setting $\ket{y} = \sqrt{J_{{\mathcal{F}}}} \ket{x}$ we obtain $$\begin{aligned}
\label{eq:lbont}
\frac{1}{t} \geq \frac{\bra{y}\sqrt{J_{{\mathcal{F}}}}^{-1} J_{{\mathcal{E}}}
\sqrt{J_{{\mathcal{F}}}}^{-1} \ket{y}}{{\langle y | y \rangle}}.\end{aligned}$$ The maximum value of the right hand side equals $\lambda_1(\sqrt{J_{{\mathcal{F}}}}^{-1} J_{{\mathcal{E}}} \sqrt{J_{{\mathcal{F}}}}^{-1})
= \lambda_1(J_{{\mathcal{F}}}^{-1} J_{{\mathcal{E}}})=\lambda_1(\sqrt{J_{{\mathcal{E}}}}J_{{\mathcal{F}}}^{-1}\sqrt{J_{{\mathcal{E}}}})$, where $\lambda_1(X)$ denotes the maximal eigenvalue of $X$. In conclusion, $t_{\mathcal{F}}({\mathcal{E}})=1/\lambda_1(J^{-1}_{\mathcal{F}}J_{\mathcal{E}})$ and $$\label{eqn:formula-for-b}
b({\mathcal{F}}) = \inf_{{\mathcal{E}}} t_{\mathcal{F}}({\mathcal{E}}) = \left[{\max_{{\mathcal{E}}} \lambda_1(J^{-1}_{\mathcal{F}}J_{\mathcal{E}})}\right]^{-1}\,,$$ where the optimization runs over all channels.
For any Choi-Jamiołkowski state $J_{\mathcal{E}}$ and an arbitrary unit vector $\ket{x}\in{\mathcal{H}}_d\otimes{\mathcal{H}}_d$ we have $\sqrt{J_{\mathcal{E}}} {| x \rangle \langle x |}\sqrt{J_{\mathcal{E}}} \leq J_{\mathcal{E}}$. The complete positivity of partial trace implies ${\rm tr}_1 \left( J_{\mathcal{E}}- \sqrt{J_{\mathcal{E}}} {| x \rangle \langle x |}\sqrt{J_{\mathcal{E}}} \right) \geq 0$, and since ${\rm tr}_1 J_{\mathcal{E}}= \frac{1}{d} I$ it follows $${\rm tr}_1 \sqrt{J_{\mathcal{E}}} {| x \rangle \langle x |} \sqrt{J_{\mathcal{E}}} \leq \frac{1}{d} I\,.$$ In other words, $\sqrt{J_{\mathcal{E}}} \ket{x}\in\mathcal{R}$ defined in Lemma \[lem:optsfd\]. Consequently, $\lambda_1(J_{\mathcal{F}}^{-1} J_{\mathcal{E}})=
\max_{\ket{x}}\bra{x}\sqrt{J_{\mathcal{E}}} J_{\mathcal{F}}^{-1}\sqrt{J_{\mathcal{E}}}\ket{x}
\leq\max_{\ket{y}\in\mathcal{R}} \bra{y}J_{\mathcal{F}}^{-1}\ket{y}$ for every channel ${\mathcal{E}}$ and using Eq. (\[eqn:formula-for-b\]) we obtain $$\begin{aligned}
\label{eq:lbforb}
b({\mathcal{F}}) = \left[\max_{{\mathcal{E}},\ket{x}}\bra{x}\sqrt{J_{\mathcal{E}}} J_{\mathcal{F}}^{-1}\sqrt{J_{\mathcal{E}}}\ket{x}\right]^{-1}\geq \left[\max_{\ket{y}\in\mathcal{R}} \bra{y}J_{\mathcal{F}}^{-1}\ket{y}\right]^{-1}.\end{aligned}$$ Since $J_{\mathcal{F}}^{-1}$ is a positive operator Lemma \[lem:optsfd\] implies that the maximum over $\ket{y}$ is achieved only by unit (hence maximally entangled) vectors. For every such vector $\ket{y_{\mathcal{F}}}$ there exists a unitary matrix $U$ such that $\ket{y_{\mathcal{F}}}=\frac{1}{\sqrt{d}}\sum_j U\ket{j}\otimes \ket{j}$. Moreover, choice of $\ket{x}=\ket{y_{\mathcal{F}}}$, ${\mathcal{E}}={\mathcal{U}}$, where $J_{{\mathcal{U}}}=\ket{y_{\mathcal{F}}}\bra{y_{\mathcal{F}}}$ proves that the lower bound (\[eq:lbforb\]) is tight. Finally, the achievability of maximum on the right hand side of Eq.(\[eq:lbforb\]) requires by Lemma \[lem:optsfd\] that the norm of $\sqrt{J_{\mathcal{E}}}\ket{x}$ is one, which in turn implies that ${\mathcal{E}}$ is a unitary channel. Otherwise $t_{\mathcal{F}}({\mathcal{E}})> b({\mathcal{F}})$ (see Eq. (\[eqn:formula-for-b\])) and decompositions of the form ${\mathcal{F}}=b({\mathcal{F}}){\mathcal{E}}+(1-b({\mathcal{F}}))\mathcal{G}$ ($\mathcal{G}\in\partial\cal{Q}$) can not exist.
Suppose ${\mathcal{F}}$ is an interior element of the set of channels. Then there exists a unitary channel ${\mathcal{U}}$ such that $||{\mathcal{F}}-{\mathcal{U}}||_{B}=2(1-b({\mathcal{F}}))$. Moreover, if ${\mathcal{E}}\in \cal{Q}$ is not a unitary channel, then $\|{\mathcal{F}}-{\mathcal{E}}\|_B<2(1-b({\mathcal{F}}))$.
Combining Proposition \[prop:attained\] and Theorem \[thbfore\] we conclude that the equality $||{\mathcal{F}}-{\mathcal{U}}||_{B}=2(1-b({\mathcal{F}}))$ holds precisely for those unitary channels ${\mathcal{U}}$ for which $\frac{b({\mathcal{F}})}{d}=\bb U |J^{-1}_{\mathcal{F}}|U\kk ^{-1}$.
In what follows we will explicitly evaluate the boundariness formula determined in Eq. (\[eq:bforf\]) for the families of qubit and erasure channels (the latter on a system of arbitrary dimension).
Qubit channels
--------------
\[thbforqubit\] Suppose ${\mathcal{F}}$ is an interior element of the set of qubit channels. Then $$\label{eq:bforfqubit}
b({\mathcal{F}}) =\frac{2}{
\lambda_1
\left(
W^{\dagger} J_{{\mathcal{F}}}^{-1} W + (W^{\dagger} J_{{\mathcal{F}}}^{-1} W)^T
\right)
}\, ,$$ where $W$ is a unitary matrix (called sometimes a Magic Basis) [@hill1997entanglement] $$W = \frac{1}{\sqrt{2}}
\left(
\begin{smallmatrix}
0 & 0 & 1 & {\mathrm{i}}\\
-1 & {\mathrm{i}}& 0 & 0 \\
1 & {\mathrm{i}}& 0 & 0 \\
0 & 0 & 1 & -{\mathrm{i}}\end{smallmatrix}
\right).$$
For any qubit channel ${\mathcal{F}}$ with Choi-Jamiołkowski state $J_{{\mathcal{F}}}$, boundariness $b({\mathcal{F}})$ is given by (see Eq. ) $$\label{eq:numrange}
b({\mathcal{F}}) =\frac{1}{\max_{\psi\in{\mathcal{S}}_{ME}} \bra{\psi}J^{-1}_{\mathcal{F}}\ket{\psi} } \equiv \frac{1}{r^{\mathrm{ent}}\left(J^{-1}_{{\mathcal{F}}}\right)},$$ where ${\mathcal{S}}_{ME}=\left\{\ket{\psi}\in {\mathcal{H}}_d\otimes{\mathcal{H}}_d\,|\,{\rm
tr}_1\ket{\psi}\bra{\psi}=\frac{1}{d}\,I \right\}$ and $r^{\mathrm{ent}}(A)$ is the maximally entangled numerical radius of the matrix $A$. We know from the literature [@dunkl2014real] that the maximally entangled numerical range of a $4 \times 4$ matrix $A$ is equal to the real numerical range of the matrix $W^{\dagger} A W$. From the above we note that $$r^{\mathrm{ent}}(J_{{\mathcal{F}}}^{-1}) =
\lambda_1
\left(
\frac{W^{\dagger} J_{{\mathcal{F}}}^{-1} W + (W^{\dagger} J_{{\mathcal{F}}}^{-1} W)^T}{2}
\right),$$ which together with Eq. (\[eq:numrange\]) finishes the proof.
In the case of a qubit channel ${\mathcal{F}}$ we can specify the unitary channel ${\mathcal{U}}$ for which $||{\mathcal{F}}-{\mathcal{U}}||_{B}=2(1-b({\mathcal{F}}))$. It follows from the reasoning above that the unitary matrix $U$ defining this channel can be written as $$| U \kk = \sqrt{2} W \ket{v}.$$ The vector $\ket{v}$ above is the leading eigenvector of the real symmetric matrix $W^{\dagger} J^{-1}_{\mathcal{F}}W + (W^{\dagger} J^{-1}_{\mathcal{F}}W)^T$.
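Formula (\[eq:bforfqubit\]) is straightforward to evaluate numerically. The sketch below (illustrative code, not from the original text) applies it to the completely depolarizing qubit channel, whose Choi-Jamiołkowski state is $J=\frac{1}{4}I$, recovering $b=1/4$, and to the qubit erasure channel onto $\sigma=\mathrm{diag}(p,1-p)$, recovering $b=p(1-p)$:

```python
import numpy as np

# Magic basis W from Eq. (bforfqubit)
W = np.array([[0, 0, 1, 1j],
              [-1, 1j, 0, 0],
              [1, 1j, 0, 0],
              [0, 0, 1, -1j]]) / np.sqrt(2)

def boundariness_qubit(J):
    # b(F) = 2 / lambda_max(W^dag J^-1 W + (W^dag J^-1 W)^T)
    # for an interior qubit channel with Choi-Jamiolkowski state J
    M = W.conj().T @ np.linalg.inv(J) @ W
    S = (M + M.T).real                 # real symmetric for Hermitian J^-1
    return 2.0 / np.linalg.eigvalsh(S).max()

# Completely depolarizing channel rho -> I/2: Choi state J = I/4
b_dep = boundariness_qubit(np.eye(4) / 4)        # expected 1/4

# Qubit erasure channel onto sigma = diag(p, 1-p): J = sigma (x) I / 2
p = 0.3
J_er = np.kron(np.diag([p, 1 - p]), np.eye(2)) / 2
b_er = boundariness_qubit(J_er)                  # expected p(1-p)
```

Both values agree with the analytic results: $b=1/4$ is the maximal qubit value derived below, and $b=p(1-p)$ matches the erasure-channel formula of the next subsection.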
Erasure channels {#sec:erasure-example}
----------------
Erasure channels transform any input state $\rho$ onto a fixed output state ${\mathcal{F}}_\sigma(\rho) = \sigma$. For such channel ${\mathcal{F}}_\sigma$ the Choi-Jamiołkowski state reads $$J_{{\mathcal{F}}_\sigma} = \frac{1}{d} \sigma \otimes I.$$
The boundariness of the erasure channel ${\mathcal{F}}_\sigma$, which maps everything to a fixed interior point $\sigma$ of the set of states ${\mathcal{S}}({\mathcal{H}}_d)$, is given by $$b({\mathcal{F}}_{\sigma})
=\frac{1}{{\mathrm{tr}[\sigma^{-1}]}}.$$
Since $\sigma$ is an interior element of the set of states, $J^{-1}_{{\mathcal{F}}_\sigma} = d\, \sigma^{-1} \otimes I$ is well defined. Using Theorem \[thbfore\] we obtain $$b({\mathcal{F}}_\sigma)=\frac{1}{\max_{U} \sum_{j,k} \bra{jj} (U^\dagger\sigma^{-1} U)\otimes I \ket{kk} }=\frac{1}{{\mathrm{tr}[\sigma^{-1}]}}, \nonumber$$ where we used $U\,U^\dagger=I$ and the cyclic invariance of the trace.
Let us note that in the special case of a qubit erasure channel ${\mathcal{F}}_\sigma$ with $\sigma=p \ket{0}\bra{0}+(1-p)\ket{1}\bra{1}$ we find $b({\mathcal{F}}_\sigma)=p(1-p)$ in accordance with the results of [@haapasalo2014].
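This closed form is easy to cross-check against the general formula (\[eq:bforf\]): for an erasure channel the value $\bb U|J^{-1}_{{\mathcal{F}}_\sigma}|U\kk$ does not depend on $U$, so evaluating it at $U=I$ already gives the maximum. A small numerical sketch (illustrative, with assumed helper names):

```python
import numpy as np

def boundariness_erasure(sigma):
    # b(F_sigma) from Eq. (bforf); for erasure channels <<U|J^-1|U>> is
    # independent of U, so U = I already attains the maximum.
    d = sigma.shape[0]
    J_inv = np.linalg.inv(np.kron(sigma, np.eye(d)) / d)   # J = sigma (x) I / d
    phi = np.eye(d).reshape(-1).astype(complex)            # |I>> = sum_j |jj>
    return d / (phi.conj() @ J_inv @ phi).real

# qubit case from the text: b = p(1-p)
p = 0.3
assert np.isclose(boundariness_erasure(np.diag([p, 1 - p])), p * (1 - p))

# generic dimension: b = 1 / tr(sigma^-1)
sigma = np.diag([0.2, 0.3, 0.5])
assert np.isclose(boundariness_erasure(sigma),
                  1 / np.trace(np.linalg.inv(sigma)).real)
```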
Boundariness under composition
==============================
Suppose $\mathcal{E}, \mathcal{F}$ are channels on systems described by the Hilbert spaces ${\mathcal{H}}_s$, ${\mathcal{H}}_d$, respectively. Denote by $b(\mathcal{E}), b({\mathcal{F}})$ the values of their boundariness. In this section we address the question of the relation between the boundariness of the channel composition, $b({\mathcal{E}}\otimes{\mathcal{F}})$, and the boundariness of the individual channels.
\[prop:tensor\_general\] For channels the boundariness is sub-multiplicative, i.e. $b(\mathcal{E}\otimes\mathcal{F})\leq b(\mathcal{E})b(\mathcal{F})$.
Let us consider decompositions of the channels ${\mathcal{E}},{\mathcal{F}}$ into boundary elements with weights equal to their boundariness: $$\begin{aligned}
J_{\mathcal{E}}&=b({\mathcal{E}})J_{{\mathcal{E}}+}+[1-b({\mathcal{E}})]J_{{\mathcal{E}}-} \nonumber \\
J_{\mathcal{F}}&=b({\mathcal{F}})J_{{\mathcal{F}}+}+[1-b({\mathcal{F}})]J_{{\mathcal{F}}-} \nonumber\end{aligned}$$ This allows us to write: $$\begin{aligned}
\label{eq:decomptp1}
J_{\mathcal{E}}\otimes J_{\mathcal{F}}&=b({\mathcal{E}})\, b({\mathcal{F}})\,J_{{\mathcal{E}}+}\otimes J_{{\mathcal{F}}+}+[1-b({\mathcal{E}})\,b({\mathcal{F}})\,]\,J_{\mathcal{T}}, \nonumber \\\end{aligned}$$ where $$\begin{aligned}
J_{\mathcal{T}}=& [1-b({\mathcal{E}})b({\mathcal{F}})]^{-1}\bigl( b({\mathcal{E}})[1-b({\mathcal{F}})]\,J_{{\mathcal{E}}+}\otimes J_{{\mathcal{F}}-} \nonumber \\
&+[1-b({\mathcal{E}})]\,b({\mathcal{F}})\, J_{{\mathcal{E}}-}\otimes J_{{\mathcal{F}}+} \nonumber \\
& +[1-b({\mathcal{E}})]\,[1-b({\mathcal{F}})]\, J_{{\mathcal{E}}-}\otimes J_{{\mathcal{F}}-}\bigr)\end{aligned}$$ is a Choi-Jamiolkowski state of a channel. Let us recall that a channel lies on the boundary of the set of channels if and only if its Choi-Jamiolkowski state has a nontrivial kernel (see e.g. [@haapasalo2014]). It is easy to realize that if ${\mathcal{E}}_+$ and ${\mathcal{F}}_+$ are boundary elements of the respective sets of channels, ${\mathcal{E}}_+\otimes {\mathcal{F}}_+$ lies on the boundary as well. Similarly, taking vectors $\ket{\varphi}, \ket{\psi}$ from the kernel of $J_{{\mathcal{E}}_-}$, $J_{{\mathcal{F}}_-}$, respectively, we can immediately see that $\ket{\varphi}\otimes\ket{\psi}$ belongs to the kernel of $J_\mathcal{T}$. This shows that Eq. (\[eq:decomptp1\]) provides a valid convex decomposition of the channel $\mathcal{E}\otimes\mathcal{F}$ into two boundary elements, and we conclude $t_{\mathcal{E}\otimes\mathcal{F}}(\mathcal{E}_+\otimes\mathcal{F}_+)=b(\mathcal{E})b(\mathcal{F})$. By the definition of boundariness in Eq. (\[eq:defbdorig\]) we obtain the upper bound of the proposition.
\[prop:tensor\] For states and observables the boundariness is multiplicative, i.e. $b(x\otimes y)=b(x)b(y)$, where $x,y$ stands for any pair of states, or observables.
The equality in Proposition \[prop:tensor\] holds because for states and observables the boundariness is given by the smallest eigenvalue, and the eigenvalues of a tensor product are the products of the eigenvalues of the factors.
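This eigenvalue argument is immediate to check numerically (an illustrative sketch; the identification of the boundariness of a state with its smallest eigenvalue is taken from [@haapasalo2014]):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(d):
    # random full-rank density matrix
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def b_state(rho):
    # boundariness of a state = smallest eigenvalue
    return np.linalg.eigvalsh(rho)[0]

x, y = random_state(2), random_state(3)
# eigenvalues of x (x) y are products of eigenvalues, hence b is multiplicative
assert np.isclose(b_state(np.kron(x, y)), b_state(x) * b_state(y))
```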
We have numerical evidence suggesting that equality holds also in the case of channels, but we have no proof of this conjecture. Using Eq. (\[eq:bforf\]), this is equivalent to the equality of $\max_{\xi}\bra{\xi}J^{-1}_{{\mathcal{E}}}\otimes J^{-1}_{{\mathcal{F}}}\ket{\xi}$ and $\max_{\chi}\bra{\chi}J^{-1}_{{\mathcal{E}}}\ket{\chi} \max_\omega
\bra{\omega}J^{-1}_{{\mathcal{F}}}\ket{\omega},$ where $\xi, \chi, \omega$ are maximally entangled states on the corresponding systems.
Below we prove this equality for the case of qubit channels when one of the channels is the “maximally mixed” channel ${\mathcal{F}}$; hence, for this pair of channels the boundariness is multiplicative.
\[prop:product\] Let ${\mathcal{E}}$ be an arbitrary qubit channel and let ${\mathcal{F}}$ be the erasure channel mapping any input to $\frac{1}{d} I$. Then $b({\mathcal{E}}\otimes {\mathcal{F}})=b({\mathcal{E}})b({\mathcal{F}})$.
By Proposition \[prop:tensor\_general\], $b({\mathcal{E}}\otimes {\mathcal{F}})\le b({\mathcal{E}})b({\mathcal{F}})$, so that we have to show the opposite inequality. Let ${\mathcal{E}}: {\mathcal{B}}({\mathcal{H}}_A)\to {\mathcal{B}}({\mathcal{H}}_B)$ and ${\mathcal{F}}: {\mathcal{B}}({\mathcal{H}}_{A'})\to {\mathcal{B}}({\mathcal{H}}_{B'})$, where ${\mathcal{H}}_A$, ${\mathcal{H}}_B$ denote copies of ${\mathcal{H}}_2$, and ${\mathcal{H}}_{A'}$, ${\mathcal{H}}_{B'}$ denote copies of ${\mathcal{H}}_d$. Since $J_{{\mathcal{F}}}^{-1}=d^2 I_{B'A'}$, by Theorem \[thbfore\] it suffices to prove the following inequality $$\max_{V\in {\mathcal{U}}({\mathcal{H}}_{BB'})} \bb V| J^{-1}_{{\mathcal{E}}}\otimes I_{B'A'}|V\kk\le d\max_{U\in {\mathcal{U}}({\mathcal{H}})} \bb U |J^{-1}_{\mathcal{E}}|U\kk\,.$$ For $V\in {\mathcal{U}}({\mathcal{H}}_{BB'})$, let $X_V=\mathrm{tr}_{B'A'}|V\kk\bb V|$. Then $X_V$ is a positive operator on ${\mathcal{H}}_{BA}$ and we have $$\mathrm{tr}_{B}X_V=\mathrm{tr}_{A'}\mathrm{tr}_{BB'} |V\kk\bb V|=d I_A.$$ Similarly, $\mathrm{tr}_AX_V=d I_B$. It follows that $\frac{1}{2d} X_V$ is the Choi-Jamiolkowski matrix of a unital qubit channel. As is well known, any such channel is a random unitary channel, so that there are some unitaries $U_i\in {\mathcal{U}}({\mathcal{H}}_2)$ and probabilities $p_i$ such that $X_V=d\sum_ip_i |U_i\kk\bb U_i|$. It follows that $$\bb V| J^{-1}_{{\mathcal{E}}}\otimes I_{B'A'}|V\kk={\mathrm{tr}[J_{{\mathcal{E}}}^{-1}X_V]} \le d\max_{U\in {\mathcal{U}}({\mathcal{H}})} \bb U|J_{\mathcal{E}}^{-1}|U\kk.$$
Maximal value of boundariness
=============================
By definition, boundariness takes values between zero and one half, but not all values in this interval are necessarily attained. A simple example is the triangle (see Fig. \[fig:subfigure1\]), where one third is the maximal value. In this section we investigate the highest achievable value of boundariness in quantum convex sets, and which points achieve it. In fact, we will see that such a point is unique and coincides with the so-called maximally mixed element.
As for the other questions addressed in this paper, it is straightforward to evaluate the maximal value for states and measurements, but the case of channels is more involved.
The maximal value of boundariness for quantum convex sets is given as follows:
- [*States:*]{} $b_{\max}^s=1/d$ achieved for completely mixed state $\varrho=\frac{1}{d}I$.
- [*Observables:*]{} $b_{\max}^o=1/n$ achieved for $n$-outcome (uniformly) trivial observable $\{E_j=\frac{1}{n}I\}_{j=1}^n$.
- [*Channels:*]{} $b_{\max}^c=1/d^{2}$ achieved for completely depolarizing channel mapping all states into completely mixed state $\frac{1}{d}I$.
For states and measurements [@haapasalo2014] the highest boundariness means the highest value of the lowest eigenvalue, which leads to the maximally mixed state $\rho=\frac{1}{d}I$ and the (uniform) trivial observable $\{E_i=\frac{1}{n}I\}_{i=1}^n$, respectively. The case of channels is more subtle. From the formula (\[eq:bforf\]) giving the boundariness of a channel it is clear that we search for a channel ${\mathcal{F}}$ such that ${\max_{U} \bb U
|J^{-1}_{{\mathcal{F}}}| U \kk }$ is minimized. We construct a simple lower bound using an orthonormal basis $\{\ket{v_i}\}_{i=1}^{d^2}$ of maximally entangled states. $$\begin{aligned}
\label{eq:lbmaxu}
{\mathrm{tr}[J^{-1}_{{\mathcal{F}}}]}=\sum_{i=1}^{d^2} \bra{v_i} J^{-1}_{{\mathcal{F}}}\ket{v_i}
\leq d \max_{U} \bb U |J^{-1}_{{\mathcal{F}}}| U \kk.\end{aligned}$$ Such a basis $\{\ket{v_{pq}}=Z^p W^q\otimes I \frac{1}{\sqrt{d}}\sum_j
\ket{jj}\}$ can be constructed from the shift and multiply unitary operators $Z=\sum_j
\ket{j\oplus 1}\bra{j}$, $W=\sum_j \omega^j \ket{j}\bra{j}$, where $\omega=e^{\frac{2\pi i}{d}}$. On the other hand from spectral decomposition $J_{{\mathcal{F}}}=\sum_i \lambda_i
\ket{a_i}\bra{a_i}$, where $\sum_i \lambda_i=1$, we have ${\mathrm{tr}[J^{-1}_{{\mathcal{F}}}]}=\sum_i \frac{1}{\lambda_i}\geq d^4$. Combining this with Eq. (\[eq:lbmaxu\]) we get $d^3\leq \max_{U} \bb U |J^{-1}_{{\mathcal{F}}}| U \kk$. Inserting this into Eq. (\[eq:bforf\]) we finally obtain $b({\mathcal{F}})\leq
\frac{1}{d^2}$. It is easy to see that the inequalities can be made tight only by a single channel, which maps everything to a complete mixture.
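The two ingredients of this bound, the orthonormality of the $d^2$ maximally entangled vectors $\ket{v_{pq}}$ and the resulting trace identity of Eq. (\[eq:lbmaxu\]), can be verified numerically (illustrative sketch; the clock operator is named `Wc` here to avoid a clash with the magic basis $W$ used earlier):

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
Z = np.roll(np.eye(d), 1, axis=0)            # shift: Z|j> = |j+1 mod d>
Wc = np.diag(omega ** np.arange(d))          # clock: Wc|j> = omega^j |j>

phi = np.eye(d).reshape(-1) / np.sqrt(d)     # canonical maximally entangled vector
basis = [np.kron(np.linalg.matrix_power(Z, p) @ np.linalg.matrix_power(Wc, q),
                 np.eye(d)) @ phi
         for p in range(d) for q in range(d)]

# the d^2 vectors |v_pq> form an orthonormal basis of H_d (x) H_d ...
G = np.array([[u.conj() @ v for v in basis] for u in basis])
assert np.allclose(G, np.eye(d * d))

# ... so sum_i <v_i|A|v_i> = tr A for any A, as used in Eq. (lbmaxu)
A = np.diag(np.arange(1.0, d * d + 1))
assert np.isclose(sum((v.conj() @ A @ v).real for v in basis), np.trace(A))
```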
Summary
=======
This paper completes and extends the previous work [@haapasalo2014] in which the concept of boundariness was introduced. We proved that for compact convex sets the evaluation of the boundariness of $y$ coincides with the question of finding the element best distinguishable from $y$, i.e. $$2(1-b(y))=\max_x ||x-y||\,,$$ where $||\cdot||$ denotes the so-called base norm (the trace norm for states, and the completely bounded norm, also known as the diamond norm, for channels and observables). This identity was formulated in Ref. [@haapasalo2014] as an open conjecture for the case of quantum channels and is confirmed by our results presented in this paper. In fact, we have discovered that the optimum is attained only for unitary channels. This surprising result provides a quite unexpected operational characterization of unitary channels and exhibits their specific role among boundary elements and in minimum-error discrimination questions. The unique role of unitary channels is noticeable also in the explicit formula that we derived for the evaluation of the boundariness of channels. In the current paper we investigated only quantum channels mapping between Hilbert spaces of the same dimension. The results can be easily generalized to the case when the input has smaller dimension than the output; the role of unitary channels will then be played by isometries. The opposite relation of the input/output dimensions seems to be much more complicated and is left for future research. Further, we investigated how the boundariness behaves under the tensor product. We have shown that boundariness is a multiplicative quantity for states and observables, however, for channels we proved only the sub-multiplicativity $$b({\mathcal{E}}\otimes{\mathcal{F}})\leq b({\mathcal{E}})b({\mathcal{F}})\,.$$ Our numerical analysis nevertheless suggests that the boundariness is multiplicative also in the case of channels.
Exploiting the relation between the boundariness and the discrimination, the multiplicativity implies that the most distinguishable element from $x\otimes y$ is still a factorized element $x_0\otimes y_0$, where $x_0,y_0$ stands for the most distinguishable elements from $x,y$, respectively. For channels this would mean that factorized unitaries are the most distant ones for all factorized channels. However, whether this is the case is left open.
In the remaining part of the paper we evaluated explicitly the maximal value of boundariness. We found that this maximum is achieved, as intuition suggests, for the maximally mixed elements, i.e. for the completely mixed state, the uniformly trivial observable, and the channel contracting the state space to the completely mixed state. In particular, for $d$-dimensional quantum systems we found $b_{\max}^s=1/d$ for states, while for observables $b_{\max}^o = 1/n$ is independent of the dimension (only the number of outcomes $n$ matters), and for channels $b^c_{\max}=1/d^2$. Let us stress that these numbers also determine the optimal values of the error probability for the related discrimination problems.
We thank Erkka Haapasalo for discussions and the workshop ceqip.eu for initiating this work. This work was supported by project VEGA 2/0125/13 (QUICOST). Z.P. acknowledges support from the Polish National Science Centre through grant number DEC-2011/03/D/ST6/00413. A.J. acknowledges support by the Research and Development Support Agency under contract No. APVV-0178-11 and by VEGA 2/0059/12. M.S. acknowledges support by the Operational Program Education for Competitiveness - European Social Fund (Project No. CZ.1.07/2.3.00/30.0004) of the Ministry of Education, Youth and Sports of the Czech Republic. M.Z. acknowledges the support of GAČR project P202/12/1142 and COST Action MP1006.
[0]{}
T. Rockafellar, *Convex Analysis*, (Princeton University Press, Princeton, 1970)
D. Reeb, M. J. Kastroyano and M. M. Wolf, J. Math. Phys. 52, 082201 (2011), \[arXiv:1102.5170\]
E. Haapasalo, \[arXiv:1502.04881\]
A. S. Holevo, *Probabilistic and Statistical Aspects of Quantum Theory*, North-Holland series in statistics and probability 1, (Amsterdam-New York-Oxford, 1982)
C. W. Helstrom, *Quantum Detection and Estimation Theory*, (Academic Press Inc., New York, 1976)
A. Jenčová, J. Math. Phys. 55, 022201 (2014), \[arXiv:1308.4030v1\]
E. Haapasalo, M. Sedlák and M. Ziman, Physical Review A 89, 062303 (2014)
T.Heinosaari and M. Ziman, *The Language of Quantum Theory*, (Cambridge University Press, 2013)
A. Jamiolkowski, Rep. Math. Phys. **3**, 275 (1972).
M.-D. Choi, Linear Algebra Appl. **10**, 285 (1975).
S. Hill, W.K. Wootters, Phys. Rev. Lett. 78, 5022 (1997)
C. Dunkl, P. Gawron, Ł. Pawela, Z. Puchała, K. Życzkowski, Linear Algebra and its Applications (2015), arXiv:1409.4941
---
abstract: 'The high metal content and fast expansion of supernova (SN) Ia ejecta lead to considerable line overlap in their optical spectra. Uncertainties in composition and ionization further complicate the process of line identification. In this paper, we focus on the 5900Å emission feature seen in SN Ia spectra after bolometric maximum, a line which in the last two decades has been associated with \[Co\]5888Å or NaD. Using non-LTE time-dependent radiative-transfer calculations based on Chandrasekhar-mass delayed-detonation models, we find that NaD line emission is extremely weak at all post-maximum epochs. Instead, we predict the presence of \[Co\]5888Å after maximum in all our SN Ia models, which cover a range from 0.12 to 0.87 $M_\odot$ of $^{56}$Ni. We also find that the \[Co\]5888Å forbidden line is present within days of bolometric maximum, and strengthens steadily for weeks thereafter. Both predictions are confirmed by observations. Rather than trivial taxonomy, these findings confirm that it is necessary to include forbidden-line transitions in radiative-transfer simulations of SNe Ia, both to obtain the correct ejecta cooling rate and to match observed optical spectra.'
author:
- |
Luc Dessart,$^{1}$ D. John Hillier,$^{2}$ Stéphane Blondin,$^{1}$ and Alexei Khokhlov$^{3}$\
\
$^{1}$Aix Marseille Université, CNRS, LAM (Laboratoire d’Astrophysique de Marseille) UMR 7326, 13388, Marseille, France.\
$^{2}$ Department of Physics and Astronomy & Pittsburgh Particle Physics, Astrophysics, and Cosmology Center (PITT PACC), University of Pittsburgh,\
3941 O’Hara Street, Pittsburgh, PA 15260, USA.\
$^3$ Department of Astronomy & Astrophysics and the Enrico Fermi Institute, The University of Chicago, Chicago, IL 60637, USA
date: 'Accepted . Received '
title: '\[Co\] versus NaD in type Ia supernova spectra'
---
\[firstpage\]
radiative transfer – supernovae: general – supernovae: individual: 2005cf – stars: white dwarfs
Introduction
============
Radiative-transfer modeling of supernova (SN) Ia is a challenging enterprise (@pinto_eastman_93 [@hoeflich_95; @BHN96_non-lte; @pinto_eastman_00a; @pinto_eastman_00b; @hoeflich_03; @kasen_etal_06; @jack_etal_11; @dessart_etal_13xx], hereafter D13). For a start, the initial ejecta conditions for such simulations are uncertain. The possibility of both single- and double-degenerate progenitor systems suggests that the ejecta mass likely varies amongst SN Ia ejecta (e.g., @pinto_eastman_00a [@sim_etal_10; @kerkwijk_etal_10]). The ejecta composition that results from the combustion of a C/O white dwarf is not known with confidence because the explosion scenario, besides being non-unique, involves hard-to-model combustion physics [@HN00_Ia_rev]. As a result, for each scenario, simulations of the explosion, whether 1-D or multi-D, are parametrized rather than modeled consistently from first principles [@K91a; @gamezo_etal_05; @roepke_hillebrandt_05]. The progenitor is likely to depart from a simple hydrostatic configuration for many possible reasons, including fast rotation [@yoon_langer_04], the conditions produced by the smoldering phase and ignition (see, e.g., @woosley_etal_04), or the complex dynamical evolution in a merger event [@pakmor_etal_11; @pakmor_etal_12].
Even if the ejecta properties were accurately known, the modeling of the radiation would remain challenging because of the prevalence of line opacity, the strong influence of scattering resulting from the very low gas densities in the fast-expanding, low-mass ejecta, the importance of non-LTE and time-dependent effects, and non-thermal processes. Perhaps even more important are the numerous processes that take place between the various constituents of the gas (electrons and ions), involving thousands of atomic levels, and which control its thermodynamic state. Historically, this complexity has generally been interpreted in terms of an “opacity” problem (see, e.g., @hoeflich_etal_93 [@pinto_eastman_00b; @kasen_etal_08]). In @dessart_etal_13xx, we demonstrate that the temperature and ionization distribution, which are difficult to determine accurately, are also important (not surprisingly) because the thermodynamic state of the gas determines which ions provide the opacity. We found, for example, that forbidden lines are crucial, as early as bolometric maximum, in controlling the cooling of the ejecta. Paradoxically, these lines have low oscillator strengths, and thus tend to be ignored in simulations prior to $\lesssim$100d after explosion. Finally, inaccuracies in the atomic data, or the lack of atomic data altogether, introduce a major source of uncertainty in any radiative transfer modeling of SNe Ia.
A central problem with SNe in general is that lines are strongly Doppler broadened by the fast expansion of the ejecta. The large metal mass fraction in SNe Ia leads to the presence of forests of lines which overlap, typically preventing the identification of a “clean” line anywhere in their spectra. The emerging radiation is in addition strongly influenced by a few strong lines, such as Si6355Å or the Ca triplet, giving the wrong impression that the spectrum is analogous to a blackbody influenced by a few spectral features of large optical depth. This situation is particularly problematic when SNe Ia evolve past maximum because at such times, the optical depth clearly drops, the ejecta turns nebular, but strong lines are still present. There are no regions with negligible flux, even though the continuum optical depth is well below unity even at bolometric maximum [@hoeflich_etal_93; @pinto_eastman_00b; @hillier_etal_13].
After bolometric maximum, the peaks and valleys of SN Ia spectra are a complex blend of numerous lines, some thick, others thin, each interacting with hundreds of other lines either locally (within a Sobolev length) or non-locally (because redshifted into resonance with a redder line). This conspires to produce confusion about spectrum formation in SNe Ia. The concept of a “photosphere” is routinely used but the notion of a sharp boundary from where radiation would escape does not hold for SNe Ia. @branch_etal_08 propose that most/all lines at nebular times are permitted (generally resonance) transitions, while numerous papers emphasize the near exclusive presence of forbidden-line transitions [@kuchner_etal_94; @mazzali_etal_08].
To give additional evidence for the importance of forbidden lines in SNe Ia [@dessart_etal_13xx], we focus here on the 5900Å feature observed in post-maximum SN Ia spectra. In recent years, this feature has been associated with NaD, although this association remains suspicious and puzzling. Models sometimes provide a very good fit [@mazzali_etal_08 Fig. 3], a poor fit [@branch_etal_08], or predict no feature at that location [@kasen_etal_09; @blondin_etal_11; @tanaka_etal_11]. In the next section, we give an historical perspective of earlier work that modeled or discussed the spectral feature at 5900Å. We then present results from our grid of delayed-detonation and pulsational-delayed-detonation models covering a range of mass and show that this feature can be explained as \[Co\]5888Å emission, across the range from sub-luminous to standard SNe Ia (Section \[sect\_res\]). In other words, we find that such ejecta models naturally produce an emission feature at 5900Å, and that this emission in our models is systematically associated with \[Co\]5888Å. Our conclusions and a discussion of future work are presented in Section \[sect\_conc\].
Far from being a banal characteristic of SN Ia spectra, the observation of the \[Co\]5888Å line indicates that forbidden line transitions have to be incorporated in any SN Ia model at and beyond bolometric maximum, not just to reproduce observations, but also to compute correctly the cooling of SN Ia ejecta.
Previous works {#sect_hist}
==============
The spectral feature at 5900Å in SNe Ia has been discussed repeatedly in the last two decades. We can separate the studies focusing on the “photospheric" phase (epochs until soon after bolometric maximum) and those devoted to “advanced" nebular phase (beyond 100d after explosion when the SN exhibits an apparently pure-emission spectrum).
@axelrod_80 is probably the first to study nebular-phase spectra of SNe Ia, and he associates unambiguously the 5900Å feature with the forbidden transition of Co at 5888Å. Later, @eastman_pinto_93 present numerical developments incorporated into the code [eddington]{}, and apply their technique to spectrum formation in a SN Ia ejecta at 250d after explosion. They propose \[Co\] as the origin of the 5900Å feature (see their Fig. 5). @kuchner_etal_94 use the simultaneous presence of \[Co\]5888Å and \[Fe\]4658Å (strictly speaking, the 4500-5000Å region contains lines from both Fe II and Fe III) in SN Ia spectra to confirm the radioactive decay of $^{56}$Ni at the origin of the SN Ia luminosity. Indeed, they find the flux ratio of these two lines (ignoring line overlap and some complications of the radiative transfer) can be explained from the decay of $^{56}$Co to $^{56}$Fe. They discard the possible association of the 5900Å feature with Na. @mazzali_etal_97 study SN1991bg at both early times and late times. At nebular times, they propose that the 5900Å feature is primarily associated with \[Co\], and show how this Co forbidden transition may be used to set constraints on the original $^{56}$Ni mass. Their nebular model reproduces the 5900Å feature, in both strength and width (see their Fig. 14), supporting the same assessment made by @kuchner_etal_94.
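The argument of @kuchner_etal_94 can be illustrated with a short numerical sketch. Under the (strong) simplifying assumption that the \[Co\]/\[Fe\] flux ratio tracks the Co/Fe number ratio in the $^{56}$Ni-rich material, that ratio follows directly from the $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe decay chain; the half-lives are standard laboratory values, and line overlap and radiative transfer are ignored, as in the original estimate:

```python
import math

# 56Ni -> 56Co -> 56Fe decay chain (Bateman solution for a two-step decay).
T_NI, T_CO = 6.075, 77.23                      # half-lives [days]
L_NI, L_CO = math.log(2) / T_NI, math.log(2) / T_CO

def decay_chain(t_days, n0=1.0):
    """Number of 56Ni, 56Co and 56Fe nuclei at time t after explosion."""
    n_ni = n0 * math.exp(-L_NI * t_days)
    n_co = (n0 * L_NI / (L_NI - L_CO)
            * (math.exp(-L_CO * t_days) - math.exp(-L_NI * t_days)))
    n_fe = n0 - n_ni - n_co                    # the chain conserves nuclei
    return n_ni, n_co, n_fe

for t in (100.0, 250.0):
    _, co, fe = decay_chain(t)
    print(f"t = {t:5.0f} d : Co/Fe number ratio = {co / fe:.2f}")
```

Because the Co/Fe ratio is a known, steadily decreasing function of time, a measured \[Co\]/\[Fe\] flux ratio that declines at the corresponding rate points directly at this decay chain as the power source.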
-------- ---------------- ---------------- ---------------- ---------------- ----------------
             $M$($^{56}$Ni)   $M$(Ni)          $M$(Fe)          $M$(Si)          $M$(Na)
\[M$_{\sun}$\] \[M$_{\sun}$\] \[M$_{\sun}$\] \[M$_{\sun}$\] \[M$_{\sun}$\]
DDC0 0.869 0.872 0.102 0.160 6.22(-6)
DDC6 0.722 0.718 0.116 0.216 1.02(-5)
DDC10 0.623 0.622 0.115 0.257 1.26(-5)
DDC15 0.511 0.516 0.114 0.306 1.70(-5)
DDC17 0.412 0.421 0.112 0.353 2.53(-5)
DDC20 0.300 0.315 0.110 0.426 3.53(-5)
DDC22 0.211 0.231 0.107 0.483 6.30(-5)
DDC25 0.119 0.142 9.80(-2) 0.485 1.51(-4)
PDDEL3 0.685 0.680 0.107 0.218 1.57(-5)
-------- ---------------- ---------------- ---------------- ---------------- ----------------
: Summary of nucleosynthetic yields for the Chandrasekhar-mass delayed-detonation models used in this work. Numbers in parenthesis correspond to powers of ten.[]{data-label="tab_modinfo"}
Disparate interpretations seem to start with the work of @lentz_etal_01 who fail to reproduce the 5900Å region of SN1994D at early post-peak epochs, despite strong modulations of the adopted Na abundances in their ejecta model. They suggest that the explosion model employed is probably inadequate, but it also seems that they do not include forbidden-line transitions in the radiative-transfer modeling. @branch_etal_08 use [synow]{} to identify the lines present in SNe Ia after bolometric maximum and conclude that most lines are permitted. They also argue for a NaD association with the 5900Å feature (the broad absorption they predict is however associated with a broad line emission that is unseen), and while they mention the \[Co\] possibility they do not retain it. @mazzali_etal_08 present an analysis of SN2004eo spectra. They associate the 5900Å emission feature at 250d after explosion with NaD. At earlier times, they suggest the 5900Å feature is due to Si – it seems that no \[Co\] line is included in their modeling. Along the same reasoning, @tanaka_etal_11 tie the 5900Å feature to NaD in SN2003du, although the feature is not fitted by their model.
@maurer_etal_11 perform detailed non-LTE steady-state radiative-transfer calculations at nebular times and explicitly discuss \[Co\] line emission. Although the strength of \[Fe\] and \[Co\] lines varies between codes, sometimes significantly, \[Co\] remains the most plausible explanation for the 5900Å line.
It thus seems that, from being secure in the 90’s, the association of the 5900Å feature with the \[Co\]5888Å line is no longer retained, even though the NaD association is rarely matched satisfactorily by SN Ia radiative-transfer models.[^1] Following upon our recent study of SN Ia physics and spectrum formation (D13), we study the origin of the 5900Å feature in post-maximum SN Ia spectra using [cmfgen]{} and our most complete model atom (see D13 for details). For this purpose, we use the same delayed detonation models with which we obtain good agreement with maximum-light spectra of SNe Ia [@blondin_etal_13]. We also include one pulsational-delayed-detonation model from @dessart_etal_13yy, model PDDEL3, since it yields a satisfactory match to the SN Ia 2005cf evolution from about $-$10 to +80d. In numerous ways, the match is superior to that obtained with model DDC10, for reasons that we discuss in @dessart_etal_13yy.
Results from delayed-detonation models {#sect_res}
======================================
In this paper, we use a sample of simulations from @blondin_etal_13, D13, and @dessart_etal_13yy. All are delayed detonations (DDC sequence), but some explode following a pulsation (PDDEL sequence; see @dessart_etal_13yy for details). We summarize the key nucleosynthetic yields for these models in Table \[tab\_modinfo\].
For the discussion of the paper, it suffices to say that these models cover a range of $^{56}$Ni mass, from 0.18 to 0.81$\,M_{\sun}$, and provide a satisfactory match to a wealth of SNe Ia at maximum light [@blondin_etal_13]. A fundamental property of [*all*]{} our delayed detonation models is that the sodium mass fraction below 15000 km s$^{-1}$ is on the order of 10$^{-10}$. The total sodium mass in our models is typically on the order of 10$^{-5}\,M_{\sun}$.
In D13, we described how one must exert extreme care to account for a number of critical non-LTE processes in order to follow the SN Ia evolution from maximum light to the nebular phase. This not only requires a detailed account of opacity sources, especially those associated with metals, but also of critical coolants, which act as a primary ingredient for setting the temperature and ionization state of the gas. Hence, each model we present here was evolved from 1d to 100d after explosion with all the key processes we found to be important (D13).
It would be presumptuous to pretend that with eight delayed-detonation models, we could reproduce all SNe Ia that exist. Instead, our strategy is to identify key signatures that characterize and distinguish SNe Ia, and test whether such delayed-detonation models predict those key signatures, without any tinkering of our hydrodynamical models (on composition, density profile, total mass, etc.). This strategy is sound because SNe Ia constitute a highly homogeneous class of events once we exclude the less frequent 91bg-like and 91T-like events. If a given explosion model has any validity, it should reproduce at least a subset of SNe Ia. We suspect this degeneracy calls for a similar degeneracy in SN Ia ejecta, and our simulations of delayed detonations confirm this.
In Fig. \[fig\_grid\], we present a counterpart of the montage of $B$-band maximum spectra presented in @blondin_etal_13, but this time for an epoch of $\sim$40d after B-band maximum. At this time, SNe Ia exhibit a very different spectral morphology, with a reduced flux in the blue and a dominance of metal lines, in particular from iron around 5000Å. In this montage, we can clearly see that a broad feature is present around 5900Å in all SNe Ia selected (which are quite typical of the SN Ia population as a whole), and therefore, that this feature is present whether we consider a luminous SN Ia like 1999ee or a sub-luminous SN Ia like 1999by.
Interestingly, all our delayed-detonation models predict a feature near 5900Å. At 40d past $B$-band maximum, the spectrum forms in the inner ejecta at velocities $\lesssim$10000 km s$^{-1}$, although some photons will still experience scattering and absorption at larger velocities if they overlap with strong lines (primarily from intermediate mass elements) or if they lie in the UV range. In the inner ejecta, our models have a composition dominated by cobalt and iron. As mentioned above, delayed detonation models leave no trace of sodium below about 15000 km s$^{-1}$, and as expected we find that NaD emission is negligible in our synthetic spectra at those epochs. To illustrate the association of \[Co\]5888Å (the properties of the Co forbidden-line transitions in the 6000Å region are given in Table \[tab\_co3\]) with the observed feature, we recompute the synthetic spectra but set the oscillator strength of that forbidden-line transition to zero. The resulting synthetic spectra are indistinguishable, apart from a strong difference in the 5900Å region. The associated flux difference is shown as a filled area colored in red.
After the detailed discussion presented in D13, it is not surprising that this feature is indeed primarily due to \[Co\]5888Å. And this explanation is more sensible from the point of view of nucleosynthesis since the sodium mass fraction in the inner ejecta is vanishingly small in all delayed-detonation models.
$\lambda$ \[Å\] Transition $f$ A \[s$^{-1}$\]
---- ----------------- -------------------------------------------------- ------------ ----------------
Co 5627.104 3d$^7$$^4$F$_e$\[9/2\] - 3d$^7$ $^2$G$_e$\[7/2\] 5.323(-11) 0.0140
Co 5888.482 3d$^7$$^4$F$_e$\[9/2\] - 3d$^7$ $^2$G$_e$\[9/2\] 2.081(-9) 0.4001
Co 5906.783 3d$^7$$^4$F$_e$\[7/2\] - 3d$^7$ $^2$G$_e$\[7/2\] 7.850(-10) 0.1500
Co 6195.455 3d$^7$$^4$F$_e$\[7/2\] - 3d$^7$ $^2$G$_e$\[9/2\] 8.636(-10) 0.1200
Co 6576.309 3d$^7$$^4$F$_e$\[9/2\] - 3d$^7$ $^4$P$_e$\[5/2\] 1.868(-10) 0.0480
: Summary of atomic properties for the \[Co\] lines in the 6000Å region, including the line at the origin of the 5900Å feature in post-maximum SN Ia spectra. The collisional strengths of Co, which are not known, are adopted from those of Ni [@2002CoPhC.145..311S] since it possesses a very similar term structure [@NIST]. Parentheses indicate powers of ten. []{data-label="tab_co3"}
A further confirmation of this association is that the observed strength of the 5900Å feature typically increases after bolometric maximum. In the context of \[Co\] emission, this also makes sense since the strength of forbidden lines relative to other lines should increase as the ejecta density drops. We show in Fig. \[fig\_seq\] a montage of spectra for the evolution of SN2005cf and model PDDEL3 [@dessart_etal_13yy] from bolometric maximum (i.e., +0d) until +80d (we note that model DDC10 studied in detail in D13 does a somewhat better job in the $B$-band region, but model PDDEL3 is more compatible with the narrow line profiles of SN2005cf; both models are equally suitable for the present discussion). As in Fig. \[fig\_grid\], we show the flux associated with the \[Co\]5888Å line as a filled red area. It is clearly evident that early after bolometric maximum, flux from that forbidden line contributes to the emergent radiation, and that this flux contribution increases with time, as observed. We note that at bolometric maximum, \[Co\]5888Å line emission is already a strong coolant of the Co-rich layers; \[Co\]5888Å line photons are not seen earlier on because these regions are located below the last scattering/absorbing layer.
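The expected growth of forbidden-line strength can be quantified with a simple scaling argument: in homologous expansion the density at a fixed velocity drops as $t^{-3}$, so the Co-rich layers become about two orders of magnitude less dense between bolometric maximum and +80d, pushing collisional de-excitation further below the radiative rates of forbidden transitions. A minimal sketch, assuming a rise time of about 18d (an adopted, typical value, not taken from the models):

```python
# Homologous expansion: rho(t) = rho(t0) * (t0 / t)**3 at fixed velocity.
# Epochs are measured from explosion.

def density_drop_factor(t0_days, t_days):
    """Factor by which the density falls between epochs t0 and t."""
    return (t_days / t0_days) ** 3

t_rise = 18.0                  # explosion to bolometric maximum [d] (assumed)
t_late = t_rise + 80.0         # +80 d after maximum
factor = density_drop_factor(t_rise, t_late)
print(f"density drop from maximum to +80 d: ~{factor:.0f}x")
```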
There are other \[Co\] lines in the 6000Å region (Table \[tab\_co3\]). Of all these, the transition at 5888Å is the strongest. The transition at 5906Å is expected to be weaker and it also overlaps with the 5888Å transition. The transition at 6195Å should be about 1/3 the strength of the 5888Å line, but the model predicts essentially no flux in this line. We find that the 6195Å line suffers absorption by overlapping lines, in particular from the Si doublet at 6355Å, which is strong at those epochs. Numerous other lines are present in this spectral region, while fewer reside in the 5900Å region, allowing the \[Co\]5888Å photons to escape. We also find that these low-lying states are in LTE with the ground state (they have the same departure coefficients), although we obtain strong depopulation (and strong departure from LTE) of the ground state of \[Co\] through non-thermal ionization and excitation (this departure is also epoch dependent). Scattering is thus expected to have little influence on the formation of these forbidden-line transitions.
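The quoted factor of $\sim$1/3 can be checked directly from Table \[tab\_co3\]: the 5888Å and 6195Å transitions share the same upper level (3d$^7$ $^2$G$_e$\[9/2\]), so in optically thin emission their intrinsic flux ratio depends only on the Einstein $A$ values and photon energies, independent of the level populations. A quick sketch using the tabulated numbers:

```python
# Optically thin flux ratio of two lines sharing an upper level:
# F1/F2 = (A1 * hc/lambda1) / (A2 * hc/lambda2) = (A1/lambda1) / (A2/lambda2).
# Wavelengths [A] and Einstein A values [1/s] copied from Table 2 (tab_co3).

lines = {
    5888.482: 0.4001,   # [Co] 4F[9/2] - 2G[9/2]
    6195.455: 0.1200,   # [Co] 4F[7/2] - 2G[9/2]  (same upper level)
}

def flux_ratio(lam_num, lam_den):
    """F(lam_num)/F(lam_den) for optically thin lines from one upper level."""
    return (lines[lam_num] / lam_num) / (lines[lam_den] / lam_den)

r = flux_ratio(6195.455, 5888.482)
print(f"intrinsic F(6195)/F(5888) = {r:.2f}")
```

The fact that the 6195Å line nonetheless carries essentially no flux in the models thus points at extrinsic absorption, as described above, rather than at the atomic data.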
We also see that in our simulations, the delayed detonation models systematically show a range of optical colors at +40d, while observations look more similar at this time (Fig. \[fig\_grid\]). This is particularly visible for the low-luminosity SN Ia 1999by, which shows a stronger \[Co\]5888Å line and relatively less flux in the red than in our model DDC25. This range in colors in our models reflects the trend in ionization level at the corresponding epoch, as evidenced by the ionization state of Co in the inner ejecta where the spectrum forms (Fig. \[fig\_ion\]). This could arise from problems with the atomic physics, or from issues with the structure computed for the assumed progenitor model. Another possible explanation may be that, while the $^{56}$Ni mass is accurate in each model/observation pair (tightly constrained by the peak luminosity), the ejecta mass of 1.38$\,M_{\sun}$, which is kept fixed here, may also vary and in particular involve sub-Chandrasekhar mass WDs (see, e.g., @sim_etal_10). For a fixed $^{56}$Ni mass, reducing the ejecta mass leads naturally to a significant increase in ejecta ionization (see, e.g., @dessart_etal_12c).
Discussion and conclusions {#sect_conc}
==========================
From this work, we unambiguously discard NaD as the origin of the 5900Å feature in SNe Ia after bolometric maximum based on 1) the prediction of \[Co\]5888Å line emission from all our delayed-detonation models after bolometric maximum; 2) the strengthening of that \[Co\] line with time in the first 2 months after bolometric maximum; and 3) the satisfactory match of our synthetic spectra to the observations.
This finding is not trivial taxonomy because it further confirms our conclusions from the more theoretical discussion in D13. Although the \[Co\] line was identified three decades ago in nebular phase spectra of SNe Ia [@axelrod_80], we are discovering the role of \[Co\] in SNe Ia [*as early as bolometric maximum*]{}, and confirming its identification at later times. Lately, a controversial association has been made with NaD but this association is not supported by models. One explanation for the mis-identification is that \[Co\] lines are not included in the associated radiative transfer calculations [@lentz_etal_01; @mazzali_etal_08; @tanaka_etal_11], as was also done in our earlier modeling of SN Ia. When they are included, the line shows up [@maurer_etal_11]. Another explanation is that SN radiative transfer is often done by separate codes for “photospheric" phase and nebular phase studies, and photospheric codes typically neglect forbidden lines. Unfortunately, this creates a boundary between the two regimes that is artificial. Nebular lines are indeed seen at photospheric epochs (e.g., the \[Ca\]7300Å doublet at the end of the plateau in SNe II-P [@dessart_etal_13], or \[Co\]5888Å early after bolometric maximum; this work). Similarly, strong P-Cygni profiles, typical of photospheric-phase spectra, persist well into the nebular phase.
In the future, we will investigate the radiative properties of SNe Ia as they turn nebular. As shown here, our Chandrasekhar-mass delayed-detonation models exhibit a large range of ionization whereas observations appear somewhat degenerate in that respect. Our simulations differ in $^{56}$Ni mass but have the same ejecta mass of 1.4$\,M_{\sun}$. While the $^{56}$Ni mass determines the peak luminosity, the ratio of $^{56}$Ni to ejecta mass is a key ingredient controlling the ionization state of the gas. Hence, we will investigate whether a range of ejecta masses, tied to a narrow range of $^{56}$Ni to ejecta mass ratio, can reduce the ionization disparity of our models after maximum.
Acknowledgments {#acknowledgments .unnumbered}
===============
LD and SB acknowledge financial support from the European Community through an International Re-integration Grant, under grant number PIRG04-GA-2008-239184, and from “Agence Nationale de la Recherche" grant ANR-2011-Blanc-SIMI-5-6-007-01. DJH acknowledges support from STScI theory grant HST-AR-12640.01, and NASA theory grant NNX10AC80G. This work was also supported in part by the National Science Foundation under Grant No. PHYS-1066293 and benefited from the hospitality of the Aspen Center for Physics. AK acknowledges the NSF support through the NSF grants AST-0709181 and TG-AST090074. This work was granted access to the HPC resources of CINES under the allocation c2013046608 made by GENCI (Grand Equipement National de Calcul Intensif).
References {#references .unnumbered}
==========
, T. S. 1980, PhD thesis, California Univ., Santa Cruz.
, E., [Hauschildt]{}, P. H., [Nugent]{}, P., & [Branch]{}, D. 1996, , 283, 297
, S., [Dessart]{}, L., [Hillier]{}, D. J., & [Khokhlov]{}, A. M. 2013, , 429, 2127
, S., [Kasen]{}, D., [R[ö]{}pke]{}, F. K., [Kirshner]{}, R. P., & [Mandel]{}, K. S. 2011, , 417, 1280
, D., [Jeffery]{}, D. J., [Parrent]{}, J., [Baron]{}, E., [Troxel]{}, M. A., [Stanishev]{}, V., [Keithley]{}, M., [Harrison]{}, J., & [Bruner]{}, C. 2008, , 120, 135
, L., [Blondin]{}, S., [Hillier]{}, D. J., & [Khokhlov]{}, A. 2013, MNRAS, to be submitted
, L., [Hillier]{}, D. J., [Blondin]{}, S., & [Khokhlov]{}, A. 2013, ArXiv:1308.6352
, L., [Hillier]{}, D. J., [Waldman]{}, R., [Livne]{}, E., & [Blondin]{}, S. 2012, , 426, L76
, L., [Waldman]{}, R., [Livne]{}, E., [Hillier]{}, D. J., & [Blondin]{}, S. 2013, , 428, 3227
, R. G. & [Pinto]{}, P. A. 1993, , 412, 731
, V. N., [Khokhlov]{}, A. M., & [Oran]{}, E. S. 2005, , 623, 337
, W. & [Niemeyer]{}, J. C. 2000, , 38, 191
, D. J., [Dessart]{}, L., & [Li]{}, C. 2013, High Energy Density Physics, 9, 297
, P., [Mueller]{}, E., & [Khokhlov]{}, A. 1993, , 268, 570
, P. 1995, , 443, 89
, P. 2003, in Astronomical Society of the Pacific Conference Series, Vol. 288, Stellar Atmosphere Modeling, ed. I. [Hubeny]{}, D. [Mihalas]{}, & K. [Werner]{}, 185
, D., [Hauschildt]{}, P. H., & [Baron]{}, E. 2011, , 528, A141
, D., [R[ö]{}pke]{}, F. K., & [Woosley]{}, S. E. 2009, , 460, 869
, D., [Thomas]{}, R. C., & [Nugent]{}, P. 2006, , 651, 366
, D., [Thomas]{}, R. C., [R[ö]{}pke]{}, F., & [Woosley]{}, S. E. 2008, Journal of Physics Conference Series, 125, 012007
, A. M. 1991, , 245, 114
, M. J., [Kirshner]{}, R. P., [Pinto]{}, P. A., & [Leibundgut]{}, B. 1994, , 426, L89
, E. J., [Baron]{}, E., [Branch]{}, D., & [Hauschildt]{}, P. H. 2001, , 557, 266
, I., [Jerkstrand]{}, A., [Mazzali]{}, P. A., [Taubenberger]{}, S., [Hachinger]{}, S., [Kromer]{}, M., [Sim]{}, S., & [Hillebrandt]{}, W. 2011, , 418, 1517
, P. A., [Chugai]{}, N., [Turatto]{}, M., [Lucy]{}, L. B., [Danziger]{}, I. J., [Cappellaro]{}, E., [della Valle]{}, M., & [Benetti]{}, S. 1997, , 284, 151
, P. A., [Sauer]{}, D. N., [Pastorello]{}, A., [Benetti]{}, S., & [Hillebrandt]{}, W. 2008, , 386, 1897
, R., [Hachinger]{}, S., [R[ö]{}pke]{}, F. K., & [Hillebrandt]{}, W. 2011, , 528, A117+
, R., [Kromer]{}, M., [Taubenberger]{}, S., [Sim]{}, S. A., [R[ö]{}pke]{}, F. K., & [Hillebrandt]{}, W. 2012, , 747, L10
, P. A. & [Eastman]{}, R. G. 2000, , 530, 744
—. 2000, , 530, 757
, Y., [Kramida]{}, A. E., [Reader]{}, I., & [NIST ASD Team]{}. 2013, NIST Atomic Spectra Database (version 5.1)
, F. K. & [Hillebrandt]{}, W. 2005, , 431, 635
, S. A., [R[ö]{}pke]{}, F. K., [Hillebrandt]{}, W., [Kromer]{}, M., [Pakmor]{}, R., [Fink]{}, M., [Ruiter]{}, A. J., & [Seitenzahl]{}, I. R. 2010, , 714, L52
, A. G., [Noble]{}, C. J., [Burke]{}, V. M., & [Burke]{}, P. G. 2002, Computer Physics Communications, 145, 311
, M., [Mazzali]{}, P. A., [Stanishev]{}, V., [Maurer]{}, I., [Kerzendorf]{}, W. E., & [Nomoto]{}, K. 2011, , 410, 1725
, M. H., [Chang]{}, P., & [Justham]{}, S. 2010, , 722, L157
, S. E., [Wunsch]{}, S., & [Kuhlen]{}, M. 2004, , 607, 921
, S.-C. & [Langer]{}, N. 2004, , 419, 623
[^1]: Part of the confusion has undoubtedly arisen because forbidden line emission is generally associated with low density, and thus the detection of a feature at 5900Å not long after maximum would seem to preclude its identification with a forbidden line. A key distinction, however, is that Co is not an impurity species, and thus it is much more likely to be seen even though ejecta densities exceed the critical density.
---
author:
- 'María E. Camisassa, Leandro G. Althaus, Alejandro H. Córsico, Francisco C. De Gerónimo , Marcelo M. Miller Bertolami, María L. Novarino , René D. Rohrmann, Felipe C. Wachlin ,'
- 'Enrique García–Berro\*'
bibliography:
- 'lowZ.bib'
date: 'Received ; accepted '
title: 'On the evolution of ultra-massive white dwarfs'
---
Introduction
============
White dwarf stars are the most common end-point of stellar evolution. Indeed, more than 97% of all stars will eventually become white dwarfs. These old stellar remnants preserve information about the evolutionary history of their progenitors, providing a wealth of information about the physical evolutionary processes of stars, the star formation history, and about the characteristics of various stellar populations. Furthermore, their structure and evolutionary properties are well understood — see , and for specific reviews — to the point that the white dwarf cooling times are currently considered one of the best age indicators for a wide variety of Galactic populations, including open and globular clusters .
The mass distribution of white dwarfs exhibits a main peak at $M_{ \rm WD} \sim 0.6 M_\sun $, and a smaller peak at the tail of the distribution around $M_{\rm WD} \sim 0.82 M_\sun $ [@2013ApJS..204....5K]. The existence of massive white dwarfs ($M_{\rm WD} \gtrsim 0.8 M_\sun $) and ultra-massive white dwarfs ($M_{\rm WD} \gtrsim 1.10 M_\sun $) has been revealed in several studies [@2010MNRAS.405.2561C; @2013MNRAS.430...50C; @2013ApJ...771L...2H; @2016MNRAS.455.3413K; @2017MNRAS.468..239C]. Indeed, [@2015MNRAS.452.1637R] reports the existence of a distinctive high-mass excess in the mass function of hydrogen-rich white dwarfs near $1\rm M_\sun$.
A historical interest in the study of ultra-massive white dwarfs is related to our understanding of type Ia supernovae. In fact, it is thought that type Ia supernovae involve the explosion of an ultra-massive white dwarf or the merger of two white dwarfs. Also, massive white dwarfs can act as gravitational lenses. It has been proposed that massive faint white dwarfs can be responsible for “microlensing” events in the Large Magellanic Cloud.
The formation of an ultra-massive white dwarf is theoretically predicted as the end product of the isolated evolution of a massive intermediate-mass star (with a mass larger than 6–9 $M_\sun$, depending on metallicity and the treatment of convective boundaries). Once the helium in the core has been exhausted, these stars reach the Super Asymptotic Giant Branch (SAGB) with a partially degenerate carbon(C)-oxygen(O) core, as do their less massive siblings. However, in the case of SAGB stars their cores develop temperatures high enough to start carbon ignition under partially degenerate conditions. The violent carbon ignition leads to the formation of an oxygen-neon core, which is not hot enough to burn oxygen or neon (Ne) and is supported by the degeneracy pressure of the electron gas. If the hydrogen-rich envelope is removed by winds before electron captures begin in the O-Ne core, an electron-capture supernova is avoided and the star leaves the SAGB to form a white dwarf. As a result, ultra-massive white dwarfs are born with cores composed mainly of $^{16}$O and $^{20}$Ne, with traces of $^{12}$C, $^{23}$Na and $^{24}$Mg. In addition, massive white dwarfs with C-O cores can be formed through binary evolution channels: namely, the single-degenerate channel, in which a white dwarf gains mass from a nondegenerate companion, and the double-degenerate channel, involving the merger of two white dwarfs. The study of the predicted surface properties and cooling times of ultra-massive CO- and ONe-core white dwarfs can help to assess the relevance of different channels in the formation of these stars.
During the last years, $g$(gravity)-mode pulsations have been detected in many massive and ultra-massive variable white dwarfs with hydrogen-rich atmospheres (DA), also called ZZ Ceti stars. The ultra-massive ZZ Ceti star BPM 37093 was the first object of this kind to be analyzed in detail. The existence of pulsating ultra-massive white dwarfs opens the possibility of carrying out asteroseismological analyses of heavy-weight ZZ Ceti stars, allowing us to obtain information about their origin and internal structure through the comparison between the observed periods and the theoretical periods computed for appropriate theoretical models. In particular, one of the major interests in the study of pulsating ultra-massive DA white dwarfs lies in the fact that these stars are expected to have a well developed crystallized core. The occurrence of crystallization in the degenerate core of white dwarfs, resulting from Coulomb interactions in very dense plasmas, was first suggested by several authors about 60 yr ago (see @Kirzhnits1960 [@Abrikosov1961; @1968ApJ...151..227V] for details, and more recent works for discussions). However, this theoretical prediction was not observationally demonstrated until the recent studies of [@2009ApJ...693L...6W] and [@2010Natur.465..194G], who inferred the existence of crystallized white dwarfs from the study of the white dwarf luminosity function of stellar clusters. Since ultra-massive ZZ Ceti stars are expected to have a core partially or totally crystallized, these stars constitute unique objects to detect the presence of crystallization. Thus, asteroseismology of ultra-massive DA white dwarfs is expected to contribute to our understanding of the Coulomb interactions in dense plasmas.
The first attempt to infer the existence of crystallization in an ultra-massive white dwarf star from the analysis of its pulsation pattern was carried out by @2004ApJ...605L.133M in the case of BPM 37093, but the results were inconclusive [@2005ApJ...622..572B].
Asteroseismological applications of ultra-massive DA white dwarfs require the development of detailed evolutionary models for these stars, taking into account all the physical processes responsible for interior abundance changes as evolution proceeds. The first attempts to model these stars by considering the evolutionary history of progenitor stars were the studies by [@1997MNRAS.289..973G] and the work we refer to hereafter as A07. These studies, however, adopted several simplifications which should be assessed. To begin with, they consider a core chemical profile composed mainly of $^{16}$O and $^{20}$Ne, implanted in white dwarf models with different stellar masses. A main assumption made in A07 is that the same fixed chemical profile is adopted during the entire evolution for all of their models. Also, phase separation during crystallization is an important missing physical ingredient in these studies. In fact, when crystallization occurs, energy is released in two different ways. First, as in any crystallization process, latent heat is released. And second, a phase separation of the elements occurs upon crystallization, releasing gravitational energy [@1997ApJ...485..308I] and lengthening the cooling times of white dwarfs. This process of phase separation has been neglected in all the studies of ultra-massive white dwarfs. Finally, progress in the treatment of conductive opacities and model atmospheres has been made in recent years, and should be taken into account in new attempts to improve our knowledge of these stars.
This paper is precisely aimed at upgrading these old white dwarf evolutionary models by taking into account the above-mentioned considerations. We present new evolutionary sequences for ultra-massive white dwarfs, appropriate for accurate white dwarf cosmochronology of old stellar systems and for precise asteroseismology of these white dwarfs. We compute four hydrogen-rich and four hydrogen-deficient white dwarf evolutionary sequences. The initial chemical profile of each white dwarf model is consistent with predictions of the progenitor evolution with stellar masses in the range $9.0\leq M_{\rm ZAMS}/ M_\sun \leq 10.5$. This chemical structure is the result of the full evolutionary calculations starting at the Zero Age Main Sequence (ZAMS), and evolved through the core hydrogen burning, core helium burning, and SAGB phases, including the entire thermally-pulsing phase. An accurate nuclear network has been used for each evolutionary phase. Thus, not only a realistic O-Ne inner profile is considered for each white dwarf mass, but also realistic chemical profiles and intershell masses built up during the SAGB are taken into account. In our study, the energy released during the crystallization process, as well as the ensuing core chemical redistribution, were considered by following the phase diagram of [@2010PhRvE..81c6107M] suitable for $^{16}$O and $^{20}$Ne plasmas[^1]. We also provide accurate magnitudes and colors for our hydrogen-rich models in the filters used by the space mission Gaia: $G$, $G_{\rm BP}$ and $G_{\rm RP}$.
To the best of our knowledge, this is the first set of fully evolutionary calculations of ultra-massive white dwarfs including realistic initial chemical profiles for each white dwarf mass, updated microphysics, and the effects of the phase separation process during crystallization[^2]. This paper is organized as follows. In Sect. \[code\] we briefly describe our numerical tools and the main ingredients of the evolutionary sequences, while in Sect. \[results\] we present in detail our evolutionary results and compare them with previous works. Finally, in Sect. \[conclusions\] we summarize the main findings of the paper, and we elaborate on our conclusions.
Numerical setup and input physics {#code}
=================================
The white dwarf evolutionary sequences presented in this work have been calculated using the [LPCODE]{} stellar evolutionary code . This code has been well tested and calibrated and has been amply used in the study of different aspects of low-mass star evolution [see @2010Natur.465..194G; @2010ApJ...717..897A; @2010ApJ...717..183R and references therein]. More recently, the code has been used to generate a new grid of models for post-AGB stars and also new evolutionary sequences for hydrogen-deficient white dwarfs [@2017ApJ...839...11C]. We mention that [LPCODE]{} has been tested against another white dwarf evolutionary code, and the uncertainties in the white dwarf cooling ages that result from the different numerical implementations of the stellar evolution equations were found to be below 2% .
For the white dwarf regime, the main input physics of [LPCODE]{} includes the following ingredients. Convection is treated within the standard mixing length formulation, as given by the ML2 parameterization [@1990ApJS...72..335T]. Radiative and conductive opacities are from OPAL [@1996ApJ...464..943I] and from [@2007ApJ...661.1094C], respectively. For the low-temperature regime, molecular radiative opacities with varying carbon to oxygen ratios are used. To this end, the low-temperature opacities computed by [@2005ApJ...623..585F] are adopted. The equation of state for the low-density regime is taken from , whereas for the high-density regime, we employ the equation of state of [@1994ApJ...434..641S], which includes all the important contributions for both the solid and liquid phases. We considered neutrino emission for pair, photo, and bremsstrahlung processes using the rates of [@1996ApJS..102..411I], while for plasma processes we follow the treatment presented in [@1994ApJ...425..222H]. Outer boundary conditions for both H-rich and H-deficient evolving models are provided by non-gray model atmospheres; see , [@2017ApJ...839...11C], and [@2018MNRAS.473..457R] for references. The impact of the atmosphere treatment on the cooling times becomes relevant for effective temperatures lower than $10\,000$ K. [LPCODE]{} considers a detailed treatment of element diffusion, including gravitational settling, chemical and thermal diffusion. As we will see, element diffusion is a key ingredient in shaping the chemical profile of evolving ultra-massive white dwarfs, even in layers near the core.
Treatment of crystallization
----------------------------
A main issue in the modelling of ultra-massive white dwarfs is the treatment of crystallization. As the temperature decreases in the interior of a white dwarf, the Coulomb interaction energy becomes increasingly important, until at some point it widely exceeds the energy of the thermal motions and the ions begin to freeze into a regular lattice structure. Since the crystallization temperature of pure $^{20}$Ne is higher than that of pure $^{16}$O, this crystallization process induces a phase separation. In a mixture of $^{20}$Ne and $^{16}$O, the crystallized plasma will be enriched in $^{20}$Ne and, consequently, the $^{20}$Ne abundance will decrease in the remaining liquid plasma. [ This process releases gravitational energy, thus constituting a new energy source that will impact the cooling times.]{}
[ We used]{} the most up-to-date phase diagram of dense O-Ne mixtures appropriate for massive white dwarf interiors [@2010PhRvE..81c6107M]. This phase diagram, shown in Fig. \[Fig:PD\], yields the temperature at which crystallization occurs, as well as the abundance change at a given point in the solid phase during the phase transition. Here $\Gamma$ is the Coulomb coupling parameter, defined as $\Gamma=\frac{\rm e^2}{{\rm k_B a_e }T}Z^{5/3}$, where $\rm a_e= \left( \frac{3}{4\pi n_e}\right)^{1/3}$ is the mean electron spacing. $\rm \Gamma_{crit}$ is set to 178.6, the crystallization value of a mono-component plasma. $ \rm \Gamma_O$ is the value of $\Gamma$ of $^{16}$O at which crystallization of the mixture occurs, and is related to the temperature and the density through the relation $ \Gamma_{\rm O}=\frac{\rm e^2}{{\rm k_B a_e }T}8^{5/3}$. For a given mass fraction of $^{20}$Ne, the solid red line in Fig. \[Fig:PD\] gives us $\rm \Gamma_O$ and, consequently, the crystallization temperature. Once this temperature is obtained, it can be related to the $\Gamma$ of the mixture by setting $T$ in the formula $\Gamma=\frac{\rm e^2}{{\rm k_B a_e }T}Z_{\rm mixture}^{5/3}$, where $Z_{\rm mixture}$ is the mean ionic charge of the mixture. The $\Gamma$ obtained with this procedure is larger than the value commonly used in white dwarf evolutionary calculations, which is artificially set to 180. For a given abundance of $^{20}$Ne in the liquid phase, the solid red line predicts $\rm \Gamma_{crit}/\Gamma_{O}$, and the corresponding value of $\rm \Gamma_{crit}/\Gamma_{O}$ on the dashed black line predicts the $^{20}$Ne abundance in the solid phase, which is slightly larger than the initial $^{20}$Ne abundance. The net result of the crystallization process is that the inner regions of the star are enriched in $^{20}$Ne, and the outer regions are enriched in $^{16}$O.
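For concreteness, the relations between $\Gamma$, density and temperature quoted above can be evaluated numerically. The sketch below (Python, CGS units) is purely illustrative: the density, the mean molecular weight per electron ($\mu_e = 2$) and the value of $\Gamma_{\rm O}$ read off the phase diagram are assumed inputs, not values taken from our sequences.

```python
import math

# Physical constants in CGS units
E_CHARGE = 4.8032e-10  # electron charge [esu]
K_B = 1.3807e-16       # Boltzmann constant [erg/K]
M_U = 1.6605e-24       # atomic mass unit [g]

def mean_electron_spacing(rho, mu_e=2.0):
    """a_e = (3 / (4 pi n_e))^(1/3), with n_e = rho / (mu_e m_u)."""
    n_e = rho / (mu_e * M_U)
    return (3.0 / (4.0 * math.pi * n_e)) ** (1.0 / 3.0)

def gamma(T, rho, Z, mu_e=2.0):
    """Coulomb coupling parameter Gamma = e^2 Z^(5/3) / (k_B a_e T)."""
    a_e = mean_electron_spacing(rho, mu_e)
    return E_CHARGE ** 2 * Z ** (5.0 / 3.0) / (K_B * a_e * T)

def crystallization_temperature(rho, gamma_O, mu_e=2.0):
    """Invert Gamma_O = e^2 8^(5/3) / (k_B a_e T) for T.

    gamma_O is the value read off the O/Ne phase diagram for the
    local 20Ne mass fraction.
    """
    a_e = mean_electron_spacing(rho, mu_e)
    return E_CHARGE ** 2 * 8.0 ** (5.0 / 3.0) / (K_B * a_e * gamma_O)
```

For a mono-component oxygen plasma, setting $\Gamma_{\rm O} = \Gamma_{\rm crit} = 178.6$ recovers the classical one-component crystallization condition.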
![Phase diagram of crystallization for a $^{16}$O/$^{20}$Ne mixture [@2010PhRvE..81c6107M]. $\rm X_{Ne}$ is the $^{20}$Ne abundance. $\rm \Gamma_{crit}$ is set to 178.6. $\rm \Gamma_O$ is given by $ \Gamma_{\rm O}=(e^2 / \rm k_B a_e T)8^{5/3}$, see text for details.[]{data-label="Fig:PD"}](fig_01.eps){width="\columnwidth"}
{width="2\columnwidth"}
{width="2\columnwidth"}
The energetics resulting from the crystallization process has been self-consistently and locally coupled to the full set of equations of stellar evolution [see @2010ApJ...719..612A for details of the implementation]. The local change of chemical abundance resulting from phase separation at crystallization leads to a release of energy, in addition to the latent heat. The inclusion of this energy in [LPCODE]{} is similar to that described in [@2010ApJ...719..612A], but adapted to the mixture of $^{16}$O and $^{20}$Ne characterizing the core of our ultra-massive white dwarf models. At each evolutionary time step, we calculate the change in chemical composition resulting from phase separation using the phase diagram of [@2010PhRvE..81c6107M] for an oxygen-neon mixture. Then, we evaluate the net energy released by this process during the time step. This energy is added to the latent heat contribution, which is taken to be $0.77 k_{\rm B}T$ per ion. The total energy is distributed over a small mass range around the crystallization front, and this local energy contribution is added to the luminosity equation [see @2010ApJ...719..612A for details].
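A minimal sketch of how such a local latent-heat term enters a luminosity budget is given below (Python, CGS units). The mean atomic mass of the mixture and the crystallization-front values are illustrative assumptions; the actual [LPCODE]{} implementation follows [@2010ApJ...719..612A].

```python
K_B = 1.3807e-16   # Boltzmann constant [erg/K]
M_U = 1.6605e-24   # atomic mass unit [g]

def latent_heat_luminosity(T_crys, dm_crys_dt, mean_A=18.0):
    """Latent-heat luminosity for a release of 0.77 k_B T per crystallizing ion.

    T_crys     : temperature at the crystallization front [K]
    dm_crys_dt : growth rate of the crystallized mass [g/s]
    mean_A     : assumed mean atomic mass of the O/Ne mixture
    Returns the luminosity in erg/s.
    """
    ions_per_gram = 1.0 / (mean_A * M_U)
    return 0.77 * K_B * T_crys * ions_per_gram * dm_crys_dt
```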
[ The increase of $^{20}$Ne abundance in the solid core as a result of crystallization leads]{} to a Rayleigh-Taylor instability and an ensuing mixing process at the region above the crystallized core, inducing the oxygen enrichment in the overlying liquid mantle [@1997ApJ...485..308I]. Thus, those layers that are crystallizing contribute as an energy source, and the overlying unstable layers will be a sink of energy.
Initial models
--------------
As we have mentioned, an improvement of the present calculations over those published in A07 is the adoption of detailed chemical profiles that are based on the computation of all the previous evolutionary stages of the progenitor stars. This is true for both the O-Ne core and the surrounding envelope. In particular, the full computation of the previous evolutionary stages allows us to assess the mass of the helium-rich mantle and the hydrogen-helium transition, which are of particular interest for the asteroseismology of ultra-massive white dwarfs. Specifically, the chemical composition of our models is the result of the entire progenitor evolution calculated in . These sequences correspond to the complete single evolution from the ZAMS to the thermally pulsating SAGB phase of initially $M_{\rm ZAMS}= 9, 9.5, 10$, and $10.5 M_\sun$ sequences with an initial metallicity of $Z= 0.02$. Particular care was taken to precisely follow the propagation of the carbon-burning flame, where most carbon is burnt. This is of special interest for the final oxygen and neon abundances in the white dwarf core. In addition, the evolution during the thermally pulsing SAGB phase, where the outer chemical profiles and the total helium content of the final stellar remnant are determined, was computed in detail. [ No]{} extra mixing was included at any convective boundary at any evolutionary stage. The absence of core overshooting during the core hydrogen- and helium-burning stages implies that, for a given final remnant mass ($M_{\rm WD}$), the initial masses ($M_{\rm ZAMS}$) represent an upper limit of the expected progenitor masses. [ Indeed, considering moderate overshooting during core helium burning lowers the mass range of SAGB stars by about $2\, \rm M_\odot$.]{} It is worth noting that the initial-final mass relation is poorly constrained from observations [@2009ApJ...692.1013S] and is highly uncertain in stellar evolution models.
[ On the other hand, considering overshooting during the thermally pulsing SAGB would induce third dredge-up episodes, altering the carbon and nitrogen abundances in the envelope. Finally, in this work we have not explored the impact on white dwarf cooling that could be expected from changes in the core chemical structure resulting from the consideration of extra-mixing episodes during the semi-degenerate carbon burning.]{}
The stellar masses of our white dwarf sequences are $M_{\rm WD}=1.10 M_\sun$, $1.16 M_\sun$, $1.23 M_\sun$ and $1.29 M_\sun$. Each evolutionary sequence was computed from the beginning of the cooling track at high luminosities down to the development of the full Debye cooling at very low surface luminosities, $\log(L_\star/L_\sun)= -5.5$. The progenitor evolution through the thermally pulsing SAGB provides us with realistic values of the total helium content, which is relevant for an accurate computation of the cooling times at low luminosities. In particular, different helium masses lead to different cooling times. The helium masses of our $1.10 M_\sun$, $1.16 M_\sun$, $1.23M_\sun$ and $1.29 M_\sun$ models are $3.24 \times 10^{-4} M_\sun$, $1.82 \times 10^{-4} M_\sun$, $0.78 \times 10^{-4} M_\sun$ and $0.21 \times 10^{-4} M_\sun$, respectively. By contrast, the total mass of the hydrogen envelope left by prior evolution is quite uncertain, since it depends on the occurrence of carbon enrichment during the thermally pulsing AGB phase, which in turn depends on the amount of overshooting and mass loss, as well as on the occurrence of late thermal pulses. For this paper, we have adopted the maximum expected hydrogen envelope of $\sim 10^{-6} M_{\sun}$ for ultra-massive white dwarfs. Larger values of the total hydrogen mass would lead to unstable nuclear burning and thermonuclear flashes on the white dwarf cooling track.
Fig. \[Fig:profiles\] illustrates the chemical profiles resulting from the progenitor evolution of our four hydrogen-rich white dwarf sequences[^3]. The core composition is $\sim 55\%$ $^{16}$O and $\sim 30\%$ $^{20}$Ne, with minor traces of $^{22}$Ne, $^{23}$Na and $^{24}$Mg. [ In some layers of the models, the mean molecular weight is higher than in the deeper layers, leading to Rayleigh-Taylor instabilities. Consequently, these profiles are expected to undergo a rehomogenization process on a timescale shorter than the evolutionary timescale. Thus, we have simulated the rehomogenization process by assuming it to be instantaneous. The impact of this mixing process on the abundance distribution in the white dwarf core is apparent from inspecting Fig. \[Fig:profiles2\]. Clearly, rehomogenization mixes the abundances of all elements in some layers of the core, erasing preexisting peaks in the abundances.]{}
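The instantaneous rehomogenization applied here amounts to replacing the abundances in the unstable region by their mass-weighted mean, which conserves the total mass of each species. A schematic version of this operation is sketched below (Python; the shell masses and abundances are made-up inputs, not model values).

```python
def rehomogenize(dm, X):
    """Instantaneously mix a Rayleigh-Taylor-unstable region.

    dm : list of shell masses [g] in the unstable region
    X  : list of mass fractions of one species in each shell
    Returns the uniform, mass-weighted mean abundance for every shell,
    conserving the total mass of the species.
    """
    total_mass = sum(dm)
    species_mass = sum(m * x for m, x in zip(dm, X))
    x_mix = species_mass / total_mass
    return [x_mix] * len(dm)
```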
Evolutionary results {#results}
====================
![ Temporal evolution of surface luminosity (double dotted line) and different luminosity contributions: neutrino luminosity (dashed line), gravothermal luminosity (dotted line), latent heat (dot-dot-dashed line) and phase separation energy (solid line). The arrows indicate the main physical processes responsible for the evolution at different moments.[]{data-label="Fig:4"}](fig_04.eps){width="\columnwidth"}
We [ present]{} in Fig. \[Fig:4\] a global view of the main phases of the evolution of an ultra-massive hydrogen-rich white dwarf model during the cooling phase. In this figure, the temporal evolution of the different luminosity contributions is displayed for our $1.16 M_{\sun}$ hydrogen-rich white dwarf sequence. The cooling time is defined as zero at the beginning of the white dwarf cooling phase, when the star reaches the maximum effective temperature. During the entire white dwarf evolution, the release of gravothermal energy is the dominant energy source of the star. At early stages, neutrino emission constitutes an important energy sink. In fact, during the first million yr of cooling, the energy lost by neutrino emission is of about the same order of magnitude as the gravothermal energy release, remaining larger than the star luminosity until the cooling time reaches about $\log(t)\sim 7$. As the white dwarf cools, the temperature of the degenerate core decreases, thus neutrino emission ceases and, consequently, the neutrino luminosity abruptly drops. It is during these stages that element diffusion strongly modifies the internal chemical profiles. The resulting chemical stratification will be discussed below. At $\log(t)\sim 8.3$ crystallization sets in at the center of the white dwarf. This results in the release of latent heat and gravitational energy due to oxygen-neon phase separation. Note that, as a consequence of this energy release, during the crystallization phase the surface luminosity is larger than the gravothermal luminosity. This phase lasts for $2.5 \times 10^9$ years. Finally, at $\log(t) \sim 9$, the temperature of the crystallized core drops below the Debye temperature, and consequently, the heat capacity decreases. Thus, the white dwarf enters the so-called “Debye cooling phase”, characterized by a rapid cooling.
![Solid (dashed) lines display the cooling times for our hydrogen-rich (deficient) white dwarf sequences. At low luminosities and from left to right, stellar masses of both set of sequences are $1.29 M_\sun, 1.23 M_\sun, 1.16 M_\sun$ and $1.10 M_\sun$.[]{data-label="Fig:5"}](fig_05.eps){width="\columnwidth"}
The cooling times for all of our white dwarf sequences are displayed in Fig. \[Fig:5\]. These cooling times are also listed in Table \[tabla1\] at some selected stellar luminosities. [ Our hydrogen-deficient]{} sequences have been calculated by considering recent advances in the treatment of energy transfer in dense helium atmospheres. As shown in [@2017ApJ...839...11C], detailed non-gray model atmospheres are needed to derive realistic cooling ages of cool, helium-rich white dwarfs. At intermediate luminosities, hydrogen-deficient white dwarfs evolve slightly slower than their hydrogen-rich counterparts. This result is in line with previous studies of hydrogen-deficient white dwarfs [@2017ApJ...839...11C], and the reason is that convective coupling (and the associated release of internal energy) occurs at higher luminosities in hydrogen-deficient white dwarfs, with the consequent lengthening of the cooling times at those luminosities. By contrast, at low luminosities, hydrogen-deficient white dwarfs evolve markedly faster than hydrogen-rich white dwarfs. This is because, at those stages, the thermal energy content of hydrogen-deficient white dwarfs is smaller and, more importantly, because their outer layers are more transparent to radiation. Note in this sense that the $1.10 \, M_\sun$ hydrogen-rich sequence needs 8.2 Gyr to reach the lowest luminosities, while the hydrogen-deficient sequence of the same mass evolves in only 4.6 Gyr to the same luminosities. Note also the different cooling behavior with stellar mass, particularly the fast cooling of the $1.29 \, M_\sun$ hydrogen-rich sequence, our most massive one, which reaches $\log(L_\star/L_\sun)= -5$ in only 3.6 Gyr, a time that is even shorter (2.4 Gyr) for its hydrogen-deficient counterpart.
These short cooling times that characterize the most massive sequences reflect that, at such stages, matter in most of the white dwarf star has entered the Debye regime, with the consequent strong reduction in the specific heat of ions .
[ All our hydrogen-deficient white dwarf sequences experience carbon enrichment]{} in the outer layers as a result of convective mixing. The outer convective zone grows inwards and when the luminosity of the star has decreased to $\log(L_\star/L_\sun) \sim -2.5$, it penetrates into deeper layers where heavy elements such as carbon and oxygen are abundant. Consequently, convective mixing dredges up these heavy elements, and the surface chemical composition changes. In particular, the surface layers are predominantly enriched in carbon. These results are in line with the predictions of [@2017ApJ...839...11C] for hydrogen-deficient white dwarfs of intermediate mass.
  $\log(L_\star/L_\sun)$   $1.10$ (HR)   $1.16$ (HR)   $1.23$ (HR)   $1.29$ (HR)   $1.10$ (HD)   $1.16$ (HD)   $1.23$ (HD)   $1.29$ (HD)
  ------------------------ ------------- ------------- ------------- ------------- ------------- ------------- ------------- -------------
  $-2.0$                   $0.274$       $0.290$       $0.356$       $0.437$       $0.266$       $0.289$       $0.361$       $0.479$
  $-3.0$                   $1.318$       $1.310$       $1.320$       $1.185$       $1.367$       $1.354$       $1.325$       $1.173$
  $-3.5$                   $2.236$       $2.173$       $2.043$       $1.692$       $2.457$       $2.268$       $2.010$       $1.590$
  $-4.0$                   $3.625$       $3.427$       $2.999$       $2.265$       $3.547$       $3.217$       $2.793$       $2.048$
  $-4.5$                   $6.203$       $5.390$       $4.132$       $2.876$       $4.209$       $3.739$       $3.171$       $2.273$
  $-5.0$                   $8.225$       $7.213$       $5.467$       $3.594$       $4.580$       $3.996$       $3.346$       $2.362$

  : Cooling times (in Gyr) of our hydrogen-rich (HR) and hydrogen-deficient (HD) white dwarf sequences at selected stellar luminosities.
\[tabla1\]
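As an illustration, cooling ages at luminosities between the tabulated points can be obtained by piecewise-linear interpolation. The short Python sketch below hard-codes the $1.16 M_\sun$ hydrogen-rich column of Table \[tabla1\]; linear interpolation in $\log(L_\star/L_\sun)$ is a simplifying assumption, not the way the sequences themselves were computed.

```python
# log(L/Lsun) and cooling times [Gyr] for the 1.16 Msun hydrogen-rich
# sequence, transcribed from Table 1 (luminosities in decreasing order).
LOG_L = [-2.0, -3.0, -3.5, -4.0, -4.5, -5.0]
T_COOL = [0.290, 1.310, 2.173, 3.427, 5.390, 7.213]

def cooling_age(log_l, xs=LOG_L, ys=T_COOL):
    """Piecewise-linear interpolation of the cooling age [Gyr] at log_l."""
    if not xs[-1] <= log_l <= xs[0]:
        raise ValueError("luminosity outside the tabulated range")
    # Walk over consecutive tabulated pairs and interpolate in the bracket.
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x1 <= log_l <= x0:
            frac = (x0 - log_l) / (x0 - x1)
            return y0 + frac * (y1 - y0)
```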
  -------------------------------------------------------------------------------------------------------------------------------------------------------------
  Star                                     Spectral Type   $\log(g)({\rm cgs})$   $T_{\rm eff} ({\rm K})$   $M_\star/ M_\sun$   $t ({\rm Gyr})$   [Reference]{}
  ---------------------------------------- --------------- ---------------------- ------------------------- ------------------- ----------------- ------------------------
  SDSS J090549.46$+$134507.87              DA              8.875                  6774                      1.110               3.966             [@2016MNRAS.455.3413K]
  SDSS J000901.20$+$202606.80              DA              8.857                  11081                     1.104               1.706             "
  SDSS J002113.16$+$192433.62              DA              8.920                  11555                     1.134               1.655             "
  SDSS J003608.73$+$180951.52              DA              9.250                  10635                     1.248               2.121             "
  SDSS J005142.50$+$200208.66              DA              9.080                  14593                     1.197               1.244             "
  SDSS J013853.19$+$283207.13              DA              9.402                  9385                      1.288               2.305             "
  SDSS J015425.78$+$284947.71              DA              8.959                  11768                     1.153               1.652             "
  SDSS J001459.15$+$253616.37              DA              8.812                  10051                     1.081               1.982             "
  SDSS J004806.14$+$254703.56              DA              8.885                  9388                      1.116               2.322             "
  SDSS J005122.96$+$241801.15              DA              9.170                  10976                     1.226               2.069             "
  SDSS J224517.61$+$255043.70              DA              8.990                  11570                     1.165               1.734             "
  SDSS J222720.65$+$240601.31              DA              8.947                  9921                      1.146               2.190             "
  SDSS J232257.27$+$252807.42              DA              8.882                  6190                      1.113               4.581             "
  SDSS J164642.67$+$483207.96              DA              8.999                  15324                     1.169               1.042             "
  SDSS J110054.91$+$230604.01              DA              9.470                  11694                     1.307               1.828             "
  SDSS J111544.64$+$294249.50              DA              9.136                  8837                      1.214               2.770             "
  SDSS J102720.47$+$285746.16              DA              9.053                  8874                      1.186               2.713             "
  SDSS J100944.29$+$302102.03              DA              9.161                  6639                      1.222               3.893             "
  SDSS J130846.79$+$424119.60              DA              8.970                  7237                      1.156               3.668             "
  SDSS J101907.08$+$484805.90              DA              9.231                  12582                     1.243               1.691             "
  SDSS J122943.28$+$493451.45              DA              9.240                  16889                     1.246               1.083             "
  SDSS J110510.71$+$474804.08              DA              9.089                  9538                      1.198               2.460             "
  SDSS J150417.23$+$553900.45              DO              9.267                  6360                      1.244               2.929             "
  SDSS J145009.87$+$510705.21              DA              9.180                  11845                     1.229               1.849             "
  SDSS J132208.52$+$551939.16              DAH             9.098                  17136                     1.204               0.939             "
  SDSS J004825.11$+$350527.94              DA              8.887                  7516                      1.116               3.367             "
  SDSS J013550.03$-$042354.59              DA              9.150                  12651                     1.220               1.659             "
  SDSS J102553.68$+$622929.41              DAH             9.356                  9380                      1.276               2.359             "
  SDSS J104827.74$+$563952.68              DA              8.829                  9680                      1.090               2.134             "
  SDSS J112322.47$+$602940.06              DA              8.845                  13611                     1.099               1.121             "
  SDSS J110036.93$+$665949.42              DA              9.383                  22251                     1.286               0.760             "
  SDSS J004920.03$-$080141.71              DA              9.403                  11648                     1.289               1.849             "
  SDSS J013514.18$+$200121.97              DA              9.370                  17134                     1.281               1.130             "
  SDSS J093710.25$+$511935.12              DA              8.969                  7030                      1.155               3.827             "
  SDSS J234929.60$+$185119.52              DA              8.935                  6966                      1.139               3.848             "
  SDSS J232512.08$+$154751.27              DA              9.063                  10083                     1.190               2.234             "
  SDSS J234044.83$+$091625.96              DA              9.234                  6166                      1.242               3.957             "
  SDSS J003652.69$+$291229.48              DA              9.070                  10284                     1.192               2.182             "
  SDSS J000011.57$-$085008.4               DQ              9.230                  10112                     1.236               2.299             [@2013ApJS..204....5K]
  SDSS J000052.44$-$002610.5               DQ              9.320                  10088                     1.257               2.192             "
  GD50 (WD 0346$-$011)                     DA              9.200                  42700                     1.241               0.064             [@2011ApJ...743..138G]
  GD518 (WD J165915.11+661033.3) [(V)]{}   DA              9.080                  12030                     1.196               1.719             "
  SDSS J072724.66$+$403622.0               DA              9.010                  12350                     1.172               1.573             [@2017MNRAS.468..239C]
  SDSS J084021.23$+$522217.4 [(V)]{}       DA              8.930                  12160                     1.139               1.523             "
  SDSS J165538.93$+$253346.0               DA              9.200                  11060                     1.234               2.035             "
  SDSS J005047.61$-$002517.1               DA              8.980                  11490                     1.162               1.744             [@2004ApJ...607..982M]
  BPM 37093 (LTT 4816) [(V)]{}             DA              8.843                  11370                     1.097               1.608             [@2016IAUFM..29B.493N]
  -------------------------------------------------------------------------------------------------------------------------------------------------------------

  : Stellar masses and cooling ages derived from our sequences for observed ultra-massive white dwarfs. Quotation marks indicate the same reference as the preceding row.
\[tabla2\]
  Crystallized mass   $1.10 M_\sun$   $1.16 M_\sun$   $1.23 M_\sun$   $1.29 M_\sun$
  ------------------- --------------- --------------- --------------- ---------------
  $0 \%$              $4.31$          $4.38$          $4.46$          $4.58$
  $20 \%$             $4.26$          $4.32$          $4.41$          $4.54$
  $40 \%$             $4.22$          $4.29$          $4.38$          $4.51$
  $60 \%$             $4.17$          $4.23$          $4.34$          $4.46$
  $80 \%$             $4.09$          $4.16$          $4.26$          $4.39$
  $90 \%$             $4.03$          $4.10$          $4.20$          $4.33$
  $95 \%$             $3.91$          $3.95$          $4.10$          $4.25$
  $99 \%$             $3.77$          $3.83$          $4.02$          $4.10$

  : Percentages of crystallized mass of our hydrogen-rich sequences and the $\log(T_{\rm eff})$ at which they occur.
\[tabla3\]
{width="2\columnwidth"}
The evolution of our ultra-massive white dwarf sequences in the plane $\log(g)-T_{\rm eff}$ is depicted in Fig. \[Fig:6\] together with observational expectations taken from [@2004ApJ...607..982M; @2016IAUFM..29B.493N; @2011ApJ...743..138G; @2013ApJS..204....5K; @2015MNRAS.450.3966B; @2016MNRAS.455.3413K; @2017MNRAS.468..239C]. In addition, isochrones of 0.1, 0.5, 1, 2, and 5 Gyr connecting the curves are shown. For these white dwarfs, we estimate from our sequences the stellar mass and cooling age (we select those whose surface gravities are larger than 8.8). Results are shown in Table \[tabla2\]. Note that for most of the observed white dwarfs the resulting cooling age is in the range $1-4$ Gyr, and many of them have stellar masses above $1.25 M_\sun$. Note also from Fig. \[Fig:6\] the change of slope of the isochrones, reflecting the well-known dependence of the cooling times on the white dwarf mass: at early stages, evolution proceeds more slowly in more massive white dwarfs, while the opposite trend is found at advanced stages.
![The hydrogen-rich sequences in the plane $\log(g)-T_{\rm eff}$. Blue solid lines indicate 0, 20, 40, 60, 80, 90, 95 and 99% of crystallized mass. The symbols with error bars indicate the location of the known pulsating ultra-massive DA white dwarfs [@2004ApJ...607..982M; @2013ApJ...771L...2H; @2017MNRAS.468..239C; @2016IAUFM..29B.493N].[]{data-label="Fig:7"}](fig_07.eps){width="\columnwidth"}
In Fig. \[Fig:7\] we display our hydrogen-rich sequences in the plane $\log(g)-T_{\rm eff}$ together with observational expectations for pulsating massive white dwarfs taken from [@2004ApJ...607..982M; @2013ApJ...771L...2H; @2017MNRAS.468..239C; @2016IAUFM..29B.493N]. Solid lines show the 0, 20, 40, 60, 80, 90, 95 and 99% levels of the crystallized mass of the star. Note that all of the observed pulsating white dwarfs with masses larger than $1.1 M_\sun$ fall in the region where more than 80% of their mass is expected to be crystallized. It is expected, as we will discuss in a forthcoming paper, that the crystallization process affects the pulsation properties of massive ZZ Ceti stars, [ as it has also been shown by . ]{}
The effective temperatures at various percentages of crystallized mass are also listed in Table \[tabla3\]. [ Note that, at the onset of crystallization, the highest-mass]{} sequences exhibit a marked increase in their surface gravities. This behavior is a consequence of the change in the chemical abundances of $^{16}$O and $^{20}$Ne during crystallization. As the abundance of $^{20}$Ne grows in the inner regions of the white dwarf, its radius decreases and, consequently, its surface gravity increases. [Crystallization sets in at similar luminosities and effective temperatures in a hydrogen-deficient white dwarf as in a hydrogen-rich white dwarf of the same mass.]{} Hydrogen-deficient cooling sequences are not shown in this figure since they exhibit a similar behavior, but their surface gravities are slightly larger, since their radii are smaller.
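The link between a smaller radius and a larger surface gravity is simply $g = GM_\star/R_\star^2$. The sketch below (Python, CGS constants; the mass and radius values in the usage are illustrative, not taken from our models) shows that, at fixed mass, a 1% contraction raises $\log g$ by $2\log_{10}(1/0.99) \approx 0.009$ dex.

```python
import math

G = 6.674e-8       # gravitational constant [cgs]
M_SUN = 1.989e33   # solar mass [g]
R_SUN = 6.957e10   # solar radius [cm]

def log_g(mass_msun, radius_rsun):
    """Surface gravity log g (cgs) for a star of given mass and radius."""
    g = G * mass_msun * M_SUN / (radius_rsun * R_SUN) ** 2
    return math.log10(g)
```

For an assumed ultra-massive white dwarf of $1.16 M_\sun$ with a radius of a few thousandths of $R_\sun$, this yields $\log g$ close to 9, in the range spanned by the sequences in Fig. \[Fig:7\].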
![Inner abundance distribution for $1.10 M_\sun$ hydrogen-rich models at three selected effective temperatures, as indicated.[]{data-label="Fig:8"}](fig_08.eps){width="\columnwidth"}
![Same as Fig. \[Fig:8\] but for $1.29 M_\sun$ models.[]{data-label="Fig:9"}](fig_09.eps){width="\columnwidth"}
[ Element diffusion profoundly alters the inner abundance distribution from the early cooling stages of our massive white dwarf models.]{} This is borne out by Figs. \[Fig:8\] and \[Fig:9\], which display the abundance distribution in the whole star at three selected effective temperatures for the 1.10 and $1.29 M_\sun$ hydrogen-rich white dwarf models, respectively. Note that, as a result of gravitational settling, all heavy elements are depleted from the outer layers. [ Note also that the initial chemical discontinuities are strongly smoothed out.]{} But more importantly, the initial helium and carbon distributions in the deep envelope are markedly changed, particularly in the most massive models, where the initial pure-helium buffer has almost vanished by the time evolution has reached low effective temperatures. This is quite different from the situation encountered in white dwarfs of intermediate mass. These changes in the helium and carbon profiles affect the radiative opacity in the envelope and thus the cooling times at late stages.
![ Change in the chemical profiles of our $1.16M_\sun$ hydrogen-rich white dwarf model induced by the phase separation process during crystallization. Top (bottom) panel depicts the chemical profile at $\log(T_{\rm eff})= 4.26 (3.94)$. For comparison, the abundances of $^{16}$O and $^{20}$Ne right before phase separation are also plotted with thick lines in both panels.[]{data-label="Fig:10"}](fig_10.eps){width="\columnwidth"}
The other physical process that changes the core chemical distribution during white dwarf evolution is, as we mentioned, phase separation during crystallization. The imprints of phase separation on the core chemical composition can be appreciated in the bottom panels of Figs. \[Fig:8\] and \[Fig:9\], and more clearly in Fig. \[Fig:10\], which illustrates the change in the abundances of $^{20}$Ne and $^{16}$O in a $1.16 M_\sun$ model shortly after the occurrence of crystallization (top panel) and by the time a large portion of the star has crystallized (bottom panel). The chemical abundances of $^{20}$Ne and $^{16}$O right before crystallization sets in are plotted with thick dashed lines. For this stellar mass, crystallization starts at the center of the star at $\log(L_\star/L_\sun) \sim -1.8$. Note that, in the top panel (the crystallization front is at $\log(M_r/M_\star) \sim -0.4$), the initial $^{20}$Ne and $^{16}$O abundances have been strongly changed by the process of phase separation and the induced mixing in the fluid layers above the core, which extends upwards to $\log(M_r/M_\star) \sim -1$. Elements other than $^{16}$O and $^{20}$Ne are not taken into account in the phase separation process, and the slight change shown in their abundances is due only to element diffusion.
To properly assess the phase separation process during crystallization, it would be necessary to consider a 5-component crystallizing plasma composed in our case of $^{12}$C, $^{16}$O, $^{20}$Ne, $^{23}$Na and $^{24}$Mg, which are the most abundant elements in the white dwarf core (see Figure \[Fig:profiles2\]). Such a 5-component phase diagram is not available in the literature. However, Prof. A. Cumming has provided us with the final abundances in the solid phase at the center of the $1.10\, \rm M_\odot$ white dwarf model, considering a given 5-component composition [^4]. The abundances of $^{12}$C, $^{16}$O, $^{20}$Ne, $^{23}$Na and $^{24}$Mg at the center of this model right before crystallization occurs are listed in Table \[tablacumming\], together with the final abundances in the solid phase predicted by the 5-component calculations and those predicted by the phase diagram for a $^{16}$O-$^{20}$Ne mixture shown in Figure \[Fig:PD\]. The abundances of $^{16}$O and $^{20}$Ne are noticeably altered by crystallization regardless of the treatment considered. However, considering a 2-component phase diagram results in a stronger phase separation of $^{16}$O and $^{20}$Ne. Nevertheless, in this treatment the abundances of the trace elements $^{12}$C, $^{23}$Na and $^{24}$Mg are not altered by the crystallization process. The sum of the abundances of these trace elements is lower than 15% in the core of all our ultra-massive white dwarf models, and we do not expect this to alter substantially the evolutionary timescales. To properly assess the effects of considering a 5-component phase diagram on the cooling times of white dwarfs, it would be necessary to calculate the evolution of the white dwarf model through the entire crystallization process, for which we would require the full phase diagram, not available at the time of this study.
  Isotope     Initial    Solid 5-component   Solid 2-component
  ----------- ---------- ------------------- -------------------
  $^{12}$C    $0.0167$   $0.0082$            $0.0167$
  $^{16}$O    $0.5624$   $0.5561$            $0.5450$
  $^{20}$Ne   $0.2921$   $0.3289$            $0.3311$
  $^{23}$Na   $0.0538$   $0.0579$            $0.0538$
  $^{24}$Mg   $0.0513$   $0.0489$            $0.0513$

  : Abundances at the center of the $1.10\, \rm M_\odot$ white dwarf model before crystallization, and the final abundances in the solid phase resulting from considering a 5-component mixture of $^{12}$C, $^{16}$O, $^{20}$Ne, $^{23}$Na and $^{24}$Mg, and from the 2-component $^{16}$O-$^{20}$Ne phase diagram shown in Figure \[Fig:PD\].
\[tablacumming\]
![Top panel: Cooling times of our $1.23 M_\sun$ hydrogen-rich sequence when crystallization is neglected (double-dotted line), when only latent heat is considered during crystallization (dotted line), and when both latent heat and energy from phase separation are considered during crystallization (solid line). Bottom panel: White dwarf radius in terms of the cooling time for these evolutionary tracks.[]{data-label="Fig:11"}](fig_11.eps){width="\columnwidth"}
The phase separation of $^{20}$Ne and $^{16}$O releases appreciable energy (see Fig. \[Fig:4\]), enough to impact the white dwarf cooling times. This can be seen in Fig. \[Fig:11\], which shows the cooling times for our $1.23 M_\sun$ hydrogen-rich sequence (upper panel) when crystallization is neglected (double-dotted line), when only latent heat is considered during crystallization (dotted line), and when both latent heat and energy from phase separation are considered during crystallization (solid line). Clearly, the energy resulting from crystallization, in particular the release of latent heat, substantially increases the cooling times of ultra-massive white dwarfs. The inclusion of energy from phase separation leads to an additional delay in the cooling times (admittedly smaller than the delay caused by latent heat) at intermediate luminosities. But below $\log(L_\star/L_\sun)\sim -3.6$, when most of the star has crystallized, phase separation accelerates the cooling. At these stages, no more energy is delivered by phase separation, but the changes in the chemical profile induced by phase separation have strongly altered both the structure and the thermal properties of the cool white dwarfs, impacting their rate of cooling. Note in this sense the change in the radius of the white dwarf that results from the inclusion of phase separation (bottom panel of Fig. \[Fig:11\]). In fact, the stellar radius becomes smaller due to the increase of neon in the core during crystallization. As we mentioned, this explains the increase of the surface gravity of our sequences when phase separation is considered, see Figs. \[Fig:6\] and \[Fig:7\].
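An order-of-magnitude check of the latent-heat delay is straightforward: the total latent heat is $\sim 0.77\,k_{\rm B}T_{\rm crys}$ per ion over the crystallized mass, and dividing it by a representative luminosity gives the extra cooling time. The Python sketch below uses illustrative values for the crystallized mass, core temperature, luminosity and mean atomic mass; it is a back-of-the-envelope estimate, not a substitute for the full evolutionary calculation.

```python
K_B = 1.3807e-16        # Boltzmann constant [erg/K]
M_U = 1.6605e-24        # atomic mass unit [g]
M_SUN = 1.989e33        # solar mass [g]
L_SUN = 3.828e33        # solar luminosity [erg/s]
SEC_PER_GYR = 3.156e16  # seconds per Gyr

def crystallization_delay_gyr(m_crys_msun, T_crys, log_l, mean_A=18.0):
    """Rough cooling delay: total latent heat divided by the luminosity.

    m_crys_msun : crystallized mass [Msun]
    T_crys      : representative crystallization temperature [K]
    log_l       : log10(L/Lsun) at which crystallization proceeds
    mean_A      : assumed mean atomic mass of the O/Ne mixture
    """
    n_ions = m_crys_msun * M_SUN / (mean_A * M_U)
    e_latent = 0.77 * K_B * T_crys * n_ions
    luminosity = 10.0 ** log_l * L_SUN
    return e_latent / luminosity / SEC_PER_GYR
```

With roughly a solar mass crystallizing at a few $10^6$ K and $\log(L_\star/L_\sun)\sim -3.5$, the estimated delay is a sizeable fraction of a Gyr, i.e. a non-negligible fraction of the total cooling time at those luminosities.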
![Cooling times of our hydrogen-rich white dwarf sequences with $1.10 M_\sun$, $1.16 M_\sun$ and $1.23 M_\sun$ (thick lines), as compared with the cooling sequences of A07 of similar masses (thin lines).[]{data-label="Fig:12"}](fig_12.eps){width="\columnwidth"}
![Cooling times of $1.16 M_\sun$ hydrogen-rich white dwarf models without phase separation resulting from the use of different chemical profiles. Solid red line corresponds to the cooling sequence using our current stellar evolutionary code but implanting the chemical profile considered in A07. Black dotted line corresponds to the cooling sequence calculated using our new chemical profile (plotted in the top right panel of Fig. \[Fig:profiles2\]).[]{data-label="Fig:13"}](fig_13.eps){width="\columnwidth"}
The present evolutionary sequences of ultra-massive white dwarfs constitute an improvement over those presented in A07. The comparison between the evolutionary sequences of both studies is presented in Fig. \[Fig:12\] for the $1.10 M_\sun$, $1.16 M_\sun$ and $1.23 M_\sun$ hydrogen-rich sequences. Note that appreciable differences in the cooling times exist between both sets of sequences. In particular, the present calculations predict shorter ages at intermediate luminosities, but this trend is reversed at very low surface luminosities, where our new sequences evolve markedly slower than those of A07.
To close the paper, we attempt to trace back the origin of such differences. We begin by examining the impact of the new chemical profiles, as compared with that used in A07 (as illustrated in Figure 4 of ), which is the same as that used for all white dwarf sequences in A07. To this end, we have computed two artificial white dwarf sequences by neglecting phase separation during crystallization. Comparison is made in Fig. \[Fig:13\], which shows the cooling times of a $1.16 M_\sun$ hydrogen-rich white dwarf model resulting from the use of the chemical profile considered in A07 (solid line) and the chemical profile employed in the current study (dotted line). Note that the new chemical profiles employed in the present study predict larger cooling times than the chemical profiles of [@1997ApJ...485..765G] considered in A07. This is due not only to the different core chemical stratification in both cases, but also to the different predictions for the helium buffer mass expected in the white dwarf envelopes, which affects the cooling rate of cool white dwarfs. In this sense, the full computation of the evolution of progenitor stars along the thermally pulsing SAGB constitutes an essential aspect that cannot be overlooked in any study of the cooling of massive white dwarfs.
Improvements in the microphysics considered in the computation of our new sequences also markedly impact the cooling times; this is particularly true regarding the treatment of conductive opacities and the release of latent heat during crystallization. Specifically, in the present sequences we make use of the conductive opacity as given in [@2007ApJ...661.1094C], in contrast to A07 where the older conductive opacities of [@1994ApJ...436..418I] were employed. The resulting impact on the cooling time becomes apparent from Fig. \[Fig:14\]. Here we compare the cooling times for $1.16 M_\sun$ white dwarf models having the same chemical composition as in A07 but adopting different microphysics. A close inspection of this figure reveals that the improvement in the microphysics considered in our current version of [LPCODE]{} as compared with that used in A07, particularly the conductive opacity at intermediate luminosity and the treatment of latent heat during the crystallization phase at lower luminosities, leads to shorter cooling times. Note that when we use the old microphysics (and the same chemical profile) we recover the results of A07.
{width="2\columnwidth"}
We conclude from Figs. \[Fig:13\] and \[Fig:14\] that the inclusion of detailed chemical profiles appropriate for massive white dwarfs resulting from SAGB progenitors, together with improvements in the microphysics, results in much more realistic evolutionary sequences for these white dwarfs than those presented in A07. These improvements, together with the consideration of the effects of phase separation of $^{20}$Ne and $^{16}$O during crystallization, yield accurate cooling times for ultra-massive white dwarfs.
![ H-rich white dwarf cooling sequences in the color-magnitude diagram in GAIA bands, together with the sample of white dwarfs within 100 pc, obtained by [@2018MNRAS.480.4505J]. The filled squares indicate the moment when crystallization begins in each white dwarf cooling sequence and the filled triangles indicate the moment when convective coupling occurs.[]{data-label="Fig:15"}](fig_15.eps){width="\columnwidth"}
[ Finally, we present our ultra-massive white dwarf cooling tracks in GAIA photometry bands: G, $\rm G_{BP}$ and $\rm G_{RP}$. These magnitudes have been obtained using detailed model atmospheres for H-composition described in . The cooling tracks are plotted in the color-magnitude diagram in Figure \[Fig:15\], together with the local sample of white dwarfs within 100 pc of the Sun from [@2018MNRAS.480.4505J], in the color range $-0.52<(\rm G_{BP}- G_{RP})<0.80$. The onset of crystallization in our cooling sequences is indicated with filled squares. Note that crystallization occurs at approximately the same magnitude, $G+5+5\log(\pi)\sim 12$. The moment when convective coupling occurs in each white dwarf sequence is also indicated, using filled triangles. Clearly, our ultra-massive white dwarf cooling tracks fall below the vast majority of the white dwarf sample. The reason for this lies in the mass distribution of the white dwarf sample, which exhibits a sharp peak around $0.6\,\rm M_\odot$ [@2019MNRAS.482.5222T]. Thus, the vast majority of white dwarfs are characterized by larger luminosities than those of our ultra-massive white dwarfs. However, a detailed analysis of this color-magnitude diagram is beyond the scope of the present paper and we simply present white dwarf colors for our ultra-massive white dwarfs, which are available for downloading.]{}
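The quoted crystallization magnitude follows the standard distance-modulus relation, $M_G = G + 5 + 5\log_{10}\pi$ with the parallax $\pi$ in arcsec. A minimal sketch of this conversion (the numerical values below are illustrative, not taken from the sample):

```python
import math

def absolute_g_magnitude(g_apparent, parallax_arcsec):
    """Absolute Gaia G magnitude from apparent G and parallax (arcsec):
    M_G = G + 5 + 5*log10(parallax), equivalent to M = G - 5*log10(d / 10 pc)."""
    return g_apparent + 5.0 + 5.0 * math.log10(parallax_arcsec)

# A source at 100 pc has parallax 0.01 arcsec; with an (assumed) apparent
# G = 17.0 this lands at the absolute magnitude ~12 quoted for the onset
# of crystallization.
print(absolute_g_magnitude(17.0, 0.01))  # -> 12.0
```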
Summary and conclusions {#conclusions}
=======================
In this paper we have studied the evolutionary properties of ultra-massive white dwarfs with $^{16}$O and $^{20}$Ne cores. For this purpose, we have calculated hydrogen-rich and hydrogen-deficient white dwarf cooling sequences of $1.10, 1.16, 1.23$ and $1.29 M_\sun$, resulting from solar metallicity progenitors, with the help of the [LPCODE]{} evolutionary code. These cooling sequences are appropriate for the study of the massive white dwarf population in the solar neighborhood resulting from the single evolution of progenitor stars. In our study we have considered initial chemical profiles for each white dwarf model consistent with predictions of the progenitor evolution with stellar masses in the range $9.0\leq M_{\rm ZAMS}/ M_\sun \leq 10.5$, as calculated in . These chemical profiles are the result of the computation of full evolutionary sequences from the ZAMS, through the core hydrogen burning, core helium burning, and the semidegenerate carbon burning during the thermally-pulsing SAGB phase. Hence, not only a realistic O-Ne inner profile is considered for each white dwarf mass, but also realistic chemical profiles and intershell masses built up during the SAGB are taken into account. In particular, the evolution through the entire SAGB phase provides us with realistic values of the total helium content necessary to compute realistic cooling times at low luminosities. We have calculated both hydrogen-rich and hydrogen-deficient white dwarf evolutionary sequences. In particular, our hydrogen-deficient sequences have been calculated by considering recent advancements in the treatment of energy transfer in dense helium atmospheres. Each evolutionary sequence was computed from the beginning of the cooling track at high luminosities down to the development of full Debye cooling at very low surface luminosities, $\log(L_\star/L_\sun)= -5.5$.
[ We also provide colors in the GAIA photometric bands for these white dwarf evolutionary sequences on the basis of model atmospheres of .]{}
A relevant aspect of our sequences is that we have included the release of energy and the ensuing core chemical redistribution resulting from the phase separation of $^{16}$O and $^{20}$Ne induced by the crystallization. This constitutes a major improvement as compared with previous studies on the subject, like those of A07 and . To this end, we incorporate the phase diagram of [@2010PhRvE..81c6107M] suitable for $^{16}$O and $^{20}$Ne plasma, which provides us also with the correct temperature of crystallization. In addition, our white dwarf models include element diffusion consistently with evolutionary processes.
The calculations presented here constitute the first set of fully evolutionary calculations of ultra-massive white dwarfs including realistic initial chemical profiles for each white dwarf mass, updated microphysics, and the effects of the phase separation process during crystallization. All these processes impact to a different extent the cooling times of ultra-massive white dwarfs. We find a marked dependence of the cooling times on the stellar mass at low luminosity and a fast cooling in our most massive sequences. In particular, our $1.29 \, M_\sun$ hydrogen-rich sequence reaches $\log(L_\star/L_\sun)= -5$ in only 3.6 Gyr, which is even shorter (2.4 Gyr) in the case of the hydrogen-deficient counterpart. Our results also show an enrichment of carbon in the outer layers of the hydrogen-deficient sequences at intermediate luminosities. We have also investigated the effect of element diffusion, and found that this process profoundly changes the inner abundance distribution from the very early stages of white dwarf evolution. In particular, the initial helium and carbon distributions below the hydrogen-rich envelope are substantially changed by the time evolution reaches low effective temperatures, thus impacting the cooling times at such advanced stages of evolution.
[ Our new cooling sequences indicate that all pulsating white dwarfs existing in the literature with masses higher than $1.10 M_\sun$ should have more than 80% of their mass crystallized if they harbour O-Ne cores. This is a relevant issue since crystallization has important consequences on the pulsational properties of massive ZZ Ceti stars. This aspect has recently been thoroughly explored in [@2018arXiv180703810D] on the basis of these new sequences, with relevant implications for the pulsational properties characterizing ultra-massive white dwarfs.]{}
In summary, we find that the use of detailed chemical profiles as given by progenitor evolution, and their time evolution resulting from element diffusion processes and from phase separation during crystallization, constitute important improvements as compared with existing calculations that have to be considered when assessing the cooling times and pulsational properties of ultra-massive white dwarfs. We hope that asteroseismological inferences of ultra-massive white dwarfs benefit from these new evolutionary sequences, helping to shed light on crystallization in the interior of white dwarfs.
This paper is devoted to the memory of Enrique García-Berro, without whose experience, talent and passion it would not have been possible. We strongly acknowledge A. Cumming for providing us with the phase diagram, a key physical ingredient required in our investigation, and L. Siess for the chemical profiles of his models. Part of this work was supported by AGENCIA through the Programa de Modernización Tecnológica BID 1728/OC-AR, and by the PIP 112-200801-00940 grant from CONICET. M3B is partially supported through ANPCyT grant PICT-2016-0053 and the MinCyT-DAAD bilateral cooperation program through grant DA/16/07. This research has made use of NASA’s Astrophysics Data System.
[^1]: A. Cumming, personal communication.
[^2]: These evolutionary sequences are available at [http://evolgroup.fcaglp.unlp.edu.ar/TRACKS/ultramassive.html]{}
[^3]: The chemical profiles of our hydrogen-deficient white dwarf models are the same, except that no hydrogen is present in the envelope.
[^4]: A. Cumming, personal communication.
---
abstract: 'We analyze spin-dependent energetics and conductance for one dimensional (1D) atomic carbon wires consisting of terminal magnetic (Co) and interior nonmagnetic (C) atoms sandwiched between gold electrodes, obtained employing first-principles gradient corrected density functional theory and Landauer’s formalism for conductance. Wires containing an even number of interior carbon atoms are found to be acetylenic with $\sigma-\pi$ bonding patterns, while cumulene structures are seen in wires containing odd numbers of interior carbon atoms, as a result of strong $\pi$-conjugation. Ground states of carbon wires containing up to 13 C atoms are found to have anti-parallel spin configurations of the two terminal Co atoms, while the 14 C wire has a parallel Co spin configuration in the ground state. The stability of the anti-ferromagnetic state in the wires is ascribed to a super-exchange effect. For the cumulenic wires this effect is constant for all wire lengths. For the acetylenic wires, the super-exchange effect diminishes as the wire length increases, going to zero for the atomic wire containing 14 carbon atoms. Conductance calculations at the zero bias limit show spin-valve behavior, with the parallel Co spin configuration giving higher conductance than the corresponding anti-parallel state, and a non-monotonic variation of conductance with the length of the wires for both spin configurations.'
address: |
Department of Physics, Applied Physics and Astronomy,\
Rensselaer Polytechnic Institute,\
Troy, NY 12180, USA.
author:
- 'L. Senapati, R. Pati, M. Mailman and S. K. Nayak'
title: 'First-principles investigation of spin polarized conductance in atomic carbon wires'
---
Introduction
============
Miniaturization from sub-micron conventional solid state devices to extreme small scale single organic molecule based devices has been the focus of intensive research in recent years, motivated by the emerging field of molecular scale electronics and quantum information technology. Controlled transport of electrons in molecular wires containing only a few atoms forms the basis of molecular scale electronics. Significant recent advances in experimental techniques have made it possible to fabricate nano-wires containing only a few atoms and measure their electrical properties.$^1$ Specifically, atomic carbon wires containing up to 20 atoms have been synthesized.$^2$ These carbon wires serve as ideal models to develop understanding of and to eventually control the mechanism of electron transport in finite one-dimensional (1D) systems. Previous theoretical and experimental studies$^{3-8}$ on atomic and molecular wires have been primarily limited to charge transport, with some recent exceptions.$^{9-14}$ Recent experimental measurements have shown that electron spin polarization can persist considerably longer than charge polarization.$^{15}$ It is consequently highly desirable to learn how to manipulate and enhance the control of electron transport offered by spin degrees of freedom, adding another dimension to the emerging field of molecular scale electronics and revealing important information for potential applications in spin-based molecular electronics (spintronics) as well as in quantum information processing.
Pure carbon clusters have a long history.$^{16,17}$ Clusters with less than 10 atoms are known to have low-energy linear structures characterized by cumulenic bonding (C=C=C=C) with near equal bond lengths. These structures are stabilized by strong $\pi$-conjugation between the double bonds which are alternately directed in the x- and y-planes perpendicular to the bonds.$^{16}$ Some of these clusters also possess cyclic forms, which also become the stable form for larger sizes.$^{17}$ Lang and Avouris calculated the conductance of such a cumulenic carbon atom wire, i.e., with all C-C bond lengths constrained to be equal, connected on both ends directly to metal electrodes.$^5$ They found an oscillatory behavior, with wires composed of odd numbers of carbon atoms having a higher conductance than even-numbered wires. This was contrary to expectations based on a simple molecular orbital theory of the cumulene structure, and was attributed to electron donation from the metal contacts into additional $\pi$-bonds formed between the terminal carbon atoms and the electrodes.
The presence of terminal magnetic atoms has recently been shown$^{14}$ to modify both the structures and conductance properties of these wires. Pati et al. have reported first-principles calculations$^{14}$ of spin-dependent electronic structures and energetics, as well as spin-polarized conductance, of small carbon wires containing up to five carbon atoms that are terminated by magnetic atoms which are in turn attached to gold electrodes. The magnetic atoms can act as spin polarizers or filters, resulting in a strong spin-valve effect. These results showed that when terminated by magnetic atoms, the $\pi$-conjugated structure is not necessarily the lowest energy structure. In particular, for the even-numbered carbon wires, the acetylenic structure with alternating $\sigma$- and multiple $\pi$-bonds becomes more stable. The calculations also showed that the ground states of these magnetically terminated carbon wires have anti-parallel terminal atom spin configurations, which could be rationalized by contributions from super-exchange effects. The conductance of the wires was seen to vary non-monotonically with wire length, just as in the pure carbon wires,$^5$ and was shown to depend strongly on the magnetic configuration of the terminal atoms, with the result that the wires could be made to act as a molecular spin valve.
This first study of spin-dependent properties of short carbon wires$^{14}$ raised a number of fundamental questions. In particular, what happens when the length of the carbon wire spacer between the magnetic atom species is increased? How do the wire structures and the relative energetics of the different spin configurations change? How do the ground state spin configurations depend upon the length? How does the effective exchange coupling between the magnetic species change with the number of carbon atoms in the wire? What is the critical length of the connecting wire up to which the super-exchange effects identified for the short wires survive?
In order to address these questions and to thereby improve our understanding of spin-dependent electron transfer in extended molecular systems, we extend here the work in ref. 14 up to wires containing 14 carbon atoms. Our first principles calculations, which explicitly include spin-polarization effects, reveal that the anti-parallel Co spin state remains the ground state for wires with up to 13 carbon atoms, and also show a continuation of the alternation between cumulenic and acetylenic structures for odd and even wire lengths, respectively. However, the 14 carbon atom wire is seen to have a parallel Co spin configuration in the ground state. Interestingly, we find that for wires containing an odd number of carbon atoms the energy difference (${\Delta} E$) between the anti-ferromagnetic and ferromagnetic state (anti-ferromagnetic being the ground state) remains constant as a function of wire length. In contrast, in the acetylenic carbon chains, this energy difference decreases exponentially as a function of the number of atoms, with the exception of the 2-carbon atom wire. Analysis of this change in ground state spin configuration (14 carbon atom wire) in terms of the super-exchange contribution allows us to estimate the characteristic length for super-exchange in acetylenic carbon wires as $\sim$20 Å. We also find that the $\pi$-conjugated cumulenic wires exhibit higher conductance than the acetylenic wires. Finally, the calculated magneto-conductance for different wire lengths shows a large difference between the two magnetization states, particularly for C-wires containing 13 and 14 C-atoms, suggesting potential applications in molecular magneto-electronics.
The remainder of the paper is organized as follows. Our computational approach is described in Section II. The results and discussions are presented in Section III. Section IV summarizes our results.
Computational Details
=====================
As in the previous study of short atomic wires,$^{14}$ we utilize an architecture consisting of chains of non-magnetic C-atoms connecting two magnetic Co atoms. The Co-(C)$_n$-Co wire structures, with n=6 to 14, are subsequently inserted between two gold metal electrodes for calculation of spin-polarized conductance. In a magnetic system like this, the total conductance can be evaluated as: $$g_t=g_{spin-conserved} + g_{spin-flip} ,$$ where g$_{spin-conserved}$ is the conductance from the spin-conserved part and g$_{spin-flip}$ is the conductance due to spin-flip scattering. The latter contribution plays a significant role only when spin-orbit coupling is significant. Since the spin-orbit coupling effect in highly ordered, strongly conjugated C-wire structures is expected to be small, leading to a large spin-flip scattering length, we have assumed the scattering to be coherent and have not included relativistic spin-orbit coupling effects in the present paper. The spin-conserved part of the conductance is calculated as: $$g_{spin-conserved}=g^\alpha + g^\beta,$$ where g$^\alpha$ and g$^\beta$ are the contributions to the conductance from up ($\alpha$) and down ($\beta$) spin states, respectively. Since at low bias the conduction primarily occurs in close proximity of the Fermi energy of the metal contact, we can use Landauer’s approach$^{18}$ to calculate g$^\alpha$ and g$^\beta$ at the Fermi energy. In the zero bias limit, we have: $$g^{\alpha(\beta)}{(E_f)} = \;\frac{e^2}{h}\;\ {T^{\alpha(\beta)}(E_f)},$$ where ${T^{\alpha(\beta)}(E_f)}$ is the transmission function for the spin up (${\alpha}$) or spin down (${\beta}$) electrons.
This is evaluated using the Green’s function derived from the Kohn-Sham matrix obtained from self-consistent spin unrestricted Density Functional calculations.$^{19}$ We have employed a gradient corrected Perdew-Wang 91 exchange and correlation functional$^{19}$ and a double numerical polarized basis set$^{20}$ for the calculation of energetics and magnetic structures. Both the spin configurations and geometry for parallel and anti-parallel magnetic states between the Co atoms are simultaneously optimized using the self-consistent DFT approach. Anti-parallel magnetic configurations between the Co atoms are obtained by making use of the broken symmetry formalism. Details of this procedure can be found in refs. 13 and 14. From the calculated spin-polarized conductance, we then estimate the magneto conductance (MC) according to: $$MC = \;\frac{g_t(\uparrow,\uparrow)-g_t(\uparrow,\downarrow)}{g_t(\uparrow,\uparrow)},$$ where ${g_t(\uparrow,\uparrow)}$ and ${g_t(\uparrow,\downarrow)}$ are given by the total conductance, Eq. (1), in the parallel and anti-parallel configurations, respectively.
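The zero-bias Landauer expressions above can be illustrated with a toy model: each spin channel is treated as a single resonant level in the wide-band limit (a Breit-Wigner transmission), so that the spin-conserved conductance of Eqs. (2)-(3) is a sum over the two channels. The lead couplings `gamma` and the exchange shift `delta` below are assumed illustrative parameters, not values from the paper:

```python
# Toy sketch of the spin-conserved Landauer conductance (units of e^2/h).
# The single-level wide-band model and all numbers are assumptions chosen
# only to illustrate Eqs. (2)-(3); they do not come from the DFT results.

def breit_wigner_T(E, eps, gamma_l, gamma_r):
    """Transmission of one level coupled to two leads in the wide-band
    limit: T(E) = G_L * G_R / ((E - eps)^2 + ((G_L + G_R)/2)^2)."""
    return gamma_l * gamma_r / ((E - eps) ** 2 + 0.25 * (gamma_l + gamma_r) ** 2)

E_f, gamma, delta = 0.0, 0.5, 0.3  # Fermi energy, lead coupling, exchange shift
g_up = breit_wigner_T(E_f, 0.0, gamma, gamma)    # up-spin level on resonance
g_dn = breit_wigner_T(E_f, delta, gamma, gamma)  # down-spin level shifted by delta
g_total = g_up + g_dn                            # Eq. (2), in units of e^2/h
print(round(g_total, 3))  # -> 1.735
```

The asymmetry between `g_up` and `g_dn` is what makes the two Co magnetization states conduct differently in the full calculation.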
Results and Discussion
======================
Structures, magnetic properties, and energetics
------------------------------------------------
Using the procedures summarized in the previous section, we have optimized both the spin state and geometry for the Co-(C)$_{n=6-14}$-Co wire structures in parallel and anti-parallel magnetic configurations between the Co atoms. Similar to the earlier report for short atomic wires containing up to 5 carbon atoms,$^{14}$ we find a clear $\pi-\pi$ and ${\sigma-\pi}$ bonding pattern for the C-atoms in the wire. This is illustrated in Fig. 1 for n=11 and 12 carbon atom wires, which shows the ground state
structures of both magnetic configurations. The bond distances in the even carbon wires show a clear alternation, both for ferro and antiferro configurations, consistent with ${\sigma-\pi}$ bonding. In contrast, the odd wires show no evidence of bond alternation, consistent with a pure $\pi$-conjugated structure. Comparing the energy for parallel (ferro) and anti-parallel (antiferro) spin configurations in the wires, we find that the anti-ferromagnetic state is lower in energy for all wires studied here, except n=14, which shows a ferromagnetic ground state. The calculated energy difference between ferromagnetic and anti-ferromagnetic configurations, ${\Delta}E=E(\uparrow,\downarrow)-E(\uparrow,\uparrow)$, is shown as a function of number of C-atoms in the atomic wire in Fig. 2. For comparison purposes we have also shown here the results for short atomic wires obtained in Ref. 14. The energy difference between the two magnetization states is found to be larger than $k_BT$ at room temperature, suggesting that these anti-ferromagnetic states are stable under normal operating conditions. The lower energy for the anti-ferromagnetic spin configuration between the terminal Co atoms can be attributed to a super-exchange interaction that is facilitated by strong overlap of the magnetic Co and the non-magnetic C-atoms. A careful analysis of ${\Delta}E$
as a function of wire length suggests that for the strongly $\pi$-conjugated wires, i.e. cumulenic structures (odd number of C-atoms), the energy difference is approximately independent of the number of C-atoms in the wire. For $\sigma-\pi$ conjugated C-wires (even number of C-atoms), ${\Delta}E$ exhibits a non-monotonic behavior with wire length. In particular, the anti-ferromagnetic states for C-wires containing 2, 4 and 6 carbon atoms are more stable than those for the $\pi$-conjugated C-wires. For acetylenic wires, ${\Delta}E$ decreases in an exponential manner (with the exception of the 2 carbon atom wire) and is found to be negative for the wire containing 14 C-atoms. This suggests that the super-exchange, which stabilizes the anti-ferromagnetic phase in $\sigma-\pi$ conjugated systems, attenuates exponentially and becomes negligible for a wire containing 14 C-atoms. This allows us to estimate the super-exchange characteristic length for a $\sigma-\pi$ conjugated system to be $\sim$20 Å.
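A decay length of this kind can be extracted by assuming ${\Delta}E(L) = A\,e^{-L/L_0}$ and fitting $\ln{\Delta}E$ linearly against the wire length $L$. The sketch below uses made-up placeholder $(L, {\Delta}E)$ pairs (not values from the calculations) chosen so the fitted $L_0$ comes out near 20 Å:

```python
import math

# Hypothetical (wire length in Angstrom, Delta_E in eV) pairs -- placeholders
# only, illustrating the fitting procedure rather than reporting real data.
data = [(5.2, 0.116), (7.8, 0.102), (10.4, 0.089), (13.0, 0.078)]

xs = [L for L, _ in data]
ys = [math.log(dE) for _, dE in data]       # linearize the assumed exponential
n = len(data)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
# Ordinary least-squares slope of ln(Delta_E) vs L
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
L0 = -1.0 / slope                           # characteristic decay length (A)
print(round(L0, 1))
```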
The additional stability for the wires containing 2, 4 and 6 C-atoms compared to 1, 3 and 5 C-atom wires could be explained as follows. In wires containing 1, 3 and 5 C-atoms, the exchange interaction is facilitated between the two terminal Co atoms through delocalized spins shared by both Co atoms, further stabilizing the ferromagnetic coupling compared to that in short even atom wires. Similar ferromagnetic stabilization has been seen in the Fe$^{3+}$-Fe$^{2+}$ compounds.$^{21}$ This extra stability of ferromagnetic ordering in the short odd atom wires leads to a smaller ${\Delta}E$ compared to wires containing 2, 4 and 6 C-atoms. Thus both double exchange and super exchange play an important role in stabilizing the magnetic ordering in these systems. In odd C-wires, double exchange and super exchange effects remain constant due to continuous $\pi$-conjugation. However, the super exchange effect exceeds the double exchange effect, resulting in anti-ferromagnetic configurations as the ground states. In even C-wires containing 2, 4 and 6 C-atoms, the double exchange effect is smaller compared to 1, 3, and 5 C-wires, respectively. This is due to the lack of spin delocalization (Fig. 3) in even wires, which in effect destabilizes ferromagnetic ordering, leading to a large energy difference between the two magnetic configurations (${\Delta}E$ in Fig. 2).
Spin Polarized Conductance.
----------------------------
Using Landauer’s approach as outlined in Section II, we have calculated the spin-polarized conductance in the zero bias limit. The results are summarized in Fig. 4. Several interesting features are apparent here. First, the conductance in the C-wire is found to be higher for the parallel than for the anti-parallel spin configuration of the terminal Co-atoms. This is a prerequisite for the spin-valve effect, which is primarily due to spin-dependent scattering and which has been observed in magnetic/non-magnetic bulk heterostructures.$^{22,23}$ Second, both parallel and anti-parallel spin configurations show oscillations in conductance as a function of the wire length. For the parallel spin configurations, the conductance oscillation is damped after n=8 C-atoms and remains almost constant at about $1g_0$ $(g_0={2e^2/h})$ for the wires with n=12, 13 and 14 C-atoms. In contrast, in the anti-parallel case, the conductance is seen to decrease as n increases and to finally vanish for wires containing 12 and 14 C-atoms. The faster decrease of conductance with the wire length for even C-wires in the anti-parallel case is due to the presence of $\sigma$-bonds in these systems, which can act as tunnel barriers for electron conduction. In fact, recent calculations on $\sigma$-bonded structures have shown that the tunnel barrier increases with increasing wire length,$^{24}$ leading to an exponential decay in the electronic conduction. In contrast, in odd C-atom wires, the $\pi$-orbitals are highly delocalized, providing pathways for electron transfer and consequently leading to higher conductance, as seen in Fig. 4. As discussed above, the super-exchange effect vanishes for a wire containing 14 C-atoms.
To understand the oscillatory pattern in conductance, we have also calculated the Mulliken charges and spin densities at individual atoms for different wire lengths. The spin density at the Co atoms, i.e. the difference between the number of spin up and spin down electrons, is shown in Fig. 3. We see that for both parallel and anti-parallel magnetization states, the spin density at the Co-atoms oscillates with the number of C-atoms in the wire. Also, the $\sigma-\pi$ conjugated wires show a higher atomic spin density at Co than the $\pi$-conjugated wires. This is not surprising since, as noted above, the $\pi$-conjugated systems have a stronger delocalization of spin compared to that in the $\sigma-\pi$ conjugated wires. We have also calculated the magneto conductance $(MC)$ according to Eq. (4). Fig. 5 summarizes the ${MC}$ values as a function
of the number of C-atoms in the wires, with ${MC}$ displayed as a percentage. We find an oscillatory behavior in the magneto conductance, with a maximum value of $100\%$ change in magneto resistance between the parallel and anti-parallel magnetization states for wires containing 12 and 14 C-atoms. This huge change in resistance between the two magnetization states suggests potentially useful applications of these nanoscale materials for molecular magneto-electronics.
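The percentage form of Eq. (4) is straightforward; a short sketch with hypothetical conductances (in units of $e^2/h$, not values read off Fig. 4):

```python
def magneto_conductance_percent(g_parallel, g_antiparallel):
    """Eq. (4): MC = (g(par) - g(antipar)) / g(par), expressed in percent."""
    return 100.0 * (g_parallel - g_antiparallel) / g_parallel

# Vanishing anti-parallel conductance (as found for the 12 and 14 C-atom wires)
print(round(magneto_conductance_percent(1.0, 0.0), 1))  # -> 100.0
# A partial spin-valve contrast with made-up conductance values
print(round(magneto_conductance_percent(1.2, 0.9), 1))  # -> 25.0
```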
Conclusions
===========
We have investigated the chain-length dependent magnetic structures and energetics associated with highly conjugated C-wires sandwiched between magnetic Co atom species using the gradient corrected density functional approach. The Co-terminated wires show an alternation of structure between cumulenic for odd numbers of C atoms and acetylenic for even numbers of C atoms. The spin-polarized conductance was calculated as a function of number of C-atoms in the wire in the zero bias limit using Landauer’s formalism. These length dependent calculations reveal an oscillatory pattern in conductance, with a significantly higher conductance arising for the parallel magnetization state compared to that for the anti-parallel magnetization state. The ground state C-wire structures containing up to 13 carbon atoms are found to have an anti-ferromagnetic spin configuration of the terminal Co atoms. In contrast, the wire with 14 carbon atoms is found to have a parallel Co spin configuration in the ground state. The energy difference between the parallel and anti-parallel magnetization states is found to be larger than $k_BT$ at room temperature, suggesting that these two magnetization states are not interchangeable at normal operating temperatures. The stability of the anti-ferromagnetic spin configuration between the terminal Co atoms is seen to arise from a super-exchange interaction that is facilitated by strong orbital overlap of the terminal magnetic Co atoms and the non-magnetic C-atoms of the wire. This effect vanishes for the C-wire containing 14 C-atoms, explaining the switch to a more stable parallel spin configuration. For carbon wires containing 12 and 14 C-atoms, we found almost no conductance in anti-parallel spin configurations, suggesting that the characteristic length for the super-exchange interaction in these $\sigma-\pi$ conjugated carbon wires is about 20 Å.
A maximum value of $100\%$ change in magneto resistance was obtained for carbon wires containing 12 and 14 carbon atoms.
[**Acknowledgments**]{} We thank K.B. Whaley and J. Schrier for useful discussions on the length dependence of bond alternation and super-exchange, and for helpful comments on the manuscript. SKN also would like to thank Professor Z. Soos for helpful discussions. This work was supported by the NSF funded Nanoscale Science and Engineering Center at RPI. This work was also partially supported by National Computational Science Alliance under Grant Nos. MCA01S014N and DMR020003N, and by the ACS Petroleum Research Fund.
A. I. Yanson, I. K. Yanson, and J. M. van Ruitenbeek, Nature [**400**]{}, 144 (1999).
G. Roth and H. Fischer, Organometallics [**15**]{}, 5766 (1996).
V. Mujica, M. Kemp, A. Roitberg, and M. Ratner, J. Chem. Phys. [**104**]{}, 7296 (1996).
W. Tian, S. Datta, S. Hong, R. Riefenberger, J. I. Henderson, and C. P. Kubiak, J. Chem. Phys. [**109**]{}, 2874 (1998).
N. D. Lang and Ph. Avouris, Phys. Rev. Lett. [**81**]{}, 3515 (1998).
B. Larade, J. Taylor, H. Mehrez, and H. Guo, Phys. Rev. B [**64**]{}, 075420 (2001).
M. Di Ventra, S. T. Pantelides, and N. D. Lang, Phys. Rev. Lett. [**84**]{}, 979 (2000).
M. A. Reed, C. Zhou, C. J. Muller, T. P. Burgin, and J. M. Tour, Science [**278**]{}, 252 (1997).
E. Emberly and G. Kirczenow, Chem. Phys. [**281**]{}, 311 (2002).
M. Zwolak and M. Di Ventra, Appl. Phys. Lett. [**81**]{}, 925 (2002).
M. Ouyang and D. D. Awschalom, Science [**301**]{}, 1074 (2003).
K. Tsukagoshi, B. W. Alphenaar, and H. Ago, Nature [**401**]{}, 572 (1999).
R. Pati, L. Senapati, P. M. Ajayan, and S. K. Nayak, Phys. Rev. B [**68**]{}, 100407 (2003).
R. Pati, M. Mailman, L. Senapati, P. M. Ajayan, S. D. Mahanti, and S. K. Nayak, Phys. Rev. B [**68**]{}, 014412 (2003).
S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. von Molnar, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger, Science [**294**]{}, 1488 (2001).
K. S. Pitzer and E. Clementi, J. Am. Chem. Soc. [**81**]{}, 4477 (1959).
For a review, see A. Van Orden and R. J. Saykally, Chem. Rev. [**98**]{}, 2313 (1998).
S. Datta, Electron Transport in Mesoscopic Systems (Cambridge University Press, Cambridge, 1997).
R. G. Parr and W. Yang, Density-Functional Theory of Atoms and Molecules (Oxford Science Publications, 1994).
DMOL code: Biosym Technologies Inc., San Diego, CA, 1995.
J. R. Hagadorn, L. Que Jr., W. B. Tolman, I. Prisecaru, and E. Münck, J. Am. Chem. Soc. [**121**]{}, 9740 (1999).
M. N. Baibich, J. M. Broto, A. Fert, F. Nguyen Van Dau, F. Petroff, P. Etienne, G. Creuzet, A. Friederich, and J. Chazelas, Phys. Rev. Lett. [**61**]{}, 2472 (1988).
P. Lang, R. Nordström, R. Zeller, and P. H. Dederichs, Phys. Rev. Lett. [**71**]{}, 1927 (1993).
R. Pati and S. P. Karna, Chem. Phys. Lett. [**351**]{}, 302 (2002).
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We consider the simple hypothesis of letting quantum systems have an inherent random nature. Using well-known stochastic methods we derive a stochastic evolution operator which lets us define a stochastic density operator whose expectation value, under certain conditions, satisfies a Lindblad equation. As natural consequences of this assumption, decoherence and spontaneous emission processes are obtained within the same conceptual scheme. A tentative solution for the preferred-basis problem is suggested. All this is illustrated with a comprehensive study of the evolution of a two-level quantum system.'
author:
- |
D. Salgado[^1] & J.L. Sánchez-Gómez[^2]\
Dpto. Física Teórica, Universidad Autónoma de Madrid, Spain
title: |
**Lindblad Evolution and Stochasticity:**\
**the Case of a Two-Level System**
---
Introduction
============
Stochastic methods are lately being profusely used in Quantum Mechanics in general and in Quantum Optics in particular to better study and analyze the physical processes taking place during the interaction between a system and a measuring apparatus. The conceptual framework is very clear, consisting of an open quantum system which is being continuously monitored by an appropriate device. This situation is very well described using stochastic unravellings of the Lindblad equation for the system density operator, the most general expression of which for the diffusive case has recently been given in [@Diosi]. Needless to say, the presence of an observer is compulsory for orthodox quantum principles to be applied.
On the other hand, in the early years of Quantum Mechanics the role of the observer was crucial both to justify the indeterministic nature of the results of a measurement upon quantum systems [@Dirac] and to understand the lack of quantum interference in, say, a double-slit experiment. Nowadays it is understood that this lack of interference is not strictly due to the presence of the observer, but to the possibility of knowing which path the particle followed [@Englert]. Keeping orthodox principles [@Dirac] as far as possible, the role of the observer is then left as the only justification for the indeterminism in the measurement process of quantum systems.
In this article, we consider the possibility of reducing once more the role of the observer and study the consequences of letting quantum systems have an intrinsic stochastic nature. This hypothesis has already given rise to different stochastic evolution models (cf. [@Ghirardi] and references therein to previous works), the main purpose of which was to reproduce faithfully all nonrelativistic quantum predictions with the sole exception of quantum superpositions among macroscopically distinguishable states. By concentrating upon a very simple quantum system, namely a two-level system, we will show how this stochastic assumption allows us to make predictions otherwise restricted to relativistic extensions of Quantum Mechanics (QED) or achievable only by resorting to the theory of open quantum systems. We will restrict ourselves to the nonrelativistic domain, i.e. we will keep on using state vectors to describe physical systems, something which is impossible in an attempt to merge relativistic principles with Quantum Theory.
Motivation and General Framework {#MotGenFram}
================================
The main physical hypothesis we address here is the possibility of endowing the evolution operator of a quantum system with a stochastic nature. To be concrete, we will deal with a two-level quantum system in particular. This assumption has already been made elsewhere (cf. e.g. [@Giulini] for a general overview) and can be motivated in two alternative ways. On one hand, any system, whether quantum or classical, is continuously subjected to the external influence of its environment, however small that influence may be. This standpoint is assumed in the program of decoherence; in fact it is its conceptual starting point [@Giulini]. Such external influences are, needless to say, uncontrollable, so we may encode them using stochastic methods. This way of proceeding is similar, for instance, to the modelling of the evolution of a Lorentz particle [@vanKamp]. In van Kampen’s terminology it corresponds to what he calls *external noise*, which is due essentially to the huge amount of uncontrollable external factors that affect the system evolution.
Alternatively, we may adopt the complementary point of view, stating that the stochasticity arises just as *internal* noise (cf. again [@vanKamp]). This second standpoint is subtler and, in our opinion, much less intuitive but hopefully more accurate. Let us use a canonical example (which will be extensively studied below): a two-level quantum system in the electromagnetic vacuum. Within the non-relativistic quantum-mechanical standpoint the evolution of this system is completely determined by its hamiltonian $H_{0}$, and since the latter is time-independent, the system is stationary, i.e. the energy does not change. Within the QED framework, the description changes to include the effect of *vacuum fluctuations*. These fluctuations, quantum in nature, drive the system to its ground energy level, thus producing *spontaneous* decay. This situation can also be understood by letting the system be modelled by a stochastic evolution, rooting the stochasticity in this quantum vacuum. Since these fluctuations do not occur in real ordinary space, but in Hilbert space, we use stochastic evolution operators to describe them. The term *internal* is in this situation a bit dubious, since the electromagnetic vacuum can hardly be thought of as internal to the two-level system. We should understand *internal* as making reference to the essence itself of the system. As a matter of fact this vacuum is only detectable through its effects on a quantum system, so in a certain sense the evolution of the system should also contain such effects.
From a mathematical point of view the question of the origin of the stochasticity is secondary, and everything reduces to finding the form of such a stochastic evolution operator. So we proceed by substituting $U(t)\rightsquigarrow U_{st}(t)$ and then investigating the form of ${U_{st}(t)}$. To do this we resort to a general decomposition theorem for real random variables [@Nualart], the generalization of which to (bounded) operator-valued random variables we take for granted.
\[DecompTh\] Let $X$ be an operator-valued random variable acting upon an $N$-dimensional Hilbert space $\mathfrak{H}$. Then there exist $N\times N$ operator-valued processes $v_{k}(t)$ such that $$X=\mathbb{E}X+\sum_{k=1}^{N^{2}}\int_{T}v_{k}(s)dW_{s}^{k},$$ where $\mathbb{E}$ denotes the expectation value with respect to the probability measure and $W_{t}^{k}$ are $N^{2}$ complex Wiener processes [^3].
Expressing the latter integral as a function of the upper interval limit ($T\rightsquigarrow\mathbb{R}^{+}$) we may write the most general form of a stochastic evolution operator as
$$\label{StEvolOper}
U_{st}(t)=\mathbb{E}U_{st}(t)+\sum_{k=1}^{N^{2}}\int_{0}^{t}v_{k}(s)dW_{s}^{k},$$
where in general the Wiener processes will have the covariance matrix given by
$$\mathbb{E}[W_{t}^{k}W_{t}^{k'*}]=a_{kk'}t.$$
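Numerically, increments of such correlated complex Wiener processes can be generated from a Cholesky factor of the covariance matrix $a$. The following sketch (using our own convention $\mathbb{E}[\xi\xi^{*}]=1$ for the underlying standard complex normals) checks that the sampled increments indeed reproduce the covariance $a\,dt$:

```python
import numpy as np

def complex_wiener_increments(a, dt, n_steps, rng):
    """Sample increments dW of correlated complex Wiener processes with
    covariance E[W_t^k W_t^{k'*}] = a_{kk'} t; `a` is Hermitian PSD."""
    L = np.linalg.cholesky(a)                  # a = L L^dagger
    n = a.shape[0]
    # iid standard complex normals xi with E[xi xi^*] = 1, E[xi^2] = 0
    xi = (rng.standard_normal((n_steps, n))
          + 1j * rng.standard_normal((n_steps, n))) / np.sqrt(2.0)
    return np.sqrt(dt) * xi @ L.T              # row k holds (dW_k^1, ..., dW_k^n)

rng = np.random.default_rng(1)
a = np.array([[1.0, 0.5],
              [0.5, 1.0]])                     # example covariance a_{kk'}
dt = 1e-3
dW = complex_wiener_increments(a, dt, 100_000, rng)
# empirical covariance of the increments should approach a * dt
cov = dW.T @ dW.conj() / dW.shape[0]
```

Here the matrix `a` and its size are our own illustrative choices; nothing in the scheme requires $N=2$ processes.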
The connection with ordinary Quantum Mechanics is made by stating
$$\label{DensOp}
\rho_{QM}(t)=\mathbb{E}\rho_{st}(t),$$
where $\rho_{st}(t)$ is the system density operator induced by the previous stochastic evolution operator, namely
$$\rho_{st}(t)={U_{st}(t)}\rho(0)U^{\dagger}_{st}(t).$$
Henceforward we adopt the same assumptions as in the orthodox formalism; in particular, we require trace-preserving evolution. This immediately imposes conditions on the absolute value of $\mathbb{E}{U_{st}(t)}$, as the following proposition shows:
\[ExpVal\] If $\rho_{QM}(t)$ is defined as (\[DensOp\]) and ${\textrm{tr}}\rho_{QM}(t)={\textrm{tr}}\rho_{QM}(0)$, then $$\label{ExpValOpEvol}
\mathbb{E}{U_{st}(t)}=\exp(-iHt)\left(I-\sum_{nm=1}^{N^{2}}a_{nm}\int_{0}^{t}v_{m}^{\dagger}(s)v_{n}(s)ds\right)^{1/2}\hspace{-2mm},$$ where $H$ is a selfadjoint operator.
Cf. Appendix A.
A few comments are in order. Firstly, the selfadjoint operator $H$ is to be identified with the hamiltonian of the system. Secondly, note, as expected, that if $v(t)\equiv 0$, then we recover the usual quantum-mechanical formalism, thus reinforcing the idea that the new effects are contained only in the stochastic part of the evolution operator, leaving the hamiltonian part of the evolution unchanged. Thirdly, the square root imposes a restriction upon $\sum_{nm}a_{nm}v_{m}^{\dagger}(t)v_{n}(t)$, since
$$I-\sum_{nm=1}^{N^{2}}a_{nm}\int_{0}^{t}v_{m}^{\dagger}(s)v_{n}(s)ds\geq 0$$
must hold for all $t\geq 0$. Finally, ${U_{st}(t)}$ is not a unitary operator; unitarity[^4] is only obtained in stochastic average, i.e.
$$\label{StUnit}
\mathbb{E}[U^{\dagger}_{st}(t){U_{st}(t)}]=I.$$
This last feature is the crucial difference with other stochastic evolution schemes sharing the same philosophy (especially those assumed in [@Partha] and [@Adler]). There, as in the scheme we propose here, a basic stochastic evolution is assumed, described by two operators, namely the hamiltonian $H$ (for the deterministic part) and an *ad hoc* operator $L$ which gives rise to the Lindblad operator [@Lindblad] after the whole calculation is carried out. Nevertheless the stochastic nature appears only as a modification of the evolution operator generator. For instance, example 30.1 of [@Partha] proposes the *ansatz*
$$T(t)[\rho(0)]=\mathbb{E}[e^{iW_{t}L}\rho(0)e^{-iW_{t}L}],$$
thus obtaining a Lindblad-type generator[^5]
$$\theta[X]=\frac{1}{2}\left\{ [LX,L]+[L,XL] \right\}.$$
This is straightforwardly generalized to more than one Lindblad operator $L_{k}$. Much the same is arrived at, in a simpler and more intuitive form, in [@Adler]. There stochastic nature is conferred on the evolution operator only by adding random parts to its generator, thus incurring some further restrictions on the final evolution, such as only selfadjoint Lindblad operators, and consequently not achieving the desirable most general situation (any Lindblad operator). Parthasarathy has however provided a scheme to obtain general Lindblad evolution, namely the evolution
$$T_{t}[X]=\mathbb{E}[U^{\dagger N(t)}XU^{N(t)}],$$
where $N(t)$ is a Poisson process with intensity $\lambda$ and $L=\sqrt{\lambda}U$. Needless to say, though he arrives at a Lindblad generator $\frac{1}{2}\left\{ [L\cdot,L^{\dagger}]+[L,\cdot L^{\dagger}] \right\}$, the physical interpretation of such a stochastic evolution operator is rather elusive. So we find that to keep the intuitive approach of Adler we must sacrifice some generality, whereas to attain the generality of Parthasarathy we lose physical intuition. In order to combine both characteristics, we have assumed a less restrictive position.
However this generality does not drive us directly to a Lindblad evolution, as we shall see. Some restrictions must be imposed on the added random parts $v_{k}(t)$. These restrictions possess, like the whole scheme, a rather intuitive physical interpretation. To see how the Lindblad evolution appears we focus on the differential version of (\[DensOp\]). The starting point is the following
If $\rho(t)$ is defined as (\[DensOp\]), then it satisfies the differential equation $$\label{DifEvolDensOp}
\dot{\rho}(t)=L_{t}[\rho(t)]+\widetilde{L}_{t}[\rho(0)],$$
where for any $X$
$$\begin{aligned}
L_{t}[X]&=&-i[H,X]+{\nonumber}\\
&+&\frac{1}{2}\sum_{nm=1}^{N^{2}}a_{nm}\left([\ell_{n}(t)X,\ell_{m}^{\dagger}(t)]+[\ell_{n}(t),X\ell_{m}^{\dagger}(t)]\right){\nonumber}\\
\widetilde{L}_{t}[X]&=&-\sum_{nm=1}^{N^{2}}a_{nm}L_{t}\left[\int_{0}^{t}v_{n}(s)Xv_{m}^{\dagger}(s)ds\right]\end{aligned}$$
and by construction we have defined
$$\label{Lindblads}
\ell_{n}(t)=v_{n}(t)\mathbb{E}[U_{st}]^{-1}(t).$$
This result is straightforwardly obtained by differentiating the expression (\[DensOp\]) for $\rho_{QM}(t)$ with respect to time, using expression (\[ExpValOpEvol\]) for ${U_{st}(t)}$, and identifying the previous definitions.
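A quick numerical sanity check of the generator $L_{t}$ defined above: since both of its dissipative contributions are commutators, it annihilates the trace for *any* operators $\ell_{n}$ and any coefficient matrix $a_{nm}$, which is precisely what makes the evolution trace-preserving. A minimal sketch for $N=2$ with random matrices (our own normalization conventions):

```python
import numpy as np

rng = np.random.default_rng(0)

def lindblad_generator(H, ells, a, X):
    """L_t[X] = -i[H, X] + (1/2) sum_nm a_nm ([l_n X, l_m^+] + [l_n, X l_m^+])."""
    out = -1j * (H @ X - X @ H)
    for n, ln in enumerate(ells):
        for m, lm in enumerate(ells):
            lmd = lm.conj().T
            out += 0.5 * a[n, m] * ((ln @ X @ lmd - lmd @ ln @ X)    # [l_n X, l_m^+]
                                    + (ln @ X @ lmd - X @ lmd @ ln)) # [l_n, X l_m^+]
    return out

def random_complex(n, rng):
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

N = 2
H = random_complex(N, rng)
H = H + H.conj().T                      # make H selfadjoint
ells = [random_complex(N, rng) for _ in range(N ** 2)]
B = random_complex(N ** 2, rng)
a = B @ B.conj().T                      # Hermitian positive semi-definite a_nm
X = random_complex(N, rng)
trace_of_LX = np.trace(lindblad_generator(H, ells, a, X))  # vanishes identically
```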
The equation (\[DifEvolDensOp\]) is almost a Lindblad differential equation. There are only two differences:
1. There is an extra term $\widetilde{L}_{t}[\rho(0)]$ which spoils the Markovianity. This on the other hand is also obtained in the more general quantum evolution of a subsystem in the orthodox formalism (cf. [@GFVKS] or [@Alicki]). The physical conditions to be met to guarantee Markovianity must be found.
2. The would-be Lindblad operators $\ell_{n}(t)$ are time-dependent. Though time-dependent Lindblad operators can be found in the literature (cf. [@Alicki] and references therein), this is not the usual case upon which we will focus.
Thus we must find the conditions under which, on one hand, $\widetilde{L}_{t}[\rho(0)]=0$ holds and, on the other hand, $\ell_{k}(t)\rightsquigarrow\ell_{k}$. In the next section we will comprehensively analyze a two-level system to gain more physical insight into the question instead of trying to obtain the general conditions.
However, notice that a first partial general result can be obtained as to whether the fluctuation part $v(t)$ preserves energy, once the second term $\widetilde{L}_{t}[\rho(0)]$ is shown to be negligible:
Both $[v(t),v^{\dagger}(t)]=0$ and $[H,v(t)]=0$ are, each on its own, sufficient conditions for energy conservation, i.e.
$$\frac{dE(t)}{dt}\equiv\frac{d}{dt}{\textrm{tr}}H\rho(t)=0.$$
The proof is elementary, using (\[DifEvolDensOp\]) with $\widetilde{L}_{t}[\rho(0)]=0$ and the cyclic property of the trace.
This result may be useful to check whether we have a decaying or pumping interaction or, on the contrary, a decohering but energy-preserving evolution (see below).
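For the two-level case this distinction is easy to verify directly on the (Markovian) dissipator: an operator commuting with $H$ leaves $\textrm{tr}\,H\rho$ untouched, while a decay-type operator does not. A sketch with our own sample values:

```python
import numpy as np

def dissipator(ell, rho):
    """(1/2)([ell rho, ell^+] + [ell, rho ell^+]) = ell rho ell^+ - (1/2){ell^+ ell, rho}."""
    ad = ell.conj().T
    return ell @ rho @ ad - 0.5 * (ad @ ell @ rho + rho @ ad @ ell)

omega0, gamma = 1.0, 0.2                     # hbar = 1, sample parameters
H = omega0 * np.diag([1.0, -1.0])
rho = np.array([[0.7, 0.3 - 0.1j],
                [0.3 + 0.1j, 0.3]])          # a valid density matrix

# [H, v] = 0 (dephasing-type coupling): energy is conserved
ell_dephase = np.sqrt(gamma) * np.diag([1.0, 0.0])
dE_dephase = np.trace(H @ dissipator(ell_dephase, rho)).real

# neither [H, v] = 0 nor [v, v^+] = 0 (decay-type coupling): energy changes
ell_decay = np.sqrt(gamma) * np.array([[0.0, 0.0],
                                       [1.0, 0.0]])
dE_decay = np.trace(H @ dissipator(ell_decay, rho)).real
```

For the decay-type coupling one finds $dE/dt=-2\omega_{0}\gamma\rho_{11}$, i.e. the excited state empties and the system loses energy, as expected.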
Before working out the announced two-level quantum system example, we will briefly state the main differences between previous stochastic quantum evolution models and our scheme. Let us consider the Quantum State Diffusion model (cf. [@Percival]). This model corresponds to stochastic unravellings of the Lindblad equation under the hypothesis of state-vector normalization, which is implemented in the following way. If the stochastic state-vector satisfies the QSD equation
$$d|\psi_{st}\rangle=|v\rangle dt+|f\rangle d\xi,$$
then the normalization preservation is achieved by forcing
$$\label{OrtCond}
\langle\psi_{st}|f\rangle=0.$$
As $|\psi_{st}\rangle$ must stochastically unravel the Lindblad equation, then
$$\label{StocUnrav}
\rho(t)=\mathbb{E}[|\psi_{st}(t)\rangle\langle\psi_{st}(t)|].$$
The QSD unravelling for the Lindblad equation with one Lindblad operator is
$$d|\psi_{st}\rangle=-iH|\psi_{st}\rangle dt+\left(\langle L^{\dagger}\rangle L-\frac{1}{2}L^{\dagger} L-\frac{1}{2}\langle L^{\dagger}\rangle\langle L \rangle\right)|\psi_{st}\rangle dt+(L-\langle L\rangle)|\psi_{st}\rangle d\xi,$$
where $L$ denotes the Lindblad operator entering the corresponding Lindblad equation and $\xi$ denotes white noise (cf. [@Percival]). Note that this equation is **non-local** and **non-linear**. The difference with the scheme proposed here is twofold. Firstly, we do not assume the Lindblad equation as a starting hypothesis; we arrive at it only after finding some physical conditions (cf. below). Secondly, we do not impose normalization of $|\psi_{st}\rangle$; we only impose the condition
$$\mathbb{E}[|\langle\psi_{st}(t)|\psi_{st}(t)\rangle|^{2}]=1,$$
i.e. normalization in stochastic average. To better appreciate these differences, we will explicitly write down the evolution equation for $|\psi_{st}\rangle$ derived from the evolution operator (\[StEvolOper\]):
$$\begin{aligned}
d|\psi_{st}\rangle&=&(-iH-\frac{1}{2}\ell^{\dagger}(t)\ell(t))\mathbb{E}|\psi_{st}\rangle dt{\nonumber}\\
&+&v(t)|\psi(0)\rangle dW_{t}.\end{aligned}$$
Notice that this equation is **linear** and, under appropriate choices of $v(t)$, **local**. Notice also the explicit appearance of $|\psi(0)\rangle$, in consonance with the most general evolution equation for an open quantum system [@GFVKS]. But QSD is not the only diffusive stochastic unravelling of the Lindblad equation (cf. [@Diosi]). As a matter of fact it is included as a particular case of the stochastic evolution equation given by Diósi and Wiseman. Again, in their approach the Lindblad equation is assumed from the very beginning, which settles the first difference with the formalism proposed here. Secondly, as a consequence of the assumed conceptual framework (a quantum system being continuously monitored), they impose the condition of purity conservation for the stochastic evolution, which gives rise to the main analytical difference, since in our approach ${U_{st}(t)}^{\dagger}{U_{st}(t)}\neq I$, as we have indicated above. Unitarity is only satisfied in stochastic average (cf. eq. (\[StUnit\])).
Differences can also be found with the work of Ghirardi, Pearle and Rimini [@Ghirardi]. There the Lindblad equation is not assumed from the beginning, but is arrived at after imposing some other conditions. Among them, the most important one is, in our opinion, the choice of the quantity that contains the physical information of the quantum system. Their starting stochastic evolution equation is **linear** and (if appropriate choices are made) **local**, just as in our approach:
$$d|\psi\rangle=C|\psi\rangle dt + \mathbf{A}|\psi\rangle\cdot d\mathbf{B},$$
but they claim that, since this evolution does not preserve normalization, the physical information cannot be contained in $|\psi\rangle$. So they define another stochastic vector $|\phi\rangle$ by
$$|\phi\rangle\equiv\frac{1}{||\psi||}|\psi\rangle$$
with probability function given by the probability function corresponding to $|\psi\rangle$ times its squared norm $||\psi||^{2}$, i.e.
$$q(\phi)=||\psi||^{2}p(\psi),$$
where $q(\phi)$ denotes the probability associated to $|\phi_{st}\rangle$ and $p(\psi)$ denotes the probability associated to $|\psi_{st}\rangle$. This assumption leads immediately to the **nonlinear** stochastic differential equation
$$d|\phi\rangle=\Big[-iH-\frac{\gamma}{2}(\mathbf{A}^{\dagger}-\mathbf{R}_{\phi})\cdot\mathbf{A}+\frac{\gamma}{2}(\mathbf{A}-\mathbf{R}_{\phi})\cdot\mathbf{R}_{\phi}\Big]|\phi\rangle dt + (\mathbf{A}-\mathbf{R}_{\phi})|\phi\rangle\cdot d\mathbf{B},$$
where $\gamma$ denotes the non-null elements of the diagonal covariance matrix of the set of Wiener processes $\mathbf{B}$ and $\mathbf{R}_{\phi}\equiv\frac{1}{2}\langle\phi|\mathbf{A}+\mathbf{A}^{\dagger}|\phi\rangle$. Now the difference is rooted in the assumption that in our approach non-normalization is not a major nuisance, since we claim that the physical information of the system is contained in
$$\label{DensOp2}
\rho(t)=\mathbb{E}[|\psi_{st}\rangle\langle\psi_{st}|],$$
so that the probability of finding a system in the state $P_{\sigma}=|\sigma\rangle\langle\sigma|$ will be
$${\textrm{tr}}\rho P_{\sigma}=\mathbb{E}[|\langle\psi_{st}|\sigma\rangle|^{2}].$$
And it is the physical quantity (\[DensOp2\]) which is to be normalized. Thus we do not need to resort to some other normalized process $|\phi\rangle$.
Finally, though very close in spirit, some differences can also be noted with respect to the work of Gisin [@Gisin]. There the objective is to classify all the pure-state-valued stochastic differential equations in $\mathbb{C}^{2}$ such that the corresponding density matrix follows a quantum dynamical semigroup evolution. The first formal difference stems from the fact that he describes the stochastic evolution in terms of the “height” and azimuthal angle of the corresponding point on the Bloch sphere, thus preventing his analysis from being generalized to higher-level systems in a straightforward fashion. Moreover, he focuses only on pure states, in contrast to our density matrix formalism, which also embraces mixed states. Last of all, again as in the previously commented models, the evolution equation for the density operator is assumed from the beginning, though complete positivity is not guaranteed in his analysis.
Non-perturbed Two-level Systems {#NonPert}
===============================
We begin by considering non-perturbed systems, i.e. with hamiltonian of the form
$$H_{0}={\left(\begin{array}{cc}
\hspace*{-1.5mm} E_{+} & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & E_{-}\hspace*{-1.5mm}
\end{array} \right)}$$
The reason to focus only on non-perturbed hamiltonians will become clear in the next section. As usual (cf. e.g. [@Peng]) we may choose $E_{+}+E_{-}=0$; thus we write
$$H_{0}=\hbar\omega_{0}{\left(\begin{array}{cc}
\hspace*{-1.5mm} 1 & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & -1\hspace*{-1.5mm}
\end{array} \right)}$$
As a canonical example we may consider an atom in electromagnetic vacuum. The random part of the stochastic evolution operator may be written in the energy eigenvector basis as
$$\begin{aligned}
{\left(\begin{array}{cc}
\hspace*{-1.5mm} {\int_{0}^{t}\lambda_{11}(s)dW_{s}^{1}} & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm}
\end{array} \right)}&+&{\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & {\int_{0}^{t}\lambda_{12}(s)dW_{s}^{2}}\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm}
\end{array} \right)}{\nonumber}\\\label{CompWis}+{\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} {\int_{0}^{t}\lambda_{21}(s)dW_{s}^{3}} & 0\hspace*{-1.5mm}
\end{array} \right)}&+&{\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & {\int_{0}^{t}\lambda_{22}(s)dW_{s}^{4}}\hspace*{-1.5mm}
\end{array} \right)}\end{aligned}$$
This is the most general form. To clearly understand the physical meaning of each of the previous four terms we will start by considering them one by one.
Single Couplings: Decohering, Decaying and Pumping Factors
----------------------------------------------------------
Thus let us study the evolution induced by a stochastic evolution operator of the form
$$\begin{aligned}
{U_{st}(t)}&=&e^{-iH_{0}t}\left[I-\int_{0}^{t}v^{\dagger}(s)v(s)ds\right]^{1/2}{\nonumber}\\
&+&\int_{0}^{t}v(s)dW_{s},\end{aligned}$$
where the stochastic expectation value has already been written so as to preserve the trace, $v(t)$ is one of the previous $\lambda_{ij}(t)E_{ij}$’s ($\{E_{ij}\}$ being the canonical basis of $\mathcal{M}_{2}(\mathbb{C})$) and $W_{t}$ is the complex standard Wiener process. As a first result we elementarily arrive at $\widetilde{L}_{t}[\rho(0)]=0$ for any choice of $v(t)$, so Markovianity is straightforwardly achieved. To obtain time-independent Lindblad operators, we insert successively each $v(t)$ in the definition of $\ell(t)$ (eq. (\[Lindblads\])). Forcing time-independence yields an ordinary differential equation for $|\lambda_{ij}(t)|^{2}$, whose solution leaves as the only option for $\lambda_{ij}(t)$ the expression
$$\lambda_{ij}(t)=\gamma^{1/2}\exp\left(-\frac{\gamma t}{2}\mp i\omega_{0}t\right).$$
The Lindblad operators produced in this way are of the form
$$\ell=\gamma^{1/2}E_{ij}.$$
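As a sanity check on the definition (\[Lindblads\]), one can verify numerically that $\ell(t)=v(t)\,\mathbb{E}[U_{st}]^{-1}(t)$ is indeed constant in $t$; in this sketch (our own sample parameters, $\hbar=1$, coupling $v(t)=\lambda_{11}(t)E_{11}$) the constant operator comes out as $\gamma^{1/2}E_{11}$:

```python
import numpy as np

gamma, omega0 = 0.3, 1.0                 # sample parameters
E11 = np.diag([1.0, 0.0])

def v(t):
    # stochastic part v(t) = lambda_11(t) E_11
    return np.sqrt(gamma) * np.exp(-gamma * t / 2 - 1j * omega0 * t) * E11

def expected_U(t):
    # E[U_st(t)] = exp(-i H_0 t) (I - int_0^t v^+ v ds)^{1/2};
    # here int_0^t |lambda_11(s)|^2 ds = 1 - exp(-gamma t), acting on E_11
    phase = np.diag([np.exp(-1j * omega0 * t), np.exp(1j * omega0 * t)])
    damping = np.diag([np.exp(-gamma * t / 2), 1.0])
    return phase @ damping

# ell(t) = v(t) E[U_st]^{-1}(t) should not depend on t
ells = [v(t) @ np.linalg.inv(expected_U(t)) for t in (0.1, 0.7, 2.5)]
```

The damping factor of $\mathbb{E}[U_{st}]$ exactly cancels the decaying envelope of $\lambda_{11}(t)$, which is the mechanism behind the time-independence.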
The physical interpretation is rather intuitive. If for instance we choose $v(t)=\lambda_{11}(t)E_{11}$, then we obtain an energy-preserving decohering evolution[^6], i.e.
$$\label{OnlyDecoh}
\frac{d\rho(t)}{dt}=-\frac{i}{\hbar}[H_{0},\rho(t)]+\frac{\gamma}{2}{\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & -\rho_{12}(t)\hspace*{-1.5mm} \\
\hspace*{-1.5mm} -\rho_{21}(t) & 0\hspace*{-1.5mm}
\end{array} \right)}.$$
This means that $W^{1}(t)$ represents a vacuum-system interaction which does not change the energy of the system, but that introduces decohering factors. Notice that exactly the same equation (\[OnlyDecoh\]) would have been obtained had we chosen $v(t)=\lambda_{22}(t)E_{22}$. The difference stems from the fact that whereas $W^{1}(t)$ represents an interaction through coupling to the excited energy level, $W^{4}(t)$ represents a similar interaction but coupled to the ground energy level. Graphically the situation can be represented as in Fig. \[DecohFactors\].
![Representation of energy-preserving decohering factors in the stochastic evolution of two-level quantum system.\[DecohFactors\]](DecohFactors.eps){width="8.5cm"}
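The decohering evolution above can also be checked by brute force, sampling the stochastic evolution operator itself and averaging: for $v(t)=\lambda_{11}(t)E_{11}$ the coherence should decay as $\rho_{12}(t)=\rho_{12}(0)e^{-\gamma t/2}e^{-2i\omega_{0}t}$, while the populations (and hence the trace) are preserved. A Monte-Carlo sketch with $\hbar=1$ and our own sample parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
gamma, omega0, T = 0.5, 1.0, 2.0
n_steps, n_traj = 200, 10_000
dt = T / n_steps

# Ito integral I_T = int_0^T lambda_11(s) dW_s, left-endpoint discretization,
# with a standard complex Wiener process: E[|dW|^2] = dt, E[dW^2] = 0
t_left = np.arange(n_steps) * dt
lam = np.sqrt(gamma) * np.exp(-gamma * t_left / 2 - 1j * omega0 * t_left)
dW = np.sqrt(dt / 2) * (rng.standard_normal((n_traj, n_steps))
                        + 1j * rng.standard_normal((n_traj, n_steps)))
I_T = dW @ lam

# U_st(T) is diagonal here: entries (u11 + I_T, u22), with E[U_st] = diag(u11, u22)
u11 = np.exp(-1j * omega0 * T - gamma * T / 2)
u22 = np.exp(1j * omega0 * T)

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]])   # pure superposition state
rho11 = np.mean(np.abs(u11 + I_T) ** 2) * rho0[0, 0]        # population
rho12 = np.mean(u11 + I_T) * rho0[0, 1] * np.conj(u22)      # coherence

rho12_theory = rho0[0, 1] * np.exp(-gamma * T / 2 - 2j * omega0 * T)
```

Within Monte-Carlo error the population stays at $1/2$ while the coherence follows the predicted damped rotation, i.e. decoherence without energy exchange.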
This same interpretation can be carried over to the other two terms. For instance, for $v(t)=\lambda_{21}(t)E_{21}$ the final evolution equation is
$$\label{SponEmiss}
\frac{d\rho(t)}{dt}=-\frac{i}{\hbar}[H_{0},\rho(t)]+\frac{\gamma}{2}{\left(\begin{array}{cc}
\hspace*{-1.5mm} -2\rho_{11}(t) & -\rho_{12}(t)\hspace*{-1.5mm} \\
\hspace*{-1.5mm} -\rho_{21}(t) & 2\rho_{11}(t)\hspace*{-1.5mm}
\end{array} \right)}.$$
![Spontaneous Decay of a Two-level Atom.\[SponDecay\]](graph3.eps){width="6cm" height="5cm"}
Now, besides the off-diagonal decohering terms, a new effect appears: the system *spontaneously* decays into the ground state. This behaviour is indeed exactly what is obtained by orthodox means (cf. e.g. [@Peng] and Fig. \[SponDecay\]). The reverse behaviour, as expected, is obtained if we choose $v(t)=\lambda_{12}(t)E_{12}$. In these two cases, the vacuum-system interaction is then understood as an energy-nonpreserving (decaying or pumping) interaction, depending on whether the coupling takes place through the excited or the ground state. Pictorially we may represent these effects as in Fig. \[DecPumpFactors\]. The complete results are collected in Table \[OneStPart\] for each of the previous stochastic parts.
![Representation of energy-nonpreserving decohering factors in the stochastic evolution of two-level quantum system.\[DecPumpFactors\]](DecPumpFactors.eps){width="8.5cm"}
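The decay dynamics can be integrated directly from the Markovian generator built from $\ell=\gamma^{1/2}E_{21}$ (in the convention assumed here, the first basis state is the excited one): the excited-state population then empties exponentially at rate $\gamma$ while the trace is preserved. A minimal RK4 sketch with our own sample parameters:

```python
import numpy as np

gamma, omega0 = 0.4, 1.0                     # hbar = 1, sample parameters
H0 = omega0 * np.diag([1.0, -1.0])
ell = np.sqrt(gamma) * np.array([[0.0, 0.0],
                                 [1.0, 0.0]])  # sqrt(gamma) E_21

def rhs(rho):
    """-i[H0, rho] + ell rho ell^+ - (1/2){ell^+ ell, rho}."""
    ad = ell.conj().T
    return (-1j * (H0 @ rho - rho @ H0)
            + ell @ rho @ ad
            - 0.5 * (ad @ ell @ rho + rho @ ad @ ell))

def rk4_step(rho, dt):
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # start excited
dt, T = 0.01, 5.0
for _ in range(int(round(T / dt))):
    rho = rk4_step(rho, dt)
# excited population follows exp(-gamma t); the rest ends up in the ground state
```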
  -------------------------------------------------------------------------------------------------------------------------------
  Stochastic part $v(t)$      $\lambda_{ij}(t)$                                                                 $\ell$
  --------------------------- --------------------------------------------------------------------------------- -----------------
  $\lambda_{11}(t)E_{11}$     $\lambda_{11}(t)=\gamma^{1/2}\exp\left(-\frac{\gamma t}{2}-i\omega_{0}t\right)$    $\gamma^{1/2}E_{11}$

  $\lambda_{12}(t)E_{12}$     $\lambda_{12}(t)=\gamma^{1/2}\exp\left(-\frac{\gamma t}{2}+i\omega_{0}t\right)$    $\gamma^{1/2}E_{12}$

  $\lambda_{21}(t)E_{21}$     $\lambda_{21}(t)=\gamma^{1/2}\exp\left(-\frac{\gamma t}{2}-i\omega_{0}t\right)$    $\gamma^{1/2}E_{21}$

  $\lambda_{22}(t)E_{22}$     $\lambda_{22}(t)=\gamma^{1/2}\exp\left(-\frac{\gamma t}{2}+i\omega_{0}t\right)$    $\gamma^{1/2}E_{22}$
  -------------------------------------------------------------------------------------------------------------------------------
Notice that Markovianity is obtained with no further restrictions upon the form of the stochastic parts $v_{k}(t)$. The final form of their entries is fixed by the time-independence condition for the Lindblad operators. By contrast, some physical conditions must be met when two Wiener processes are present.
Double couplings
----------------
The next natural step is to include one more Wiener process in the stochastic evolution, so we must investigate the options
$$\begin{aligned}
{U_{st}(t)}\hspace{-2mm}&=&\hspace{-2mm}e^{-iH_{0}t}\hspace{-1mm}\left[I-\sum_{ij=1}^{2}a_{ij}\int_{0}^{t}v_{j}^{\dagger}(s)v_{i}(s)ds\right]^{1/2}\hspace{-4mm}+{\nonumber}\\
&+&\int_{0}^{t}v_{1}(s)dW_{s}^{1}+\int_{0}^{t}v_{2}(s)dW_{s}^{2}\hspace{-3mm}\end{aligned}$$
where $v_{j}(t)$ will alternately be each $\lambda_{ij}(t)E_{ij}$. The calculations proceed exactly in the same spirit as before, with the exception that now the covariance matrix $\mathbb{E}[W_{t}^{n}W_{t}^{m*}]=a_{nm}t$ must be taken into account. The results are contained in table \[TwoStPart\].
  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Stochastic parts $v_{1}(t),\ v_{2}(t)$            Conditions on $a_{ij}$ and $\lambda_{ij}(t)$                                                                                                                                                               Lindblad operators
  ------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------
  $\lambda_{11}(t)E_{11},\ \lambda_{12}(t)E_{12}$   $a_{12}=0$; $\lambda_{11}(t)=\sqrt{\gamma_{1}}\exp\left(-\frac{a_{11}\gamma_{1}}{2}t-i\omega_{0}t\right)$, $\lambda_{12}(t)=\sqrt{\gamma_{2}}\exp\left(-\frac{a_{22}\gamma_{2}}{2}t+i\omega_{0}t\right)$    $\gamma_{1}^{1/2}E_{11},\ \gamma_{2}^{1/2}E_{12}$

  $\lambda_{11}(t)E_{11},\ \lambda_{21}(t)E_{21}$   It is impossible to have both a preserving and a nonpreserving factor coupled to the same level.                                                                                                             —

  $\lambda_{11}(t)E_{11},\ \lambda_{22}(t)E_{22}$   $a_{14}=0$; $\lambda_{11}(t)=\sqrt{\gamma_{1}}\exp\left(-\frac{a_{11}\gamma_{1}t}{2}-i\omega_{0}t\right)$, $\lambda_{22}(t)=\sqrt{\gamma_{4}}\exp\left(-\frac{a_{44}\gamma_{4}t}{2}+i\omega_{0}t\right)$    $\gamma_{1}^{1/2}E_{11},\ \gamma_{4}^{1/2}E_{22}$

  $\lambda_{12}(t)E_{12},\ \lambda_{21}(t)E_{21}$   It is impossible to obtain a Markovian evolution for any initial $\rho(0)$.                                                                                                                                  —
  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
It’s impossible to have both a
${\left(\begin{array}{cc} preserving and a nonpreserving
\hspace*{-1.5mm} 0 & \lambda_{12}(t)\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm}
\end{array} \right)}\qquad {\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & \lambda_{22}(t)\hspace*{-1.5mm}
\end{array} \right)}$
factor coupled to the same level.
$a_{34}=0$
${\left(\begin{array}{cc} $\lambda_{21}(t)=\gamma_{3}^{1/2}\exp\left(-\frac{a_{33}\gamma_{3} t}{2}-i\omega_{0}t\right)$ $\gamma_{3}{\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm} \\ \hspace*{-1.5mm} 1 & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} \lambda_{21}(t) & 0\hspace*{-1.5mm} \hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm}
\end{array} \right)}\qquad {\left(\begin{array}{cc} \end{array} \right)}\qquad \gamma_{4}{\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm} \\ \hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & \lambda_{22}(t)\hspace*{-1.5mm} \hspace*{-1.5mm} 0 & 1\hspace*{-1.5mm}
\end{array} \right)}$ \end{array} \right)}$
$\lambda_{22}(t)=\gamma_{4}^{1/2}\exp\left(-\frac{a_{44}\gamma_{4} t}{2}+i\omega_{0}t\right)$
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Some comments are to be made. First, the impossibility of having a double coupling to the same level is clearly shown (rows 2 and 5), which is quite natural from a physical point of view. Perhaps the most appealing feature is the necessary uncorrelation between the two Wiener processes, expressed through the condition $a_{ij}=0$. There's an immediate interpretation of this condition on the basis of Markovianity. Physically, Markovianity is characterized by the fact that the evolution of a system from an initial time to a final time is exactly the same as the evolution from that initial time to an intermediate time and then from the latter to the final instant, *whatever* that intermediate time is. This strongly suggests the idea that the evolution process is made up of very tiny and identical contributions of infinitesimal duration, so that the final evolution is just the accumulated contribution of each infinitesimal part. The common view states that such a system lacks memory, i.e. the state to which it evolves depends only on its present state and never on the previous ones. These two standpoints are complementary. In our case, if the Wiener processes partially driving the system evolution were correlated, then this partition of the evolution into tiny equal parts would be impossible, thus precluding Markovianity. The final form of the matrix entries $\lambda_{ij}(t)$ is obtained by requiring time-independence, as before.
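As a quick numerical illustration (not part of the original derivation), the $a_{34}=0$ row of table \[TwoStPart\] can be checked directly: with uncorrelated Wiener processes, the density matrix built from $\mathbb{E}U_{st}(t)$ plus the stochastic contributions preserves the trace and reproduces the spontaneous decay of the excited population. All parameter values below are assumed, illustrative choices.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule, to keep the sketch NumPy-version independent
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

# gamma_3, gamma_4, omega_0 and the time t are assumed, illustrative values
g3, g4, w0, t = 1.3, 0.7, 2.0, 0.9
s = np.linspace(0.0, t, 20001)                  # quadrature grid for the ds integrals

E21 = np.array([[0, 0], [1, 0]], complex)
E22 = np.array([[0, 0], [0, 1]], complex)
lam21 = np.sqrt(g3) * np.exp(-0.5 * g3 * s - 1j * w0 * s)   # a_{34}=0 row, a_33 = a_44 = 1
lam22 = np.sqrt(g4) * np.exp(-0.5 * g4 * s + 1j * w0 * s)

I21 = trapz(np.abs(lam21) ** 2, s)              # int_0^t |lambda_21|^2 ds = 1 - e^{-g3 t}
I22 = trapz(np.abs(lam22) ** 2, s)

# E U_st(t): the operator under the square root is diagonal here, so the
# elementwise square root coincides with the operator square root
M = I21 * (E21.conj().T @ E21) + I22 * (E22.conj().T @ E22)
EU = np.diag(np.exp(-1j * np.array([w0 / 2, -w0 / 2]) * t)) @ np.sqrt(np.eye(2) - M)

rho0 = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])
# rho(t) = EU rho0 EU^+ + sum_i int v_i rho0 v_i^+ ds  (cross terms vanish, a_12 = 0)
rho_t = EU @ rho0 @ EU.conj().T \
      + I21 * (E21 @ rho0 @ E21.conj().T) \
      + I22 * (E22 @ rho0 @ E22.conj().T)

assert abs(np.trace(rho_t) - 1) < 1e-6                          # trace preserved
assert abs(rho_t[0, 0] - np.exp(-g3 * t) * rho0[0, 0]) < 1e-6   # excited population decays
```

The excited-state population decays as $e^{-\gamma_{3}t}$, exactly as in the single-coupling spontaneous-emission case.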
Triple and Quadruple Couplings
------------------------------
The considerations taken up before lead us immediately to rule out any possibility of establishing triple or quadruple couplings, since each energy level can only interact with the electromagnetic vacuum through one single Wiener process. Only for more complex systems may we combine triple, quadruple and higher-order couplings in a variety of different ways.
Perturbed Systems
=================
The generalization of the previous formalism to perturbed, i.e. interacting, systems poses relevant questions as to the adopted conceptual scheme. Naively, we may try to investigate the evolution produced by an operator of the form
$$\begin{aligned}
{U_{st}(t)}\hspace{-2mm}&=&\hspace{-2mm}\mathcal{T}e^{-i\int_{0}^{t}H(s)ds}\left[I-\int_{0}^{t}v^{\dagger}(s)v(s)ds\right]^{1/2}\hspace{-3mm}+{\nonumber}\\
\label{UstNonDiag}&+&\int_{0}^{t}v(s)dW_{s},\hspace{-1cm}\end{aligned}$$
where $H(t)$ may be nondiagonal, as e.g. that expressing the interaction with an electromagnetic wave[^7]
$$H(t)={\left(\begin{array}{cc}
\hspace*{-1.5mm} \hbar\omega_{0} & \mathcal{E}e^{-i\omega t}\hspace*{-1.5mm} \\
\hspace*{-1.5mm} \mathcal{E^{*}}e^{i\omega t} & -\hbar\omega_{0}\hspace*{-1.5mm}
\end{array} \right)}.$$
Hopefully, as before, this would drive us to a differential equation for the density operator of the form
$$\frac{d\rho(t)}{dt}=-i[H(t),\rho(t)]+\frac{1}{2}\left\{[\ell\rho(t),\ell^{\dagger}]+[\ell,\rho(t)\ell^{\dagger}]\right\}.$$
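For reference, this target master equation can be integrated directly; the following is a minimal RK4 sketch (illustrative only, with assumed parameter values and $\hbar=1$, and a decay operator $\ell=\sqrt{\gamma}E_{21}$), confirming that the equation itself preserves trace, hermiticity and positivity:

```python
import numpy as np

# drho/dt = -i[H(t), rho] + (1/2){[l rho, l^+] + [l, rho l^+]}
#         = -i[H(t), rho] + l rho l^+ - (1/2){l^+ l, rho}
w0, w, Efield, gamma = 1.0, 1.0, 0.3, 0.2       # assumed, illustrative values
l = np.sqrt(gamma) * np.array([[0, 0], [1, 0]], complex)

def H(t):
    # the Rabi Hamiltonian of the text, hbar = 1
    return np.array([[w0, Efield * np.exp(-1j * w * t)],
                     [Efield * np.exp(1j * w * t), -w0]], complex)

def rhs(t, rho):
    ld = l.conj().T @ l
    comm = H(t) @ rho - rho @ H(t)
    return -1j * comm + l @ rho @ l.conj().T - 0.5 * (ld @ rho + rho @ ld)

rho = np.array([[1, 0], [0, 0]], complex)       # start in the excited state
h = 2e-3
for n in range(5000):                           # classic RK4 up to t = 10
    t = n * h
    k1 = rhs(t, rho)
    k2 = rhs(t + h / 2, rho + h / 2 * k1)
    k3 = rhs(t + h / 2, rho + h / 2 * k2)
    k4 = rhs(t + h, rho + h * k3)
    rho = rho + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

assert abs(np.trace(rho) - 1) < 1e-8            # trace preserved
assert np.allclose(rho, rho.conj().T)           # rho stays Hermitian
assert min(np.linalg.eigvalsh(rho)) > -1e-8     # and positive
```

The integrator only illustrates the dynamics one would like to obtain; the proposition that follows shows that, within the present stochastic formalism, this equation cannot be reached with a nondiagonal $H(t)$.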
In particular one should be able to reproduce Rabi oscillations with decay (cf. e.g. [@Loudon]), but this is impossible as the following proposition states:
\[Prop4\] Let $X\in\mathcal{M}_{2}(\mathbb{C})$ be of the form $X=v\rho v^{\dagger}$ where $\rho\in\mathcal{M}_{2}(\mathbb{C})$ is a density matrix and $v\in\mathcal{M}_{2}(\mathbb{C})$ is arbitrary. If in an arbitrary common basis, $H$ and $\ell$ are expressed as $$H={\left(\begin{array}{cc}
\hspace*{-1.5mm} h_{11} & h_{12}\hspace*{-1.5mm} \\
\hspace*{-1.5mm} h_{21} & h_{22}\hspace*{-1.5mm}
\end{array} \right)}\qquad \ell={\left(\begin{array}{cc}
\hspace*{-1.5mm} \ell_{11} & \ell_{12}\hspace*{-1.5mm} \\
\hspace*{-1.5mm} \ell_{21} & \ell_{22}\hspace*{-1.5mm}
\end{array} \right)},$$ then, with $H$ selfadjoint and $\ell$ arbitrary, one of the following necessary conditions must hold for $$L[X]\equiv-i[H,X]+\frac{1}{2}\left\{[\ell X,\ell^{\dagger}]+[\ell,X\ell^{\dagger}]\right\}=0$$ to be satisfied for all $\rho$:
1. If $v={\left(\begin{array}{cc}
\hspace*{-1.5mm} \beta_{1} & \beta_{2}\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm}
\end{array} \right)}$, then $$\label{case1}\ell_{12}\ell_{11}^{*}=i2h_{12}\qquad\ell_{21}=0$$
2. If $v={\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} \beta_{1} & \beta_{2}\hspace*{-1.5mm}
\end{array} \right)}$, then $$\label{case2}\ell_{21}\ell_{22}^{*}=i2h_{21}\qquad\ell_{12}=0$$
3. If $v={\left(\begin{array}{cc}
\hspace*{-1.5mm} \beta_{1} & \beta_{2}\hspace*{-1.5mm} \\
\hspace*{-1.5mm} \beta_{3} & \beta_{4}\hspace*{-1.5mm}
\end{array} \right)}\qquad\beta_{k}$’s not corresponding to the previous cases, then $$\begin{aligned}
& \alpha_{1}\Lambda_{11}+\alpha_{2}\Lambda_{21}+\alpha_{3}\Lambda_{31}+\alpha_{4}\Lambda_{41}=0, & {\nonumber}\\ & \alpha_{1}\Lambda_{12}+\alpha_{2}\Lambda_{22}+\alpha_{3}\Lambda_{32}+\alpha_{4}\Lambda_{42}=0, &
\end{aligned}$$ where the $\alpha_{i}$’s and the $\Lambda_{ij}$’s are complicated expressions involving the elements of $H$ and $\ell$ (cf. Appendix \[AppendProp4\]).
Cf. Appendix \[AppendProp4\].
To get a decay process we only need case 2, eq. (\[case2\]), since the Lindblad operator must now read $\ell=\gamma E_{21}$; this is only possible if $v$ is of the type $\left(\begin{smallmatrix}0&0\\\beta_{1}&\beta_{2}\end{smallmatrix}\right)$, which following (\[case2\]) makes the Lindblad components time-dependent and the Hamiltonian diagonal.
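The necessary condition (\[case2\]) can be verified numerically. In this sketch (with arbitrary assumed values) we pick $\ell_{21},\ell_{22}$ freely, set $\ell_{12}=0$ and $h_{21}=\ell_{21}\ell_{22}^{*}/2i$, and check that $L[X]=0$ for $X=v\rho v^{\dagger}$ with $v$ of the lower-row type:

```python
import numpy as np

rng = np.random.default_rng(0)
l21, l22, l11 = 0.7 + 0.4j, -0.3 + 1.1j, 0.5 - 0.2j     # l11 is unconstrained
l = np.array([[l11, 0.0], [l21, l22]])                  # l12 = 0, as eq. (case2) demands
h21 = l21 * np.conj(l22) / 2j                           # l21 l22^* = 2i h21
H = np.array([[0.8, np.conj(h21)], [h21, -0.3]])        # selfadjoint by construction

# a random density matrix and a lower-row v
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)
v = np.array([[0, 0], [0.9 - 0.2j, 1.4 + 0.6j]])
X = v @ rho @ v.conj().T                                # proportional to E_22

ld = l.conj().T @ l
LX = -1j * (H @ X - X @ H) + l @ X @ l.conj().T - 0.5 * (ld @ X + X @ ld)
assert np.max(np.abs(LX)) < 1e-12                       # L[X] vanishes identically
```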
Far from invalidating the whole formalism, this, in our opinion, suggests the correct interpretation to be assumed in introducing the stochastic nature of the evolution operator: the former scheme must only be applied to closed systems. The question then is *why is it possible to apply this formalism only to traditionally closed systems?* The answer can be inspired by the canonical example used so far. In an orthodox description, the presence of just a single photon changes the whole picture, since now there's a **real** interaction between the electromagnetic field and the atom. In the former case, with no photon, i.e. with the electromagnetic vacuum, the notion of interaction does not appear in a natural way, since there's nothing the atom interacts with. But this does not mean that an open (interacting) system may not show stochastic behaviour; rather, it means that to obtain the correct stochastic evolution operator for an open system, we must apply the previous formalism to the system+environment compound and then trace out the latter's degrees of freedom. In the general case this is nearly impossible, since it presupposes control over all environmental degrees of freedom. To illustrate this idea, and to check whether the qualitative behaviour described in the previous section is still valid, we will work out a simple and manageable example, namely a two-level atom described by a Jaynes-Cummings Hamiltonian
$$H_{JF}=\omega_{0}S_{z}+\omega a^{\dagger}a+\epsilon(a^{\dagger}S_{-}+aS_{+}).$$
This represents an atom interacting only with one single mode of the electromagnetic field. We will consider only a multiple energy-preserving interaction, i.e. for the whole atom+field system we consider a stochastic evolution operator of the form[^8]
$$\begin{aligned}
{U_{st}(t)}&=&\mathbb{E}{U_{st}(t)}+\sum_{n=0}^{\infty}\lambda_{n+1}^{\pm}\circ W_{n+1}^{\pm}(t)|u_{n+1}^{\pm}\rangle\langle u_{n+1}^{\pm}|+{\nonumber}\\
&+&\lambda_{0}^{-}\circ W_{0}^{-}(t)|u_{0}^{-}\rangle\langle u_{0}^{-}|,\end{aligned}$$
where $\lambda_{n}^{\pm}(t)\equiv\sqrt{\gamma_{n}^{\pm}}\exp\left[-\frac{\gamma_{n}^{\pm}}{2}t\mp i\omega_{n}^{\pm}t\right]$ and $\mathbb{E}{U_{st}(t)}$ has the corresponding form given by Prop. \[ExpVal\].
The previous formalism yields the following solution for the density matrix entries:
$$\begin{aligned}
\label{OffDiag}\rho_{nm}^{\pm\pm}(t)&=&\exp\negthinspace\left[-i(\omega_{n}^{\pm}-\omega_{m}^{\pm})t-\frac{1}{2}(\gamma_{n}^{\pm}+\gamma_{m}^{\pm})t\right]\rho_{nm}^{\pm\pm}(0) \textrm{ if $n\neq m$ or different signs,}\\
\label{Diag}&=&\rho_{nn}^{\pm\pm}(0) \hspace{1cm}\textrm{otherwise,} \end{aligned}$$
which is similar to the decohering behaviour previously shown by the single atom. But we are interested in the atom itself and not in the whole system, so we must trace out the field degrees of freedom, i.e.
$$\label{ParTrac}
\rho_{A}(t)={\textrm{tr}}_{F}\rho_{AF}(t).$$
To be concrete we will focus on two particular situations. First we will consider an initially correlated or uncorrelated excited atom and $n$ photons with energy $\hbar\omega$ ($n\neq 0$). Later we will study the case of no photon ($n=0$). Under these hypotheses the initial atom+field density matrix in the $|\pm,m\rangle$ basis for the first case is given by
$$\label{InitCond}
\rho(\cdot,\times;pq)=\delta_{\cdot +}\delta_{\times +}\delta_{pn}\delta_{qn},$$
where by $\cdot,\times$ we denote any of the $\pm$ signs. The partial trace (\[ParTrac\]) leads us, through a tedious though elementary calculation, to the result
$$\begin{aligned}
\rho_{A}&=&\left[\sum_{k=0}^{\infty}\left(\rho_{k+1 k+1}^{++}\cos^{2}\theta_{k+1}-(\rho_{k+1 k+1}^{+-}+\rho_{k+1 k+1}^{-+})\cos\theta_{k+1}\sin\theta_{k+1}+\rho_{k+1 k+1}^{--}\sin^{2}\theta_{k+1}\right)\right]|+\rangle\langle+|+{\nonumber}\\
&+&\Bigg[\sum_{k=1}^{\infty}\left(\rho_{k+1 k}^{++}\cos\theta_{k+1}\sin\theta_{k}+\rho_{k+1 k}^{+-}\cos\theta_{k+1}\cos\theta_{k}-\rho_{k+1 k}^{-+}\sin\theta_{k+1}\sin\theta_{k}-\right.{\nonumber}\\
&-&\left.\rho_{k+1 k}^{--}\sin\theta_{k+1}\cos\theta_{k}\right)+\rho_{10}^{+-}\cos\theta_{1}-\rho_{10}^{--}\sin\theta_{1}\Bigg]|+\rangle\langle-|+{\nonumber}\\
&+&\Bigg[\sum_{k=1}^{\infty}\left(\rho_{k k+1}^{++}\cos\theta_{k+1}\sin\theta_{k}-\rho_{k k+1}^{+-}\sin\theta_{k+1}\sin\theta_{k}+\rho_{k k+1}^{-+}\cos\theta_{k+1}\cos\theta_{k}-\right.{\nonumber}\\
&-&\left.\rho_{k k+1}^{--}\sin\theta_{k+1}\cos\theta_{k}\right)+\rho_{01}^{-+}\cos\theta_{1}-\rho_{01}^{--}\sin\theta_{1}\Bigg]|-\rangle\langle+|+{\nonumber}\\
&+&\Bigg[\sum_{k=0}^{\infty}\left(\rho_{k+1 k+1}^{++}\sin^{2}\theta_{k+1}+(\rho_{k+1 k+1}^{+-}+\rho_{k+1 k+1}^{-+})\cos\theta_{k+1}\sin\theta_{k+1}+\rho_{k+1 k+1}^{--}\cos^{2}\theta_{k+1}\right)+{\nonumber}\\
\label{PartDens}&+&\rho_{00}^{--}\Bigg]|-\rangle\langle-|,\end{aligned}$$
where $\rho_{pq}^{\cdot\times}$ are the density matrix components in the $\{|u_{n+1}^{\pm}\rangle,|u_{0}^{-}\rangle\}$ basis. Notice that all these density matrix entries in (\[PartDens\]) are time-dependent, the time dependence being given by (\[OffDiag\]) and (\[Diag\]). Indeed, as a result of (\[OffDiag\]) and (\[Diag\]) and the initial conditions (\[InitCond\]), the final atom state after a time $t\gg T\equiv \max\{(\gamma_{i}+\gamma_{j})^{-1}\}$ has elapsed is a $2\times 2$ matrix given in the usual $|\pm\rangle$ basis by
$$\rho_{A}(t\gg T)\simeq\left(\begin{smallmatrix}\cos^{4}\theta_{n+1}+\sin^{4}\theta_{n+1}&0\\0& 2\cos^{2}\theta_{n+1}\sin^{2}\theta_{n+1}\end{smallmatrix}\right),$$
from which we can explicitly follow the decoherence suffered by the atom; the probabilities of remaining in the excited state and of decaying to the ground state exactly coincide with the quantum ones. As initially chosen (in the choice of the stochastic part), the energy of atom+field is conserved, so we only obtain decoherence in the compound system. However, the energy of the atom subsystem may vary, as its reduced density matrix shows ($\rho_{A}(--)\neq 0$), this energy variation being due to the interaction between the field and the atom.
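The $t\gg T$ limit above can be reproduced with a few lines of linear algebra (a sketch, with an arbitrary assumed mixing angle): start from $|+,n\rangle$ as in (\[InitCond\]), keep only the dressed-basis populations, rotate back to the bare basis and trace out the field.

```python
import numpy as np

theta = 0.83                                    # any mixing angle theta_{n+1}
c, s = np.cos(theta), np.sin(theta)

# |+, n> expressed in the dressed pair {|u+>, |u->} has coefficients (cos, -sin)
psi = np.array([c, -s])
rho_dressed = np.outer(psi, psi)                # initial density matrix, dressed basis
rho_late = np.diag(np.diag(rho_dressed))        # t >> T: only populations survive

# back to the bare pair {|+, n>, |-, n+1>}; columns of R are the dressed vectors
R = np.array([[c, -s], [s, c]])
rho_bare = R @ rho_late @ R.T

# the two bare states carry different photon numbers, so tracing out the field
# kills the off-diagonals and leaves only the atomic populations
rho_A = np.diag(np.diag(rho_bare))

expected = np.diag([c**4 + s**4, 2 * c**2 * s**2])
assert np.allclose(rho_A, expected)             # matches the 2x2 matrix of the text
assert abs(np.trace(rho_A) - 1) < 1e-12
```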
The second situation, in which there's no photon present ($n=0$), is included to check consistency with the case of a single atom (see section \[NonPert\]). Now a quantitative difference should be expected, since we are only taking one single mode into account. However, the qualitative behaviour, i.e. the *spontaneous decay* process, should also be obtained. A similar calculation indeed yields the same density matrix form for large $t$:
$$\rho_{A}(t\gg T)\simeq\left(\begin{smallmatrix}\cos^{4}\theta_{1}+\sin^{4}\theta_{1}&0\\0& 2\cos^{2}\theta_{1}\sin^{2}\theta_{1}\end{smallmatrix}\right).$$
As expected, we do not obtain the same quantitative behaviour as in (\[SponEmiss\]). However, a qualitative spontaneous emission is obtained: starting from a system completely in the excited state (cf. eq. (\[InitCond\])), we have found a nonzero probability of finding it in the ground level even though there's no photon present in the field.
Vacuum Interactions vs. Stochastic Nature
=========================================
So far we have kept ourselves within the conceptual orthodoxy derived from Quantum Field Theory in general, and Quantum Electrodynamics in particular. But a natural question can be raised as to the conceptual framework which may be attached to the scheme of the previous sections. Is it possible to found the use of stochastic methods on some other, vacuum-independent notions? This is, we are aware, a delicate question, since it reaches the very interpretation of Quantum Mechanics itself. But nothing is further from our intention than providing a brand new interpretation of Quantum Mechanics. We only restrict ourselves to considering the possibility of merging together the stochastic nature of quantum systems (derived from the projection postulate) with the usual unitary evolution. In this process it is straightforward to convince oneself that Dirac's original argument (cf. [@Dirac]) is still valid, i.e. any linear superposition of states must be conserved during the evolution of the system; hence a linear operator can be defined which carries the initial states onto the evolved states at time $t$.
In QED it is the virtual quantum fluctuations which drive the system from initial excited states into its ground state. This process is equivalent to recognizing a patent stochasticity in the whole system (what other meaning can *fluctuations* bear?), a system which is necessarily compound (atom+field), even though there's no photon present in the field. Is it not much easier and more natural to assume that quantum systems possess an inherent random nature independent of outer entities (fields, environments, etc.)? Instead of rooting the stochasticity of quantum systems, from a fundamental point of view, in external uncontrollable perturbations (observations, measurements), can we not assert that this randomness is due to their quantum nature itself? Another way of stating this hypothesis is by realizing that in orthodox Quantum Mechanics, indeterminism is only present in the act of measuring, thus denying this indeterminism to the nature of the quantum systems themselves (cf. [@Dirac], page 108). As a matter of fact, quantum systems which are not observed are deterministic. Nevertheless our experience in the laboratory clearly demands an indeterministic behaviour of these systems. Is it not more natural just to claim that these quantum systems are indeterministic in themselves, without resorting to external observations or interventions? Note that this does not suppose a major departure from quantum orthodoxy, so we do not lose any result characteristic of Quantum Mechanics. Indeed the stochasticity, present in the formalism through the operator $v(t)$ (cf. above), can be chosen to slightly modify the usual hamiltonian evolution. Suppose that the effects of the stochasticity are small compared to the hamiltonian evolution, i.e. we may write
$$\left[I-\int_{0}^{t}v^{\dagger}(s)v(s)ds\right]^{1/2}\approx I-\frac{1}{2}\int_{0}^{t}v^{\dagger}(s)v(s)ds,$$
where $v^{\dagger}(t)v(t)$ is of higher order than $H$ in the norm sense. Then we may write for the density operator
$$\rho(t)=\rho_{QM}(t)+\widetilde{\rho}_{st}(t).$$
The usual well-known quantum results are contained in the first part, whereas new effects are only present in the stochastic part. As we have seen for two-level quantum systems, these new effects amount to introducing either energy-preserving terms which only show decoherence (only achieved by resorting to other systems in orthodox Quantum Mechanics) or energy-nonpreserving terms which in a natural way show spontaneous decay phenomena also without resorting to external interventions.
On the other hand, a mathematical scheme accounting for the stochastic nature of quantum systems is provided, as we have implicitly done, by letting theorem \[DecompTh\] be applied to each evolution operator component. Several comments should be made. First, an elaborate argument should be offered for the cited theorem to be applied. This argument reads as follows. The stochasticity of quantum systems, as expressed by the projection postulate, rests on the impossibility of determining the evolution of a quantum system at a given instant $t$ (typically when a measurement upon it is performed). At most we can know the probabilities of evolution from that state to the subsequent ones (typically the eigenvectors of the observable to be measured). The need for the presence of an observer has been the origin of never-ending discussions since its very introduction. The mathematical way to deal with this kind of stochastic object in general is to use random variables, which are nothing more than deterministic (usual) variables with a probability measure attached to them. So can quantum states, i.e. Hilbert-space vectors, not also be attached probability measures? That's all we have done. Once we have assumed such a hypothesis, the foregoing construction rests on mathematical results. If a linear operator transforms quantum states into quantum states, then a stochastic linear operator will also transform stochastic quantum states into stochastic quantum states, hence endowing operators with an intrinsic random linear nature. This randomness can be expressed, as we have done, componentwise, thus turning each (complex) operator entry into a complex random variable. The next step was then to apply theorem \[DecompTh\] to these complex random variables.
But a subtlety must be considered. When the whole process is built componentwise (cf. eq. (\[CompWis\])), one may (and should) inquire *which basis is to be used*. This surprisingly drives us to the *preferred basis problem*. But now, instead of trying to provide more or less complicated theoretical arguments, we may allow Nature to freely choose the basis. To know which basis Nature chooses we make use of Prop. \[Prop4\] to realize that *the only basis (up to basis changes) in which the spontaneous decay is possible is the one which diagonalizes the free Hamiltonian*. This observation follows from noticing that, as is usually done in a phenomenological though correct spirit, to obtain decay we must add damping terms $-\gamma\rho_{\cdot\times}$ to the time evolution equation of each of the density matrix components $\rho_{\cdot\times}$ expressed in the free-hamiltonian eigenvector basis (cf. [@Loudon], page 64), which is equivalent to using a Lindblad operator of the form $${\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} \gamma & 0\hspace*{-1.5mm}
\end{array} \right)}.$$ Following Prop. \[Prop4\], this Lindblad operator can only be obtained within the previous stochastic formalism if $H$ is diagonal, that is, in the free-hamiltonian eigenvector basis. Another way of stating it is by noticing that it is the spontaneous emission which chooses the basis.
Finally the same hypotheses assumed in the orthodox quantum formalism (trace conservation, probability interpretation) are also taken up when the stochastic nature of quantum systems is considered.
Note that the whole scheme is only based on a simple hypothesis: inherent randomness, the quantitative consequences of which are derived using just well-established physics-free stochastic methods.
Conclusions
===========
In this paper we have considered the hypothesis of letting quantum systems show an inherent random nature. Mathematically we express this fact using a well-established theorem on the decomposition of random variables from stochastic calculus. We then apply the usual definition of the density operator to quantum systems represented by stochastic state-vectors and impose normalization. As a result of these hypotheses we show how this stochastic evolution is determined by the usual hamiltonian $H$ of the system and a set of new operators $v_{k}(t)$ which characterize the random behaviour of the evolution. The effect of these new elements in the description of the evolution of a quantum system makes it possible to connect several originally independent facts.
Firstly, under suitable physical conditions, we may establish the general form of a completely positive Markovian evolution, i.e. a Lindblad evolution, the Lindblad operators being a more or less complicated expression of the hamiltonian $H$ and the operators $v_{k}(t)$. Secondly, by choosing the operators $v_{k}(t)$ adequately, we may reproduce the phenomenon of spontaneous emission for a two-level quantum system in the QED vacuum. Thirdly, choosing these operators in a convenient way, a system can show intrinsic decoherence, thus reducing the role of the observer in the measurement process. Finally, the formalism provides a natural way to attack the preferred basis problem, since now we have new phenomena such as spontaneous emission entering the physical picture, which suggests a basis singularization.
Nevertheless some remaining points should also be addressed. First and most important, the question of a physical principle which provides the form of the operators $v_{k}(t)$ must be investigated. In particular, in the same way as symmetry considerations usually provide a hint of the form of the hamiltonian $H$ of a system, it would be desirable to have some sort of similar reasoning to work out the form of $v_{k}(t)$ in each concrete situation. Second, though the principles of the scheme are well established above, a generalization to systems with more than two energy levels is necessary. Work in both directions is in progress.
Acknowledgments {#acknowledgments .unnumbered}
===============
We acknowledge the support of Spanish Ministry of Science and Technology under project No. BFM2000-0013. One of us (D.S.) must also acknowledge the support of Madrid Education Council under grant BOCAM 20-08-1999.
Proof of Prop. \[ExpVal\]
=========================
The proof of proposition \[ExpVal\] combines Itô's formula with the polar decomposition of operators acting on Hilbert spaces. Using (\[StEvolOper\]) in (\[DensOp\]) and making use of Itô's formula we get
$$\begin{aligned}
\rho(t)&=&\mathbb{E}U_{st}(t)\rho(0)\mathbb{E}U_{st}^{\dagger}(t)+{\nonumber}\\
\label{DensOP}&+&\sum_{nm=1}^{N^{2}}a_{nm}\int_{0}^{t}v_{n}(s)\rho(0)v_{m}^{\dagger}(s)ds,\end{aligned}$$
where the correlation matrix $\mathbb{E}[W_{t}^{n}W_{t}^{m*}]=a_{nm}t$ has been taken into account.
  ----------------------------------------------------------------------------------------------------- -- ------------------------------------------------------------------------------------------------------
  $\Lambda_{11}=-|\ell_{21}|^{2}$                                                                           $\Lambda_{12}=ih_{12}+\frac{1}{2}[(2\ell_{11}-\ell_{22})\ell_{21}^{*}-\ell_{12}\ell_{11}^{*}]$
  $\Lambda_{13}=-ih_{21}+\frac{1}{2}[(2\ell_{11}^{*}-\ell_{22}^{*})\ell_{21}-\ell_{12}^{*}\ell_{11}]$       $\Lambda_{14}=|\ell_{21}|^{2}$
  $\Lambda_{21}=ih_{21}+\frac{1}{2}(\ell_{11}\ell_{12}^{*}-\ell_{21}\ell_{22}^{*})$                         $\Lambda_{22}=-i(h_{11}-h_{22})+\frac{1}{2}[2\ell_{11}\ell_{22}^{*}-\sum_{ij=1}^{2}|\ell_{ij}|^{2}]$
  $\Lambda_{23}=\ell_{21}\ell_{12}^{*}$                                                                     $\Lambda_{24}=-ih_{21}+\frac{1}{2}[-\ell_{11}\ell_{12}^{*}+\ell_{21}\ell_{22}^{*}]$
  $\Lambda_{31}=-ih_{12}+\frac{1}{2}[\ell_{11}^{*}\ell_{12}-\ell_{21}^{*}\ell_{22}]$                        $\Lambda_{32}=\ell_{21}^{*}\ell_{12}$
  $\Lambda_{33}=i(h_{11}-h_{22})+\frac{1}{2}[2\ell_{11}^{*}\ell_{22}-\sum_{ij=1}^{2}|\ell_{ij}|^{2}]$       $\Lambda_{34}=ih_{12}+\frac{1}{2}[-\ell_{11}^{*}\ell_{12}+\ell_{21}^{*}\ell_{22}]$
  $\Lambda_{41}=|\ell_{12}|^{2}$                                                                            $\Lambda_{42}=-ih_{12}+\frac{1}{2}[\ell_{12}(2\ell_{22}^{*}-\ell_{11}^{*})-\ell_{21}^{*}\ell_{22}]$
  $\Lambda_{43}=ih_{21}+\frac{1}{2}[\ell_{12}^{*}(2\ell_{22}-\ell_{11})-\ell_{21}\ell_{22}^{*}]$            $\Lambda_{44}=-|\ell_{12}|^{2}$
  ----------------------------------------------------------------------------------------------------- -- ------------------------------------------------------------------------------------------------------
Now forcing trace preservation ${\textrm{tr}}\rho(t)={\textrm{tr}}\rho(0)$ and using the cyclic property of the trace we may write
$$\begin{aligned}
{\textrm{tr}}[\mathbb{E}U_{st}^{\dagger}(t)\mathbb{E}U_{st}(t)\rho(0)&+&{\nonumber}\\
\sum_{nm=1}^{N^{2}}a_{nm}\int_{0}^{t}v_{m}^{\dagger}(s)v_{n}(s)\,ds\,\rho(0)]&=&{\textrm{tr}}\rho(0),\end{aligned}$$
which if it is to be valid for all $\rho(0)$ implies
$$\mathbb{E}U_{st}^{\dagger}(t)\mathbb{E}U_{st}(t)+\sum_{nm=1}^{N^{2}}a_{nm}\int_{0}^{t}v_{m}^{\dagger}(s)v_{n}(s)ds=I.$$
From this and making use of the polar decomposition of $\mathbb{E}{U_{st}(t)}$, we conclude
$$\mathbb{E}{U_{st}(t)}=\mathcal{U}(t)\left(I-\sum_{nm=1}^{N^{2}}a_{nm}\int_{0}^{t}v_{m}^{\dagger}(s)v_{n}(s)ds\right)^{1/2},$$
where $\mathcal{U}(t)$ is an arbitrary unitary operator which is expressed in the usual exponential form.
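The normalization just derived can be checked numerically. The following sketch uses random assumed data: constant stochastic parts $v_{n}$ (so the $ds$ integrals reduce to multiplication by $t$) and a positive-semidefinite correlation matrix $a_{nm}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
v = rng.normal(size=(N, 2, 2)) + 1j * rng.normal(size=(N, 2, 2))
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
a = B @ B.conj().T                              # PSD covariance matrix

# constant v_n: the ds integral is multiplication by t; pick t small enough
# that I - M stays positive definite
M1 = sum(a[n, m] * v[m].conj().T @ v[n] for n in range(N) for m in range(N))
t = 0.5 / np.linalg.norm(M1, 2)
M = t * M1                                      # Hermitian, PSD, norm <= 1/2

w, P = np.linalg.eigh(M)
EU = P @ np.diag(np.sqrt(1 - w)) @ P.conj().T   # (I - M)^{1/2}, with U(t) = I

rho0 = np.array([[0.3, 0.1j], [-0.1j, 0.7]])
rho_t = EU @ rho0 @ EU.conj().T \
      + t * sum(a[n, m] * v[n] @ rho0 @ v[m].conj().T for n in range(N) for m in range(N))
assert abs(np.trace(rho_t) - np.trace(rho0)) < 1e-10
```

The trace identity holds because ${\textrm{tr}}(\mathbb{E}U\rho_{0}\mathbb{E}U^{\dagger})={\textrm{tr}}(\rho_{0}(I-M))$ while the stochastic part contributes ${\textrm{tr}}(\rho_{0}M)$.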
Proof of Prop. \[Prop4\] {#AppendProp4}
========================
The idea of the proof is to use the linearity of the space $\mathcal{M}_{2}(\mathbb{C})$. We will firstly focus upon the canonical basis $\{E_{11},E_{12},E_{21},E_{22}\}$ which we shall rename respectively as $\{E_{1},E_{2},E_{3},E_{4}\}$. Then we calculate $L[E_{k}]$ for each $k$. Denoting $$L[E_{n}]=\sum_{m=1}^{4}\Lambda_{nm}E_{m},$$ the results are shown in table \[LamCoef\].
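Table \[LamCoef\] can be spot-checked numerically by applying $L$ directly to each $E_{n}$ and reading the coefficients $\Lambda_{nm}$ off the entries of $L[E_{n}]$ (a sketch with random assumed $H$ and $\ell$):

```python
import numpy as np

rng = np.random.default_rng(2)
Hr = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (Hr + Hr.conj().T) / 2                      # selfadjoint
l = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

def L(X):
    ld = l.conj().T @ l
    return -1j * (H @ X - X @ H) + l @ X @ l.conj().T - 0.5 * (ld @ X + X @ ld)

E = [np.array(m, complex) for m in
     ([[1, 0], [0, 0]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 1]])]
Lam = np.array([L(En).flatten() for En in E])   # Lam[n-1, m-1] = Lambda_nm

assert np.isclose(Lam[0, 0], -abs(l[1, 0]) ** 2)             # Lambda_11
assert np.isclose(Lam[0, 3], abs(l[1, 0]) ** 2)              # Lambda_14
assert np.isclose(Lam[1, 2], l[1, 0] * np.conj(l[0, 1]))     # Lambda_23
assert np.isclose(Lam[3, 0], abs(l[0, 1]) ** 2)              # Lambda_41
assert np.isclose(Lam[1, 0], 1j * H[1, 0]                    # Lambda_21
                  + 0.5 * (l[0, 0] * np.conj(l[0, 1]) - l[1, 0] * np.conj(l[1, 1])))
```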
We will agree that these 16 quantities form the $4\times 4$ matrix $\Lambda$. By virtue of the linearity of $\mathcal{M}_{2}(\mathbb{C})$, we write
$$X=\sum_{n=1}^{4}x_{n}E_{n}\Rightarrow L[X]=\sum_{n=1}^{4}x_{n}L[E_{n}].$$
Then the condition $L[X]=0$ transforms into
$$\label{NullCond}
(x_{1}\ x_{2}\ x_{3}\ x_{4})\cdot\Lambda=(0\ 0\ 0\ 0).$$
Now we must restrict the form of $X$, since it must be of the type $X=v\rho_{0} v^{\dagger}$, where $\rho_{0}$ is a density matrix (a positive selfadjoint matrix with unit trace). Using again the linearity of $\mathcal{M}_{2}(\mathbb{C})$, we may express $$v=\sum_{n=1}^{4}\beta_{n}E_{n}.$$ Then $X$ may be written as $X=\sum_{n}\alpha_{n}E_{n}$ with
$$\begin{aligned}
\alpha_{1}&={\left(\begin{array}{cc} \beta_{1} & \beta_{2} \end{array} \right)}\rho_{0}{\left(\begin{array}{c} \beta_{1}^{*} \\ \beta_{2}^{*} \end{array}\right)}, &
\alpha_{2}&={\left(\begin{array}{cc} \beta_{1} & \beta_{2} \end{array} \right)}\rho_{0}{\left(\begin{array}{c} \beta_{3}^{*} \\ \beta_{4}^{*} \end{array}\right)},{\nonumber}\\
\alpha_{3}&={\left(\begin{array}{cc} \beta_{3} & \beta_{4} \end{array} \right)}\rho_{0}{\left(\begin{array}{c} \beta_{1}^{*} \\ \beta_{2}^{*} \end{array}\right)}, &
\alpha_{4}&={\left(\begin{array}{cc} \beta_{3} & \beta_{4} \end{array} \right)}\rho_{0}{\left(\begin{array}{c} \beta_{3}^{*} \\ \beta_{4}^{*} \end{array}\right)}.\end{aligned}$$
These $\alpha_{i}$'s show the following immediate properties:
1. $\alpha_{1}=0\quad\forall\rho_{0}\Leftrightarrow\beta_{1}=\beta_{2}=0\qquad\Rightarrow\alpha_{2}=0\quad\forall\rho_{0}$
2. $\alpha_{4}=0\quad\forall\rho_{0}\Leftrightarrow\beta_{3}=\beta_{4}=0\qquad\Rightarrow\alpha_{3}=0\quad\forall\rho_{0}$
3. $\alpha_{2}=0\quad\forall\rho_{0}\Leftrightarrow\alpha_{3}=0\quad\forall\rho_{0}$
4. $\alpha_{2}=0\quad\forall\rho_{0}\Rightarrow \beta_{1}=\beta_{2}=0 \textrm{ or }\beta_{3}=\beta_{4}=0$. In the first case we get $\alpha_{1}=0$ whereas in the second we get $\alpha_{4}=0$.
Then, in matrix form, $X=v\rho_{0}v^{\dagger}$ may take the following forms:
- If $v={\left(\begin{array}{cc}
\hspace*{-1.5mm} \beta_{1} & \beta_{2}\hspace*{-1.5mm} \\
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm}
\end{array} \right)}$, then $X=\alpha_{1}E_{1}$.
- If $v={\left(\begin{array}{cc}
\hspace*{-1.5mm} 0 & 0\hspace*{-1.5mm} \\
\hspace*{-1.5mm} \beta_{3} & \beta_{4}\hspace*{-1.5mm}
\end{array} \right)}$, then $X=\alpha_{4}E_{4}$.
- If $v={\left(\begin{array}{cc}
\hspace*{-1.5mm} \beta_{1} & \beta_{2}\hspace*{-1.5mm} \\
\hspace*{-1.5mm} \beta_{3} & \beta_{4}\hspace*{-1.5mm}
\end{array} \right)}$ with at least three $\beta_{k}$’s not equal to $0$, then $X=\sum_{n=1}^{4}\alpha_{n}E_{n}$.
Substituting these $\alpha_{i}$’s into (\[NullCond\]) and taking into account the expressions for the $\Lambda_{ij}$’s, we obtain the conditions announced in Prop. \[Prop4\].
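As a quick numerical sanity check (not part of the argument), the decomposition $X=\sum_{n=1}^{4}\alpha_{n}E_{n}$ can be verified directly. The sketch below assumes the $E_{n}$ denote the standard matrix units read row-wise ($E_{1}=e_{11}$, $E_{2}=e_{12}$, $E_{3}=e_{21}$, $E_{4}=e_{22}$), and reconstructs $X=v\rho_{0}v^{\dagger}$ from the four coefficients for a random density matrix $\rho_{0}$ and an arbitrary complex $v$.

```python
import numpy as np

rng = np.random.default_rng(0)

# random density matrix rho_0: positive semi-definite with unit trace
G = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho0 = G @ G.conj().T
rho0 /= np.trace(rho0).real

# arbitrary complex matrix v with rows (beta_1, beta_2) and (beta_3, beta_4)
v = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
b1, b2 = v[0]
b3, b4 = v[1]

X = v @ rho0 @ v.conj().T

# the four coefficients alpha_1, ..., alpha_4 exactly as defined above
alpha1 = np.array([b1, b2]) @ rho0 @ np.array([b1, b2]).conj()
alpha2 = np.array([b1, b2]) @ rho0 @ np.array([b3, b4]).conj()
alpha3 = np.array([b3, b4]) @ rho0 @ np.array([b1, b2]).conj()
alpha4 = np.array([b3, b4]) @ rho0 @ np.array([b3, b4]).conj()

# matrix units E_1, ..., E_4 read row-wise (an assumption on the notation)
E = [np.array(m, dtype=complex) for m in ([[1, 0], [0, 0]], [[0, 1], [0, 0]],
                                          [[0, 0], [1, 0]], [[0, 0], [0, 1]])]
X_rebuilt = sum(a * En for a, En in zip((alpha1, alpha2, alpha3, alpha4), E))

assert np.allclose(X, X_rebuilt)
assert alpha1.real >= 0 and abs(alpha1.imag) < 1e-12  # quadratic form in a PSD rho_0
```

Note that, consistently with property 1 above, $\alpha_{1}$ (and likewise $\alpha_{4}$) comes out real and nonnegative, being a quadratic form in the positive semi-definite $\rho_{0}$.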
The Jaynes-Cummings model
=========================
The Jaynes-Cummings Hamiltonian can be straightforwardly diagonalized by noticing that in the $|\pm,n\rangle$ basis it adopts the matrix representation\
$$\left(\begin{smallmatrix}
-\omega_{0}/2&0&0&\cdots&0&0&\cdots\\
0&\omega_{0}/2&\epsilon&\cdots&0&0&\cdots\\
0&\epsilon&-\omega_{0}/2+\omega&\cdots&0&0&\cdots\\
\vdots&\vdots&\vdots&\ddots&0&0&\cdots\\
0 & \cdots & \cdots & 0 & \omega_{0}/2+n\omega & \epsilon\sqrt{n+1} & 0 \\
0 & \cdots & \cdots & 0 & \epsilon\sqrt{n+1} & -\omega_{0}/2+(n+1)\omega & 0 \\
0 & \cdots & \cdots & \cdots & 0 & 0 & \ddots
\end{smallmatrix}\right)$$ from which we extract the eigenvalues ($n\ge 0$)
$$\begin{aligned}
E_{n+1}^{\pm}&=&(n+\frac{1}{2})\omega\pm\sqrt{\left(\frac{\omega-\omega_{0}}{2}\right)^{2}+(n+1)\epsilon^{2}}\\
E_{0}^{-}&=&-\omega_{0}/2\end{aligned}$$
and the eigenvectors ($n\ge 0$)
$$\begin{aligned}
|u_{n+1}^{+}\rangle&=&\phantom{-}\cos\theta_{n+1}|+,n\rangle+\sin\theta_{n+1}|-,n+1\rangle\\
|u_{n+1}^{-}\rangle&=&-\sin\theta_{n+1}|+,n\rangle+\cos\theta_{n+1}|-,n+1\rangle\\
|u_{0}^{-}\rangle&=&|-,0\rangle,\end{aligned}$$
where
$$\begin{aligned}
\tan\theta_{n+1}=\frac{\frac{\omega-\omega_{0}}{2}+\sqrt{\left(\frac{\omega-\omega_{0}}{2}\right)^{2}+(n+1)\epsilon^{2}}}{\epsilon\sqrt{n+1}}.\end{aligned}$$
These are the eigenvalues and eigenvectors of the dressed atom, which is commonly used in quantum optics (cf. [@Peng]). Notice that the set $\{|u_{n}^{\pm}\rangle,|u_{0}^{-}\rangle\}$ constitutes an orthonormal basis for the joint Hilbert space of atom+field.
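These spectral formulas are easy to verify numerically. The sketch below (parameter values are arbitrary) diagonalizes the $2\times2$ block coupling $|+,n\rangle$ and $|-,n+1\rangle$ and checks both the eigenvalues $E_{n+1}^{\pm}$ and the eigenvector obtained from the mixing angle $\theta_{n+1}$.

```python
import numpy as np

w0, w, eps = 1.0, 1.3, 0.2   # arbitrary (detuned) parameter values
n = 3

# 2x2 block of the Jaynes-Cummings Hamiltonian coupling |+,n> and |-,n+1>
H = np.array([[w0/2 + n*w,         eps*np.sqrt(n + 1)],
              [eps*np.sqrt(n + 1), -w0/2 + (n + 1)*w]])

R = np.sqrt(((w - w0)/2)**2 + (n + 1)*eps**2)
E_plus, E_minus = (n + 0.5)*w + R, (n + 0.5)*w - R

# eigvalsh returns the eigenvalues in ascending order
assert np.allclose(np.linalg.eigvalsh(H), [E_minus, E_plus])

# eigenvector |u_{n+1}^+> via the mixing angle theta_{n+1}
theta = np.arctan(((w - w0)/2 + R) / (eps*np.sqrt(n + 1)))
u_plus = np.array([np.cos(theta), np.sin(theta)])
assert np.allclose(H @ u_plus, E_plus * u_plus)
```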
[99]{}
H.M. Wiseman and L. Diósi, submitted to J. Chem. Phys., special issue on “Open Quantum Systems”. quant-ph/0012016 4-Dec-2000.
P.A.M. Dirac, *Principles of Quantum Mechanics, 4th revised edition* (Oxford University Press, Oxford, 1954).
B.G. Englert, M.O. Scully and H. Walther, Am. J. Phys. **67** (4), 325 (1999).
G.C. Ghirardi, P. Pearle and A. Rimini, Phys. Rev. A **42**, 78 (1990).
N. Gisin, Helv. Phys. Acta **63**, 929 (1990).
D. Giulini *et al.*, *Decoherence and the Appearance of a Classical World in Quantum Theory* (Springer, Berlin, 1996).
N.G. van Kampen, *Stochastic Processes in Physics and Chemistry* (North-Holland Personal Library, Amsterdam, 1981).
D. Nualart, *The Malliavin Calculus and Other Related Topics* (Springer, New York, 1995).
C.W. Gardiner, *Handbook of Stochastic Methods, 2nd ed.* (Springer, Berlin, 1985).
K.R. Parthasarathy, *An Introduction to Quantum Stochastic Calculus* (Birkhauser, Berlin, 1992).
S.L. Adler, Phys. Lett. A **265**, 58 (2000).
G. Lindblad, Commun. Math. Phys. **48**, 119 (1976).\
V. Gorini, A. Kossakowski and E.C.G. Sudarshan, J. Math. Phys. **17**, 821 (1976).
V. Gorini *et al.*, Rep. Math. Phys. **13**, 149 (1978).
R. Alicki and K. Lendi, *Quantum Dynamical Semigroups and Applications*, edited by H. Araki *et al.*, Lecture Notes in Physics vol. 286 (Springer, Berlin, 1987).
I.C. Percival, *Quantum State Diffusion* (Cambridge University Press, Cambridge, 1999).
J.-S. Peng and G.-X.Li, *Introduction to Modern Quantum Optics* (World Scientific, Singapore, 1998).
R. Loudon, *The Quantum Theory of Light* (Oxford University Press, Oxford, 1998).
[^1]: E-mail: [email protected]
[^2]: E-mail: [email protected]
[^3]: Cf. e.g. [@Gardiner] for a general reference on stochastic methods.
[^4]: We are aware that the following condition only refers to partial isometry and not unitarity itself. We, for the time being, do not care about mathematical refinements.
[^5]: The lack of a hamiltonian part can be viewed as a sort of hamiltonian-interaction picture election.
[^6]: As usual $\rho_{ij}(t)$ denote the $ij$th entry of the density matrix $\rho(t)$.
[^7]: The rotating wave approximation has already been assumed.
[^8]: We denote $\lambda_{j}^{\pm}\circ W_{j}^{\pm}(t)\equiv\int_{0}^{t}\lambda_{j}^{\pm}(s)dW_{j}^{\pm}(s)$.
---
abstract: 'We present a way of constructing multi-time-step monolithic coupling methods for elastodynamics. The governing equations for constrained multiple subdomains are written in dual Schur form and enforce the continuity of velocities at system time levels. The resulting equations will be in the form of differential-algebraic equations. To crystallize the ideas we shall employ Newmark family of time-stepping schemes. The proposed method can handle multiple subdomains, and allows different time-steps as well as different time stepping schemes from the Newmark family in different subdomains. We shall use the energy method to assess the numerical stability, and quantify the influence of perturbations under the proposed coupling method. Two different notions of energy preservation are introduced and employed to assess the performance of the proposed method. Several numerical examples are presented to illustrate the accuracy and stability properties of the proposed method. We shall also compare the proposed multi-time-step coupling method with some other methods available in the literature.'
author:
- 'S. Karimi'
- |
K. B. Nakshatrala\
[Department of Civil and Environmental Engineering, University of Houston, Houston, Texas 77204–4003.]{}\
[Correspondence to: ***e-mail:*** [email protected], ***Phone:*** +1-713-743-4418]{}
bibliography:
- 'Master\_References/Master\_References.bib'
- 'Master\_References/Books.bib'
title: 'On multi-time-step monolithic coupling algorithms for elastodynamics'
---
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
The authors acknowledge the support of the National Science Foundation under Grant no. CMMI 1068181. The opinions expressed in this paper are those of the authors and do not necessarily reflect that of the sponsor(s).
---
abstract: 'This paper considers extreme values attained by a centered, multidimensional Gaussian process $X(t)= (X_1(t),\ldots,X_n(t))$ minus drift $d(t)=(d_1(t),\ldots,d_n(t))$, on an arbitrary set $T$. Under mild regularity conditions, we establish the asymptotics of $$\log{\mathbb P}\left(\exists{t\in T}:\bigcap_{i=1}^n\left\{X_i(t)-d_i(t)>q_iu\right\}\right),$$ for positive thresholds $q_i>0$, $i=1,\ldots,n$ and $u{\to\infty}$. Our findings generalize and extend previously known results for the single-dimensional and two-dimensional cases. A number of examples illustrate the theory.'
address:
- 'Mathematical Institute, University of Wrocław, pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland.'
- 'E[urandom]{}, Eindhoven University of Technology, the Netherlands; Korteweg-de Vries Institute for Mathematics, University of Amsterdam, the Netherlands.'
- 'Korteweg-de Vries Institute for Mathematics, University of Amsterdam, the Netherlands; E[urandom]{}, Eindhoven University of Technology, the Netherlands; CWI, Amsterdam, the Netherlands'
- 'Mathematical Institute, University of Wrocław, pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland.'
author:
- 'K. Dębicki'
- 'K.M. Kosiński'
- 'M. Mandjes'
- 'T. Rolski'
bibliography:
- 'Gaussian.bib'
title: |
Extremes of multidimensional\
Gaussian processes
---
[^1]
Introduction {#Intro}
============
Owing to its relevance in various application domains, in the theory of stochastic processes, substantial attention has been paid to estimating the tail distribution of the maximum value attained. In mathematical terms, the setting considered involves an ${\mathbb R}$-valued stochastic process $X=\{X(t): t\in T\}$ for some arbitrary set $T$ and a threshold level $u> 0$, where the focus is on characterizing the probability $$\label{eq:pu}
{\mathbb P}\left(\sup_{t\in T} X(t)>u\right)={\mathbb P}\left(\exists{t\in T}: X(t)>u\right).$$ More specifically, the case in which $X$ is a Gaussian process has been studied in detail. This hardly led to any explicit results for (\[eq:pu\]), but there is quite a large body of literature on results for the asymptotic regime in which $u$ grows large. The prototype case dealt with a centered Gaussian process with bounded trajectories for which the [*logarithmic asymptotics*]{} were found: it was shown that $$\label{eq:log}
\lim_{u{\to\infty}}u^{-2}\log {\mathbb P}\left(\sup_{t\in T}X(t)>u\right)=-\left(2\sigma_T^2\right)^{-1},$$ where $$\sigma_T^2:=\sup_{t\in T} {\mathbb E}X^2(t).$$ See @Adler90 [p. 42] or @Lifshits95 [Section 12] for this and related results. The monographs @Lifshits95 and @Piterbarg96 contain more refined results: under appropriate conditions, an explicit function $\phi(u)$ is given such that the ratio of (\[eq:pu\]) and $\phi(u)$ tends to 1 as $u{\to\infty}$ (so-called [*exact asymptotics*]{}). The logarithmic asymptotics can easily be extended to the case of noncentered Gaussian processes if the mean function is bounded. The situation gets interesting if both trajectories and the mean function of the process are unbounded. In this respect we mention @Duffield95 and @Debicki99, where the logarithmic asymptotics of ${\mathbb P}(\sup_{t\ge0} (X(t)-d(t))>u)$ for general centered Gaussian processes $X$, under some regularity assumptions on the drift function $d$, were derived; see also @Husler99, @Dieker05a and references therein.
While the above results all relate to one-dimensional suprema, considerably less attention has been paid to their multidimensional counterparts. One of few exceptions is provided by the work of @Piterbarg05, who considered the case of two ${\mathbb R}$-valued, possibly dependent, centered Gaussian processes $\{X_1(t_1):t_1\in T_1\}$ and $\{X_2(t_2):t_2\in T_2\}$. They found the logarithmic asymptotics of $$\label{eq:PS}
{\mathbb P}(\exists{(t_1,t_2)\in T}: X_1(t_1)>u, X_2(t_2)>u)$$ for some $T\subseteq T_1\times T_2$, under the assumption that the trajectories of $X_1$ and $X_2$ are bounded. In the sequel we shall use the boldface notation for vectors with the understanding that for instance ${\boldsymbol}x =(x_1,\ldots, x_n)$ and $n$ is an implicit dimension size.
In this paper our objective is to obtain the logarithmic asymptotics of (following the convention that vectors are written in bold) $$\label{goal}
P(u):={\mathbb P}\left( \exists{{\boldsymbol}t\in T}:\bigcap_{i=1}^n \{X_i({\boldsymbol}t)-d_i({\boldsymbol}t)>q_i u\}\right);$$ here $\{{\boldsymbol}X({{\boldsymbol t}}):{{\boldsymbol t}}\in T\}$, with ${\boldsymbol}X({{\boldsymbol t}})= (X_1({{\boldsymbol t}}),\ldots,X_n({{\boldsymbol t}}))'$, is an ${\mathbb R}^n$-valued centered Gaussian processes defined on an arbitrary set $T\subseteq {\mathbb R}^m$, for some $m,n\in{\mathbb N}$, the $d_i(\cdot)$ are drift functions and $q_i>0$ are threshold levels, $i=1,\ldots,n$. Our setup is rich enough to cover both of the cases in which $P(u)$ corresponds to the event in which (i) it is required that there is a [*single*]{} time epoch $t\in{\mathbb R}$ such that $X_i(t)-d_i(t)>q_i u$ for all $i=1,\ldots,n$ and (ii) there are $n$ epochs $(t_1,\ldots,t_n)$ such that $X_i(t_i)-d_i(t_i)>q_i u$ for all $i=1,\ldots,n$. We get back to this issue in detail in Remark \[rem:sim\], where it is also noted that the theory covers a variety of situations between these two extreme situations.
Compared to the one-dimensional setting, the multidimensional case requires various technical complications to be settled. The derivations of logarithmic asymptotics usually rely on an upper and lower bound, where the latter is based on the inequality $$P(u)\ge
\sup_{{{\boldsymbol}t}\in T}{\mathbb P}\left(\bigcap_{i=1}^n \{X_i({{\boldsymbol}t})-d_i({{\boldsymbol}t})>q_i u\}\right).$$ Strikingly, in terms of the logarithmic asymptotics, this lower bound is actually tight, which is essentially due to the common ‘large deviations heuristic’: the decay rate of the probability of a union of events coincides with the decay rate of the most likely event among these events. A first contribution of the present paper is that we show that this argument essentially carries over to the multidimensional setting. In order to obtain the lower bound one needs asymptotics of tail probabilities that correspond to multivariate normal distributions. In this domain a wealth of results are available (see, e.g., @Hashorva05 and references therein), but for our purposes we need estimates which are, in some specific sense, uniform. A version of such estimates, that is tailored to our needs, is presented in Lemma \[lemma:lower\].
The upper bound is based on what we call a ‘saddle point equality’ presented in Lemma \[lemma:saddle\]. It essentially allows us to approximate suprema of the multidimensional Gaussian process ${\boldsymbol}X$ by a specific one-dimensional Gaussian process, namely a properly weighted sum of the coordinates $X_i$ of ${\boldsymbol}X$. Formally, we identify weights $w_i=w_i(t,u)\ge 0$ such that the inequality $$P(u)\le
{\mathbb P}\left(\exists{{{\boldsymbol}t}\in T}:\sum_{i=1}^n w_iX_i({{\boldsymbol t}})>\sum_{i=1}^n w_i(uq_i+d_i({{\boldsymbol t}}))\right),$$ is logarithmically asymptotically exact, as $u\to\infty$. The reduction of the dimension of the problem allows us to use one-dimensional techniques (such as the celebrated Borell inequality). Interestingly, the optimal weights can be interpreted in terms of the solution to a convex programming problem that corresponds to an associated Legendre transform of the covariance matrix of ${\boldsymbol}X$. A different weighting technique has been developed in @Piterbarg05 for the case $n=2$, but without a motivation for the weights chosen. We recover the result from [@Piterbarg05] in Remark \[rmk:Pit\]. Our analysis of extends the results from [@Debicki99; @Piterbarg05], in the first place because ${\mathbb R}^n$-valued Gaussian processes are covered (for arbitrary $n\in{\mathbb N}$). The other main improvement relates to the considerable generality in terms of the drift functions allowed; these were not covered in [@Piterbarg05].
The paper is organized as follows. In Section \[MaP\] we introduce notation, describe in detail objects of main interest to us, and state our main result; we also pay special attention to the rationale behind the assumptions that we impose. In Section \[Ex\] we illustrate the main theorem by presenting a number of examples; one of these relates to Gaussian processes with regularly varying variance functions. We also explain the potential application of our result in queueing and insurance theory. In Section \[PMT\] we describe how the multidimensional process ${\boldsymbol}X$ can be approximated by a one-dimensional process $Z$, obtained by appropriately weighting the coordinates $X_i$. We prove some preliminary results about the characteristics of the process $Z$. This section also contains the saddle point equality mentioned above, Lemma \[lemma:saddle\], which is the crucial element of the proof of our main result. Section \[PMT\] also contains all other lemmas needed to prove Theorem \[thm:main\], as well as the proof of our main result itself.
Model, notation, and the main theorem {#MaP}
=====================================
In this section we formally introduce the model, state the main theorem, and provide the intuition behind the assumptions imposed.
Model and notation
------------------
Let $T\subseteq{\mathbb R}^m$, for some $m\in{\mathbb N}$. In this paper we consider an ${\mathbb R}^n$-valued (separable) centered Gaussian process ${\boldsymbol}X\equiv\{{\boldsymbol}X({{\boldsymbol t}}),{{\boldsymbol t}}\in T\}$ given by ${\boldsymbol}X({{\boldsymbol t}})=(X_1({{\boldsymbol t}}),\ldots, X_n({{\boldsymbol t}}))'$. Let the so-called drift function be denoted by ${\boldsymbol}d({{\boldsymbol t}})=(d_1({{\boldsymbol t}}),\ldots,d_n({{\boldsymbol t}}))'$. Now, denote the covariance matrix of ${\boldsymbol}X({{\boldsymbol t}})$ by $\Sigma_{{{\boldsymbol t}}}$. Throughout the paper it is assumed that the matrix $\Sigma_{{{\boldsymbol t}}}$ is invertible for every ${{\boldsymbol t}}\in T$. Here and in the sequel, we use the following notation and conventions:
- We say ${\boldsymbol}v\ge {\boldsymbol}w$ if $v_i\ge w_i$ for all $i=1,\ldots, n.$
- We write $\operatorname{diag}({\boldsymbol}v)$ for the diagonal matrix with $v_i$ on the $i$th position of the diagonal.
- We define ${\boldsymbol}v{\boldsymbol}w:=\operatorname{diag}({\boldsymbol}v) {\boldsymbol}w' =(v_1w_1,\ldots, v_n w_n)'.$
- For $a\in{\mathbb R}$, we let ${\boldsymbol}i(a)$ be an $n$-dimensional vector $(a,\ldots,a)'$ and also let ${\boldsymbol}0 =(0,\ldots,0)'$.
- We adopt the usual definitions of norms of vectors $\|{\boldsymbol}x \|:=({\left\langle {\boldsymbol}x, {\boldsymbol}x \right\rangle})^{1/2}$, where ${\left\langle \cdot, \cdot \right\rangle}$ is the Euclidean inner product.
- We let $f(u)\sim g(u)$ denote that $\lim_{u{\to\infty}}f(u)/g(u)=1.$
- We write ${\mathbb R}^n_+:=\{{\boldsymbol}x\in{\mathbb R}^n:{\boldsymbol}x\ge0, {\boldsymbol}x\ne{\boldsymbol}0\}.$
Throughout the paper not all vectors are of dimension $n$ (for instance ${\boldsymbol}t$ is of dimension $m$), but the above notation should be understood with obvious changes.
With each $\Sigma_{{{\boldsymbol t}}}$ we associate the matrix $K_{{{\boldsymbol t}}}=(k_{i,j}({{\boldsymbol t}}))_{i,j\le n}$, defined as $$K_{{{\boldsymbol t}}} =\operatorname{diag}(\partial_{1,1}^{-1/2}({{\boldsymbol t}}),\ldots, \partial_{n,n}^{-1/2}({{\boldsymbol t}})) \Sigma_{{{\boldsymbol t}}}^{-1}
\operatorname{diag}(\partial_{1,1}^{-1/2}({{\boldsymbol t}}),\ldots, \partial_{n,n}^{-1/2}({{\boldsymbol t}}))$$ with $\Sigma_{{{\boldsymbol t}}}^{-1}=(\partial_{i,j}({{\boldsymbol t}}))_{i,j\le n}$. We mention that $k_{i,j}({{\boldsymbol t}})\in[-1,1]$ and that $-k_{i,j}({{\boldsymbol t}})$ is commonly interpreted as some sort of partial correlation between $X_i({{\boldsymbol t}})$ and $X_j({{\boldsymbol t}})$ controlling all other variables $X_k({{\boldsymbol t}})$, $k\neq i,j$.
Main result
-----------
Throughout the paper, we impose the following assumptions.
[**A1**]{} $\sup_{{{\boldsymbol t}}\in T} k_{i,j}({{\boldsymbol t}}) <1$ for all $i\ne j,$ $i,j=1,\ldots,n.$
[**A2**]{} $\sup_{{{\boldsymbol t}}\in T}(X_i({{\boldsymbol t}})-\varepsilon d_i({{\boldsymbol t}}))<\infty$ [a.s. for all]{} $i=1,\ldots,n$ and all $\varepsilon\in(0,1]$.
If a process ${\boldsymbol}X$ and a drift function ${\boldsymbol}d$ comply with assumptions [**A1**]{}-[**A2**]{}, then to shorten the notation, we will write that $({\boldsymbol}X,{\boldsymbol}d)$ satisfies [**A1**]{}-[**A2**]{}.
For a point ${{\boldsymbol t}}\in T$ and a vector ${\boldsymbol}q>{\boldsymbol}0$, define $$\begin{aligned}
M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u,{{\boldsymbol t}})&:=\inf_{ {\boldsymbol}v\ge u {\boldsymbol}q} {\left\langle {\boldsymbol}v+ {\boldsymbol}d({{\boldsymbol t}}), \Sigma_{{{\boldsymbol t}}}^{-1} ( {\boldsymbol}v+ {\boldsymbol}d({{\boldsymbol t}})) \right\rangle},\\
M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u;T)&:={\frac{1}{2}}\inf_{{{\boldsymbol t}}\in T}M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u,{{\boldsymbol t}}).\end{aligned}$$
With these preliminaries we are ready to state our main result. The following theorem can be seen as an $n$-dimensional extension of [@Piterbarg05 Theorem 1] and [@Debicki99 Theorem 2.1].
\[thm:main\] Assume that $({\boldsymbol}X,{\boldsymbol}d)$ satisfies [**A1**]{}-[**A2**]{}. Then, for any ${\boldsymbol}q >{\boldsymbol}0$, $$\label{thm:main:eq:2}
\log{\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}:{\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}})>u {\boldsymbol}q\right)
\sim -M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u;T){\quad\text{as}\quad u{\to\infty}}.$$
\[rem:sim\] The result stated in Theorem \[thm:main\] enables us to analyze, with $T_i\subseteq{\mathbb R}$, $$\begin{aligned}
{\mathbb P}\left(\bigcap_{i=1}^n\left\{\sup_{t_i\in T_i}\left( X_i(t_i)-d_i(t_i) \right)>u q_i\right\}\right).\label{multi_sup}\end{aligned}$$ To see this, let $T:=T_1\times\ldots\times T_n$. Also define processes $\{Y_i({\boldsymbol}t):{\boldsymbol}t\in T\}$, $i=1,\ldots, n$, such that $Y_i({\boldsymbol}t):= X_i(t_i)$, for $i=1,\ldots,n$. Analogously, let $m_i({\boldsymbol}t):= d_i(t_i)$, $i=1,\ldots,n$. Then (\[multi\_sup\]) equals $${\mathbb P}\left( \exists{{\boldsymbol}t\in T}:{\boldsymbol}Y ({\boldsymbol}t)-{\boldsymbol}m({\boldsymbol}t)>u {\boldsymbol}q \right),$$ which, under the proviso that [**A1**]{}-[**A2**]{} are complied with by the newly constructed $({\boldsymbol}Y,{\boldsymbol}m)$, fits in the framework of Theorem \[thm:main\]. This example naturally extends to the situation where the sets $T_i$ are of dimension higher than 1.
Discussion of the assumptions
-----------------------------
In this subsection we motivate the assumptions that we imposed.
\[rmk:mainassum\]Assumption [**A1**]{} plays a crucial role in the proof of Lemma \[lemma:lower\]. It can be geometrically interpreted as follows. For a fixed ${{\boldsymbol t}}\in T$, the distribution of ${\boldsymbol}X({{\boldsymbol t}})$ equals that of $B_{{{\boldsymbol t}}}\, {\mathcal{N}}$, where $B_{{{\boldsymbol t}}}$ is a matrix such that $\Sigma_{{{\boldsymbol t}}}= B_{{{\boldsymbol t}}}B_{{{\boldsymbol t}}}'$ and ${\mathcal{N}}$ is an ${\mathbb R}^n$-valued standard normal random variable. For some quadrant $Q_{{{\boldsymbol t}}}$, we need in the proof of Lemma \[lemma:lower\] a lower estimate of ${\mathbb P}({\boldsymbol}X({{\boldsymbol t}}) \in Q_{{{\boldsymbol t}}})={\mathbb P}({\mathcal{N}}\in B_{{{\boldsymbol t}}}^{-1} Q_{{{\boldsymbol t}}})$. For $i=1,\ldots,n$ let ${\boldsymbol}e_i$ be, as usual, the standard basis vectors of ${\mathbb R}^n$. Then the cosine of the angle $\alpha_{i,j}$ between $B_{t}^{-1} {\boldsymbol}e_i$ and $B_{t}^{-1} {\boldsymbol}e_j$ is given by $$\cos(\alpha_{i,j})=\frac{{\left\langle B_{{{\boldsymbol t}}}^{-1} {\boldsymbol}e_i, B_{{{\boldsymbol t}}}^{-1} {\boldsymbol}e_j \right\rangle}}{\|B_{{{\boldsymbol t}}}^{-1} {\boldsymbol}e_i\|\|B_{{{\boldsymbol t}}}^{-1} {\boldsymbol}e_j\|}
=\frac{{\left\langle {\boldsymbol}e_i, \Sigma_{{{\boldsymbol t}}}^{-1} {\boldsymbol}e_j \right\rangle}}{\|B_{{{\boldsymbol t}}}^{-1} {\boldsymbol}e_i\|\|B_{{{\boldsymbol t}}}^{-1} {\boldsymbol}e_j\|}=
\frac{\partial_{i,j}({{\boldsymbol t}})}{\sqrt{\partial_{i,i}({{\boldsymbol t}})\partial_{j,j}({{\boldsymbol t}})}}=k_{i,j}({{\boldsymbol t}}).$$ We thus observe that [**A1**]{} entails that, for all ${{\boldsymbol t}}\in T$, there is no pair of vectors $B_{{{\boldsymbol t}}}^{-1} {\boldsymbol}e_i$ and $B_{{{\boldsymbol t}}}^{-1} {\boldsymbol}e_j$, with $i\ne j$, that ‘essentially coincide’, i.e., the angles remain bounded away from 0. Therefore, for any ${\boldsymbol}x \in B_{{{\boldsymbol t}}}^{-1} Q_{{{\boldsymbol t}}}$, one can always find a set $A_{{{\boldsymbol t}}}$ such that ${\boldsymbol}x\in A_{{{\boldsymbol t}}}\subset B_{{{\boldsymbol t}}}^{-1} Q_{{{\boldsymbol t}}}$ and $A_{{{\boldsymbol t}}}$ has a diameter that is bounded, and a volume that is bounded away from zero, [*uniformly*]{} in ${{\boldsymbol t}}\in T$.
For $\varepsilon=1$, assumption [**A2**]{} assures that the event $$\bigcup_{{{\boldsymbol t}}\in T}\{{\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}})>u{\boldsymbol}q\}$$ is not satisfied trivially. The following example shows that if [**A2**]{} is not complied with, then it is not ensured that we remain in the realm of exponential decay. Consider a one-dimensional case in which $X\equiv\{X(t):t\ge0\}$ is a standard Brownian motion, and for any $\delta>0$ let $d(t):=(1+\delta)\sqrt{2t\log\log t}$. From the law of the iterated logarithm we conclude that the process $X$ does not satisfy [**A2**]{} for every $\varepsilon\in(0,1]$. On the other hand we have (take $t:=u^4$) $${\mathbb P}\left(\sup_{t\ge0} \left(X(t)-(1+\delta)\sqrt{2t\log\log t}\right)>u\right)
\ge
{\mathbb P}\left(\frac{u{\mathcal{N}}}{1+(1+\delta)u\sqrt{2\log(4\log u)}}>1\right),$$ where here ${\mathcal{N}}$ is the real-valued standard normal random variable. On the logarithmic scale the latter probability behaves roughly, for $u$ large, as $$-(1+\delta)^2\log\log u.$$ For the case of $n=1$, [**A2**]{} has been required in [@Debicki99 Theorem 2.1] as well.
\[r4\] The drift functions $d_i$, $i=1,\ldots,n$, are not assumed to be increasing, but under assumption [**A2**]{} we have $\ell_i:=\inf_{{{\boldsymbol t}}\in T} d_i({{\boldsymbol t}})>-\infty$. Because we are interested in the asymptotic behavior of the probability in as $u{\to\infty}$, we can assume that $u>u_0:=-\min_i(\ell_i/q_i)$, and therefore the coordinates of $u {\boldsymbol}q+{\boldsymbol}d({{\boldsymbol t}})$ stay positive for all ${{\boldsymbol t}}\in T$. In what follows we shall always assume that $u>u_0$.
Examples {#Ex}
========
In this section we present examples that demonstrate the consequences of Theorem \[thm:main\]. We focus on computing the decay rate $M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u;T)$ in two cases: (i) the case of ${\boldsymbol}X$ having bounded sample paths a.s.; (ii) the case of the $X_i$ having stationary increments, regularly varying variance functions, and $d_i(\cdot)$ being linear. While in the former example the drift functions do not influence the asymptotics, in the latter example the drifts [*do have an*]{} impact on the decay rate.
Bounded sample paths and drift function
---------------------------------------
We here analyze the case of $({\boldsymbol}X,{\boldsymbol}d)$ satisfying
[**B1**]{} The process ${\boldsymbol}X$ has bounded sample paths a.s.
[**B2**]{} There exists $D<\infty$ such that $|d_i({{\boldsymbol t}})|\le D$ for all ${{\boldsymbol t}}\in T$ and $i=1,\ldots,n$.
We note that under [**B1**]{}-[**B2**]{}, it trivially holds that assumption [**A2**]{} is complied with as well. Assumptions [**B1**]{}-[**B2**]{} are satisfied when $T$ is compact, ${\boldsymbol}X$ has continuous sample paths a.s. and ${\boldsymbol}d$ is continuous for instance. Let us introduce the following notation $$I_{{\boldsymbol}X,{\boldsymbol}q}(T):=\inf_{{{\boldsymbol t}}\in T}\inf_{{\boldsymbol}v\ge {\boldsymbol}q}{\left\langle {\boldsymbol}v, \Sigma_{{{\boldsymbol t}}}^{-1} {\boldsymbol}v \right\rangle}.$$ The following corollary is an immediate consequence of Theorem \[thm:main\].
\[cor:bounded\] Assume that $({\boldsymbol}X,{\boldsymbol}d)$ satisfies [**A1**]{} and [**B1**]{}-[**B2**]{}. Then, $$\log{\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}:{\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}})>u {\boldsymbol}q \right)
\sim -\frac{u^2}{2}
I_{{\boldsymbol}X,{\boldsymbol}q}(T),{\quad\text{as}\quad u{\to\infty}}.$$
The above corollary states that in the ‘bounded case’ that we are currently considering, we encounter the same asymptotic decay as in the driftless case (${\boldsymbol}d\equiv {\boldsymbol}0$).
\[rmk:Pit\] Some special cases of Corollary \[cor:bounded\] have been treated before in the literature. In particular, let $X_1\equiv\{X_1(t_1):t_1\in T_1\}$ and $X_2\equiv\{X_2(t_2):t_2\in T_2\}$ be two centered and bounded ${\mathbb R}$-valued Gaussian processes. We introduce the notation $\sigma_i(t_i):=
\sqrt{\operatorname{\mathbb{V}ar}(X_i(t_i))}$, $r({{\boldsymbol t}}):=\operatorname{\mathbb{C}orr}(X_1(t_1),X_2(t_2))$ and also $$c_{{\boldsymbol}q}({{\boldsymbol t}}):=\min\left\{\frac{q_1}{\sigma_1(t_1)}
\frac{\sigma_2(t_2)}{q_2},\frac{\sigma_1(t_1)}{q_1}\frac{q_2}{\sigma_2(t_2)}\right\}.$$ Then, upon combining Corollary \[cor:bounded\] with Remark \[rem:sim\], we obtain, with $T\subseteq T_1\times T_2$, $$\begin{aligned}
\lefteqn{
\log{\mathbb P}\left(\exists{(t_1,t_2)\in T}: X_1(t_1)>q_1u,X_2(t_2)>q_2u\right)
}\\
&\sim&
-\frac{u^2}{2}
\inf_{(t_1,t_2)\in T} \frac{1}{\left(\min\left\{\sigma_1(t_1)/q_1,\sigma_2(t_2)/q_2\right\}\right)^2}
\left(1+\frac{(c_{{\boldsymbol}q}({{\boldsymbol t}})-r({{\boldsymbol t}}))^2}{1-r^2({{\boldsymbol t}})}1_{\{r({{\boldsymbol t}})<c_{{\boldsymbol}q}({{\boldsymbol t}})\}}\right),\end{aligned}$$ as $u\to\infty$. Observe that the above formula is also valid for $r({{\boldsymbol t}})=\pm1$. This recovers the result of @Piterbarg05. [^2]
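For a concrete illustration, the two-dimensional closed form above can be cross-checked against a direct evaluation of $\inf_{{\boldsymbol}v\ge{\boldsymbol}q}{\left\langle {\boldsymbol}v, \Sigma^{-1}{\boldsymbol}v \right\rangle}$ at a fixed ${{\boldsymbol t}}$. The sketch below (parameter values are arbitrary) exploits that, since the unconstrained minimum ${\boldsymbol}v={\boldsymbol}0$ of the strictly convex quadratic lies outside the quadrant $\{{\boldsymbol}v\ge{\boldsymbol}q\}$, the infimum is attained on one of the two boundary edges.

```python
import numpy as np

def decay_rate_direct(sig1, sig2, r, q1, q2):
    # inf_{v >= q} <v, Sigma^{-1} v>: the unconstrained minimum v = 0 lies
    # outside the quadrant, so the infimum sits on one of the two edges
    S = np.array([[sig1**2, r*sig1*sig2], [r*sig1*sig2, sig2**2]])
    P = np.linalg.inv(S)
    f = lambda v: v @ P @ v
    v2 = max(q2, -P[0, 1]*q1/P[1, 1])   # edge v1 = q1, minimize over v2 >= q2
    v1 = max(q1, -P[0, 1]*q2/P[0, 0])   # edge v2 = q2, minimize over v1 >= q1
    return min(f(np.array([q1, v2])), f(np.array([v1, q2])))

def decay_rate_closed(sig1, sig2, r, q1, q2):
    # the closed form displayed above (here at a fixed point t)
    cq = min((q1/sig1)*(sig2/q2), (sig1/q1)*(q2/sig2))
    m = min(sig1/q1, sig2/q2)
    corr = (cq - r)**2/(1 - r**2) if r < cq else 0.0
    return (1.0 + corr)/m**2

for p in [(1, 2, 0.5, 1, 1), (1, 1, -0.5, 1, 1), (2, 1, 0.3, 1.5, 0.7)]:
    assert np.isclose(decay_rate_direct(*p), decay_rate_closed(*p))
```

For instance, in the independent case $\sigma_1=\sigma_2=q_1=q_2=1$, $r=0$, both evaluations give $2$, in line with the decay rate of two independent standard normal exceedances.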
Stationary increments, linear drift {#EX:rv}
-----------------------------------
This section focuses on the logarithmic asymptotics of $\{{\boldsymbol}X(t)-{\boldsymbol}i (t) :t\ge 0\}$, where ${\boldsymbol}X(t)=S{\boldsymbol}Y(t)$ for some invertible matrix $S$ and, as usual, ${\boldsymbol}Y(t)=(Y_1(t),\ldots,Y_n(t))'$. We assume that, for $i=1,\ldots, n$,
[**C1**]{} $\{Y_i(t):t\ge0\}$ are mutually independent, ${\mathbb R}$-valued, centered Gaussian processes with stationary increments.
[**C2**]{} The variance functions $\sigma_i^2(t):=\operatorname{\mathbb{V}ar}(Y_i(t))$ are regularly varying at $\infty$ with indexes $\alpha_i\in(0,2)$. Without loss of generality we assume that $0<\alpha_1\le\ldots\le\alpha_n<2$. Moreover, assume that there exists $\kappa\in\{1,\ldots,n\}$ such that $\sigma_1^2\sim\ldots\sim c_\kappa\sigma_\kappa^2$ for some $c_i>0$ and $\lim_{t{\to\infty}}\sigma_{\kappa}(t)/\sigma_{\kappa+1}(t)=0$ (if $\kappa=1$, then set $c_\kappa=1$; if the first condition is satisfied with $\kappa=n$, then the second one is redundant).
[**C3**]{} $\lim_{t\to 0} \sigma_i^2(t)|\log|t||^{1+\varepsilon}<\infty$ for some $\varepsilon>0$.
We analyze $$\label{multi.reg}
{\mathbb P}\left(\exists{t\ge 0}:{\boldsymbol}X(t)-{\boldsymbol}i(t)\ge u{\boldsymbol}q\right).$$ Probabilities of this type play an important role in risk theory, describing the probability of simultaneous ruin of multiple (dependent) companies; see @Avram08 for related results. The one-dimensional counterpart of (\[multi.reg\]) was considered in @Debicki99 in the context of Gaussian fluid models. Related examples and further references can be found in the monograph [@Mandjes07]. In the following proposition we derive the logarithmic asymptotics of .
With $c_i$ as in [**C2**]{}, set $$C:=\operatorname{diag}(1,c_2,\ldots,c_\kappa,0,\ldots,0)$$ and $$J(C,S,{\boldsymbol}q,\alpha):=
\inf_{t\ge0}\inf_{ {\boldsymbol}v\ge{\boldsymbol}q} \frac{{\left\langle S^{-1}({\boldsymbol}v+ {\boldsymbol}i (t)), CS^{-1} ({\boldsymbol}v+ {\boldsymbol}i(t)) \right\rangle}}{t^{\alpha}}.$$
\[prop:reg\] Assume that ${\boldsymbol}Y$ satisfies [**C1**]{}-[**C3**]{}, and $S$ is an invertible matrix. Then, for $\{{\boldsymbol}X(t):t\ge 0\}:=\{S{\boldsymbol}Y(t):t\ge 0\}$, $$\log{\mathbb P}\left(\exists{t\ge 0}:{\boldsymbol}X(t)-{\boldsymbol}i(t)\ge u{\boldsymbol}q\right)\sim -
\frac{u^2}{2\sigma_1^2(u)}J(C,S,{\boldsymbol}q,\alpha_1),{\quad\text{as}\quad u{\to\infty}}.$$
We start by checking that [**A1**]{}-[**A2**]{} are satisfied for $({\boldsymbol}X, {{\boldsymbol}i})$. Indeed, let us note that the matrix $K_t=K$ is constant. Besides, since $S$ is invertible, $K$ is invertible too, which, combined with the fact that $K$ is positive-definite and $k_{i,i}=1$, straightforwardly implies that assumption [**A1**]{} is satisfied.
Since ${\boldsymbol}Y$ has stationary increments, under [**C1**]{}-[**C3**]{} we have $\lim_{t{\to\infty}}Y_i(t)/t= 0$ almost surely, and therefore (using that ${\boldsymbol}X$ consists of linear combinations of the $Y_i$, $i=1,\ldots,n$) assumption [**A2**]{} is complied with; see [@Dieker05b Lemma 3] for details. Now, following Theorem \[thm:main\], $$\begin{aligned}
M_{{\boldsymbol}X,{\boldsymbol}i,{\boldsymbol}q}(u;[0,\infty))&={\frac{1}{2}}\inf_{t\ge0}\inf_{{\boldsymbol}v\ge u {\boldsymbol}q} {\left\langle S^{-1}({\boldsymbol}v+{\boldsymbol}i(t)), R_t^{-1} S^{-1}({\boldsymbol}v+{\boldsymbol}i(t)) \right\rangle}\\
&=
{\frac{1}{2}}\inf_{t\ge0}\inf_{{\boldsymbol}v\ge{\boldsymbol}q} {\left\langle S^{-1}(u{\boldsymbol}v+u{\boldsymbol}i(t)), R_{ut}^{-1} S^{-1}(u{\boldsymbol}v+u{\boldsymbol}i(t)) \right\rangle}\\
&=
\frac{u^2}{2}\inf_{t\ge0}\inf_{{\boldsymbol}v\ge{\boldsymbol}q} {\left\langle S^{-1}({\boldsymbol}v+{\boldsymbol}i(t)), R_{ut}^{-1}S^{-1} ({\boldsymbol}v+{\boldsymbol}i(t)) \right\rangle},\end{aligned}$$ where the matrix $R_{t}^{-1}$ equals $\operatorname{diag}(\sigma_1^{-2}(t),\ldots,\sigma_n^{-2}(t))$, which is the inverse of the covariance matrix of ${\boldsymbol}Y$. Using the regular variation of $\sigma_i^2(\cdot)$, we find that $$\sigma_1^2(u)R_{ut}^{-1}\to t^{-\alpha_1}C,{\quad\text{as}\quad u{\to\infty}}.$$ By virtue of the uniform convergence theorem we arrive at $$M_{{\boldsymbol}X,{\boldsymbol}i,{\boldsymbol}q}(u;[0,\infty))
\sim
\frac{u^2}{2\sigma_1^2(u)}
\inf_{t\ge0}\inf_{{\boldsymbol}v\ge{\boldsymbol}q} \frac{{\left\langle S^{-1}({\boldsymbol}v+{\boldsymbol}i(t)), CS^{-1} ({\boldsymbol}v+{\boldsymbol}i(t)) \right\rangle}}{t^{\alpha_1}},$$ as $u{\to\infty}$. This completes the proof.
The proof of the main theorem {#PMT}
=============================
This section is devoted to the proof of our main result – Theorem \[thm:main\]. We will achieve this by establishing an upper bound and a lower bound. We start by presenting the following ‘saddle point equality’ that plays a crucial role in the upper bound.
\[lemma:saddle\] Let $A$ be any positive-definite matrix. Then, $$\sup_{{\boldsymbol}w\in{\mathbb R}^n_+}\frac{{\left\langle {\boldsymbol}w, {\boldsymbol}q \right\rangle} ^2}{{\left\langle {\boldsymbol}w, A{\boldsymbol}w \right\rangle}}
=
\inf_{{\boldsymbol}v\ge{\boldsymbol}q} {\left\langle {\boldsymbol}v, A^{-1} {{\boldsymbol}v} \right\rangle},$$ for any vector ${\boldsymbol}q\in{\mathbb R}^n_+$. Moreover, if ${\boldsymbol}v{^\star}$ is the optimizer of the infimum problem in the right-hand side, then ${\boldsymbol}w{^\star}:=A^{-1}{\boldsymbol}v{^\star}$ is an optimizer of the supremum problem in the left-hand side.
Decompose $A=BB'$ for some nondegenerate matrix $B$. Then, $$\frac{{\left\langle {\boldsymbol}w, {\boldsymbol}q \right\rangle} ^2}{{\left\langle {\boldsymbol}w, A{\boldsymbol}w \right\rangle}}=\frac{{\left\langle {\boldsymbol}w, {\boldsymbol}q \right\rangle} ^2}{\|B'{\boldsymbol}w\|^2}
\quad\text{and}\quad
{\left\langle {\boldsymbol}v, A^{-1}{\boldsymbol}v \right\rangle} = \|B^{-1}{\boldsymbol}v\|^2.$$ Now, for ${\boldsymbol}w\in{\mathbb R}^n_+$, the Cauchy-Schwarz inequality yields $${\left\langle {\boldsymbol}w, {\boldsymbol}q \right\rangle} =\inf_{{\boldsymbol}v\ge{\boldsymbol}q} {\left\langle {\boldsymbol}w, {\boldsymbol}v \right\rangle} = \inf_{{\boldsymbol}v\ge{\boldsymbol}q} {\left\langle B'{\boldsymbol}w, B^{-1}{\boldsymbol}v \right\rangle}
\le \|B'{\boldsymbol}w\|\inf_{{\boldsymbol}v\ge{\boldsymbol}q} \|B^{-1}{\boldsymbol}v\|.$$ Dividing both sides by $\|B'{\boldsymbol}w\|>0$ and optimizing the left-hand side of the previous display, we arrive at $$\sup_{{\boldsymbol}w\in{\mathbb R}^n_+}\frac{{\left\langle {\boldsymbol}w, {\boldsymbol}q \right\rangle} ^2}{{\left\langle {\boldsymbol}w, A{\boldsymbol}w \right\rangle}}
\le \inf_{{\boldsymbol}v\ge{\boldsymbol}q} {\left\langle {\boldsymbol}v, A^{-1} {\boldsymbol}v \right\rangle}.$$ To show the opposite inequality, assume that ${\boldsymbol}v{^\star}$ is such that $$\inf_{{\boldsymbol}v\ge{\boldsymbol}q} {\left\langle {\boldsymbol}v, A^{-1}{\boldsymbol}v \right\rangle}=
{\left\langle {\boldsymbol}v{^\star}, A^{-1} {\boldsymbol}v{^\star} \right\rangle}.$$ The Lagrangian function of the above problem is given by $L({\boldsymbol}v,{\boldsymbol}\lambda):=
{\left\langle {\boldsymbol}v, A^{-1}{\boldsymbol}v \right\rangle}-{\left\langle {\boldsymbol}\lambda, {\boldsymbol}v-{\boldsymbol}q \right\rangle}$ for ${\boldsymbol}\lambda\ge{\boldsymbol}0$, and due to complementary-slackness considerations we necessarily have that $A^{-1}{\boldsymbol}v{^\star}\ge{\boldsymbol}0$, and if $(A^{-1}{\boldsymbol}v{^\star})_i>0$, then $v{^\star}_i=q_i$. Thus take ${\boldsymbol}w{^\star}=A^{-1}{\boldsymbol}v{^\star}\in{\mathbb R}^n_+$, so that $$\frac{{\left\langle {\boldsymbol}w{^\star}, {\boldsymbol}q \right\rangle} ^2}{{\left\langle {\boldsymbol}w{^\star}, A{\boldsymbol}w{^\star} \right\rangle}}=
\frac{{\left\langle A^{-1}{\boldsymbol}v{^\star}, {\boldsymbol}q \right\rangle} ^2}{{\left\langle A^{-1}{\boldsymbol}v{^\star}, {\boldsymbol}v{^\star} \right\rangle}}= {\left\langle {\boldsymbol}v{^\star}, A^{-1} {\boldsymbol}v{^\star} \right\rangle}.$$ Indeed, the last equality is equivalent to $${\left\langle A^{-1}{\boldsymbol}v{^\star}, {\boldsymbol}q-{\boldsymbol}v{^\star} \right\rangle}= 0,$$ but recall that if $(A^{-1}{\boldsymbol}v{^\star})_i\ne 0$, then $({\boldsymbol}q-{\boldsymbol}v{^\star})_i=0$. Hence finally, $$\sup_{{\boldsymbol}w\in{\mathbb R}^n_+}\frac{{\left\langle {\boldsymbol}w, {\boldsymbol}q \right\rangle} ^2}{{\left\langle {\boldsymbol}w, A{\boldsymbol}w \right\rangle}}\ge\inf_{{\boldsymbol}v\ge{\boldsymbol}q} {\left\langle {\boldsymbol}v, A^{-1}{\boldsymbol}v \right\rangle},$$ which proves the opposite inequality. This finishes the proof.
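As an illustrative numerical sanity check (not part of the original argument; the matrix $A$ and the vector ${\boldsymbol}q$ below are arbitrary choices), the saddle point equality of Lemma \[lemma:saddle\] can be verified on a small two-dimensional instance, together with the claim that ${\boldsymbol}w{^\star}=A^{-1}{\boldsymbol}v{^\star}$ attains the supremum:

```python
import numpy as np

# Saddle-point equality, toy 2-d instance:
#   sup_{w >= 0} <w,q>^2 / <w, A w>  =  inf_{v >= q} <v, A^{-1} v>
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite
Ainv = np.linalg.inv(A)
q = np.array([1.0, 1.0])

# Right-hand side: grid search over the feasible set {v : v >= q}.
grid = np.linspace(1.0, 3.0, 201)
V1, V2 = np.meshgrid(grid, grid)
vals = (Ainv[0, 0] * V1**2 + (Ainv[0, 1] + Ainv[1, 0]) * V1 * V2
        + Ainv[1, 1] * V2**2)
inf_val = vals.min()
idx = np.unravel_index(vals.argmin(), vals.shape)
v_star = np.array([V1[idx], V2[idx]])

# Left-hand side, evaluated at the candidate optimizer w* = A^{-1} v*.
w_star = Ainv @ v_star
sup_val = (w_star @ q) ** 2 / (w_star @ A @ w_star)

# No admissible w beats the value attained at w*.
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 5.0, size=(10000, 2))
ratios = (W @ q) ** 2 / np.einsum('ij,jk,ik->i', W, A, W)
print(inf_val, sup_val, ratios.max())
```

For this instance both sides evaluate to $2/3$, attained at ${\boldsymbol}v{^\star}=(1,1)'$ and ${\boldsymbol}w{^\star}\propto(1,1)'$, in agreement with the complementary-slackness condition in the proof.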
The main idea behind the proof of the upper bound of Theorem \[thm:main\] is that the ${\mathbb R}^n$-valued process ${\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}})$ can be effectively replaced by a suitably chosen [*${\mathbb R}$-valued*]{} Gaussian process. The asymptotics of the latter process can then be handled using the familiar techniques for real-valued Gaussian processes.
For any vector ${\boldsymbol}w\in {\mathbb R}^n_+$, define $$Z_{u,{\boldsymbol}w}({{\boldsymbol t}}):=\frac{{\left\langle {\boldsymbol}w, {\boldsymbol}X({{\boldsymbol t}}) \right\rangle}}{{\left\langle {\boldsymbol}w, u{\boldsymbol}q+{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}},$$ and observe that (with $u>u_0$; cf. Remark \[r4\]) $${\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}:{\boldsymbol}X({{\boldsymbol t}})- {\boldsymbol}d({{\boldsymbol t}})>u{\boldsymbol}q \right)
\le
{\mathbb P}\left(\sup_{{{\boldsymbol t}}\in T} Z_{u,{\boldsymbol}w}({{\boldsymbol t}})>1\right).$$ The vector ${\boldsymbol}w$ in the process $Z_{u,{\boldsymbol}w}$ can be seen as a vector of weights assigned to the coordinates of ${\boldsymbol}X$. For fixed $u$ and ${\boldsymbol}w$ the process $Z_{u,{\boldsymbol}w}$ is a centered Gaussian process. We shall show that it also has almost surely bounded sample paths.
\[lem:Zuw\] Under [**A1**]{}-[**A2**]{}, the process $Z_{u,{\boldsymbol}w}$ is a centered Gaussian process with bounded sample paths almost surely, for each ${\boldsymbol}w\in {\mathbb R}^n_+$ and $u>u_0$. Moreover, $$\sup_{{{\boldsymbol t}}\in T}Z_{u,{\boldsymbol}w}({{\boldsymbol t}}){\stackrel{{\mathbb P}}{\to}}0{\quad\text{as}\quad u{\to\infty}}.$$
Without loss of generality we can assume that $\|{\boldsymbol}w\|=1$. For any $L\ge1$, recalling the definition of ${\boldsymbol}\ell$ from Remark \[r4\], $$\begin{aligned}
\lefteqn{\hspace{-1cm}
{\mathbb P}\left(\sup_{{{\boldsymbol t}}\in T} Z_{u,{\boldsymbol}w}({{\boldsymbol t}})>L\right)=
{\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}: {\left\langle {\boldsymbol}w, {\boldsymbol}X({{\boldsymbol t}}) \right\rangle}>{\left\langle {\boldsymbol}w, Lu{\boldsymbol}q+L{\boldsymbol}\ell+L({\boldsymbol}d({{\boldsymbol t}})-{\boldsymbol}\ell) \right\rangle}\right)}\\
&\le&
{\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}: {\left\langle {\boldsymbol}w, {\boldsymbol}X({{\boldsymbol t}}) \right\rangle}>{\left\langle {\boldsymbol}w, L(u{\boldsymbol}q+{\boldsymbol}\ell)+({\boldsymbol}d({{\boldsymbol t}})-{\boldsymbol}\ell) \right\rangle}\right)\\
&\le&
{\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}: {\left\langle {\boldsymbol}w, {\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}>{\left\langle {\boldsymbol}w, L(u{\boldsymbol}q+{\boldsymbol}\ell) -{\boldsymbol}\ell \right\rangle}\right)\\
&\le&
{\mathbb P}\left(\sum_{i=1}^nw_i\sup_{{{\boldsymbol t}}\in T}(X_i({{\boldsymbol t}})-d_i({{\boldsymbol t}}))>
L{\left\langle {\boldsymbol}w, u{\boldsymbol}q+{\boldsymbol}\ell \right\rangle}-{\left\langle {\boldsymbol}w, {\boldsymbol}\ell \right\rangle}\right)\\
&\le&
{\mathbb P}\left(\sum_{i=1}^n\sup_{{{\boldsymbol t}}\in T}(X_i({{\boldsymbol t}})-d_i({{\boldsymbol t}}))^+>
L\min_i(uq_i+\ell_i)/\sqrt n -\|{\boldsymbol}\ell\|\right),\end{aligned}$$ where the last probability tends to zero with $L{\to\infty}$ due to [**A2**]{}. This proves that $Z_{u,{\boldsymbol}w}$ has bounded sample paths almost surely.
The last probability also tends to zero with $L\ge 1$ fixed and $u{\to\infty}$. On the other hand, for any $L<1$ we have $$\begin{aligned}
{\mathbb P}\left(\sup_{{{\boldsymbol t}}\in T} Z_{u,{\boldsymbol}w}({{\boldsymbol t}})>L\right)&=
{\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}: {\left\langle {\boldsymbol}w, {\boldsymbol}X({{\boldsymbol t}})-L{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}>L{\left\langle {\boldsymbol}w, u{\boldsymbol}q \right\rangle}\right)\\
&\le
{\mathbb P}\left(\sum_{i=1}^n\sup_{{{\boldsymbol t}}\in T}(X_i({{\boldsymbol t}})-Ld_i({{\boldsymbol t}}))^+>uL\min_i q_i/\sqrt n\right),\end{aligned}$$ where the last probability tends to zero with $u{\to\infty}$ by virtue of [**A2**]{}. We therefore have that $\sup_{{{\boldsymbol t}}\in T} Z_{u,{\boldsymbol}w}({{\boldsymbol t}})$ converges to $0$ in probability.
The above considerations remain true even if ${\boldsymbol}w$ depends on $u$ and ${{\boldsymbol t}}$. This observation allows us to optimize the variance of the process $Z_{u,{\boldsymbol}w}$, while retaining its sample path properties. Notice that $$\operatorname{\mathbb{V}ar}(Z_{u,{\boldsymbol}w}({{\boldsymbol t}}))=\frac{{\left\langle {\boldsymbol}w, \Sigma_{{{\boldsymbol t}}}{\boldsymbol}w \right\rangle}}{{\left\langle {\boldsymbol}w, u{\boldsymbol}q +{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}^2}.$$ Therefore, take ${\boldsymbol}w{^\star}\equiv {\boldsymbol}w{^\star}(u,{{\boldsymbol t}})$ such that $$\label{eq:w}
\frac{{\left\langle {\boldsymbol}w{^\star}, \Sigma_{{{\boldsymbol t}}}{\boldsymbol}w{^\star} \right\rangle}}{{\left\langle {\boldsymbol}w{^\star}, u{\boldsymbol}q +{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}^2}=
\inf_{{\boldsymbol}w\in {\mathbb R}^n_+}\frac{{\left\langle {\boldsymbol}w, \Sigma_{{{\boldsymbol t}}}{\boldsymbol}w \right\rangle}}{{\left\langle {\boldsymbol}w, u{\boldsymbol}q +{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}^2}$$ and denote by $Y_u({{\boldsymbol t}})$ the process $Z_{u,{\boldsymbol}w{^\star}}({{\boldsymbol t}})$ with the weights ${\boldsymbol}w={\boldsymbol}w{^\star}$ chosen as above. Let $\sigma_u^2({{\boldsymbol t}})$ be the variance function of the process $Y_u({{\boldsymbol t}})$. Then, by Lemma \[lemma:saddle\], $$\label{eq:Yvar}
\sigma_u^{-2}({{\boldsymbol t}})=M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u,{{\boldsymbol t}}).$$
To estimate the tail of the supremum of the process $Y_u({{\boldsymbol t}})$ we intend to use Borell’s inequality [@Adler90 Theorem 2.1]. To apply this result, we need to verify that the expectation of $\sup_{{{\boldsymbol t}}\in T} Y_u({{\boldsymbol t}})$ vanishes as $u{\to\infty}$. This is done in the next lemma.
\[lem:impl\] Under [**A1**]{}-[**A2**]{}, with $u_0$ as in Remark \[r4\],
1. $M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u;T)>0$ for each $u>u_0$;
2. $\lim_{u{\to\infty}} M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u;T)=\infty$;
3. $\lim_{u{\to\infty}} {\mathbb E}\sup_{{{\boldsymbol t}}\in T}Y_u({{\boldsymbol t}})=0$.
From Lemma \[lem:Zuw\] we know that for a fixed $u$ the process $Y_u$ has bounded sample paths almost surely. This implies that $\sup_{{{\boldsymbol t}}\in T} \sigma_u^2({{\boldsymbol t}})<\infty$. But $$\sup_{{{\boldsymbol t}}\in T}\sigma_u^2({{\boldsymbol t}})=\sup_{{{\boldsymbol t}}\in T}(M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u,{{\boldsymbol t}}))^{-1}={\frac{1}{2}}(M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u;T))^{-1}$$ and claim (1) follows.
The proof of (2) is a consequence of the fact that under [**A2**]{} $${\mathbb P}\left(\sup_{{{\boldsymbol t}}\in T} Y_u({{\boldsymbol t}})>1\right)\to0{\quad\text{as}\quad u{\to\infty}},$$ and for ${\mathcal{N}}$ being a standard normal random variable $${\mathbb P}\left(\sup_{{{\boldsymbol t}}\in T} Y_u({{\boldsymbol t}})>1\right)\ge
\sup_{{{\boldsymbol t}}\in T}{\mathbb P}\left(Y_u({{\boldsymbol t}})>1\right)=
{\mathbb P}\left({\mathcal{N}}>\inf_{{{\boldsymbol t}}\in T} \sqrt{M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u,{{\boldsymbol t}})}\right).$$
To prove the last claim, observe that the almost sure boundedness of sample paths of $Y_u({{\boldsymbol t}})$ implies that ${\mathbb E}\sup_{{{\boldsymbol t}}\in T} Y_u({{\boldsymbol t}})<\infty$ and it easily follows that the family $(\sup_{{{\boldsymbol t}}\in T}Y_u({{\boldsymbol t}}))_u$ is uniformly integrable. Now claim (3) follows from the second part of Lemma \[lem:Zuw\].
Before we proceed to the proof of Theorem \[thm:main\] we state a technical lemma, which is a prerequisite for the proof of the lower bound.
\[lemma:lower\] Under [**A1**]{}, there exist constants $C_1<\infty$, $C_2>0$ such that for any ${{\boldsymbol t}}\in T$ $$\log{\mathbb P}\left({\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}})>u{\boldsymbol}q\right)\ge -{\frac{1}{2}}M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u,{{\boldsymbol t}})-C_1 M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}^{1/2}(u,{{\boldsymbol t}})+C_2.$$
Set $$Q_{{{\boldsymbol t}}}:=\{{\boldsymbol}x\in{\mathbb R}^n:{\boldsymbol}x > u{\boldsymbol}q+{\boldsymbol}d({{\boldsymbol t}})\},$$ and let $B_{{{\boldsymbol t}}}$ be such that $B_{{{\boldsymbol t}}}B_{{{\boldsymbol t}}}'=\Sigma_{{{\boldsymbol t}}}$. Then $X({{\boldsymbol t}})\stackrel{\rm d}{=} B_{{{\boldsymbol t}}}{\mathcal{N}}$, where ${\mathcal{N}}$ is an ${\mathbb R}^n$-valued standard normal random variable with the density function $$f({\boldsymbol}x)=D_n\exp\left(-{\frac{1}{2}}{\left\langle {\boldsymbol}x, {\boldsymbol}x \right\rangle}\right),$$ for some normalizing constant $D_n$. In this notation, we have $${\mathbb P}\left({\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}})>u{\boldsymbol}q\right)={\mathbb P}\left({\boldsymbol}X({{\boldsymbol t}})\in Q_{{{\boldsymbol t}}}\right)
=
{\mathbb P}\left({\mathcal{N}}\in B_{{{\boldsymbol t}}}^{-1}Q_{{{\boldsymbol t}}}\right).$$ Now let ${\boldsymbol}x{^\star}={\boldsymbol}x{^\star}(u,{{\boldsymbol t}})\in B_{{{\boldsymbol t}}}^{-1}Q_{{{\boldsymbol t}}}$ be such that $$M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u,{{\boldsymbol t}})=\inf_{{\boldsymbol}x\in Q_{{\boldsymbol t}}}{\left\langle {\boldsymbol}x, \Sigma^{-1}_{{\boldsymbol}t}{\boldsymbol}x \right\rangle}
=\inf_{{\boldsymbol}x\in B_{{\boldsymbol}t}^{-1}Q_{{\boldsymbol t}}}{\left\langle {\boldsymbol}x, {\boldsymbol}x \right\rangle}
={\left\langle {\boldsymbol}x{^\star}, {\boldsymbol}x{^\star} \right\rangle},$$ and let $A_{{{\boldsymbol t}}}:=\mathcal{B}(x{^\star},1)\cap B_{{{\boldsymbol t}}}^{-1}Q_{{{\boldsymbol t}}}$, where $\mathcal{B}(x{^\star},1)$ is a ball in ${\mathbb R}^n$ of radius $1$ and center $x{^\star}$. Then, $${\mathbb P}\left({\mathcal{N}}\in B_{{{\boldsymbol t}}}^{-1}Q_{{{\boldsymbol t}}}\right)
\ge
\int_{A_{{{\boldsymbol t}}}}f({\boldsymbol}x)\,d{\boldsymbol}x.$$ Set $\Delta({\boldsymbol}x,{\boldsymbol}x{^\star}):={\left\langle {\boldsymbol}x, {\boldsymbol}x \right\rangle}-{\left\langle {\boldsymbol}x{^\star}, {\boldsymbol}x{^\star} \right\rangle}.$ Then $${\mathbb P}\left({\mathcal{N}}\in B_{{{\boldsymbol t}}}^{-1}Q_{{{\boldsymbol t}}}\right)
\ge D_n\operatorname{Vol}(A_{{{\boldsymbol t}}})
\exp\left(-{\frac{1}{2}}M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u,{{\boldsymbol t}})-{\frac{1}{2}}\sup_{{\boldsymbol}x \in A_{{\boldsymbol t}}}\Delta({\boldsymbol}x, {\boldsymbol}x{^\star})\right).$$ Since $$\Delta({\boldsymbol}x,{\boldsymbol}x{^\star})
\le
2\|{\boldsymbol}x-{\boldsymbol}x{^\star}\|{\left\langle {\boldsymbol}x{^\star}, {\boldsymbol}x{^\star} \right\rangle}^{1/2}+\|{\boldsymbol}x-{\boldsymbol}x{^\star}\|^2,$$ we have that $$\sup_{{\boldsymbol}x\in A_{{\boldsymbol t}}} \Delta({\boldsymbol}x,{\boldsymbol}x{^\star})
\le
2\operatorname{diam}(A_{{\boldsymbol t}})M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}^{1/2}(u,{{\boldsymbol t}})+\operatorname{diam}^2(A_{{\boldsymbol t}}).$$ Therefore the claim follows if $\operatorname{diam}(A_{{\boldsymbol t}})$ and $\operatorname{Vol}(A_{{\boldsymbol t}})$ can be bounded uniformly in ${{\boldsymbol t}}\in T$ from above and below, respectively.
Observe that, by the construction of $A_{{\boldsymbol t}}$ as a subset of a ball of radius $1$, $\operatorname{diam}(A_{{\boldsymbol t}})\le 2$. Besides, the quadrant $Q_{{\boldsymbol t}}$ is spanned by the standard basis $({\boldsymbol}e_i)$ in ${\mathbb R}^n$ translated to the point $u{\boldsymbol}q+{\boldsymbol}d({{\boldsymbol t}})$. The cosine of the angle $\alpha_{i,j}$ between $B^{-1}_{{\boldsymbol t}}{\boldsymbol}e_i$ and $B^{-1}_{{\boldsymbol t}}{\boldsymbol}e_j$ is given by $\cos(\alpha_{i,j})=k_{i,j}$; see Remark \[rmk:mainassum\]. Under $\bf{A1}$ this angle is bounded away from zero, uniformly in ${{\boldsymbol t}}\in T$. Therefore $\inf_{{{\boldsymbol t}}\in T}\operatorname{Vol}(A_{{\boldsymbol t}})>0$. This completes the proof.
Now we are ready to prove the main theorem.
Put $P(u):={\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}:{\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}})>u{\boldsymbol}q\right)$. We split the proof into two parts: the lower and the upper bound.\
Lower bound: The lower bound follows directly from Lemma \[lemma:lower\] and the inequality $$\log P(u)\ge\sup_{{{\boldsymbol t}}\in T}\log{\mathbb P}\left({\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}})>u{\boldsymbol}q\right).$$ Upper bound: Let ${\boldsymbol}w{^\star}:{\mathbb R}_+\times T\to{\mathbb R}^n_+$ be the mapping chosen in (\[eq:w\]). Now as in the definition of the process $Y_u$, $$\begin{aligned}
P(u)
&\le
{\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}: {\left\langle {\boldsymbol}w{^\star}, {\boldsymbol}X({{\boldsymbol t}}) \right\rangle} >{\left\langle {\boldsymbol}w{^\star}, u{\boldsymbol}q+{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}\right)\\
&=
{\mathbb P}\left(\sup_{{{\boldsymbol t}}\in T}\frac{{\left\langle {\boldsymbol}w{^\star}, {\boldsymbol}X({{\boldsymbol t}}) \right\rangle}}{{\left\langle {\boldsymbol}w{^\star}, u{\boldsymbol}q+{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}}> 1\right)
=
{\mathbb P}\left(\sup_{{{\boldsymbol t}}\in T}Y_u({{\boldsymbol t}})> 1\right),\end{aligned}$$ where the passage from the $n$-dimensional quadrant to the tangent half-space increases the probability. Recall that the variance $\sigma^2_u({{\boldsymbol t}})$ of $Y_u({{\boldsymbol t}})$ equals $(M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u,{{\boldsymbol t}}))^{-1}$; cf. (\[eq:Yvar\]). Moreover, thanks to Lemma \[lem:impl\], the Gaussian process $Y_u$ has bounded sample paths almost surely. Therefore, Borell’s inequality implies that $${\mathbb P}\left(\sup_{{{\boldsymbol t}}\in T}Y_u({{\boldsymbol t}})> 1\right)
\le
2\exp\left(-\left(1-{\mathbb E}\sup_{{{\boldsymbol t}}\in T}Y_u({{\boldsymbol t}})\right)^2M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u;T)\right).$$ Now from (2) and (3) of Lemma \[lem:impl\] we obtain $$\limsup_{u{\to\infty}}\frac{\log{\mathbb P}\left(\sup_{{{\boldsymbol t}}\in T}Y_u({{\boldsymbol t}})> 1\right)}{M_{{\boldsymbol}X,{\boldsymbol}d,{\boldsymbol}q}(u;T)}\le -1$$ and the claim follows.
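The role of Borell's inequality in the last step can be illustrated by a toy discrete check (purely illustrative and not the setting of the theorem; the i.i.d. standard normal "time points", the threshold $u=3$ and the sample size are arbitrary choices): the simulated tail of the supremum stays below the bound $2\exp(-(u-{\mathbb E}\sup)^{2}/(2\sigma_T^{2}))$, with the expectation estimated from the same sample.

```python
import numpy as np

rng = np.random.default_rng(1)
# Borell's inequality, discrete sanity check:
#   P(sup_t X_t > u) <= 2 exp( -(u - E sup X)^2 / (2 sigma_T^2) ),  u > E sup X.
samples = rng.standard_normal((200000, 10))   # 10 "time points", unit variance
sups = samples.max(axis=1)
m, sigma_T2 = sups.mean(), 1.0                # estimate of E sup; sup of variances
u = 3.0
empirical = (sups > u).mean()                 # Monte Carlo tail probability
bound = 2.0 * np.exp(-(u - m) ** 2 / (2.0 * sigma_T2))
print(empirical, bound)
```

The empirical tail (about $0.013$) sits well below the Borell bound (about $0.69$); the inequality is crude pointwise, but — as the proof above shows — it is logarithmically exact in the relevant regime.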
From the proof of the upper bound we obtain the useful inequality $${\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}:{\boldsymbol}X({{\boldsymbol t}})-{\boldsymbol}d({{\boldsymbol t}})>u{\boldsymbol}q\right)
\le
{\mathbb P}\left(\exists{{{\boldsymbol t}}\in T}: {\left\langle {\boldsymbol}w{^\star}, {\boldsymbol}X({{\boldsymbol t}}) \right\rangle} >{\left\langle {\boldsymbol}w{^\star}, u{\boldsymbol}q+{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}\right),$$ which we have proven to be exact in terms of logarithmic asymptotics. Let ${\boldsymbol}v{^\star}\equiv{\boldsymbol}v{^\star}(u,{{\boldsymbol t}})$ be such that $${\left\langle {\boldsymbol}v{^\star}+{\boldsymbol}d({{\boldsymbol t}}), \Sigma_{{{\boldsymbol t}}}^{-1} ({\boldsymbol}v{^\star}+{\boldsymbol}d({{\boldsymbol t}})) \right\rangle}=
\inf_{{\boldsymbol}v\ge u{\boldsymbol}q}{\left\langle {\boldsymbol}v+{\boldsymbol}d({{\boldsymbol t}}), \Sigma_{{{\boldsymbol t}}}^{-1} ({\boldsymbol}v+{\boldsymbol}d({{\boldsymbol t}})) \right\rangle}.$$ Then the optimal weights ${\boldsymbol}w{^\star}$ are given by ${\boldsymbol}w{^\star}(u,{{\boldsymbol t}})=\Sigma_{{{\boldsymbol t}}}^{-1}{\boldsymbol}v{^\star}(u,{{\boldsymbol t}}),$ or alternatively, due to Lemma \[lemma:saddle\], by $${\boldsymbol}w{^\star}(u,{{\boldsymbol t}})=\arg\sup_{{\boldsymbol}w\in {\mathbb R}^n_+}\frac{{\left\langle {\boldsymbol}w, u{\boldsymbol}q+{\boldsymbol}d({{\boldsymbol t}}) \right\rangle}^2}{{\left\langle {\boldsymbol}w, \Sigma_{{{\boldsymbol t}}}{\boldsymbol}w \right\rangle}}.$$ Observe that the weights do not depend on $u$ in the case of ${\boldsymbol}d\equiv {\boldsymbol}0$.
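The closing observation — that for ${\boldsymbol}d\equiv{\boldsymbol}0$ the optimal weights do not depend on $u$ — can be checked on a toy two-dimensional instance (illustrative only; the covariance matrix $\Sigma$ and ${\boldsymbol}q$ are arbitrary choices). By homogeneity ${\boldsymbol}v{^\star}(u)=u\,{\boldsymbol}v{^\star}(1)$, so the direction of ${\boldsymbol}w{^\star}=\Sigma^{-1}{\boldsymbol}v{^\star}$ is the same for every $u$:

```python
import numpy as np

Sigma = np.array([[2.0, 1.0], [1.0, 2.0]])
Si = np.linalg.inv(Sigma)
q = np.array([1.0, 1.0])

def v_star(u, npts=401):
    # Grid search for argmin of <v, Sigma^{-1} v> over the set {v >= u q}.
    g = np.linspace(u, 3.0 * u, npts)
    V1, V2 = np.meshgrid(g, g)
    vals = (Si[0, 0] * V1**2 + (Si[0, 1] + Si[1, 0]) * V1 * V2
            + Si[1, 1] * V2**2)
    i = np.unravel_index(vals.argmin(), vals.shape)
    return np.array([V1[i], V2[i]])

# Optimal weights w* = Sigma^{-1} v* for two different levels u.
w1 = Si @ v_star(1.0)
w5 = Si @ v_star(5.0)
print(w1 / np.linalg.norm(w1), w5 / np.linalg.norm(w5))  # same direction
```

Here $v{^\star}(u)=u(1,1)'$, so the normalized weights coincide for $u=1$ and $u=5$, as the remark asserts.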
[^1]: The first and third authors thank the Isaac Newton Institute, Cambridge, UK, for hospitality. The research of the first and the fourth authors was supported by MNiSW Research Grant N N201 394137 (2009-2011). The second author thanks the Mathematical Institute, University of Wrocław, Poland, for hospitality. The research of the second author was supported by NWO grant 613.000.701.
[ Spontaneously Generated Tensor Field Gravity]{}
**J.L. Chkareuli**$^{1,2}$**, C.D. Froggatt**$^{3}$**, H.B. Nielsen**$^{4}$
$^{1}$*Center for Elementary Particle Physics, ITP, Ilia State University, 0162 Tbilisi, Georgia*
$^{2}$*Particle Physics Department, Andronikashvili Institute of Physics, 0177 Tbilisi, Georgia*
$^{3}$*Department of Physics and Astronomy, Glasgow University, Glasgow G12 8QQ, Scotland*
$^{4}$*Niels Bohr Institute, Blegdamsvej 17-21, DK 2100 Copenhagen, Denmark*
**Abstract**
An arbitrary local theory of a symmetric two-tensor field $H_{\mu \nu }$ in Minkowski spacetime is considered, in which the equations of motion are required to be compatible with a nonlinear length-fixing constraint $H_{\mu \nu }^{2}=\pm M^{2}$ leading to spontaneous Lorentz invariance violation, SLIV ($M$ is the proposed scale for SLIV). Allowing the parameters in the Lagrangian to be adjusted so as to be consistent with this constraint, the theory turns out to correspond to linearized general relativity in the weak field approximation, while some of the massless tensor Goldstone modes appearing through SLIV are naturally collected in the physical graviton. In essence, the underlying diffeomorphism invariance emerges as a necessary condition for the tensor field $H_{\mu \nu }$ not to be superfluously restricted in degrees of freedom, apart from the constraint due to which the true vacuum in the theory is chosen by SLIV. The emergent theory appears essentially nonlinear when expressed in terms of the pure Goldstone tensor modes, and contains a plethora of new Lorentz and $CPT$ violating couplings. However, these couplings do not lead to physical Lorentz violation once this tensor field gravity is properly extended to conventional general relativity.
*Keywords:* Spontaneous Lorentz violation; Goldstone bosons; Emergent Gravity
Introduction
============
It is conceivable that spontaneous Lorentz invariance violation (SLIV) could provide a dynamical approach to quantum electrodynamics, gravity and Yang-Mills theories with the photon, graviton and gluons appearing as massless Nambu-Goldstone bosons [@bjorken; @ph; @eg; @suz] (for some later developments see [@cfn; @kraus; @kos; @car; @cjt])[^1]. However, in contrast to the spontaneous violation of internal symmetries, SLIV seems not to necessarily imply a physical breakdown of Lorentz invariance. Rather, when appearing in a gauge theory framework, this may eventually result in a noncovariant gauge choice in an otherwise gauge invariant and Lorentz invariant theory. In substance the SLIV ansatz, due to which the vector field develops a vacuum expectation value (vev) $<A_{\mu }(x)>$ $=n_{\mu }M$ (where $n_{\mu }$ is a properly oriented unit Lorentz vector, while $M$ is the proposed SLIV scale), may itself be treated as a pure gauge transformation with a gauge function linear in coordinates, $\omega (x)=$ $n_{\mu }x^{\mu }M$. In this sense, gauge invariance in QED leads to the conversion of SLIV into gauge degrees of freedom of the massless Goldstonic photon, unless it is disturbed by some extra (potential-like) terms. This is what one could refer to as the generic non-observability of SLIV in QED. Moreover, as was shown some time ago [@cfn], gauge theories, both Abelian and non-Abelian, can be obtained by themselves from the requirement of the physical non-observability of SLIV induced by vector fields rather than from the standard gauge principle.
A possible source for such a kind of unobserved SLIV is nonlinearly realized Lorentz symmetry imposed just by postulate on an underlying vector field $A_{\mu }$ through the length-fixing constraint $$A_{\mu }A^{\mu }=n^{2}M^{2}\text{ , \ \ }n^{2}\equiv n_{\nu }n^{\nu }=\pm 1, \label{const}$$ rather than due to some vector field potential. This constraint was first studied in the gauge invariant QED framework by Nambu [@nambu] quite a long time ago[^2], and then in more detail later [@az; @kep; @jej; @cj]. The constraint (\[const\]) is in fact very similar to the constraint appearing in the nonlinear $\sigma $-model for pions [@GLA]. It means, in essence, that the vector field $A_{\mu }$ develops some constant background value and the Lorentz symmetry $SO(1,3)$ formally breaks down to $SO(3)$ or $SO(1,2)$, depending on the time-like ($n^{2}>0$) or space-like ($n^{2}<0$) nature of SLIV. The point is, however, that, in sharp contrast to the nonlinear $\sigma $-model for pions, the nonlinear QED theory ensures that all the physical Lorentz violating effects strictly cancel out among themselves, due to the starting gauge invariance involved[^3].
Furthermore, the most important property of the nonlinear vector field constraint (\[const\]) was shown [@cj] to be that one does not need to specially postulate the starting gauge invariance. This was done in the framework of an arbitrary relativistically invariant Lagrangian containing adjustable parameters, which is proposed only to possess some global internal symmetry. Indeed, the SLIV constraint (\[const\]) causing the condensation of a generic vector field or vector field multiplet, due to which the true vacuum in a theory is chosen, happens by itself to be powerful enough to require adjustment of the parameters to give gauge invariance. Namely, the existence of the constraint (\[const\]) is taken to be upheld by adjusting the parameters of the Lagrangian, in a way that leads to gauging of the starting global symmetry of the interacting vector and matter fields involved. In essence, the gauge invariance appears as a necessary condition for these vector fields not to be superfluously restricted in degrees of freedom as soon as the SLIV constraint holds. Indeed, a further reduction in the number of independent $A_{\mu }$ components would make it impossible to set the required initial conditions in the appropriate Cauchy problem and, in quantum theory, to choose self-consistent equal-time commutation relations [@ogi3].
Extending the above argumentation, we consider here spontaneous Lorentz violation realized through a nonlinear length-fixing tensor field constraint of the type $$H_{\mu \nu }H^{\mu \nu }=\mathfrak{n}^{2}M^{2}\text{ , \ \ \ \ }\mathfrak{n}^{2}\equiv \mathfrak{n}_{\mu \nu }\mathfrak{n}^{\mu \nu }=\pm 1\text{. } \label{const3}$$ Here $\mathfrak{n}_{\mu \nu }$ is a properly oriented ‘unit’ Lorentz tensor, while $M$ is the proposed scale for Lorentz violation. We show that this type of SLIV induces massless tensor Goldstone modes, some of which can naturally be collected in the physical graviton. The underlying diffeomorphism (diff) invariance appears as a necessary condition for a symmetric two-tensor field $H_{\mu \nu }$ in Minkowski spacetime not to be superfluously restricted in degrees of freedom, apart from the constraint due to which the true vacuum in the theory is chosen by the Lorentz violation.
Outline of the paper
--------------------
The paper is organized in the following way. Further in this section we discuss the main features of SLIV regarding both input and emergent gauge invariance. The focus of this paper will be on emergent gauge invariance. In section 2 we review the emergent QED and Yang-Mills theories [@cj], which appear due to a SLIV constraint being put on a vector field or a vector field multiplet, respectively. In section 3 we generalize this approach to the tensor field case and find the emergent gravity theory whose vacuum is also determined by spontaneous Lorentz violation. Finally, in section 4, we present a résumé and conclude.
SLIV: an intact physical Lorentz invariance
-------------------------------------------
The original models realizing the SLIV conjecture were based on a four fermion (current-current) interaction, where the massless vector NG modes appear as fermion-antifermion pair composite states [@bjorken]. This is in complete analogy with the massless composite scalar modes in the original Nambu-Jona-Lasinio model [@NJL]. Unfortunately, owing to the lack of a starting gauge invariance in such models and the composite nature of the NG modes which appear, it is hard to explicitly demonstrate that these modes together really form a massless vector boson as a gauge field candidate universally interacting with all kinds of matter. Rather, there are in general three separate massless NG modes, two of which may mimic the transverse photon polarizations, while the third one must be appropriately suppressed.
In this connection, the more instructive laboratory for SLIV consideration proves to be a simple class of QED type models [@nambu] having from the outset a gauge invariant form, in which the spontaneous Lorentz violation is realized through the nonlinear constraint (\[const\]). Remarkably, this type of model makes the vector Goldstone boson a true gauge boson (photon), whereas the physical Lorentz invariance is left intact. Indeed, despite an evident similarity with the nonlinear $\sigma $-model for pions, the nonlinear QED theory ensures that all the physical Lorentz violating effects prove to be non-observable, due to the starting gauge invariance involved. It was shown [@nambu], albeit only in the tree approximation and for time-like SLIV ($n^{2}>0$), that the non-linear constraint (\[const\]) implemented as a supplementary condition into the standard QED Lagrangian containing the charged fermion field $\psi (x)$ $$L_{QED}=-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }+\overline{\psi }(i\gamma \partial +m)\psi -eA_{\mu }\overline{\psi }\gamma ^{\mu }\psi \text{ , \ }A_{\mu }A^{\mu }=n^{2}M^{2}\text{\ } \label{lag11}$$ appears in fact as a possible gauge choice for the vector field $A_{\mu }$. At the same time the $S$-matrix remains unaltered under such a gauge convention. Actually, this nonlinear QED contains a plethora of Lorentz and $CPT$ violating couplings when it is expressed in terms of the pure Goldstonic photon modes ($a_{\mu }$) according to the constraint condition (\[const\]) $$A_{\mu }=a_{\mu }+\frac{n_{\mu }}{n^{2}}(M^{2}-n^{2}a^{2})^{\frac{1}{2}}\text{ , \ }n_{\mu }a^{\mu }=0\text{ \ \ \ \ (}a^{2}\equiv a_{\mu }a^{\mu }\text{).} \label{gol}$$ In addition there is an effective “Higgs” mode $(n_{\mu }/n^{2})(M^{2}-n^{2}a^{2})^{1/2}$ given by the constraint (for definiteness, one takes the positive sign for the square root when expanding it in powers of $a^{2}/M^{2}$). However, the contributions of these Lorentz violating couplings to physical processes completely cancel out among themselves. So, SLIV was shown to be superficial, as it affects only the gauge of the vector potential $A_{\mu }$, at least in the tree approximation [@nambu].
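As a numerical aside (an illustration, not taken from [@nambu]; the metric signature $(+,-,-,-)$, the value of $M$ and the random spatial components are arbitrary choices), one can check directly that the parametrization (\[gol\]) resolves the constraint (\[const\]) identically for a time-like unit vector $n_{\mu }$ and Goldstone modes $a_{\mu }$ obeying $n_{\mu }a^{\mu }=0$:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])          # Minkowski metric, signature (+,-,-,-)

def dot(u, v):
    # Lorentz inner product u_mu v^mu
    return u @ eta @ v

M = 2.5
n = np.array([1.0, 0.0, 0.0, 0.0])               # time-like unit vector, n^2 = +1
n2 = dot(n, n)

rng = np.random.default_rng(7)
a = np.concatenate(([0.0], rng.normal(size=3)))  # purely spatial Goldstone modes, n.a = 0

# Composed field A = a + (n/n^2) sqrt(M^2 - n^2 a^2), as in the parametrization above.
A = a + (n / n2) * np.sqrt(M**2 - n2 * dot(a, a))
print(dot(A, A), n2 * M**2)                      # the length-fixing constraint holds
```

The same algebra goes through for space-like SLIV ($n^{2}=-1$), provided $M^{2}-n^{2}a^{2}>0$ on the configurations considered.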
Some time ago, this result was extended to the one-loop approximation and for both time-like ($n^{2}>0$) and space-like ($n^{2}<0$) Lorentz violation [@az]. All the contributions to the photon-photon, photon-fermion and fermion-fermion interactions violating physical Lorentz invariance were shown to exactly cancel among themselves, in the manner observed by Nambu long ago for the simplest tree-order diagrams. This means that the constraint (\[const\]), having been treated as a nonlinear gauge choice at the tree (classical) level, remains as a gauge condition when quantum effects are taken into account as well. So, in accordance with Nambu’s original conjecture, one can conclude that physical Lorentz invariance is left intact at least in the one-loop approximation, provided that we consider the standard gauge invariant QED Lagrangian (\[lag11\]) taken in flat Minkowski spacetime. Later this result was also confirmed for spontaneously broken massive QED [@kep], non-Abelian theories [@jej] and tensor field gravity [@cjt]. Some interesting aspects of SLIV in nonlinear QED were considered in [@ur].
SLIV: emergent gauge symmetries
-------------------------------
In the above-discussed models, due to the assumed gauge symmetry, physical Lorentz invariance always appears intact, in the sense that all Lorentz non-invariant effects caused by the vector field vacuum expectation values (vevs) are physically unobservable. However, the most important property of the nonlinear vector field SLIV constraint (\[const\]) was shown [@cj] to be that one does not have to impose gauge symmetry directly. Indeed, we showed that gauge invariance is unavoidable if the equations of motion are to have enough freedom to allow a constraint like (\[const\]) to be fulfilled. This need for gauge symmetry was deduced in a model with the nonlinear $\sigma $-model type spontaneous Lorentz violation, in the framework of an arbitrary relativistically invariant Lagrangian for elementary vector and matter fields, which are assumed only to possess some global internal symmetry. One simply assumes that the existence of the constraint (\[const\]) is to be upheld by adjusting the parameters of the Lagrangian. The SLIV conjecture happens to be powerful enough by itself to require gauge invariance, provided that we allow the parameters in the corresponding Lagrangian density to be adjusted so as to ensure self-consistency without losing too many degrees of freedom. Namely, due to the spontaneous Lorentz violation determined by the constraint (\[const\]), the true vacuum in such a theory is chosen so that this theory acquires on its own a gauge-type invariance, which gauges the starting global symmetry of the interacting vector and matter fields involved. In essence, the gauge invariance (with a proper gauge-fixing term) appears as a necessary condition for these vector fields not to be superfluously restricted in degrees of freedom.
Let us dwell upon this point in more detail. Generally, while a conventional variation principle requires the equations of motion to be satisfied, it is possible to eliminate one component of a general 4-vector field $A_{\mu }$, in order to describe a pure spin-1 particle by imposing a supplementary condition. In the massive vector field case there are three physical spin-1 states to be described by the $A_{\mu }$ field. Similarly in the massless vector field case, although there are only two physical (transverse) photon spin states, one cannot construct a massless 4-vector field $A_{\mu }$ as a linear combination of creation and annihilation operators for helicity $\pm
1 $ states in a relativistically covariant way, unless one fictitious state is added [@GLB]. So, in both the massive and massless vector field cases, only one component of the $A_{\mu }$ field may be eliminated and still preserve Lorentz invariance. Once the SLIV constraint (\[const\]) is imposed, it is therefore not possible to satisfy another supplementary condition, since this would superfluously restrict the number of degrees of freedom for the vector field. In fact a further reduction in the number of independent $A_{\mu }$ components would make it impossible to set the required initial conditions in the appropriate Cauchy problem and, in quantum theory, to choose self-consistent equal-time commutation relations [@ogi3].
We now turn to the question of the consistency of a constraint with the equations of motion for a general 4-vector field $A_{\mu }$. Actually, there are only two possible covariant constraints for such a vector field in a relativistically invariant theory - the holonomic SLIV constraint, $%
C(A)=A_{\mu }A^{\mu }-n^{2}M^{2}=0$ (\[const\]), and the non-holonomic one, known as the Lorentz condition, $C(A)=\partial _{\mu }A^{\mu }=0$. In the presence of the SLIV constraint $C(A)=A^{\mu }A_{\mu }-n^{2}M^{2}=0$, it follows that the equations of motion can no longer be independent. The important point is that, in general, the time development would not preserve the constraint. So the parameters in the Lagrangian have to be chosen in such a way that effectively we have one less equation of motion for the vector field. This means that there should be some relationship between all the (vector and matter) field Eulerians ($E_{A}$, $E_{\psi }$, ...) involved[^4]. Such a relationship can quite generally be formulated as a functional - but by locality just a function - of the Eulerians, $%
F(E_{A},E_{\psi },...)$, being put equal to zero at each spacetime point with the configuration space restricted by the constraint $C(A)=0$: $$F(C=0;\text{ \ }E_{A},E_{\psi },...)=0\text{ .} \label{FF}$$This relationship must satisfy the same symmetry requirements of Lorentz and translational invariance, as well as all the global internal symmetry requirements, as the general starting Lagrangian $L(A,\psi ,...)$ does. We shall use this relationship in subsequent sections as the basis for gauge symmetry generation in the SLIV constrained vector and tensor field theories.
Let us now consider a ``Taylor expansion'' of the function $F$ expressed as a linear combination of terms involving various field combinations multiplying the Eulerians or derivatives acting on them[^5]. The constant term in this expansion is of course zero since the relation (\[FF\]) must be trivially satisfied when all the Eulerians vanish, i.e. when the equations of motion are satisfied. We now consider just the terms containing field combinations (and derivatives) with mass dimension 4, corresponding to the Lorentz invariant expressions $$\partial _{\mu }(E_{A})^{\mu },\text{ }A_{\mu }(E_{A})^{\mu },\text{ }%
E_{\psi }\psi ,\text{ }\overline{\psi }E_{\overline{\psi }}. \label{fff}$$All the other terms in the expansion contain field combinations and derivatives with higher mass dimension and must therefore have coefficients with an inverse mass dimension. We expect the mass scale associated with these coefficients should correspond to a large fundamental mass (e.g. the Planck mass $M_{P}$). Hence we conclude that such higher dimensional terms must be highly suppressed and can be neglected. A priori these neglected terms could lead to the breaking of the spontaneously generated gauge symmetry at high energy. However it could well be that a more detailed analysis would reveal that the imposed SLIV constraint requires an exact gauge symmetry. Indeed, if one uses classical equations of motion, a gauge breaking term will typically predict the development of the gauge in a way that is inconsistent with our gauge fixing constraint $C(A)=0$. Thus the theory will generically only be consistent if it has exact gauge symmetry.
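The mass-dimension count behind the selection (\[fff\]) is easily checked (with $[L]=4$, $[A_{\mu }]=1$, $[\psi ]=3/2$, so that $[E_{A}]=3$ and $[E_{\psi }]=[E_{\overline{\psi }}]=5/2$):

```latex
[\partial _{\mu }(E_{A})^{\mu }]=1+3=4,\qquad
[A_{\mu }(E_{A})^{\mu }]=1+3=4,\qquad
[E_{\psi }\psi ]=[\overline{\psi }E_{\overline{\psi }}]
  =\tfrac{5}{2}+\tfrac{3}{2}=4\ .
```

Any further Lorentz and internal-symmetry invariant built from the Eulerians has dimension larger than 4 and therefore enters $F$ only with an inverse-mass coefficient, as stated above.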
In the above discussion we have simply considered a single vector field. However in sections 2 and 3 we shall also consider a non-Abelian vector field $A_{\mu }^{a}$ and a tensor field $H_{\mu \nu }$ respectively. In these cases the lowest mass dimension terms analogous to the expressions (\[fff\]) have symmetry indices. The function analogous to $F$ in equation (\[FF\]), which is a linear combination of these terms, must respect the assumed global non-Abelian symmetry and Lorentz symmetry. So all the terms must transform in the same way and carry the same symmetry index, $a$ or $%
\nu $ respectively, which is then inherited by the function analogous to $F$. Since gravitational interactions vanish in the low energy limit, we have to include dimension 5 terms in our function $\mathcal{F}^{\mu }$ for the gravity case.
The other possible Lorentz covariant constraint $\partial _{\mu }A^{\mu }=0,$ while also being sensitive to the form of the constraint-compatible Lagrangian, leads to massive QED and massive Yang-Mills theories [@ogi3].
In the case of a symmetric two-tensor field $H_{\mu \nu }$, we consider spontaneous Lorentz violation realized through a nonlinear tensor field constraint of the type (\[const3\]). This constraint fixes the length of the tensor field in an analogous way to that of the vector field above by the constraint (\[const\]). For consistency between this constraint (\[const3\]) and the equations of motion, we require the parameters of the theory to be chosen in such a way that the above-mentioned relationship $%
\mathcal{F}^{\mu }=0$ be satisfied. As a result, the theory turns out to correspond to linearized general relativity in the weak field approximation, while some of the massless tensor Goldstone modes appearing through SLIV are naturally collected in the physical graviton. The accompanying diffeomorphism invariance appears as a necessary condition for the symmetric two-tensor field $H_{\mu \nu }$ in Minkowski spacetime not to be superfluously restricted in degrees of freedom, apart from the constraint due to which the true vacuum in the theory is chosen by the Lorentz violation. The emergent theory looks essentially nonlinear when expressed in terms of the pure Goldstone tensor modes and contains, besides general relativity (GR) in the weak-field limit approximation, a variety of new Lorentz and $CPT$ violating couplings. However, they do not lead to physical Lorentz violation, due to the simultaneously generated diffeomorphism invariance, once the tensor field gravity theory (being considered as the weak-field limit of general relativity) is properly extended to GR[^6]. So, this formulation of SLIV seems to amount to the fixing of a gauge for the tensor field in a special manner, making the Lorentz violation only superficial just as in the nonlinear QED framework [@nambu]. From this viewpoint, both conventional QED and GR theories appear to be Goldstonic theories, in which some of the gauge degrees of freedom of these fields are condensed and eventually emerge as a noncovariant gauge choice. The associated massless NG modes are collected in photons and gravitons, in such a way that physical Lorentz invariance is ultimately preserved.
The Vector Goldstone Boson Primer
=================================
Emergent QED
------------
Let us consider an arbitrary relativistically invariant Lagrangian $L(A,\psi
)$ of one vector field $A_{\mu }$ and one complex matter field $\psi $, taken to be a charged fermion for definiteness, in an Abelian model with the corresponding global $U(1)$ charge symmetry imposed. For convenience and the apparent simplicity of the method, we choose to impose the SLIV constraint (\[const\]) using a well-known classical procedure for holonomic constraints (see, for example, [@lan]), involving a Lagrange multiplier term in an appropriately extended Lagrangian $L^{\prime}(A,\psi, \lambda )$. Since the main point of the present article is to consider theories that become inconsistent unless they have special relations between the parameters of the theory – making them into gauge theories – we want to impose the SLIV constraint in a way that leads generically to such an inconsistency. The trick we use to achieve this is to arrange for the Lagrange multiplier field $\lambda (x)$ to disappear from the equations of motion (Eulerians) for the other fields. In order that the auxiliary field $%
\lambda (x)$, which acts as the Lagrange multiplier, should not appear in the equations of motion, we take a quadratic form for the Lagrange multiplier term as follows $$L^{\prime }(A,\psi ,\lambda )=L(A,\psi )-\frac{1}{4}\lambda \left( A_{\nu
}A^{\nu }-n^{2}M^{2}\right) ^{2}. \label{qu}$$Varying $L^{\prime }(A,\psi, \lambda )$ with respect to the auxiliary field $%
\lambda (x)$ gives the equation of motion $$E_{\lambda }^{\prime }=\partial L^{\prime }/\partial \lambda =\frac{1}{4}%
\left( A_{\nu }A^{\nu }-n^{2}M^{2}\right) ^{2}=0, \label{on11}$$leading to just the SLIV condition (\[const\]). The equations of motion for $A_{\mu }$ in this case are independent of the $\lambda (x)$, which completely decouples from them rather than acting as some extra source of charge density, as it would in the case of a linear Lagrange multiplier term[^7].
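Explicitly, the decoupling of $\lambda (x)$ can be seen by varying (\[qu\]) with respect to $A_{\mu }$ (a one-line check in the notation of the text):

```latex
(E_{A}^{\prime })^{\mu }=(E_{A})^{\mu }
  -\lambda A^{\mu }\left( A_{\nu }A^{\nu }-n^{2}M^{2}\right) =0\ ,
```

so that on the constraint surface (\[on11\]) the $\lambda $-dependent term vanishes identically and the vector field obeys $(E_{A})^{\mu }=0$ alone.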
Now, under the assumption that the SLIV constraint is preserved under the time development given by the equations of motion, we show how gauge invariance of the starting Lagrangian $L(A,\psi )$ is established. A conventional variation principle applied to the total Lagrangian $L^{\prime
}(A,\psi ,\lambda )$ requires the following equations of motion for the vector field $A_{\mu }$ and the auxiliary field $\lambda $ to be satisfied$$(E_{A}^{\prime })^{\mu }=(E_{A})^{\mu }=0\text{ , \ \ \ }C(A)=A_{\nu }A^{\nu
}-n^{2}M^{2}=0\text{\ }, \label{em}$$where the Eulerian $(E_{A})^{\mu }$ is given by the starting Lagrangian $%
L(A,\psi )$. However, in accordance with the general argumentation given in the Introduction, the existence of five equations for the 4-component vector field $A^{\mu }$ (one of which is the constraint) means that not all of the vector field Eulerian components can be independent. Therefore, there must be a relationship of the form $F(C=0;\text{ \ }E_{A},E_{\psi },...)=0$ given in equation (\[FF\]), expressed as a linear combination of the dimension 4 Lorentz invariant expressions given in equation (\[fff\]). It follows that the parameters in the Lagrangian $L(A,\psi )$ must be chosen so as to satisfy an identity between the vector and matter field Eulerians of the following type $$\partial _{\mu }(E_{A})^{\mu }=cA_{\mu }(E_{A})^{\mu }+itE_{\psi }\psi -it%
\overline{\psi }E_{\overline{\psi }}. \label{div}$$This identity immediately signals the invariance of the basic Lagrangian $%
L(A,\psi )$ under vector and fermion field local transformations whose infinitesimal form is given by[^8] $$\delta A_{\mu }=\partial _{\mu }\omega +c\omega A_{\mu },\text{ \ \ }\delta
\psi \text{\ }=it\omega \psi \text{ .} \label{trans}$$Here $\omega (x)$ is an arbitrary function, only being restricted by the requirement to conform with the nonlinear constraint (\[const\])$$(A_{\mu }+\partial _{\mu }\omega +c\omega A_{\mu })(A^{\mu }+\partial ^{\mu
}\omega +c\omega A^{\mu })=n^{2}M^{2}\text{ .}$$Conversely, the identity (\[div\]) follows from the invariance of the Lagrangian $L(A_{\mu },\psi )$ under the transformations (\[trans\]). Indeed, both direct and converse assertions are particular cases[^9] of Noether’s second theorem [@noeth]. The point is, however, that these transformations cannot in general form a group unless the constant $c$ vanishes. In fact, by constructing the corresponding Lie bracket operation $(\delta _{1}\delta _{2}-\delta
_{2}\delta _{1})$ for two successive vector field variations we find that, while the fermion transformation in (\[trans\]) is an ordinary Abelian local one with zero Lie bracket, for the vector field transformations there appears a non-zero result $$(\delta _{1}\delta _{2}-\delta _{2}\delta _{1})A_{\mu }=c(\omega
_{1}\partial _{\mu }\omega _{2}-\omega _{2}\partial _{\mu }\omega _{1}),
\label{SL1}$$which is proportional to the constant $c$. Thus we necessarily require $c=0$ for the bracket operation to be closed. Note also that for non-zero $c$ the variation of $A_{\mu }$ given by (\[SL1\]) is an essentially arbitrary vector function. Such a freely varying $A_{\mu }$ is only consistent with a trivial Lagrangian (i.e. $L=const$). Thus, in order to have a non-trivial Lagrangian, it is necessary to have $c=0$ and the theory given by the basic Lagrangian $L(A_{\mu },\psi )$ then possesses an Abelian local symmetry[^10].
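The closure argument around (\[SL1\]) can also be verified symbolically. The following sketch (assuming the sympy library is available; the field $A$ and the gauge functions $\omega _{1,2}$ are reduced to one dimension, which suffices for the algebra, with the variation ordering chosen to match (\[SL1\])) computes the Lie bracket of two successive variations $\delta A=\partial \omega +c\omega A$:

```python
import sympy as sp

x, c = sp.symbols('x c')
A = sp.Function('A')(x)            # vector field, reduced to one dimension
w1 = sp.Function('omega_1')(x)     # first gauge function
w2 = sp.Function('omega_2')(x)     # second gauge function

def delta(expr, w):
    """Push the variation delta A = w' + c*w*A through an expression
    that depends algebraically on A (sufficient for this check)."""
    return expr.diff(A) * (sp.diff(w, x) + c * w * A)

# Lie bracket of two successive variations acting on A
bracket = sp.expand(delta(delta(A, w1), w2) - delta(delta(A, w2), w1))

# The c^2 terms cancel, leaving c*(w1*w2' - w2*w1'): nonzero unless c = 0
expected = c * (w1 * sp.diff(w2, x) - w2 * sp.diff(w1, x))
print(sp.simplify(bracket - expected))  # 0
print(bracket.subs(c, 0))               # 0
```

The bracket is itself a gauge variation only when $c=0$, which is the conclusion drawn in the text.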
We have now shown how the choice of a vacuum conditioned by the SLIV constraint (\[const\]) enforces the choice of the parameters in the starting Lagrangian $L(A_{\mu },\psi )$, so as to convert the starting global $U(1)$ charge symmetry into a local one. This SLIV induced local Abelian symmetry (\[trans\]) allows the total Lagrangian $L^{\prime }$ to be determined in full. For a theory with renormalizable coupling constants, it is in fact the conventional QED Lagrangian (\[lag11\]) extended by the Lagrange multiplier term, which provides the SLIV constraint (\[const\]) imposed on the vector field $A_{\mu }.$ Thus, we eventually come to the total Lagrangian$$L^{\prime }(A,\psi ,\lambda )=L_{QED}-\frac{1}{4}\lambda \left( A_{\mu
}A^{\mu }-n^{2}M^{2}\right) ^{2} \label{qed2}$$in the most direct way. This type of Abelian vector field theory with a quadratic Lagrange multiplier term was recently considered in [@kkk]. The equations of motion generated by this theory are the equations in the absence of the constraint (\[const\]) plus the constraint itself. Thus the introduction of the quadratic Lagrange-multiplier type of term is in fact equivalent at the classical level to imposing the constraint on the equations of motion by hand[^11]. This theory is closely related to the Nambu QED model (\[lag11\]), in which the SLIV constraint is proposed to be substituted into the Lagrangian before varying the action, although the correspondence is not exact. The Nambu model yields a total of four equations for the fields: the constraint by itself and three equations of motion from the variation. Meanwhile, the model (\[qed2\]) yields five equations of motion instead, one of which is the constraint. The extra equation corresponds to the Gauss law, which in the Nambu approach is imposed as a separate initial condition that subsequently holds at all times, by virtue of the three equations of motion and the constraint [@kkk]. They both lead to SLIV, which generates massless Goldstone modes associated with photons and forces the massive mode to vanish. This pattern of SLIV emerges as a noncovariant gauge choice in an otherwise gauge invariant and Lorentz invariant theory, as was already discussed in the Introduction.
Emergent Yang-Mills theories
----------------------------
We shall here discuss the non-Abelian internal symmetry case and show that the Yang-Mills gauge fields also appear as possible vector Goldstone modes, when the true vacuum in the theory is chosen by the non-Abelian SLIV constraint $$Tr(\boldsymbol{A}_{\mu }\boldsymbol{A}^{\mu })=\boldsymbol{n}^{2}M^{2},\text{
\ \ }\boldsymbol{n}^{2}\equiv \boldsymbol{n}_{\mu }^{a}\boldsymbol{n}^{\mu
,a}=\pm 1, \label{const2}$$where $\boldsymbol{n}_{\mu }^{a}$ is now some ‘unit’ rectangular matrix. We consider a general Lorentz invariant Lagrangian $\mathrm{L}(\boldsymbol{A}%
_{\mu },\boldsymbol{\psi })$ for the vector and matter fields involved possessing some global internal symmetry given by a group $G$ with $D$ generators $t^{a}$ $$\lbrack t_{a},t_{b}]=ic_{abc}t_{c},\text{ \ }Tr(t_{a}t_{b})=\delta _{ab}%
\text{ \ \ }(a,b,c=0,1,...,D-1), \label{com}$$where $c_{abc}$ are the structure constants of $G$. The corresponding vector fields, which transform according to the adjoint representation of $G$, are given in the matrix form $\boldsymbol{A}_{\mu }=\boldsymbol{A}_{\mu
}^{a}t_{a}$. The matter fields (fermion fields for definiteness) are taken in the fundamental representation column $\boldsymbol{\psi }^{\sigma }$ ($%
\sigma =0,1,...,d-1$) of $G$.
We impose the SLIV constraint (\[const2\]), as in the above Abelian case, by introducing an extended Lagrangian $\mathrm{L}^{\prime }$ containing a quadratic Lagrange multiplier term $$\mathrm{L}^{\prime }(\boldsymbol{A}_{\mu },\boldsymbol{\psi }, \lambda )=%
\mathrm{L}(\boldsymbol{A}_{\mu },\boldsymbol{\psi })- \lambda/4[Tr(%
\boldsymbol{A}_{\mu }\boldsymbol{A}^{\mu })-\boldsymbol{n}^{2}M^{2}]^{2}.
\label{tot}$$The variation of $\mathrm{L}^{\prime}(\boldsymbol{A}_{\mu },\boldsymbol{\psi
}, \lambda)$ with respect to $\boldsymbol{A}_{\mu }$ gives the vector field equation of motion $$(\mathrm{E}_{\boldsymbol{A}})_{a}^{\mu }-\lambda\boldsymbol{A}_{a}^{\mu }[Tr(%
\boldsymbol{A}_{\mu }\boldsymbol{A}^{\mu })-\boldsymbol{n}^{2}M^{2}]=0\text{
\ \ \ }(a=0,1,...,D-1). \label{eqmI}$$Here the vector field Eulerian $\mathrm{E}_{\boldsymbol{A}}$ is determined by the starting Lagrangian $\mathrm{L}(\boldsymbol{A}_{\mu },\boldsymbol{%
\psi })$, while the Eulerian of the auxiliary field $\lambda(x) $ taken on-shell$$\mathrm{E}_{\lambda }^{\prime }=\partial \mathrm{L}^{\prime }/\partial
\lambda =\frac{1}{4}[Tr(\boldsymbol{A}_{\mu }\boldsymbol{A}^{\mu })-%
\boldsymbol{n}^{2}M^{2}]^{2}=0\text{ \ \ \ } \label{on2}$$gives the constraint (\[const2\]). So, once the constraint holds, one has the following simplified equations for the vector fields $$(\mathrm{E}_{\boldsymbol{A}})_{a}^{\mu }=0\text{ , \ \ } C(\boldsymbol{A}%
_{\mu }) = Tr(\boldsymbol{A}_{\mu }\boldsymbol{A}^{\mu }) -\boldsymbol{n}%
^{2}M^{2}=0, \label{eqs}$$whereas the auxiliary field $\lambda(x),$ as in the Abelian case, entirely decouples from the vector field dynamics.
The need to preserve the constraint $C(\boldsymbol{A}_{\mu }) = 0$ with time implies that the equations of motion for the vector fields $\boldsymbol{A}%
_{\mu }^{a}$ cannot be all independent. Consequently the parameters in the Lagrangian $\mathrm{L}(\boldsymbol{A}_{\mu },\boldsymbol{\psi })$ must be chosen so as to give a relationship between the Eulerians for the vector and matter fields analogous to equation (\[FF\]). We include just the lowest dimensional Lorentz invariant expressions constructed from the Eulerians in this relationship, on the grounds that other terms will be suppressed by a large mass parameter like $M_P$. These lowest dimension terms include $%
\partial _{\mu }(\mathrm{E}_{\boldsymbol{A}})_{a}^{\mu }$ and all the terms in the relationship must transform in the same way under the global symmetry group G. Hence the relationship must transform as the adjoint representation of G and carry the symmetry index $a$ $$\boldsymbol{F}_{a}(C=0;\text{ \ }\mathrm{E}_{\boldsymbol{A}},\mathrm{E}_{%
\boldsymbol{\psi }},...)=0\text{ \ \ \ }(a=0,1,...,D-1). \label{FF1}$$ It therefore takes the following form $$\partial _{\mu }(\mathrm{E}_{\boldsymbol{A}})_{a}^{\mu }= d_{\boldsymbol{A}%
}c_{abc}\boldsymbol{A}_{\mu }^{b}(\mathrm{E}_{\boldsymbol{A}})^{\mu ,c} + d_{%
\boldsymbol{\psi}}\mathrm{E}_{\boldsymbol{\psi }} (it_{a})\boldsymbol{\psi }%
+ d_{\overline{\boldsymbol{\psi }}}\overline{\boldsymbol{\psi }}(-it_{a})%
\mathrm{E}_{\overline{\boldsymbol{\psi }}}, \label{id11}$$ where $d_{\boldsymbol{A}}$, $d_{\boldsymbol{\psi}}$ and $d_{\overline{%
\boldsymbol{\psi }}}$ are as yet undetermined constants. Noether’s second theorem [@noeth] can be applied directly to this identity (\[id11\]), in order to derive the invariance of $\mathrm{L}(\boldsymbol{A}_{\mu },%
\boldsymbol{\psi })$ under vector and fermion field local transformations having the infinitesimal form $$\delta \boldsymbol{A}_{\mu }^{a}=\partial _{\mu }\omega ^{a}+ d_{\boldsymbol{%
A}}c_{abc}\boldsymbol{A}_{\mu }^{b}\omega^{c}, \text{ \ \ }\delta
\boldsymbol{\psi } \text{\ }= d_{\boldsymbol{\psi}}(it_{a})\omega ^{a}%
\boldsymbol{\psi } ,\text{ \ \ }\delta \overline{\boldsymbol{\psi }}\text{\ }%
= d_{\overline{\boldsymbol{\psi }}}\overline{\boldsymbol{\psi }}
(-it_{a})\omega ^{a}. \label{trans1}$$
Of course from the symmetry transformations (\[trans1\]) one can generate the commutators $(\delta_1\delta_2-\delta_2\delta_1)\boldsymbol{A}_{\mu
}^{a} $, $(\delta_1\delta_2-\delta_2\delta_1)\boldsymbol{\psi}$ and $%
(\delta_1\delta_2-\delta_2\delta_1)\overline{\boldsymbol{\psi }}$ as new symmetry transformations. However, in order to avoid generating too many symmetry transformations which would essentially only be consistent with the Lagrangian density being a constant, we need that the Lie algebra of the transformations should close. That is to say we need relations between the above Lie brackets of the form $$(\delta_1\delta_2-\delta_2\delta_1) = \delta_{br}, \label{lb}$$ where the functions $\omega_{br}^a(x)$ associated with the transformation $%
\delta_{br}$ are expressed in terms of the functions $\omega_1^a(x)$ and $%
\omega_2^a(x)$ for the transformations $\delta_1$ and $\delta_2$. For example $$(\delta_1\delta_2-\delta_2\delta_1)\boldsymbol{\psi} = d_{\boldsymbol{\psi}%
}^2[it_a,it_b]\omega_1^a\omega_2^b\boldsymbol{\psi} \label{lbpsi}$$ can be interpreted as $$\delta_{br}\boldsymbol{\psi} = d_{\boldsymbol{\psi}}it_c\omega_{br}^c
\boldsymbol{\psi} \label{brpsi}$$ provided that $$\omega_{br}^c=-d_{\boldsymbol{\psi}}c_{abc}\omega_1^a\omega_2^b.
\label{ombrpsi}$$ Corresponding formulas apply for the Lie bracket of two symmetry transformations acting on $\overline{\boldsymbol{\psi }}$ with $$\omega_{br}^c=-d_{\overline{\boldsymbol{\psi }}} c_{abc}\omega_1^a\omega_2^b.
\label{ombrpsibar}$$ Similarly the Lie bracket for the $\boldsymbol{A}_{\mu}^a$ field is given by $$(\delta_1\delta_2-\delta_2\delta_1)\boldsymbol{A}_{\mu}^a = -d_{\boldsymbol{A%
}}c_{abc}\partial_{\mu}(\omega_1^b\omega_2^c) + d_{\boldsymbol{A}%
}^2c_{abc}c_{bde}(\omega_1^c\omega_2^e- \omega_2^c\omega_1^e)\boldsymbol{A}%
_{\mu}^d.$$ Using the Jacobi identity, we then obtain the closure of the Lie algebra on the $\boldsymbol{A}_{\mu}^a$ field with $$\omega_{br}^c=-d_{\boldsymbol{A}}c_{abc}\omega_1^a\omega_2^b. \label{ombra}$$ In order to obtain full closure of the Lie algebra for all the fields, we require that the three expressions (\[ombrpsi\]), (\[ombrpsibar\]) and (\[ombra\]) for $\omega_{br}^c$ should be identical. Thus we obtain $$d_{\boldsymbol{A}} = d_{\boldsymbol{\psi}} = d_{\overline{\boldsymbol{\psi}}%
}. \label{deq}$$ Here the $\omega ^{a}(x)$ are arbitrary functions only being restricted, again as in the above Abelian case, by the requirement to conform with the corresponding nonlinear constraint (\[const2\]).
So, by choosing the parameters in the Lagrangian to be consistent with the constraint (\[const2\]), we have obtained a non-Abelian gauge symmetry under the transformations (\[trans1\]) with the coefficients satisfying (\[deq\]). In order to construct a non-Abelian field tensor $\boldsymbol{F}%
_{\mu\nu}^a$ having the usual relationship $$\boldsymbol{F}_{\mu\nu}^a=\partial_{\mu}\boldsymbol{A}_{\nu}^a
-\partial_{\nu}\boldsymbol{A}_{\mu}^a +c_{abc}\boldsymbol{A}_{\mu}^b%
\boldsymbol{A}_{\nu}^c \label{fmunu}$$ with the gauge fields, we have to rescale $\boldsymbol{A}_{\mu}^a$ and $%
\omega^a$ by a factor of $d_{\boldsymbol{A}}^{-1}$ $$\boldsymbol{A}_{\mu}^a \rightarrow \frac{\boldsymbol{A}_{\mu}^a}{d_{%
\boldsymbol{A}}}, \text{ \ \ } \omega^a \rightarrow \frac{\omega^a}{d_{%
\boldsymbol{A}}}. \label{rescale}$$ Then the transformations (\[trans1\]) expressed in terms of the rescaled field (\[rescale\]) become the standard non-Abelian gauge transformations. For a theory with renormalizable coupling constants, this derived gauge symmetry leads to the conventional Yang-Mills type Lagrangian $$\mathrm{L}(\boldsymbol{A}_{\mu },\psi )=-\frac{1}{4g^2}\,Tr(\boldsymbol{F}%
_{\mu \nu }\boldsymbol{F}^{\mu \nu })+\overline{\boldsymbol{\psi }}(i\gamma
\partial -m)\boldsymbol{\psi }+\overline{\boldsymbol{\psi }}\boldsymbol{A}%
_{\mu }\gamma ^{\mu }\boldsymbol{\psi } \label{nab1}$$with an arbitrary gauge coupling constant $g$.
Let us turn now to the spontaneous Lorentz violation which is caused by the nonlinear vector field constraint (\[const2\]). Although the Lagrangian $%
\mathrm{L}(\boldsymbol{A}_{\mu },\boldsymbol{\psi })$ only has an $%
SO(1,3)\otimes G$ invariance, the chosen SLIV constraint (\[const2\]) possesses a much higher accidental symmetry $SO(D,3D)$ determined by the dimensionality $D$ of the adjoint representation of $G$ to which the vector fields $\boldsymbol{A}_{\mu }^{a}$ belong. This symmetry is spontaneously broken at a scale $M,$ together with the actual $SO(1,3)\otimes G$ symmetry, by the vev $$<\boldsymbol{A}_{\mu }^{a}(x)>\text{ }=\boldsymbol{n}_{\mu }^{a}M.
\label{vevv}$$Here the vacuum direction is now given by the matrix $\boldsymbol{n}_{\mu
}^{a} $ describing simultaneously both of the generalized SLIV cases, time-like ($SO(D,3D)$ $\rightarrow SO(D-1,3D)$) or space-like ($SO(D,3D)$ $%
\rightarrow SO(D,3D-1)$) respectively, depending on the sign of $\boldsymbol{%
n}^{2}\equiv \boldsymbol{n}_{\mu }^{a}\boldsymbol{n}^{\mu ,a}=\pm 1$. In both cases this matrix has only one non-zero element, subject to the appropriate $SO(1,3)$ and (independently) $G$ rotations. They are, specifically, $\boldsymbol{n}_{0}^{0}$ or $\boldsymbol{n}_{3}^{0}$ provided that the vacuum expectation value (\[vevv\]) is developed along the $a=0$ direction in the internal space and along the $\mu =0$ or $\mu =3$ direction respectively in the ordinary four-dimensional spacetime. Side by side with one true vector Goldstone boson, corresponding to the spontaneous violation of the actual $SO(1,3)\otimes G$ symmetry of the Lagrangian $\mathrm{L}$, $%
D-1$ vector pseudo-Goldstone bosons (PGB) are also produced[^12] due to the breaking of the accidental $SO(D,3D)$ symmetry of the constraint (\[const2\]). In contrast to the familiar scalar PGB case [@GLA], the vector PGBs remain strictly massless, being protected by the simultaneously generated non-Abelian gauge invariance (\[nab1\]). Together with the above true vector Goldstone boson, they complete the whole gauge field multiplet of the internal symmetry group $G$.
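The mode counting here can be checked directly (an illustrative count; $SO(p,q)$ with $p+q=N$ has $N(N-1)/2$ generators). For the breaking $SO(D,3D)\rightarrow SO(D-1,3D)$, and likewise for $SO(D,3D)\rightarrow SO(D,3D-1)$, the number of broken generators is

```latex
\frac{4D(4D-1)}{2}-\frac{(4D-1)(4D-2)}{2}=4D-1\ ,
```

which is precisely the number of independent components of $\boldsymbol{A}_{\mu }^{a}$ left after the single constraint (\[const2\]), so that every surviving mode is a Goldstone or pseudo-Goldstone excitation.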
After the explicit use of this constraint (\[const2\]), which constitutes one supplementary condition on the vector field multiplet $\boldsymbol{A}%
_{\mu }^{a}$, one can identify the pure Goldstone field modes $\boldsymbol{a}%
_{\mu }^{a}$ as follows $$\text{\ \ }\boldsymbol{A}_{\mu }^{a}=\boldsymbol{a}_{\mu }^{a}+\frac{%
\boldsymbol{n}_{\mu }^{a}}{\boldsymbol{n}^{2}}(M^{2}-\boldsymbol{n}^{2}%
\boldsymbol{a}^{2})^{\frac{1}{2}}\text{ },\text{ \ }\boldsymbol{n}_{\mu }^{a}%
\boldsymbol{a}^{\mu ,a}\text{\ }=0\text{ \ \ \ }(\boldsymbol{a}^{2}\equiv
\boldsymbol{a}_{\mu }^{a}\boldsymbol{a}^{\mu ,a}). \label{sup'}$$There is also an effective ``Higgs'' mode ($\boldsymbol{n}%
_{\mu }^{a}/\boldsymbol{n}^{2})(M^{2}-\boldsymbol{n}^{2}\boldsymbol{a}%
^{2})^{1/2}$ given by the SLIV constraint (one takes again the positive sign for the square root when expanding it in powers of $\boldsymbol{a}^{2}/M^{2}$). Note that, apart from the pure vector fields, the general Goldstonic modes $\boldsymbol{a}_{\mu }^{a}$ contain $D-1$ scalar modes, $\boldsymbol{a}%
_{0}^{a^{\prime }}$ or $\boldsymbol{a}_{3}^{a^{\prime }}$ ($a^{\prime
}=1...D-1$), for the time-like ($\boldsymbol{n}_{\mu }^{a}=n_{0}^{0}g_{\mu
0}\delta ^{a0}$) or space-like ($\boldsymbol{n}_{\mu }^{a}=n_{3}^{0}g_{\mu
3}\delta ^{a0}$) SLIV respectively. They can be eliminated from the theory if one imposes appropriate supplementary conditions on the $D-1$ $%
\boldsymbol{a}_{\mu }^{a}$ fields which are still free of constraints. Using their overall orthogonality (\[sup'\]) to the physical vacuum direction $%
\boldsymbol{n}_{\mu }^{a}$, one can formulate these supplementary conditions in terms of a general axial gauge for the entire $\boldsymbol{a}_{\mu }^{a}$ multiplet $$n\cdot \boldsymbol{a}^{a}\equiv n_{\mu }\boldsymbol{a}^{\mu ,a}=0,\text{ \ }%
a=0,1,...D-1. \label{sup''}$$Here $n_{\mu }$ is the unit Lorentz vector, analogous to that introduced in the Abelian case, which is now oriented in Minkowskian space-time so as to be parallel to the vacuum matrix[^13] $\boldsymbol{n}_{\mu }^{a}$. As a result, in addition to the ``Higgs'' mode excluded earlier by the above orthogonality condition (\[sup'\]), all the other scalar fields are eliminated. Consequently only the pure vector fields, $%
\boldsymbol{a}_{i}^{a}$ ($i=1,2,3$ ) or $\boldsymbol{a}_{\mu ^{\prime }}^{a}$ ($\mu ^{\prime }=0,1,2$), for time-like or space-like SLIV respectively, are left in the theory. Clearly, the components $\boldsymbol{a}_{i}^{a=0}$ and $%
\boldsymbol{a}_{\mu ^{\prime }}^{a=0}$ correspond to the true Goldstone boson, for each type of SLIV respectively, while all the others (for $%
a=1...D-1$) are vector PGBs. Substituting the parameterization (\[sup'\]) with the SLIV constraint (\[const2\]) into the Lagrangian (\[nab1\]) and expanding the square root in powers of $\boldsymbol{a}^{2}/M^{2}$, one is led to a highly nonlinear theory in terms of the pure Goldstonic modes $%
\boldsymbol{a}_{\mu }^{a}$. The first and higher order terms in $1/M$ in this expansion of $\mathrm{L}(\boldsymbol{a}_{\mu }^{a}\boldsymbol{,\psi })$ are Lorentz and $CPT$ violating. Remarkably, however, this theory turns out to be physically equivalent to a conventional Yang-Mills theory. As was recently shown [@jej], the Lorentz and $CPT$ violating contributions to physical processes actually completely cancel out among themselves. Therefore, the SLIV constraint (\[const2\]) manifests itself as a noncovariant gauge condition which does not break physical Lorentz invariance in the theory.
All the above allows one to conclude that the Yang-Mills theories can naturally be interpreted as emergent theories caused by SLIV, although physical Lorentz invariance still remains intact due to the simultaneously generated gauge invariance. These emergent theories in fact provide the building blocks for the Standard Model and beyond, whether they be exact as in quantum chromodynamics or spontaneously broken as in grand unified theories and non-Abelian family symmetry models [@ram; @su3].
Emergent Tensor Field Gravity
=============================
Deriving diffeomorphism invariance
----------------------------------
Let us consider an arbitrary relativistically invariant Lagrangian $\mathcal{%
L}(H_{\mu \nu },\phi )$ for one symmetric two-tensor field $H_{\mu \nu }$ and one real scalar field $\phi $ (chosen as the simplest possible matter) in the theory taken in Minkowski spacetime. As in vector theories we restrict ourselves to the minimal dimension interactions. In contrast to vector fields, whose basic interactions contain dimensionless coupling constants, interactions with coupling constants of inverse mass dimensionality (and some of higher powers) are essential for symmetric tensor fields. Otherwise, one has only a free theory for the spin two components of the tensor field in the presence of matter fields.
We first turn to the imposition of the SLIV constraint $$H_{\mu \nu }H^{\mu \nu }=\mathfrak{n}^{2}M^{2}\text{ , \ \ \ \ }\mathfrak{n}%
^{2}\equiv \mathfrak{n}_{\mu \nu }\mathfrak{n}^{\mu \nu }=\pm 1\text{ }
\label{const3a}$$on the tensor fields $H_{\mu \nu }$ in the Lagrangian $\mathcal{L}$, which only possesses global Lorentz (and translational) invariance. Following the procedure used above for the vector field case, we introduce an extended Lagrangian $\mathcal{L}^{\prime }$ containing a quadratic Lagrange multiplier term $$\mathcal{L}^{\prime }(H_{\mu \nu },\phi ,\mathcal{\lambda })=\mathcal{L}%
(H_{\mu \nu },\phi )-\frac{1}{4}\mathfrak{\lambda }\left( H_{\mu \nu }H^{\mu
\nu }-\mathfrak{n}^{2}M^{2}\right) ^{2}. \label{lag22}$$The variation of $\mathcal{L}^{\prime }(H_{\mu \nu },\phi ,\mathcal{\lambda }%
)$ with respect to $H_{\mu \nu }$ gives[^14] the tensor field equation of motion $$(\mathcal{E}_{H})^{\mu \nu }-\mathfrak{\lambda }H^{\mu \nu }\left( H_{\rho
\sigma }H^{\rho \sigma }-\mathfrak{n}^{2}M^{2}\right) =0. \label{eqm2}$$Here the tensor field Eulerian $(\mathcal{E}_{H})^{\mu \nu }$ is determined by the starting Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$, while the Eulerian of the auxiliary field $\mathfrak{\lambda }(x)$ taken on-shell$$\mathcal{E}_{\mathfrak{\lambda }}^{\prime }=\partial \mathcal{L}^{\prime
}/\partial \mathfrak{\lambda }=\frac{1}{4}\left( H_{\mu \nu }H^{\mu \nu }-%
\mathfrak{n}^{2}M^{2}\right) ^{2}=0\text{ \ } \label{on1}$$gives the constraint (\[const3a\]). So, as soon as this constraint holds, one has the simplified equations of motion$$(\mathcal{E}_{H})^{\mu \nu }=0\text{ , \ \ }\mathcal{C}(H_{\mu \nu })=H_{\mu
\nu }H^{\mu \nu }-\mathfrak{n}^{2}M^{2}=0. \label{eqss}$$However, due to the quadratic form of the Lagrange multiplier term, the auxiliary field $\mathfrak{\lambda }(x)$ entirely decouples from the tensor field dynamics rather than acting as a source of energy-momentum density, as would be the case if we considered instead a linear Lagrange multiplier term.
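The decoupling of the auxiliary field can be illustrated with a one-variable toy analog (our own sketch, not part of the text): replacing $H_{\mu \nu }$ by a single variable $h$, the quadratic multiplier term contributes $-\mathfrak{\lambda }h(h^{2}-M^{2})$ to the $h$ equation of motion, which vanishes identically on the constraint surface. A quick symbolic check:

```python
import sympy as sp

h, lam, M = sp.symbols('h lambda M', real=True)
L = sp.Function('L')(h)                  # stands in for the dynamical part of the Lagrangian

# extended Lagrangian with a quadratic Lagrange-multiplier term, cf. (lag22)
C = h**2 - M**2                          # toy analog of H_{mu nu}H^{mu nu} - n^2 M^2
Lp = L - sp.Rational(1, 4) * lam * C**2

# stationarity in lambda enforces the constraint C = 0, cf. (on1)
eq_lam = sp.diff(Lp, lam)
assert sp.simplify(eq_lam + C**2 / 4) == 0

# the lambda-dependent piece of the h equation of motion, cf. (eqm2)
extra = sp.diff(Lp, h) - sp.diff(L, h)   # equals -lam * h * C

# on the constraint surface h = +/- M this piece vanishes identically, so lambda
# decouples (unlike a linear multiplier term, which would act as a source)
assert extra.subs(h, M) == 0 and extra.subs(h, -M) == 0
```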
The tensor field $H_{\mu \nu }$, both massive and massless, contains many components which are usually eliminated by imposing some supplementary conditions[^15]. In the massive tensor field case there are five physical spin-$2$ states to be described by $%
H_{\mu \nu }$. Similarly, in the massless tensor field case, although there are only two physical (transverse) spin states associated with the graviton, one cannot construct a symmetric two-tensor field $H_{\mu \nu }$ as a linear combination of creation and annihilation operators for helicity $\pm 2$ states. It is necessary to add three (and $2j-1,$ in general, for a spin $j$ massless field) fictitious states with other helicities [@GLB]. So, in both the massive and massless tensor field cases, at most five components in the $10$-component tensor field $H_{\mu \nu }$ may be eliminated and still preserve Lorentz invariance. Once the SLIV constraint (\[const3a\]) is imposed, it follows that only four further supplementary conditions are possible. In section 3.2 we shall actually only impose three further supplementary conditions, reducing the number of independent components of $%
H_{\mu \nu }$ to 6 as is done in the Hilbert-Lorentz gauge of general relativity.
We now turn to the question of the consistency of the SLIV constraint with the equations of motion for a general symmetric tensor field $H_{\mu \nu }$. For an arbitrary Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$, the time development of the fields would not preserve the constraint. So the parameters in the Lagrangian must be chosen so as to give a relationship between the Eulerians for the tensor and matter fields. In addition to the lowest dimensional Lorentz covariant expressions constructed from the Eulerians, we also include the next to lowest dimensional Lorentz covariant expressions in this relationship. This is necessary in order to allow for gravitational interactions which vanish in the low energy limit. The lowest dimensional terms include $\partial _{\mu }(\mathcal{E}_{H})^{\mu \nu }$. Hence the relationship must transform as a Lorentz vector and carry the Lorentz index $\mu $ $$\mathcal{F}^{\mu }(\mathcal{C}=0;\text{ \ }\mathcal{E}_{H},\mathcal{E}_{\phi
},...)=0\text{ \ \ \ }(\mu =0,1,2,3). \label{fmu}$$It therefore takes the following form $$\partial _{\mu }(\mathcal{E}_{H})^{\mu \nu }=P_{\alpha \beta }^{\nu }(%
\mathcal{E}_{H})^{\alpha \beta }+Q^{\nu }\mathcal{E}_{\phi }. \label{idd}$$Here $P_{\alpha \beta }^{\nu }$ and $Q^{\nu }$ are operators which take the following general form $$\begin{aligned}
P_{\alpha \beta }^{\nu } &=&p_{0}\eta _{\alpha \beta }\partial ^{\nu
}+p_{1}\eta ^{\nu \rho }[H_{\alpha \rho }\partial _{\beta }+H_{\rho \beta
}\partial _{\alpha }+ \label{pp} \\
&&+a(\partial _{\beta }H_{\alpha \rho }+\partial _{\alpha }H_{\rho \beta
})+b\partial _{\rho }H_{\alpha \beta }+cH_{\alpha \beta }\partial _{\rho })]%
\text{ , \ } \notag \\
Q^{\nu } &=&q_{0}\partial ^{\nu }+q_{1}\eta ^{\nu \rho }(\partial _{\rho
}\phi +d\phi \partial _{\rho }).\text{\ } \label{ppp}\end{aligned}$$The constants $p_{0}$ and $q_{0}$ are dimensionless and associated with dimension 4 terms in the relationship, while $p_{1}$ and $q_{1}$ have an inverse mass dimension and are associated with dimension 5 terms in the relationship[^16]. In addition $a$, $b$, $c$ and $d$ are as yet undetermined dimensionless constants. According to Noether’s second theorem [@noeth], the identity (\[idd\]) implies the invariance of the corresponding action $$I=\int \mathcal{L}(H_{\mu \nu },\phi )d^{4}x \label{i}$$under local transformations of the tensor and scalar fields having the infinitesimal form $$\begin{aligned}
\delta H_{\mu \nu } &=&\partial _{\mu }\xi _{\nu }+\partial _{\nu }\xi _{\mu
}+p_{0}\eta _{\mu \nu }\partial _{\rho }\xi ^{\rho }+p_{1}[\partial _{\mu
}\xi ^{\rho }H_{\rho \nu }+\partial _{\nu }\xi ^{\rho }H_{\mu \rho }+
\label{tr} \\
&&+a(\xi ^{\rho }\partial _{\mu }H_{\rho \nu }+\xi ^{\rho }\partial _{\nu
}H_{\mu \rho })+b\xi ^{\rho }\partial _{\rho }H_{\mu \nu }+c\partial _{\rho
}\xi ^{\rho }H_{\mu \nu }], \notag \\
\delta \phi &=&q_{0}\partial _{\rho }\xi ^{\rho }+q_{1}(\xi ^{\rho
}\partial _{\rho }\phi +d\partial _{\rho }\xi ^{\rho }\phi ). \label{sc}\end{aligned}$$Here $\xi ^{\mu }(x)$ is an arbitrary 4-vector parameter function, only being required to conform with the nonlinear constraint (\[const3\]). These field transformations are treated by themselves as fixed coordinate system transformations[^17], changing only the functional forms of the fields. One should remember that we started from a fundamentally flat Minkowski spacetime with only one set of coordinates (modulo global Lorentz transformations). However it actually turns out that these field transformations correspond in the end to reparameterization transformations. Thus it becomes natural to think of using a modified set of coordinates, deviating from the original fundamental coordinate system $x^{\mu }$ by $\delta x^{\mu }=x^{\prime \,\mu }-x^{\mu
}\propto \xi ^{\mu }$. In going from the $x^{\mu }$ to the $x^{\prime \,\mu }
$ coordinate system, there are supposed to be infinitesimal coordinate variations $\delta x^{\mu }$ under which the action $I$ is also left invariant. The form of these variations will be established later.
In order to avoid generating too many symmetry transformations, which would only be consistent with a trivial Lagrangian (i.e. $\mathcal{L}=const$), we further require that the general transformations (\[tr\], \[sc\]) constitute a group. This means that they have to satisfy the Lie bracket operations $$\begin{aligned}
(\delta _{1}\delta _{2}-\delta _{2}\delta _{1})H_{\mu \nu } &=&\delta
_{br}H_{\mu \nu } , \label{br1} \\
(\delta _{1}\delta _{2}-\delta _{2}\delta _{1})\phi &=&\delta _{br}\phi.
\notag\end{aligned}$$ Here the 4-vector parameter function $\xi _{br}^{\mu }$ related to the Lie bracket transformation $\delta _{br}$ is supposed to be constructed from the parameter functions $\xi _{1}^{\mu }$ and $\xi _{2}^{\mu }$, which determine the single transformations $\delta _{1}$ and $\delta _{2}$ in (\[tr\]) and (\[sc\]). As in the vector field case, this requirement that the Lie algebra of transformations should close puts strong restrictions on the values of the constants appearing in (\[tr\]) and (\[sc\]). Actually, after a straightforward calculation similar to that given for the non-Abelian symmetry case in section 2.2, one finds that the Lie bracket relations (\[br1\]) are only satisfied for the following values of the constants in the field variations $\delta H_{\mu \nu }$ and $\delta \phi $: $$\text{ }a=0,\text{ }b=1,\text{ }c=p_{0}\text{ };\text{ \ }q_{0}=0,\text{ }%
q_{1}=p_{1}\text{ }. \label{rel1}$$The parameter function $\xi_{br}^{\mu }$ associated with the transformation $%
\delta_{br}$ is given by the expression $$\xi _{br}^{\mu }=p_{1}(\xi _{1}^{\rho }\partial _{\rho }\xi _{2}^{\mu }-\xi
_{2}^{\rho }\partial _{\rho }\xi _{1}^{\mu })\text{ .} \label{br}$$Remarkably, although the general transformations (\[tr\], \[sc\]) were only restricted to form a group, the emergent theory turns out to possess a diffeomorphism invariance provided that the field transformations (\[tr\], \[sc\]) are accompanied by an infinitesimal coordinate variation (see below).
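The closure relation (\[br\]) can be cross-checked symbolically on the scalar transformation law (\[sc\]) with the values (\[rel1\]) inserted. The check below (an illustrative two-dimensional toy with concrete parameter functions, our own construction) treats $\delta _{1}\delta _{2}$ as operator composition and confirms that the commutator is again a transformation of the same form, with the bracket parameter $\xi _{br}^{\mu }=p_{1}(\xi _{1}\cdot \partial \,\xi _{2}^{\mu }-\xi _{2}\cdot \partial \,\xi _{1}^{\mu })$:

```python
import sympy as sp

x, y, p1, d = sp.symbols('x y p_1 d')
X = (x, y)
phi = sp.Function('phi')(x, y)

# concrete illustrative parameter functions xi_1^mu, xi_2^mu
xi1 = (x**2 * y, sp.sin(x))
xi2 = (y**2, x * y)

def delta(f, xi):
    """Scalar variation (sc) with the closure values (rel1): q0 = 0, q1 = p1."""
    div = sum(sp.diff(xi[i], X[i]) for i in range(2))
    return p1 * (sum(xi[i] * sp.diff(f, X[i]) for i in range(2)) + d * div * f)

# operator commutator acting on phi
comm = delta(delta(phi, xi2), xi1) - delta(delta(phi, xi1), xi2)

# bracket parameter (br): xi_br = p1 (xi1 . d xi2 - xi2 . d xi1)
xi_br = tuple(p1 * sum(xi1[i] * sp.diff(xi2[k], X[i])
                       - xi2[i] * sp.diff(xi1[k], X[i]) for i in range(2))
              for k in range(2))

residual = sp.simplify(sp.expand(comm - delta(phi, xi_br)))
assert residual == 0
```

Note that the overall sign of $\xi _{br}^{\mu }$ depends on the convention adopted for the order of composition in $\delta _{1}\delta _{2}$.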
Actually, for the quantity $g_{\mu \nu }$ defined by the equation $$g^{p_{0}/2}g_{\mu \nu }=\eta _{\mu \nu }+p_{1}H_{\mu \nu }\text{ , \ \ }%
g\equiv det(g_{\mu \nu }),\text{\ \ } \label{metr}$$the transformation (\[tr\]) may be written in the form[^18]$$\begin{aligned}
\delta g_{\mu \nu } &=&p_{1}(\partial _{\mu }\xi ^{\rho }g_{\rho \nu
}+\partial _{\nu }\xi ^{\rho }g_{\mu \rho }+\xi ^{\rho }\partial _{\rho
}g_{\mu \nu })+ \label{mt} \\
&&+p_{0}\left( p_{1}\partial _{\rho }\xi ^{\rho }+\frac{1}{2}p_{1}\xi ^{\rho
}g^{\alpha \beta }\partial _{\rho }g_{\alpha \beta }-\frac{1}{2}g^{\alpha
\beta }\delta g_{\alpha \beta }\right) g_{\mu \nu }\text{ ,} \notag
\label{deltagmunu}\end{aligned}$$after the above-determined values (\[rel1\]) of the constants are substituted in (\[tr\]). Even in the general case with a non-vanishing value of the constant $p_{0}$, the explicit solution to this equation for $%
\delta g_{\mu \nu }$ is still given just by the terms independent of $p_0$ in (\[deltagmunu\]) $$\delta g_{\mu \nu }=p_{1}(\partial _{\mu }\xi ^{\rho }g_{\rho \nu }+\partial
_{\nu }\xi ^{\rho }g_{\mu \rho }+\xi ^{\rho }\partial _{\rho }g_{\mu \nu }),
\label{mt1}$$provided that the contravariant tensor $g^{\mu \nu }$ is properly defined, that is $g_{\alpha \beta }g^{\beta \gamma }=\delta _{\alpha }^{\gamma }.$ In fact one can readily verify that $$p_{1}\partial _{\rho }\xi ^{\rho }+\frac{1}{2}p_{1}\xi ^{\rho }g^{\alpha
\beta }\partial _{\rho }g_{\alpha \beta }-\frac{1}{2}g^{\alpha \beta }\delta
g_{\alpha \beta }=0\text{ ,} \label{can}$$when the expression (\[mt1\]) is substituted for $\delta g_{\alpha \beta}$ in (\[can\]). So, one can see that $g_{\mu \nu }$ transforms as the metric tensor in Riemannian geometry with general coordinate transformations taken in the form $$\delta x^{\mu }=-p_{1}\xi ^{\mu }(x)\text{ .} \label{co}$$The constant $p_{1}$ may then be absorbed into the transformation 4-vector parameter function $\xi ^{\mu }$. Indeed, for this form of the coordinate variation, the metric changes to $$g^{\prime \mu \nu }=\frac{\partial x^{\prime \mu }}{\partial x^{\rho }}\frac{%
\partial x^{\prime \nu }}{\partial x^{\sigma }}g^{\rho \sigma } \label{met}$$Plugging in $g^{\mu \nu }=\eta ^{\mu \nu }-p_{1}H^{\mu \nu }+\cdot \cdot
\cdot $ and using $(\partial x^{\prime \mu }/\partial x^{\rho })=\delta
_{\rho }^{\mu }-p_{1}\partial _{\rho }\xi ^{\mu }$, one finds in the weak field limit (neglecting the terms containing $p_{1}H^{\mu \nu }$ and $\xi
^{\mu }$ altogether and properly lowering the indices with $\eta _{\mu \nu }$ to this order) the reduced transformation law for the tensor field $H_{\mu
\nu }$$$\delta H_{\mu \nu }=\partial _{\mu }\xi _{\nu }+\partial _{\nu }\xi _{\mu }%
\text{ .} \label{tr3}$$This result conforms with the general equation (\[mt1\]) taken in the same limit.
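The identity (\[can\]), which guarantees that the $p_{0}$-dependent terms in (\[mt\]) drop out, can be verified symbolically for a completely generic metric and parameter function. The two-dimensional check below (our own illustration) substitutes (\[mt1\]) into the left-hand side of (\[can\]) and confirms that it vanishes:

```python
import sympy as sp

x, y, p1 = sp.symbols('x y p_1')
X = (x, y)

# completely generic symmetric 2x2 metric built from arbitrary functions
g11, g12, g22 = (sp.Function(s)(x, y) for s in ('g11', 'g12', 'g22'))
g = sp.Matrix([[g11, g12], [g12, g22]])
ginv = g.inv()

# arbitrary parameter function xi^mu
xi = [sp.Function('xi0')(x, y), sp.Function('xi1')(x, y)]

# delta g_{mu nu} as given in (mt1)
def dg(m, n):
    return p1 * (sum(sp.diff(xi[r], X[m]) * g[r, n] for r in range(2))
                 + sum(sp.diff(xi[r], X[n]) * g[m, r] for r in range(2))
                 + sum(xi[r] * sp.diff(g[m, n], X[r]) for r in range(2)))

div_xi = sum(sp.diff(xi[r], X[r]) for r in range(2))
trace = sum(xi[r] * ginv[a, b] * sp.diff(g[a, b], X[r])
            for r in range(2) for a in range(2) for b in range(2))
g_dg = sum(ginv[a, b] * dg(a, b) for a in range(2) for b in range(2))

# left-hand side of the identity (can)
lhs = p1 * div_xi + p1 * trace / 2 - g_dg / 2
residual = sp.simplify(lhs)
assert residual == 0
```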
As to the scalar field $\phi (x),$ we can also simplify its transformation law (\[sc\], \[rel1\]) if we replace it by $\phi ^{\prime }=g^{-d/2}\phi
$$$\delta \phi ^{\prime }=-d/2g^{-d/2}(g^{\alpha \beta }\delta g_{\alpha \beta
})\phi +g^{-d/2}p_{1}(\xi ^{\rho }\partial _{\rho }\phi +d\partial _{\rho
}\xi ^{\rho }\phi )=p_{1}\xi ^{\rho }\partial _{\rho }\phi ^{\prime },
\label{phi1}$$where we have again used equation (\[can\]). Therefore, the transformations for the redefined field $\phi ^{\prime }$ (the prime will be omitted henceforth) amount to pure local translations.
So we have shown that, in the tensor field case, the imposition of the SLIV constraint (\[const3\]) promotes the starting global Poincare symmetry to the local diff invariance. This SLIV-induced gauge symmetry now completely determines the Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$ appearing in the invariant action $I$ (\[i\]). Actually, as is well-known [@kibble], if one requires the action integral defined over any arbitrary region to be invariant (that is, $\delta I=0$) under a total variation, including the variations of the fields (\[tr3\], \[phi1\]) and of the coordinates (\[co\]), one must have $$\delta \mathcal{L}+\partial _{\mu }(\delta x^{\mu }\mathcal{L})=0.
\label{sd}$$This implies that the Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$ should transform like a scalar density rather than being invariant as it usually is in the internal symmetry case considered in section 2. Now the explicit form of the Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$ satisfying the condition (\[sd\]), which could be referred to as the action-invariant Lagrangian, is readily deduced. Indeed, in the weak field approximation, this is the well-known linearized gravity Lagrangian $$\mathcal{L}(H_{\mu \nu },\phi )=\mathcal{L}(H)+\mathcal{L}(\phi )+\mathcal{L}%
_{int}. \label{tl}$$It consists of the $H$ field kinetic term of the form$$\mathcal{L}(H)=\frac{1}{2}\partial _{\lambda }H^{\mu \nu }\partial ^{\lambda
}H_{\mu \nu }-\frac{1}{2}\partial _{\lambda }H_{tr}\partial ^{\lambda
}H_{tr}-\partial _{\lambda }H^{\lambda \nu }\partial ^{\mu }H_{\mu \nu
}+\partial ^{\nu }H_{tr}\partial ^{\mu }H_{\mu \nu }\text{ ,} \label{fp}$$($H_{tr}$ stands for the trace of $H_{\mu \nu },$ $H_{tr}=\eta ^{\mu \nu
}H_{\mu \nu }$) together with the scalar field free Lagrangian part and its interaction term $$\mathcal{L}(\phi )=\frac{1}{2}\left( \partial _{\rho }\phi \partial ^{\rho
}\phi -m^{2}\phi ^{2}\right) \text{ },\text{ \ \ \ \ }\mathcal{L}_{int}=-%
\frac{1}{2M_{P}}H_{\mu \nu }T^{\mu \nu }(\phi )\text{ . \ \ \ \ \ }
\label{fh}$$Here $T^{\mu \nu }(\phi )$ is the conventional energy-momentum tensor for a scalar field $$T^{\mu \nu }(\phi )=\partial ^{\mu }\phi \partial ^{\nu }\phi -\eta ^{\mu
\nu }\mathcal{L}(\phi ) \label{tt}$$and the proportionality coefficient $p_{1}$ in the metric (\[metr\]) is chosen to be just the inverse Planck mass, $p_{1}=1/M_{P}$. It is clear that, in contrast to the tensor free field terms given above by $\mathcal{L}%
(H)$, the scalar free field part $\mathcal{L}(\phi )$ and its interaction term $\mathcal{L}_{int}$ (\[fh\]) are only approximately action-invariant under the diff transformations (\[tr3\], \[phi1\]). This only works in the weak field limit, treating $\partial_{\mu }\xi_{\nu}$ as being of the same order as $H_{\mu \nu }$.
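Consistency of the coupling $\mathcal{L}_{int}=-H_{\mu \nu }T^{\mu \nu }/2M_{P}$ rests on the on-shell conservation of the energy-momentum tensor (\[tt\]). As a cross-check (ours, in a two-dimensional toy spacetime) one finds $\partial _{\mu }T^{\mu \nu }=\partial ^{\nu }\phi \,(\square \phi +m^{2}\phi )$, which vanishes on the Klein-Gordon equation of motion:

```python
import sympy as sp

t, z, m = sp.symbols('t z m')
X = (t, z)
eta = sp.diag(1, -1)                 # toy 2D Minkowski metric, signature (+,-)
phi = sp.Function('phi')(t, z)

d = lambda f, mu: sp.diff(f, X[mu])                                # d_mu
dup = lambda f, mu: sum(eta[mu, a] * d(f, a) for a in range(2))    # d^mu

# scalar Lagrangian (fh) and energy-momentum tensor (tt)
Lphi = sp.Rational(1, 2) * (sum(dup(phi, a) * d(phi, a) for a in range(2))
                            - m**2 * phi**2)
T = lambda mu, nu: dup(phi, mu) * dup(phi, nu) - eta[mu, nu] * Lphi

box = sum(dup(d(phi, a), a) for a in range(2))                     # d^a d_a phi

# d_mu T^{mu nu} = d^nu phi (box phi + m^2 phi): vanishes on the KG equation
residuals = [sp.simplify(sum(d(T(mu, nu), mu) for mu in range(2))
                         - dup(phi, nu) * (box + m**2 * phi))
             for nu in range(2)]
assert residuals == [0, 0]
```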
We expect the reparameterization symmetry to emerge to all orders in $1/M_P$, because the full reparameterization symmetry is needed to ensure that the equations of motion remain compatible with the constraint at all times. In order to determine the complete theory, one should consider the full variation of the Lagrangian $\mathcal{L}$ as a function of the metric $%
g_{\mu \nu }$ and its derivatives (including the second order ones) and solve a general identity of the type $$\delta \mathcal{L}(g_{\mu \nu },g_{\mu \nu ,\lambda },g_{\mu \nu ,\lambda
\rho };\phi ,\phi _{,\lambda })=\partial _{\mu }X^{\mu }. \label{iden}$$ Here subscripts after commas denote derivatives and $X^{\mu }$ is an unknown vector function. The latter must be constructed from the fields and local transformation parameters $\xi ^{\mu }(x)$, taking into account the requirement of compatibility with the invariance of $\mathcal{L}$ under Lorentz transformations and translations. Following this procedure [@kibble; @ogi4] for the field variations (\[mt1\], \[phi1\]) conditioned by the SLIV constraint (\[const3\]), one can eventually find the total Lagrangian $\mathcal{L}$. The latter turns out to be properly expressed in terms of quantities similar to the basic ones in Riemannian geometry (like the metric, connection, curvature etc.). Actually, this theory successfully mimics general relativity, which allows us to conclude that the Einstein equations can really be derived in flat Minkowski spacetime provided that the Lorentz symmetry is spontaneously broken.
While we will mainly be focused, in what follows, on the linearized gravity theory case, our discussion can be extended to general relativity as well.
Graviton as a tensor Goldstone boson
------------------------------------
Let us turn now to the spontaneous Lorentz violation which is caused by the nonlinear tensor field constraint (\[const3\]). This constraint can be written in the more explicit form $$H_{\mu \nu }^{2}=H_{00}^{2}+H_{i=j}^{2}+(\sqrt{2}H_{i\neq j})^{2}-(\sqrt{2}%
H_{0i})^{2}=\mathfrak{n}^{2}M^{2}=\pm \text{ }M^{2}\text{ \ \ \ \ \ \ \ }
\label{c4}$$(where summation over the indices $i,j=1,2,3$ is understood) and means in essence that the tensor field $H_{\mu \nu }$ develops the vev configuration $$<H_{\mu \nu }(x)>\text{ }=\mathfrak{n}_{\mu \nu }M \label{v}$$determined by the matrix $\mathfrak{n}_{\mu \nu }$. The initial Lorentz symmetry $SO(1,3)$ of the Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$ given in (\[tl\]) then formally breaks down at a scale $M$ to one of its subgroups. We assume for simplicity a “minimal” vacuum configuration in the $SO(1,3)$ space with the vevs (\[v\]) developed on only one of the $H_{\mu
\nu }$ components. If so, there are in fact the following three possibilities$$\begin{aligned}
(a)\text{ \ \ \ \ }\mathfrak{n}_{00} &\neq &0\text{ , \ \ }%
SO(1,3)\rightarrow SO(3) \notag \\
(b)\text{ \ \ \ }\mathfrak{n}_{i=j} &\neq &0\text{ , \ \ }SO(1,3)\rightarrow
SO(1,2) \label{ns} \\
(c)\text{ \ \ \ }\mathfrak{n}_{i\neq j} &\neq &0\text{ , \ \ }%
SO(1,3)\rightarrow SO(1,1) \notag\end{aligned}$$for the positive sign in (\[c4\]), and $$(d)\text{ \ \ \ }\mathfrak{n}_{0i}\neq 0\text{ , \ \ }SO(1,3)\rightarrow
SO(2) \label{nss}$$for the negative sign. These breaking channels can be readily derived, by counting how many different eigenvalues the matrix $\mathfrak{n}_{\mu \nu }$ has for each particular case ($a$-$d$). Accordingly, there are only three Goldstone modes in the cases ($a,b$) and five modes in the cases ($c$-$d$). In order to associate at least one of the two transverse polarization states of the physical graviton with these modes, one could have any of the above-mentioned SLIV channels except for the case ($a$). Indeed, it is impossible for the graviton to have all vanishing spatial components, as happens for the Goldstone modes in the case ($a$). Therefore, no linear combination of the three Goldstone modes in case ($a$) could behave like the physical graviton (see [@car] for a more detailed consideration). In addition to the minimal vev configuration, there are many other possibilities. A particular case of interest is that of the traceless vev tensor $\mathfrak{n}_{\mu \nu }$$$\text{\ \ }\mathfrak{n}_{\mu \nu }\eta ^{\mu \nu }=0, \label{tll}$$in terms of which the Goldstonic gravity Lagrangian acquires an especially simple form (see below). It is clear that the vev in this case can be developed on several $H_{\mu \nu }$ components simultaneously, which in general may lead to total Lorentz violation with all six Goldstone modes generated. For simplicity we will use this form of vacuum configuration in what follows, while our arguments can be applied to any type of vev tensor $%
\mathfrak{n}_{\mu \nu }.$
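The counting of broken generators behind the channels ($a$-$d$) can be checked by elementary linear algebra: for each vev matrix one computes the rank of the map $\omega _{\mu \nu }\mapsto \delta \mathfrak{n}_{\mu \nu }$ over the six-dimensional Lorentz algebra; the kernel is the unbroken subalgebra. The numerical sketch below (helper names are ours) reproduces the Goldstone counts quoted above:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def lorentz_basis():
    """Basis of so(1,3): the six antisymmetric matrices A = omega_{mu nu}."""
    basis = []
    for a in range(4):
        for b in range(a + 1, 4):
            A = np.zeros((4, 4))
            A[a, b], A[b, a] = 1.0, -1.0
            basis.append(A)
    return basis

def vev(entries):
    """Symmetric vev matrix n_{mu nu} with the given nonzero entries."""
    N = np.zeros((4, 4))
    for (i, j), v in entries.items():
        N[i, j] = N[j, i] = v
    return N

def broken_generators(N):
    """Rank of omega -> delta N = -omega^T N - N omega, omega^mu_nu = eta^{mu rho} A_{rho nu}."""
    rows = []
    for A in lorentz_basis():
        w = eta @ A
        rows.append((-w.T @ N - N @ w).ravel())
    return np.linalg.matrix_rank(np.array(rows))

# Goldstone counting for the minimal vev channels (a)-(d)
assert broken_generators(vev({(0, 0): 1.0})) == 3   # (a) SO(1,3) -> SO(3)
assert broken_generators(vev({(1, 1): 1.0})) == 3   # (b) SO(1,3) -> SO(1,2)
assert broken_generators(vev({(1, 2): 1.0})) == 5   # (c) SO(1,3) -> SO(1,1)
assert broken_generators(vev({(0, 1): 1.0})) == 5   # (d) SO(1,3) -> SO(2)
```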
In this connection the question naturally arises of the other components of the symmetric two-index tensor $H_{\mu \nu },$ in addition to the pure Goldstone modes. They turn out to be pseudo-Goldstone modes (PGMs) in the theory. Indeed, although we only postulate Lorentz invariance of the Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$, the SLIV constraint (\[const3\]) formally possesses the much higher accidental symmetry $SO(7,3)$ of the constrained bilinear form (\[c4\]), when the $H_{\mu \nu }$ components are considered as the “vector” components under $SO(7,3)$. This symmetry is in fact spontaneously broken side by side with Lorentz symmetry at the scale $M$. Assuming again a minimal vacuum configuration in the $%
SO(7,3)$ space with the vev (\[v\]) developed on only one of the $H_{\mu
\nu }$ components, we have either time-like ($SO(7,3)$ $\rightarrow SO(6,3)$) or space-like ($SO(7,3)$ $\rightarrow SO(7,2)$) violations of the accidental symmetry depending on the sign of $\mathfrak{n}^{2}=\pm 1$ in (\[c4\]). According to the number of broken $SO(7,3)$ generators, just nine massless NG modes appear in both cases. Together with an effective Higgs component, on which the vev is developed, they complete the whole ten-component symmetric tensor field $H_{\mu \nu }$ of our Lorentz group. Some of them are true Goldstone modes of the spontaneous Lorentz violation. The others are PGMs since the accidental $SO(7,3)$ is not shared by the whole Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$ given in (\[tl\]). Notably, in contrast to the scalar PGM case [@GLA] and similarly to the vector PGMs, they remain strictly massless being protected by the simultaneously generated diff invariance[^19]. Owing to the latter invariance, some of the PGMs and Goldstone modes can be gauged away from the theory, as usual.
Now, one can rewrite the Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$ in terms of the Goldstone modes explicitly using the SLIV constraint (\[const3\]). For this purpose let us take the following handy parameterization for the tensor field $H_{\mu \nu }$ in the Lagrangian $\mathcal{L}(H_{\mu
\nu },\phi )$: $$H_{\mu \nu }=h_{\mu \nu }+\frac{\mathfrak{n}_{\mu \nu }}{\mathfrak{n}^{2}}(%
\mathfrak{n}\cdot H)\qquad (\mathfrak{n}\cdot H\equiv \mathfrak{n}_{\mu \nu
}H^{\mu \nu }), \label{par}$$where $h_{\mu \nu }$ corresponds to the pure Goldstonic modes[^20] satisfying $$\text{\ }\mathfrak{n}\cdot h=0\text{\ }\qquad (\mathfrak{n}\cdot h\equiv
\mathfrak{n}_{\mu \nu }h^{\mu \nu }). \label{sup}$$There is also an effective Higgs" mode (or the $H_{\mu \nu
}$ component in the vacuum direction) is given by the scalar product $%
\mathfrak{n}\cdot H$. Substituting this parameterization (\[par\]) into the tensor field constraint (\[const3\]), one obtains the following equation for $\mathfrak{n}\cdot H$: $$\text{\ }\mathfrak{n}\cdot H\text{\ }=(M^{2}-\mathfrak{n}^{2}h^{2})^{\frac{1%
}{2}}=M-\frac{\mathfrak{n}^{2}h^{2}}{2M}+O(1/M^{2}) \label{constr1}$$taking, for definiteness, the positive sign for the square root and expanding it in powers of $h^{2}/M^{2}$, $h^{2}\equiv h_{\mu \nu }h^{\mu \nu
}$. Putting then the parameterization (\[par\]) with the SLIV constraint (\[constr1\]) into the Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$ given in (\[fp\], \[fh\]), one obtains the Goldstonic tensor field gravity Lagrangian $\mathcal{L}(h_{\mu \nu },\phi )$ containing an infinite series in powers of the $h_{\mu \nu }$ modes. For the traceless vev tensor $%
\mathfrak{n}_{\mu \nu }$ (\[tll\]) it takes, without loss of generality, the especially simple form $$\begin{aligned}
\mathcal{L}(h_{\mu \nu },\phi ) &=&\frac{1}{2}\partial _{\lambda }h^{\mu \nu
}\partial ^{\lambda }h_{\mu \nu }-\frac{1}{2}\partial _{\lambda
}h_{tr}\partial ^{\lambda }h_{tr}-\partial _{\lambda }h^{\lambda \nu
}\partial ^{\mu }h_{\mu \nu }+\partial ^{\nu }h_{tr}\partial ^{\mu }h_{\mu
\nu }+ \label{gl} \\
&&+\frac{1}{2M}h^{2}\left[ -2\mathfrak{n}^{\mu \lambda }\partial _{\lambda
}\partial ^{\nu }h_{\mu \nu }+\mathfrak{n}^{2}(\mathfrak{n}\partial \partial
)h_{tr}\right] +\frac{1}{8M^{2}}h^{2}\left[ -\mathfrak{n}^{2}\partial
^{2}+2(\partial \mathfrak{nn}\partial )\right] h^{2} \notag \\
&&+\mathcal{L}(\phi )-\frac{M}{2M_{P}}\mathfrak{n}^{2}\left[ \mathfrak{n}%
_{\mu \nu }\partial ^{\mu }\phi \partial ^{\nu }\phi \right] -\frac{1}{2M_{P}%
}h_{\mu \nu }T^{\mu \nu }-\frac{1}{4MM_{P}}h^{2}\left[ -\mathfrak{n}_{\mu
\nu }\partial ^{\mu }\phi \partial ^{\nu }\phi \right] \notag\end{aligned}$$written in the $O(h^{2}/M^{2})$ approximation. In addition to the conventional graviton bilinear kinetic terms, the Lagrangian contains three- and four-linear interaction terms in powers of $h_{\mu \nu }$. Some of the notations used are collected below: $$\begin{aligned}
h^{2} &\equiv &h_{\mu \nu }h^{\mu \nu }\text{ , \ \ }h_{tr}\equiv \eta ^{\mu
\nu }h_{\mu \nu }\text{ , \ } \label{n} \\
\mathfrak{n}\partial \partial &\equiv &\mathfrak{n}_{\mu \nu }\partial ^{\mu
}\partial ^{\nu }\text{\ , \ \ }\partial \mathfrak{nn}\partial \equiv
\partial ^{\mu }\mathfrak{n}_{\mu \nu }\mathfrak{n}^{\nu \lambda }\partial
_{\lambda }\text{ .\ \ \ } \notag\end{aligned}$$
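The square-root expansion (\[constr1\]) underlying this Goldstonic Lagrangian is elementary but sign-sensitive; a quick symbolic check (treating $h^{2}$ and $\mathfrak{n}^{2}$ as formal scalars):

```python
import sympy as sp

M = sp.symbols('M', positive=True)
h2, n2 = sp.symbols('h2 n2', real=True)   # stand for h_{mu nu}h^{mu nu} and n^2 = +/-1

nH = sp.sqrt(M**2 - n2 * h2)              # positive root of the constraint, cf. (constr1)
expansion = sp.series(nH, h2, 0, 2).removeO()

# leading behaviour: M - n^2 h^2 / (2M), with O(1/M^2) corrections
assert sp.simplify(expansion - (M - n2 * h2 / (2 * M))) == 0
```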
The bilinear scalar field term$$-\frac{M}{2M_{P}}\mathfrak{n}^{2}\left[ \mathfrak{n}_{\mu \nu }\partial
^{\mu }\phi \partial ^{\nu }\phi \right] \label{t}$$in the third line in the Lagrangian (\[gl\]) merits special notice. This term arises from the interaction Lagrangian $\mathfrak{L}_{int}$ (\[fh\]) after application of the tracelessness condition (\[tll\]) for the vev tensor $\mathfrak{n}_{\mu \nu }$. It could significantly affect the dispersion relation for the scalar field $\phi $ (and any other sort of matter as well), thus leading to an unacceptably large Lorentz violation if the SLIV scale $M$ were comparable with the Planck mass $M_{P}.$ However, this term can be gauged away by an appropriate choice of the gauge parameter function $\xi ^{\mu }(x)$ in the transformations (\[tr3\], \[phi1\]) of the tensor and scalar fields[^21]. Technically, one simply transforms the scalar field and its derivative to a new coordinate system $x^{\mu }\rightarrow $ $x^{\mu }-\xi
^{\mu }$ in the Goldstonic Lagrangian $\mathcal{L}(h_{\mu \nu },\phi )$. Actually, using the fixed-point variation of $\phi (x)$ given above in (\[phi1\]), with the coefficient $p_{1}$ absorbed into the parameter function $%
\xi ^{\mu }(x)$, and differentiating both sides with respect to $x^{\mu }$ one obtains $$\delta (\partial _{\mu }\phi )=\partial _{\mu }(\xi ^{\nu }\partial _{\nu
}\phi ).$$This gives in turn $$\delta _{tot}(\partial _{\mu }\phi )=\delta (\partial _{\mu }\phi )+\delta
x^{\nu }\partial _{\nu }(\partial _{\mu }\phi )= \partial _{\mu }\xi ^{\nu
}\partial _{\nu }\phi \label{red}$$for the total variation of the scalar field derivative. The corresponding total variation of the Goldstonic tensor $h_{\mu \nu }$, caused by the same transformation to the coordinate system $x^{\mu }-\xi ^{\mu }$, is given in turn by equations (\[tr3\]) and (\[par\]) to be $$\delta _{tot}h_{\mu \nu }=(\partial ^{\rho }\xi ^{\sigma }+\partial ^{\sigma
}\xi ^{\rho })\left( \eta _{\rho \mu }\eta _{\sigma \nu }-\frac{\mathfrak{n}%
_{\mu \nu }}{\mathfrak{n}^{2}}\mathfrak{n}_{\rho \sigma }\right) -\xi ^{\rho
}\partial _{\rho }h_{\mu \nu }. \label{dh}$$One can now readily see that, with the parameter function $\xi ^{\mu }(x)$ chosen as $$\xi ^{\mu }(x)= \frac{M}{2M_{P}}\mathfrak{n}^{2}\mathfrak{n}^{\mu \nu
}x_{\nu }\text{ },$$the dangerous term (\[t\]) is precisely cancelled[^22] by an analogous term stemming from the scalar field kinetic term in $\mathcal{L}(\phi )$ given in (\[fh\]), while the total variation of the tensor $h_{\mu \nu }$ reduces to just the second term in (\[dh\]). This term is of the natural order $O(\xi h)$, which can be neglected in the weak field approximation, so that to the present accuracy the tensor field variation $\delta _{tot}h_{\mu\nu }=0$. Indeed, since the diff invariance is an approximate symmetry of the Lagrangian $\mathcal{L}(h_{\mu \nu },\phi )$, the above cancellation will only be accurate up to the order corresponding to the linearized Lagrangian $%
\mathcal{L}(H_{\mu \nu },\phi )$ we started with in (\[tl\]). Actually, a proper extension of the tensor field theory to GR with its exact diff invariance will ultimately restore the usual form of the dispersion relation for the scalar (and other matter) fields. Taking this into account, we will henceforth omit the term (\[t\]) in $\mathcal{L}(h_{\mu \nu },\phi )$, thus keeping the “normal” dispersion relation for the scalar field in what follows.
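The cancellation described above can be made explicit at first order in a two-dimensional toy model (our own sketch, with a symmetric traceless vev matrix): the shift of the scalar kinetic term induced by $\delta _{tot}(\partial _{\mu }\phi )=\partial _{\mu }\xi ^{\nu }\partial _{\nu }\phi $ with $\xi ^{\mu }=\frac{M}{2M_{P}}\mathfrak{n}^{2}\mathfrak{n}^{\mu \nu }x_{\nu }$ exactly cancels the term (\[t\]):

```python
import sympy as sp

t, z, M, MP, a, b = sp.symbols('t z M M_P a b', real=True)
X = (t, z)
eta = sp.diag(1, -1)                       # toy 2D Minkowski metric
phi = sp.Function('phi')(t, z)

# symmetric vev matrix, traceless in the sense of (tll): eta^{mu nu} n_{mu nu} = 0
n_lo = sp.Matrix([[a, b], [b, a]])
n_up = eta * n_lo * eta                    # n^{mu nu}
n2 = sum(n_lo[i, j] * n_up[i, j] for i in range(2) for j in range(2))

c = M / (2 * MP)
# gauge parameter xi^mu = (M / 2M_P) n^2 n^{mu nu} x_nu, with x_nu = eta_{nu rho} x^rho
xi = [c * n2 * sum(n_up[mu, nu] * sum(eta[nu, r] * X[r] for r in range(2))
                   for nu in range(2)) for mu in range(2)]

dphi_up = [sum(eta[mu, r] * sp.diff(phi, X[r]) for r in range(2)) for mu in range(2)]

# first-order shift of the kinetic term from (red): d^mu phi d_mu xi^nu d_nu phi
shift = sum(dphi_up[mu] * sp.diff(xi[nu], X[mu]) * sp.diff(phi, X[nu])
            for mu in range(2) for nu in range(2))

# the dangerous bilinear term (t)
danger = -c * n2 * sum(n_lo[mu, nu] * dphi_up[mu] * dphi_up[nu]
                       for mu in range(2) for nu in range(2))

residual = sp.simplify(shift + danger)
assert residual == 0
```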
Together with the Lagrangian one must also specify the remaining gauge fixing, in addition to the general “Goldstonic gauge” choice $\mathfrak{n}%
_{\mu \nu }h^{\mu \nu }=0$ given above (\[sup\]). The point is that the spin $1$ states are still left in the theory[^23] and are described by some of the components of the new tensor $h_{\mu \nu }$. Usually, they (and one of the spin $0$ states) are excluded by the conventional Hilbert-Lorentz condition $$\partial ^{\mu }h_{\mu \nu }+q\partial ^{\nu }h_{tr}=0 \label{HL}$$($q$ is an arbitrary constant giving the standard harmonic gauge condition for $q=-1/2$). On the other hand, as we have already imposed the constraint (\[sup\]), we cannot use the full Hilbert-Lorentz condition (\[HL\]), which would eliminate four more degrees of freedom in $h_{\mu \nu }.$ Otherwise, one would have an “overgauged” theory with a non-propagating graviton. In fact the simplest set of conditions which conforms with the Goldstonic condition (\[sup\]) turns out to be [@cjt] $$\partial ^{\rho }(\partial _{\mu }h_{\nu \rho
})=0 \label{gauge}$$This set excludes only three degrees of freedom[^24] in $h_{\mu \nu }$ and it automatically satisfies the Hilbert-Lorentz spin condition as well. So, with the Lagrangian (\[gl\]) and the supplementary conditions (\[sup\]) and (\[gauge\]) lumped together, one eventually comes to a working model for the Goldstonic tensor field gravity. Generally, from ten components in the symmetric-two $h_{\mu \nu }$ tensor, four components are excluded by the supplementary conditions (\[sup\]) and (\[gauge\]). For a plane gravitational wave propagating, say, in the $z$ direction another four components can also be eliminated. This is due to the fact that the above supplementary conditions still leave freedom in the choice of a coordinate system, $x^{\mu }\rightarrow $ $x^{\mu }-\xi ^{\mu }(t-z/c),$ much as takes place in standard GR. Depending on the form of the vev tensor $\mathfrak{n}%
_{\mu \nu }$, the two remaining transverse modes of the physical graviton may consist solely of Lorentz Goldstone modes or of Pseudo Goldstone modes or include both of them.
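The mode counting described in the preceding paragraph can be summarized in a single bookkeeping line (the display below only restates the numbers already given in the text):

$$\underbrace{10}_{h_{\mu \nu }\text{ components}}-\underbrace{1}_{\mathfrak{n}_{\mu \nu }\cdot h^{\mu \nu }=0}-\underbrace{3}_{\partial ^{\rho }(\partial _{\mu }h_{\nu \rho }-\partial _{\nu }h_{\mu \rho })=0}-\underbrace{4}_{x^{\mu }\rightarrow x^{\mu }-\xi ^{\mu }(t-z/c)}=2\text{ transverse modes}.$$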
The theory derived looks essentially nonlinear and contains a variety of Lorentz (and $CPT$) violating couplings, when expressed in terms of the pure tensor Goldstone modes. Nonetheless, as was shown in recent calculations [@cjt], all the SLIV effects turn out to be strictly cancelled in the lowest order graviton-graviton scattering processes, due to the exact diffeomorphism invariance of the pure gravity part in the basic Lagrangian $%
\mathcal{L}$ (\[gl\]). At the same time, an actual Lorentz violation may appear in the matter field interaction sector, which only possesses an approximate diff invariance, through deformed dispersion relations of the matter fields involved. However, a proper extension of the tensor field theory to GR with its exact diffeomorphism invariance ultimately restores the dispersion relations for matter fields and, therefore, the SLIV effects vanish. So, one could generally argue, the measurable effects of SLIV, induced by elementary vector or tensor fields, can be related to the accompanying gauge symmetry rather than to spontaneous Lorentz violation. The latter appears by itself to be physically unobservable and only results in a noncovariant gauge choice in an otherwise gauge invariant and Lorentz invariant theory.
From this standpoint, the only way for physical Lorentz violation to appear would be if the above local invariance is slightly broken at very small distances in an explicit, rather than spontaneous, way. This is in fact a place where the emergent vector and tensor field theories may differ from conventional QED, Yang-Mills and GR theories. Actually, such a local symmetry breaking could lead in the former case to deformed dispersion relations for all the matter fields involved. This effect typically appears proportional to some power of the ratio $\frac{M}{M_{P}}$ (just as we have seen above for the scalar field in our model, see (\[t\])), though being properly suppressed due to the tiny gauge noninvariance. The higher the SLIV scale $M$ becomes the larger becomes the actual Lorentz violation which, for some value of the scale $M$, may become physically observable even at low energies. Another basic difference between Goldstonic theories with non-exact gauge invariance and conventional theories is the emergence of a mass for the graviton and other gauge fields (namely, for the non-Abelian ones), if they are composed from Pseudo Goldstone modes rather than from pure Goldstone ones. Indeed, these PGMs are no longer protected by gauge invariance and may acquire tiny masses. This may lead to a massive gravity theory, where the graviton mass emerges dynamically, thus avoiding the notorious discontinuity problem [@zvv]. So, while Goldstonic theories with exact local invariance are physically indistinguishable from conventional gauge theories, there are some principal differences when this local symmetry is slightly broken which could eventually allow us to differentiate between them in an observational way.
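As a purely schematic illustration of such a deformation (the explicit structure and the power $n$ are model dependent and are not fixed by the text; the display below is an assumption meant only to exhibit the scaling in $\frac{M}{M_{P}}$):

$$E^{2}=\mathbf{p}^{2}+m^{2}+\delta \,\mathbf{p}^{2},\qquad \delta \sim \left( \frac{M}{M_{P}}\right)^{n},$$

so that the larger the SLIV scale $M$, the larger the deformation $\delta$, in line with the statement above.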
One could imagine how such a breaking might occur. As we have learned, only locally invariant theories provide the needed number of degrees of freedom for the interacting vector fields once SLIV occurs. Note that a superfluous restriction on a vector (or any other) field would make it impossible to set the required initial conditions in the appropriate Cauchy problem and, in quantum theory, to choose self-consistent equal-time commutation relations [@ogi3]. One could expect, however, that quantum gravity could in general hinder the setting of the required initial conditions at extra-small distances. Eventually this would manifest itself in an explicit violation of the above local invariance in a theory through some high-order operators stemming from the quantum gravity energy scale, which could lead to physical Lorentz violation. If so, one could have some observational evidence in favor of the emergent theories, just as was claimed at the very beginning when the SLIV idea was put forward [@bjorken]. However, is there really any strong theoretical reason left for the Lorentz invariance to be physically broken, if the Goldstonic gauge fields are anyway generated through the “safe” nonlinear sigma type SLIV models which recover conventional Lorentz invariance? We may return to this question elsewhere.
Conclusion
==========
An arbitrary local theory of a symmetric two-tensor field $H_{\mu \nu }$ in Minkowski spacetime was considered, in which the equations of motion are required to be compatible with a nonlinear length-fixing constraint $H_{\mu
\nu }^{2}=\pm M^{2}$ leading to spontaneous Lorentz invariance violation ($M$ is the proposed scale for SLIV). Allowing the parameters in the Lagrangian to be adjusted so as to be compatible with this constraint, the theory turns out to correspond to general relativity (in the weak field approximation). Also some of the massless tensor Goldstone modes appearing through SLIV are naturally collected in the physical graviton. The underlying diffeomorphism invariance directly follows from an application of Noether’s second theorem [@noeth]. In fact we argued for a relation between the Eulerians (equation of motion expressions), which then by Noether’s second theorem implies the reparameterization symmetry of the Lagrangian. Such a relation (\[fmu\], \[idd\]) is needed for consistency, when the constraint $H_{\mu
\nu }^{2}=\pm M^{2}$ is to be upheld at all times. Otherwise the degrees of freedom of the symmetric two-tensor $H_{\mu \nu }$ would be superfluously restricted. Actually, this derivation of diffeomorphism symmetry excludes wrong couplings in the tensor field Lagrangian, which would otherwise distort the final Lorentz symmetry broken phase with unphysical extra states including ghost-like ones. Note that this procedure might, in some sense, be inspired by string theory where the coupling constants are just vacuum expectation values of the dilaton and moduli fields [@string]. So, the adjustment of coupling constants in the Lagrangian would mean, in essence, a certain choice for the vacuum configurations of these fields, which are thus correlated with SLIV.
The crucial point in our method of deriving gauge invariance seems to be that one degree of freedom for the vector or tensor field considered is not determined from the time development of their own equations of motion but solely by the relevant constraint (\[const\], \[const3\], \[const2\]). So, in order to avoid a possible inconsistency with an accordingly diminished number of independent degrees of freedom for the fields involved, their equations of motion must be generically prearranged to have less predictive power. Such a reduced predictive power is precisely what is achieved in gauge theories, where one cannot predict the evolution of gauge-fixing terms as time develops. The equations of motion in gauge theories are therefore less predictive by just the number of degrees of freedom corresponding to the number of gauge parameters, which are actually functions of spacetime. In order to allow for consistency with constraints like (\[const\], \[const3\], \[const2\]), one at first seems to need that the number of gauge degrees of freedom should be equal to the number of such constraints. But, as we have seen, even one constraint introduced as a length-fixing condition (\[const\], \[const3\], \[const2\]) may be enough for several gauge symmetry generators to emerge. Such a length-fixing constraint (\[const\]) applied to the one-vector field case (Section 2.1) leads to QED with only one gauge degree of freedom given by a gauge function $\omega (x)$. However, the analogous constraint (\[const2\]) in the non-Abelian case, with the starting global $G$ symmetry (Section 2.2), requires that $D$ conditions $\boldsymbol{F}_{a}(C=0;\text{ \ }\mathrm{E}_{%
\boldsymbol{A}},\mathrm{E}_{\boldsymbol{\psi }},...)=0$ have to be simultaneously fulfilled. This eventually leads to a gauge invariant Yang-Mills theory with $D$ gauge degrees of freedom given by the set of parameter functions $\omega ^{a}(x).$ Similarly in the tensor field case (Section 3.1), the length-fixing constraint (\[const2\]) requires that just four equations $\mathcal{F}^{\mu } (\mathcal{C}=0;\text{ \ }\mathcal{E}%
_{H},\mathcal{E}_{\phi},...)=0$ should be arranged to be automatically satisfied. This leads to the diffeomorphism invariance (\[tr\]) with the transformation 4-vector parameter function $\xi ^{\mu }(x).$
The appearance of gauge symmetries in our approach hinges strongly upon the imposition of a constraint. This can be done in either of the two following ways: (1) the constraint is imposed by hand prior to varying of the action or (2) the constraint is imposed by introducing a special quadratic Lagrange multiplier term, for which the Lagrange multiplier field is decoupled from the equations of motion and is thereby unable to ensure their consistency with the constraint. In both cases it is not possible to have consistency between the equations of motion and the constraint, unless the parameters in the Lagrangian are adjusted to allow for more freedom in the time development. This typically means that the Lagrangian should possess a generic, SLIV enforced, gauge invariance. As a result, all these vector and tensor field theories do not lead to any physical Lorentz violation and are in fact indistinguishable from conventional QED, Yang-Mills theories and general relativity[^25]. However, there might appear some principal distinctions if these emergent local symmetries were slightly broken at very small distances controlled by quantum gravity in an explicit, rather than spontaneous, way that could eventually allow one to differentiate between emergent and conventional gauge theories observationally.
Acknowledgments {#acknowledgments .unnumbered}
===============
One of us (J.L.C.) appreciates the warm hospitality shown to him during his visit to the Division of Elementary Particle Physics, Department of Physics, University of Helsinki where part of this work was carried out. We would like to thank Masud Chaichian, Oleg Kancheli, Archil Kobakhidze, Rabi Mohapatra and Giovanni Venturi for useful discussions and comments. Financial support from Georgian National Science Foundation (grant N 07\_462\_4-270) is gratefully acknowledged by J.L.C. Also C.D.F. would like to acknowledge support from STFC in UK.
[99]{} J.D. Bjorken, Ann. Phys. (N.Y.) **24** (1963) 174;
P.R. Phillips, Phys. Rev. **146** (1966) 966.
T. Eguchi, Phys. Rev. D **14** (1976) 2755.
M. Suzuki, Phys. Rev. D **37** (1988) 210.
J.L. Chkareuli, C.D. Froggatt and H.B. Nielsen, Phys. Rev. Lett. **87** (2001) 091601;
J.L. Chkareuli, C.D. Froggatt and H.B. Nielsen, Nucl. Phys. B **609** (2001) 46.
Per Kraus and E.T. Tomboulis, Phys. Rev. D **66** (2002) 045015.
V.A. Kostelecky, R. Potting, Phys. Rev. D **79** (2009) 065018.
S.M. Carroll, H. Tam, I.K. Wehus, Phys. Rev. D **80** (2009) 025020.
J.L. Chkareuli, J.G. Jejelava and G. Tatishvili, Phys. Lett. B **696** (2011) 126.
H.B. Nielsen and I. Picek, Phys. Lett. B **114** (1982) 141, Nucl. Phys. B **211** (1983) 269;
S. Chadha and H.B. Nielsen, Nucl. Phys. B **217** (1983) 125.
S.M. Carroll, G.B. Field and R. Jackiw, Phys. Rev. D **41** (1990) 1231.
V.A. Kostelecky and R. Potting, Nucl. Phys. B **359** (1991) 545;
D. Colladay and V.A. Kostelecky, Phys. Rev. D **58** (1998) 116002;
S. Coleman and S.L. Glashow, Phys. Rev. D **59** (1999) 116008.
T. Jacobson, S. Liberati and D. Mattingly, Ann. Phys. (N.Y.) **321** (2006) 150.
Y. Nambu, Progr. Theor. Phys. Suppl. Extra 190 (1968).
P.A.M. Dirac, Proc. Roy. Soc. **209**A (1951) 292;
P.A.M. Dirac, Proc. Roy. Soc. **212**A (1952) 330.
R. Righi and G. Venturi, Lett. Nuovo Cim. **19** (1977) 633;
R. Righi, G. Venturi and V. Zamiralov, Nuovo Cim. **A47** (1978) 518.
A.T. Azatov and J.L. Chkareuli, Phys. Rev. D **73,** 065026 (2006).
J.L. Chkareuli and Z.R. Kepuladze, Phys. Lett. B **644**, 212 (2007).
J.L. Chkareuli and J.G. Jejelava, Phys. Lett. B **659** (2008) 754.
J.L. Chkareuli, C.D. Froggatt, J.G. Jejelava and H.B. Nielsen, Nucl. Phys. B **796** (2008) 211.
S. Weinberg, *The Quantum Theory of Fields,* v.2, Cambridge University Press, 2000.
V.I. Ogievetsky and I.V. Polubarinov, Ann. Phys. (N.Y.) **25** (1963) 358.
Y. Nambu and G. Jona-Lasinio, Phys. Rev. **122** (1961) 345.
J. Alfaro and L.F. Urrutia, Phys. Rev. D **81** (2010) 025007.
S. Weinberg, Phys. Rev. **138** (1965) B988.
R. Jackiw, *Lorentz Violation in a Diffeomorphism-Invariant Theory,* arXiv:0709.2348 \[hep-th\].
C. Lanczos, *The variational Principles of Mechanics*, Dover publications, 1986.
E. Noether, Nachrichten von der Kön. Ges. Wissenschaften zu Göttingen, Math.-Phys. Kl., **2**, 235 (1918);
J.G. Fletcher, Rev. Mod. Phys. **32** (1960) 45;
D. Bak et al, Phys. Rev. D **49** (1994) 5173.
R. Bluhm, Shu-Hong Fung and V. A. Kostelecky, Phys. Rev. D **77** (2008) 065020;
R. Bluhm, N.L. Cage, R. Potting and A. Vrublevskis, Phys. Rev. D **77** (2008) 125007.
P. Ramond, *The Family Group in Grand Unified Theories*, hep-ph/9809459.
J.L. Chkareuli, JETP Lett. **32** (1980) 671, Pisma Zh. Eksp. Teor. Fiz. **32** (1980) 684;
J.L. Chkareuli, C.D. Froggatt and H.B. Nielsen, Nucl. Phys. B **626** (2002) 307.
T.W. Kibble, J. Math. Phys. **2** (1961) 212.
V.I. Ogievetsky and I.V. Polubarinov, Ann. Phys. (N.Y.) **35** (1965) 167.
H. van Dam and M.J.G. Veltman, Nucl. Phys. B **22** (1970) 397;
V. I. Zakharov, JETP Lett. **12** (1970) 312.
M.B. Green, J.H. Schwartz and E. Witten, *Superstring Theory*, Cambridge University Press, 1988.
[^1]: Independently of the problem of the origin of local symmetries, Lorentz violation in itself has attracted considerable attention as an interesting phenomenological possibility which may be probed in direct Lorentz non-invariant extensions of quantum electrodynamics (QED) and the Standard Model [@chadha; @jakiw; @alan; @glashow; @ted].
[^2]: This constraint in the classical electrodynamics framework was originally suggested by Dirac [@dir] (see also [@ven] for further developments).
[^3]: Let us note, to make things clearer, that the length-fixing vector field constraint (\[const\]) is definitely Lorentz invariant by itself. Nonetheless, as is usual for the nonlinear sigma type models, this constraint means at the same time the spontaneous Lorentz violation. The point is, however, that in gauge invariant theories this violation becomes artificial being converted into gauge degrees of freedom rather than physical ones. In consequence, ordinary photons and other gauge fields (see below) appear in essence as the Goldstonic fields that could only be seen when taking the above nonlinear constraint (nonlinear gauge condition). In this connection, any other gauge, e.g. Coulomb gauge, is not in line with Goldstonic picture, since it breaks Lorentz invariance in an explicit rather than spontaneous way.
[^4]: $E_{A}$ stands for the vector-field Eulerian $(E_{A})^{\mu }\equiv
\partial L/\partial A_{\mu }-\partial _{\nu }[\partial L/\partial (\partial
_{\nu }A_{\mu })].$ We use similar notations for other field Eulerians as well.
[^5]: The Eulerians are of course just particular field combinations themselves and so this “expansion” at first includes higher powers and higher derivatives of the Eulerians.
[^6]: Remarkably, the diff invariance appears so powerful that not only spontaneous but even explicit Lorentz violation may sometimes be converted into gauge degrees of freedom. One interesting example [@jiv] is related to Chern-Simons modified gravity where the apparent Lorentz symmetry breaking may in fact be just a choice of gauge.
[^7]: Indeed, in this case one could propose that the auxiliary field $\lambda (x)$ is chosen in such a way that this extra source current is conserved $%
\partial _{\mu }(\lambda A^{\mu })=0$, according to which if the auxiliary field $\lambda (x)$ is fixed at one instant of time its value at other times can then be determined by this conservation law. Otherwise, with an arbitrary $\lambda (x),$ this field could have an uncontrollable influence on the vector field dynamics. However, this conservation law would in fact constitute an additional condition on the theory since, in contrast to a conventional Noether fermion current in the starting $U(1)$ globally invariant Lagrangian $L(A,\psi )$, this current $j^{\mu }=\lambda A^{\mu }$ is not automatically conserved.
[^8]: Since the Eulerians are functional derivatives of the action, e.g. $%
(E_{A})^{\mu }=\frac{\delta S}{\delta A_{\mu }}$, a relation such as ([div]{}) between them implies that a certain combined variation of the various fields with the variations $\delta A_{\mu }$, $\delta \psi $,.. being proportional to the corresponding coefficients $cA_{\mu }$, $it\psi $,.. of the Eulerians in (\[div\]) does not change $S$.
[^9]: In general Noether’s theorem applies to the invariance of an action rather than the invariance of a Lagrangian. However these are both completely equivalent, unless one considers spacetime symmetries with a local variation of coordinates as well (see section 3).
[^10]: We shall see below that non-zero $c$-type coefficients appear in the non-Abelian internal symmetry case, resulting eventually in a Yang-Mills gauge invariant theory.
[^11]: Just the latter approach was used in our previous analysis [@cj] of gauge symmetry generation in SLIV constrained vector field theories. Here we follow the variational treatment of this constraint, although the only distinction between the two approaches is the presence of a decoupled Lagrange-multiplier field $\lambda (x)$ which is actually left undetermined in the theory.
[^12]: Note that in total there appear $4D-1$ pseudo-Goldstone modes, complying with the number of broken generators of $SO(D,3D)$, both for time-like and space-like SLIV. From these $4D-1$ pseudo-Goldstone modes, $3D$ modes correspond to the $D$ three-component vector states as will be shown below, while the remaining $D-1$ modes are scalar states which will be excluded from the theory.
[^13]: For such a choice the simple identity $\boldsymbol{n}_{\mu }^{\alpha }\equiv
\frac{n\cdot \boldsymbol{n}^{\alpha }}{n^{2}}n_{\mu }$ holds, showing that the rectangular vacuum matrix $\boldsymbol{n}_{\mu }^{\alpha }$ has the factorized “two-vector” form.
[^14]: Keeping in mind an application to gravity, we could also admit second order derivatives of the tensor field $H_{\mu \nu }$ in the Lagrangian $%
\mathcal{L}$ so that the Eulerian $(\mathcal{E}_{H})^{\mu \nu }$ would have the form $(\mathcal{E}_{H})^{\mu \nu }=\partial \mathcal{L}/\partial
H_{\mu \nu }-\partial _{\rho }[\partial \mathcal{L}/\partial (\partial
_{\rho }H_{\mu \nu })]+\partial _{\rho }\partial _{\sigma }[\partial
\mathcal{L}/\partial (\partial _{\rho }\partial _{\sigma }H_{\mu \nu })]$ .
[^15]: Generally speaking, a symmetric two-tensor field $H_{\mu \nu }$ describes the states with spin $2$ (five components), spin $1$ (three components) and two spin $0$ states (each is described by one of its components). Among them spin $1$ must be necessarily excluded as the sign of the energy for spin $1$ is always opposite to that for spin $2$ and $0$.
[^16]: We note that the double divergence $\partial _{\mu }\partial _{\nu }(%
\mathcal{E}_{H})^{\mu \nu }$ does not appear in (\[fmu\], \[idd\]), since it would require a term of dimension 6 or higher in order to transform as a vector.
[^17]: We shall refer to such transformations as fixed point transformations.
[^18]: In order to obtain this result, one has first to use the conventional formulas $\delta g^{p_{0}/2}=$ $(p_{0}/2)g^{p_{0}/2}g^{\alpha \beta} \delta
g_{\alpha \beta }$ and $\partial_{\rho }g^{p_{0}/2}=(p_{0}/2)
g^{p_{0}/2}g^{\alpha \beta }\partial _{\rho }g_{\alpha \beta }$ for the variation of the determinant $g$ and its derivative respectively, and then to divide both of sides of the equation by $g^{p_{0}/2}$.
[^19]: For a non-minimal vacuum configuration when vevs are developed on several $%
H_{\mu \nu }$ components, thus leading to a more substantial breaking of the accidental $SO(7,3)$ symmetry, some extra PGMs are generated. However, they are not protected by diffeomorphism invariance and acquire masses of the order of the breaking scale ($M$).
[^20]: It should be particularly emphasized that the modes collected in $h_{\mu\nu
} $ are in fact the Goldstone modes of the broken accidental $SO(7,3)$ symmetry of the constraint (\[const3\]) thus containing the Lorentz Goldstone modes and PGMs altogether.
[^21]: Actually, in the Lagrangian $\mathcal{L}(H_{\mu \nu },\phi )$ satisfying the action invariance condition (\[sd\]), the vacuum shift of the tensor field $H_{\mu \nu }=h_{\mu \nu }+\frac{\mathfrak{n}_{\mu \nu }}{\mathfrak{n}^{2}}M$ is in fact a gauge transformation which, for the appropriately chosen transformation of the scalar field $\phi (x),$ leaves the action $I$ (\[i\]) invariant.
[^22]: In the general case, with the vev tensor $\mathfrak{n}_{\mu \nu }$ having a non-zero trace, this cancellation would also require the redefinition of the scalar field itself as $\phi \rightarrow \phi (1-\mathfrak{n}_{\mu \nu }\eta
^{\mu \nu }\frac{M}{M_{P}})^{-1/2}$.
[^23]: These spin $1$ states must necessarily be excluded as the sign of the energy for spin $1$ is always opposite to that for spin $2$ and $0$.
[^24]: The solution for the gauge function $\xi _{\mu }(x)$ satisfying the condition$\ $(\[gauge\]) can generally be chosen to be $\xi _{\mu }=$ $\ \square
^{-1}(\partial ^{\rho }h_{\mu \rho })+\partial _{\mu }\theta $ where $\theta
(x)$ is an arbitrary scalar function, so that only three degrees of freedom in $h_{\mu \nu }$ are actually eliminated.
[^25]: Nonetheless, imposing nonlinear constraints in the emergent theories raises the question of unitarity and stability in them. Indeed, while the gauge invariant form of the vector (tensor) field kinetic terms prevents propagation of their longitudinal modes as ghost modes, these nonlinear gauge conditions could cause such modes to propagate unless the phase space in these theories is properly restricted so as to have ghost-free models with positive Hamiltonians. In particular, it was shown \[30\] that by restricting the phase space to the vector field solutions with initial values obeying Gauss’s law, the equivalence of Nambu’s nonlinear QED model with an ordinary ghost-free QED is restored. At the same time, if these constraints are introduced, as in our case, through the quadratic Lagrange multiplier potentials (7, 17, 39), then the Hamiltonian appears positive over the full phase space \[30\]. Though these results have so far been established only for the Abelian case, one could expect that similar arguments apply to all the gauge theories considered.
---
abstract: |
We deal with the photoacoustic imaging modality using dielectric nanoparticles as contrast agents. Exciting the heterogeneous tissue, localized in a bounded domain $\Omega$, with an electromagnetic wave, at a given incident frequency, creates heat in its surrounding which in turn generates an acoustic pressure wave (or fluctuations). The acoustic pressure can be measured in the accessible region $\partial \Omega$ surrounding the tissue of interest. The goal is then to extract information about the optical properties (i.e. the permittivity and conductivity) of this tissue from these measurements. We describe two scenarios. In the first one, we inject single nanoparticles while in the second one we inject couples of closely spaced nanoparticles (i.e. dimers). From the acoustic pressure measured, before and after injecting the nanoparticles (for each scenario), at two single points $x_{1}$ and $x_{2}$ of $\partial \Omega$ and two single times $t_{1} \neq t_{2}$ such that $t_{1,2} > {\mathrm{diam}}(\Omega)$,
1. we localize the center point $z$ of the single nanoparticles and reconstruct the phaseless total field $\vert u_0\vert$ at that point $z$ (where $u_0$ is the total field in the absence of the nanoparticles). Hence, we transform the photoacoustic problem into the inversion of phaseless internal electric fields.
2. we localize the centers $z_1$ and $z_2$ of the injected dimers and reconstruct both the permittivity and the conductivity of the tissue at those points.
This can be done using [*[dielectric]{}*]{} nanoparticles enjoying high contrasts of both their electric permittivity and conductivity.
These results are possible using incident frequencies close to the resonances of the used dielectric nanoparticles. These particular frequencies are computable. This allows us to solve the photoacoustic inverse problem with direct approximation formulas linking the measured pressure to the optical properties of the tissue. The errors of the approximations are given in terms of the scales and the contrasts of the dielectric nanoparticles. The results are justified in the $2$D TM-model.
author:
- 'Ahcene Ghandriche $^*$ and Mourad Sini$^{\ddag}$'
title: 'Mathematical Analysis of the Photo-acoustic imaging modality using resonating dielectric nano-particles: The $2D$ TM-model'
---
Introduction and statement of the results
=========================================
Motivation and the mathematical models
--------------------------------------
Imaging using small scaled contrast agents has received considerable attention in recent years, see for instance [@B-B:2011; @Chen-Craddock-Kosmas; @Shea-Kosmas-VanVeen-Hagness]. To motivate it, let us recall that conventional imaging techniques, such as microwave imaging, are known to be potentially capable of extracting features in breast cancer, for instance, in the case of a relatively high contrast of the permittivity, and conductivity, between healthy tissues and malignant ones, [@F-M-S:2003]. However, it is observed that in the case of benign tissue, the variation of the permittivity is quite low so that such conventional imaging modalities are of limited use for early detection of such diseases. In these cases, creating such missing contrast is highly desirable. One way to do it is to use micro or nano scaled particles as contrast agents, [@B-B:2011; @Chen-Craddock-Kosmas]. There are several imaging modalities using contrast agents, such as acoustic imaging using gas microbubbles, optical imaging and photoacoustic imaging using dielectric or magnetic nanoparticles [@B-B:2011; @F-M-S:2003; @Q-C-F-2009]. The first two modalities are single wave based methods. In this work, we deal with the last imaging modality.
Photoacoustic imaging is a hybrid imaging method which is based on coupling electromagnetic waves with acoustic waves to achieve high-resolution imaging of optical properties of biological tissues, [@P-P-B:2015; @L-C:2015]. Precisely, exciting the heterogeneous tissue with an electromagnetic wave, at a certain frequency related to the used small scale particles, creates heat in its surrounding which in turn generates an acoustic pressure wave (or fluctuations). The acoustic wave can be measured in a region surrounding the tissue of interest. The goal is then to extract information about the optical properties of this tissue from these measurements, [@P-P-B:2015; @L-C:2015].
A main reason why such a modality is promising is that injecting nanoparticles, see [@B-B:2011; @Chen-Craddock-Kosmas] for information on its feasibility, with appropriate scales between their sizes and optical properties, in the targeted tissue will create localized contrasts in the tissue and hence amplify the local electromagnetic energy around its location. This amplification can be more pronounced if the used incident electromagnetic wave is sent at frequencies close to resonances. In particular, dielectric or magnetic nanoparticles (as gold nanoparticles [@L-C:2015]) can exhibit such resonances when its inner electric permittivity or magnetic permeability is tuned appropriately, see below. Our target here is to mathematically analyze this imaging technique when injecting such nanoparticles.
To give more insight to this, let us briefly recall the photoacoustic model, see [@C-A-B:2007; @Triki-Vauthrin:2017; @K-K:2010; @N-S:2014; @S:2010; @S-U:2009] for extensive studies and different motivations of this model and related topics. We assume the time-harmonic transverse magnetic (TM) approximation for the electromagnetic model [^1], then the third component of the electric field, which we denote by $u$, satisfies $$\label{helmotzequa}
\Delta u +\omega^2 \varepsilon \mu_0 u=0,\; \mbox{ in } \mathbb{R}^2$$ with $u:=u^i+u^s$ where $u^i:=u^i(x,d, \omega):=e^{i \omega d\cdot x}$, is the incident plane wave, sent at a frequency $\omega$ and direction $d,\; \vert d\vert=1$, and $u^s:=u^s(x, \omega)$ is the corresponding scattered wave selected according to the outgoing Sommerfeld radiation conditions (S.R.C) at infinity. Here, $\mu_0$ is the magnetic permeability of the vacuum, which we assume to be a positive real constant, and $\epsilon:=\epsilon (x)$ is defined as $$\label{defvareps}
\varepsilon(x):=\left \{
\begin{array}{llrr}
\epsilon_0, & in \quad \mathbb{R}^2 \setminus \Omega,\\
\epsilon(x) & in \quad \Omega \setminus \overset{M}{\underset{m=1}{\cup}} D_{m}, \\
\epsilon_m , & in \quad D_m,
\end{array} \right.$$ where $\epsilon_0$ is the positive constant permittivity of the vacuum and $\epsilon:=\epsilon_r +i\frac{\sigma_{\Omega}}{\omega}$ with $\epsilon_r$ as the permittivity and $\sigma_{\Omega}$ the conductivity of the heterogeneous tissue (i.e. variable functions). The quantity $\epsilon_m$ is the permittivity constant of the particle $D_m$, of radius $a<<1$, which is taken to be complex valued, i.e. $\epsilon_m:=\epsilon_{m,r} +i\frac{\sigma_m}{\omega}$ where $\epsilon_{m,r}$ is its actual electric permittivity and $\sigma_m$ its conductivity. The bounded domain $\Omega$ models the region of the tissue of interest. We take the nanoparticle of dielectric type, meaning that $\frac{\epsilon_m}{\epsilon_0}>>1$ when $a <<1$, and hence its relative speed of propagation is very large as well. Under particular rates of the ratio $\frac{\epsilon_m}{\epsilon_0}>>1$, resonances can occur, such as the dielectric (or Mie-electric) resonances. These regimes will be of particular interest to us. Here, we take the $D_{m}$’s of the form $D_{m}:= z_{m} + a \, B_{m}$ where $z_{m}$ models its location, $a$ its radius and $B_{m}$ as a smooth domain of radius $1$ containing the origin.
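As a concrete (hypothetical) illustration of the piecewise permittivity map (\[defvareps\]), the following minimal Python sketch evaluates $\varepsilon(x)$ for a disc-shaped tissue $\Omega$ with unit-disc shaped reference domains $B_m$; all numerical values (tissue radius, contrasts, particle positions) are placeholder assumptions, not taken from the paper:

```python
import numpy as np

def permittivity(x, omega, centers, a, eps0=1.0, eps_m=200.0 + 0.0j,
                 eps_r=2.0, sigma_tissue=0.5, R_tissue=1.0):
    """Evaluate the piecewise map eps(x): eps0 outside the tissue Omega
    (modeled here as a disc of radius R_tissue), the complex value
    eps_r + i*sigma/omega inside Omega, and the large dielectric contrast
    eps_m inside each nanoparticle D_m = z_m + a*B_m, with B_m taken to
    be the unit disc.  All numerical values are illustrative."""
    x = np.asarray(x, dtype=float)
    for z in centers:                        # nanoparticles sit inside Omega
        if np.linalg.norm(x - np.asarray(z)) < a:
            return eps_m
    if np.linalg.norm(x) < R_tissue:         # heterogeneous tissue Omega
        return eps_r + 1j * sigma_tissue / omega
    return eps0                              # vacuum background

# one particle of radius a = 0.01 centered at z_1 = (0.3, 0)
centers = [(0.3, 0.0)]
print(permittivity((0.3, 0.0), omega=2 * np.pi, centers=centers, a=0.01))
print(permittivity((2.0, 0.0), omega=2 * np.pi, centers=centers, a=0.01))
```

Note the dielectric character of the particles: the value returned inside $D_m$ is orders of magnitude larger than the background, which is the regime $\frac{\epsilon_m}{\epsilon_0}>>1$ described above.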
As said above, exciting the tissue with such electromagnetic waves will generate a heat $T$ which in turn generates acoustic pressure. Under some appropriate conditions, see [@Habib-lectures:2017; @Triki-Vauthrin:2017] for instance, this process is modeled by the following system:
$$\label{Photoacoustic-general-model}
\left \{
\begin{array}{llrr}
\rho_0 c_p\frac{\partial T}{\partial t}-\nabla \cdot \kappa \nabla T=\omega {{\ensuremath{\mathrm{Im\,}}}}(\epsilon)\vert u \vert^2 \delta_{0}(t),\\
\\
\frac{1}{c^2}\frac{\partial^2 p}{\partial t^2}-\Delta p= \rho_0 \beta_0\frac{\partial^2 T}{\partial t^2}
\end{array} \right.$$
where $\rho_0$ is the mass density, $c_p$ the heat capacity, $\kappa$ is the heat conductivity, $c$ is the wave speed and $\beta_0$ the thermal expansion coefficient. To these two equations, we supplement the homogeneous initial conditions: $$T=p=\frac{\partial p}{\partial t}=0, \mbox{ at } t=0.$$ Under additional assumptions on the smallness of the heat conductivity $\kappa$, one can neglect the term $\nabla \cdot \kappa \nabla T$ and hence, we end up with the photoacoustic model linking the electromagnetic field to the acoustic pressure [^2]: $$\label{Waveequa}
\left \{
\begin{array}{llrr}
\frac{\partial^2 p}{\partial t^2} - c_{s}^2 \Delta p=0, & in \quad \mathbb{R}^2 \times \mathbb{R}_+,\\
p(x, 0)= \frac{\omega\beta_0}{c_p} {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)\vert u \vert^2& in \quad \mathbb{R}^2, \\
\frac{\partial p}{\partial t}(x, 0)=0, & in \quad \mathbb{R}^2.
\end{array} \right.$$
The imaging problem we wish to focus on is stated in the following terms:
[**[Problem]{}**]{}. Reconstruct the coefficient $\epsilon$ from the given pressure $p(x, t)$ measured for $(x, t) \in \partial \Omega \times (0, T)$, with some positive time length $T$,
1. after injecting single nanoparticles located in a sample of points in $\Omega$,
and/or
2. after injecting couples of nanoparticles two by two closely spaced (i.e. dimers) and located in a sample of points in $\Omega$.
It is natural to split this problem into two steps. The first step concerns the acoustic inversion, namely to reconstruct the source term ${{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)\vert u\vert^2, \;~ x \in \Omega,$ from the pressure $p(x, t)$ for $(x, t) \in \partial \Omega \times (0, T)$. The second step concerns the electromagnetic inversion, namely to reconstruct $\epsilon$ from the internal data ${{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)\vert u\vert^2$.
The acoustic inversion {#photo-acoustic-inversion-known-results}
----------------------
We start by recalling the main results related to the model $(\ref{Waveequa})$. More information about this part can be found in [@!PATTAT] and [@KuchmentKunyansky].
For this inversion, there are two cases to distinguish:
1. The speed of propagation $c_{s}$ is constant everywhere in $\mathbb{R}^{2}$ and $\Omega$ is a disc.\
The solution of the problem $(\ref{Waveequa})$ is given by the Poisson formula $$\label{pressursol}
p(x,t) = \frac{\omega \, \beta_{0}}{2\pi c_{s} c_{p}} \partial_{t} \Bigg( \int_{\vert x-y \vert < c_{s} t} \frac{{{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)(y) \, \vert u \vert^{2}(y)}{\sqrt{c^{2}_{s}t^{2}-\vert x-y \vert^{2}}} dy\Bigg).$$ We denote by $M(f)$ the circular means of $f$ $$M(f)(x,r) := \frac{1}{2\pi} \int_{\vert \xi \vert=1} f(x+r\xi) \, d\sigma(\xi).$$ The equation $(\ref{pressursol})$ takes the following form $$p(x,t) = \frac{\omega \, \beta_{0}}{c_{s} c_{p}} \partial_{t} \Bigg( \int_{0}^{c_{s}t} \frac{r}{\sqrt{c^{2}_{s}t^{2}-r^{2}}} M( {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon) \, \vert u \vert^{2})(x,r) dr \Bigg).$$ The recovery of $Im(\varepsilon) \, \vert u \vert^{2}$ from $p(x,t), (x,t) \in \partial \Omega \times [0,T]$, is done in two steps. First, as $\partial \Omega$ is a circle, the circular means can be recovered from the pressure as follows $$\label{Abelequa}
M(Im(\varepsilon) \, \vert u \vert^{2})(x,r) = \frac{2 \omega \beta_{0}}{c_{p} \pi} \int_{0}^{c_{s}r} \frac{p(x,t)}{\sqrt{r^{2}-t^{2}}} dt, \quad x \in \partial \Omega.$$ Second, if $ Im(\varepsilon) \, \vert u \vert^{2} \in C^{\infty}(\mathbb{R}^{2})$ with $supp(Im(\varepsilon) \, \vert u \vert^{2})\subset \overline{\Omega}$, then, for $x \in \Omega$, $$\label{N}
Im(\varepsilon)(x) \, \vert u \vert^{2}(x) = \frac{1}{2 \pi R_{0}} \int_{\partial \Omega} \int_{0}^{2R_{0}} (\partial_{r} \, r \, \partial_{r} M(Im(\varepsilon) \, \vert u \vert^{2}))(p,r) \, \log(\vert r^{2} - \vert x-p \vert^{2} \vert) \, dr \, d\sigma(p).$$
We can find in [@Natterer] and [@FHR] the justification of $(\ref{Abelequa})$ and $(\ref{N})$ respectively.
2. The speed of propagation is variable in $\Omega$ and constant in $\mathbb{R}^{2}\setminus\Omega$, with $\Omega$ not necessarily a disc. However, the following assumptions are needed: (1) $Supp({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon) \, \vert u \vert^{2})$ is compact in $\Omega$, (2) $c(x) > c > 0$ and $Supp(c(x)-1)$ is compact in $\Omega$, and (3) the non-trapping condition is verified. In $\mathbb{L}^{2}(\Omega; c^{-2}_{s}(x)dx)$, we consider the operator given by the differential expression $A = -c^{-2}_{s}(x) \Delta$ and the Dirichlet boundary condition on $\partial \Omega$. This operator is a positive self-adjoint operator and has a discrete spectrum $\lbrace s^{2}_{k} \rbrace_{k \geq 1}$ with a basis set of eigenfunctions $ \lbrace \psi_{k} \rbrace_{k \geq 1}$ in $\mathbb{L}^{2}(\Omega; c^{-2}_{s}(x)dx)$. Then, the function ${{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)(x) \, \vert u \vert^{2}(x)$ can be reconstructed inside $\Omega$ from the data $p$, as the following $\mathbb{L}^{2}(\Omega)$-convergent series $${{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)(x) \, \vert u \vert^{2}(x) = \frac{c_{p}}{\omega \, \beta_{0}} \, \sum_{k} ({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)(x) \, \vert u \vert^{2})_{k} \psi_{k}(x),$$ where the Fourier coefficients $({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)(x) \, \vert u \vert^{2})_{k}$ can be recovered as: $$({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)(x) \, \vert u \vert^{2})_{k} = s^{-2}_{k} p_{k}(0) - s^{-3}_{k} \int_{0}^{\infty} \sin(s_{k}t) \, p^{\prime \prime}_{k}(t) dt,$$ with $$p_{k}(t) := \int_{\partial \Omega} p(x,t) \frac{\partial \overline{\psi_{k}}}{\partial \nu}(x) dx.$$
More details can be found in [@!PATTAT].
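For the constant-speed case above, the basic building block of the inversion is the circular-mean operator $M$. As a minimal numerical illustration (not taken from any implementation of the paper; the quadrature size `n_theta` is an arbitrary choice), $M(f)(x,r)$ can be approximated by the trapezoidal rule in the angle:

```python
import math

def circular_mean(f, x, r, n_theta=256):
    """Approximate M(f)(x, r) = (1/2pi) * int_{|xi|=1} f(x + r*xi) dsigma(xi)
    by the trapezoidal rule on the circle (spectrally accurate for smooth f)."""
    total = 0.0
    for j in range(n_theta):
        theta = 2.0 * math.pi * j / n_theta
        total += f((x[0] + r * math.cos(theta), x[1] + r * math.sin(theta)))
    return total / n_theta

# Sanity check: for f(y) = |y|^2 one has M(f)(x, r) = |x|^2 + r^2.
val = circular_mean(lambda y: y[0] ** 2 + y[1] ** 2, (1.0, 0.0), 2.0)
```

From such means, formula (\ref{Abelequa}) recovers $M$ from the boundary pressure via an Abel-type integral, and (\ref{N}) then inverts the circular means.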
In our work, we address the following two situations regarding the types of the used dielectric nanoparticles.
1. [*[Only the permittivity $\epsilon_{m,r}$ of the nanoparticle is contrasting.]{}*]{} For this case, we use the results above on the acoustic inversion to obtain ${{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)(x) \, \vert u \vert^{2}(x), x\in \Omega$, and hence $\vert u \vert(x), x\in D_m$, since ${{\ensuremath{\mathrm{Im\,}}}}\varepsilon=\frac{\sigma_{m}}{\omega}$ on $D_m$ is known. With this information, we perform the electromagnetic inversion to reconstruct $\epsilon_{r}$ and $\sigma_{\Omega}$.
2. [*[Both the permittivity $\epsilon_{m,r}$ and the conductivity $\sigma_m$ of the nanoparticle are contrasting.]{}*]{} In this case, we do not rely on the acoustic inversion results above. Instead, we propose direct approximating formulas linking the measured data $p(x, t)$, for $x\in \partial \Omega$ and $t\in (0, T)$, to $\vert u \vert(x)$, $x \in D_{m}$. Actually, we only need to measure $p(x, t)$ at two single points on $\partial \Omega$ for two distinct times $t_{1}$ and $t_{2}$. Then, we perform the electromagnetic inversion.
The electromagnetic inversion and motivation of using nearly resonant incident frequencies {#electromagnetic-inversion}
------------------------------------------------------------------------------------------
We start from the model $$\label{prblm}
\left\{\begin{array}{lll}
(\Delta + \omega^2 n^2) u = 0 & in & \mathbb{R}^{2}\\
u := u^{i} + u^{s} & and & u^{s} \quad S.R.C
\end{array}\right.$$ where, taking $M=1$ in ($\ref{defvareps}$), $$n := \left\{\begin{array}{llll}
n_{p}& = \sqrt{\epsilon_{p} \mu_{0}} & in & D\\
n_{0}& = \sqrt{\epsilon_{0} \mu_{0}} & in & \mathbb{R}^{2} \setminus D.
\end{array}\right.$$ We set $\varepsilon_{p}-\varepsilon_{0} = \tau,~~ \tau >> 1.$ Then, we obtain $$n^{2}-n^{2}_{0} = \left\{\begin{array}{llll}
\mu_{0} \, \tau & in & D\\
0 & in & \mathbb{R}^{2} \setminus D.
\end{array}\right.$$ We call dielectric (or *Mie-electric*) resonances the possible eigenvalues of $(\ref{prblm})$, i.e. the possible solutions $(\omega,u^{s})$ of $(\ref{prblm})$ with $u^{i}=0$. It is known from scattering theory, precisely from Rellich’s lemma, that these eigenvalues belong to the lower complex plane $\mathbb{C}_{-}$. However, as $\tau >>1$ and $a<<1$, their imaginary parts tend to zero, see [@A-D-F-M-S:2019] for instance. Using the Lippmann-Schwinger equation (L.S.E), such eigenvalues are also characterized by the equation $$\label{defLSE}
u(x) = - \omega^{2} \int_{D} (n_{0}^{2}-n^{2}) G_{k}(x,y) \, u(y) dy, \quad x \in \mathbb{R}^{2},$$ where $G_k$ is the Green’s function satisfying $(\Delta + \omega^2 n^2) G_k =-\delta$ with the S.R.C, and $k:=\omega n$ is the wave number. As $\epsilon_{p}$ is constant in $D$, and assuming $\epsilon$ to be constant in $\Omega$ for simplicity of the exposition here, we get from $(\ref{defLSE})$ $$\label{spe}
u(x) \frac{1}{\omega^{2} \mu_{0} \tau} = \int_{D} G_{k}(x,y) u(y) dy, \quad x \in \mathbb{R}^{2}.$$ To solve $(\ref{spe})$, it is enough to find and compute eigenvalues $w_{n}(k)$ of the volumetric potential operator $A_{k}$ defined as $$\label{New}
A_{k}(u)(x) := \int_{D} G_{k}(x,y) u(y) dy,~ u \in \mathbb{L}^2(D).$$ Then, combining ($\ref{spe}$) and ($\ref{New}$), we can write $
A_{k}(u) = \frac{1}{\omega^{2} \mu_{0} \tau} \; u$ and then solve for $\omega$, recalling that $k=\omega\; n$, the dispersion equation $$\label{L}
w_{n}(k) = \frac{1}{\omega^{2}\mu_{0}\tau}.$$ Let us now recall that the operator $LP$, called the Logarithmic Potential operator, defined by $$LP(u)(\eta) := \int_{B} -\frac{1}{2 \pi} \log\vert \eta - \xi \vert \; u(\xi) \; d\xi, u \in \mathbb{L}^2(B), \eta \in B,$$ has a countable sequence of eigenvalues with the corresponding eigenfunctions as a basis of $\mathbb{L}^2(B)$. For more details see [@RG] and [@AKL]. Correspondingly, we define $A_{0}$ to be $$\label{U}
A_{0}(u)(x) := \int_{D} -\frac{1}{2 \pi} \log\vert x - y \vert \; u(y) \; dy, u \in \mathbb{L}^2(D), x \in D.$$
Rescaling, we have $ A_{0}(u)(x)=a^2 LP \tilde{u} (\xi)-\frac{a^2\log(a)}{2\pi}\int_B \tilde{u}(\xi)d\xi,~~ \xi:=\frac{x-z}{a}.$ Hence the eigenvalue problem $A_0(u)=\lambda_n u$, on $D$, becomes $$LP \tilde{u} -\frac{\log(a)}{2\pi}\int_B \tilde{u}(\eta)d\eta=\frac{\lambda_n}{a^2}\tilde{u}, ~ \mbox{ on } B.$$
We observe that the spectrum $Spect(A_0\lvert_{\mathbb{L}_{0}^2(D)})$ of $A_0$, restricted to $\mathbb{L}_0^2(D):=\{v \in \mathbb{L}^2(D),~ \int_{D}v(x)~dx=0\}$, is characterized by $Spect(A_0\lvert_{\mathbb{L}_0^2(D)})
=a^{2}\, Spect(LP\lvert_{\mathbb{L}_0^2(B)})$, since the logarithmic term vanishes on mean-zero functions. However, as we will see later, the important eigenvalues are those whose corresponding eigenfunctions are not of zero average. Therefore, we need to handle the other part of the spectrum of $A_0$ as well. As $\mathbb{L}_0^2(D)$ is not invariant under the action of $A_0$, the natural decomposition $\mathbb{L}^2(D)= \mathbb{L}_0^2(D) \oplus 1$ does not reduce it.
The following properties are needed in the sequel and we state them as hypotheses to keep a higher generality.
\[hyp\] The particles $D$, of radius $a,\; a<<1$, are taken such that the spectral problems $A_0 u =\lambda\; u, \mbox{ in } D$, have eigenvalues $\lambda_n$ and corresponding eigenfunctions, $e_n$, satisfying the following properties:
1. $\int_D e_n(x) dx \neq 0, \; \forall a <<1.$
2. $\lambda_n \sim a^2 \vert \log(a)\vert, \; \forall a <<1.$
In the appendix, see section \[the hypotheses-justification\], we show that for particles of general shapes, the first eigenvalue and the corresponding eigenfunction satisfy [**[Hypotheses]{}**]{} \[hyp\]. In addition, we characterize the properties of the eigenvalues for the case when $D$ is a disc.
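For a disc, the scaling in [**[Hypotheses]{}**]{} \[hyp\] can also be probed numerically. The sketch below is a rough check, not used anywhere in the proofs; the grid size, the equal-area treatment of the diagonal, and the iteration count are arbitrary choices. It discretizes $A_0$ by a midpoint rule and estimates its dominant eigenvalue by power iteration; the ratio $\lambda_1/(a^2\vert \log(a)\vert)$ should stay of order one as $a$ decreases.

```python
import math

def top_eigenvalue_A0(a, n=16, iters=50):
    """Estimate the dominant eigenvalue of
    A0(u)(x) = -(1/(2*pi)) * int_D log|x - y| u(y) dy
    on the disc D of radius a, via midpoint discretization + power iteration."""
    h = 2.0 * a / n
    centers = [-a + (i + 0.5) * h for i in range(n)]
    pts = [(x, y) for x in centers for y in centers if x * x + y * y < a * a]
    cell = h * h
    rho = h / math.sqrt(math.pi)  # disc with the same area as one grid cell
    diag = rho * rho * (0.25 - 0.5 * math.log(rho))  # exact self-integral on it
    K = [[diag if p is q else
          -cell / (2.0 * math.pi) * math.log(math.hypot(p[0] - q[0], p[1] - q[1]))
          for q in pts] for p in pts]
    v = [1.0] * len(pts)
    lam = 0.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in K]
        lam = max(abs(x) for x in w)      # dominant eigenvalue (Perron, positive kernel)
        v = [x / lam for x in w]
    return lam

a = 0.01
lam = top_eigenvalue_A0(a)
ratio = lam / (a * a * abs(math.log(a)))  # expected to be O(1)
```

The kernel is positive for $a < 1/2$, so the power iteration started from the constant vector converges to the Perron eigenvalue, whose eigenfunction is not of zero average, in line with the hypotheses.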
Since the dominant part of the operator $A_{k}$ defined in $(\ref{New})$ is $A_{0}$, we can write[^3] $$\label{B1}
w_{n}(k) = \lambda_{n} + \mathcal{O}(a^{2}).$$
Combining $(\ref{B1})$, $(\ref{T})$ and $(\ref{L})$, we get ${\lambda_{n}} = \displaystyle\frac{1}{\omega^{2}\, \mu_{0} \, \tau \, a^{2}} + \mathcal{O}(1)$ or $\omega^{2} = \displaystyle\frac{1}{\mu_{0} \, \tau \, \lambda_{n}} + \mathcal{O}(\vert \log(a) \vert^{-1}).$ This means that $(\ref{prblm})$ has a sequence of eigenvalues that can be approximated by $$\frac{1}{\mu_{0} \, \tau \, \lambda_{n}} + \mathcal{O}(\vert \log(a) \vert^{-1}).$$ The dominating term is finite if the contrast of the used nanoparticle’s permittivity behaves as $\tau \sim \lambda^{-1}_n \sim a^{-2} \vert \log(a)\vert^{-1} $ for $a<<1$.
We distinguish two cases as related to our imaging problem.
1. Injecting one nanoparticle and then sending incident plane waves at real frequencies $\omega$ close to the real values $$\label{Number}
\omega_{n} := \left( \mu_{0} \, \tau \, \lambda_{n} \right)^{-1/2} ,$$ we can excite, approximately, the sequence of eigenvalues described above. As a consequence, see the justification later, if we excite with incident frequencies near $\omega_{n}, n \in \mathbb{N}$, the total field $u$ solution of $(\ref{prblm})$, restricted to $D$, will be dominated by $\int_{D} u_0(x) \, e_{n}(x) \, dx \,~ e_{n}(x),$ which is, in turn, dominated by $u_0(z) \, e_{n}(x) \, \int_{D} e_{n}(x) \, dx$, where $u_0$ is the wave field in the absence of the nanoparticles, i.e. $(\Delta + \omega^2 n_0^2) u_0 =0, \;~ u_0=u^{i}+u^s_0$ and $u_0^s$ satisfies the S.R.C. Hence, from the acoustic inversion, i.e. from the knowledge of ${{\ensuremath{\mathrm{Im\,}}}}(\epsilon)(x)\vert u \vert^2(x), x\in \Omega$, and hence $\vert u\vert(x), x\in D$, as, for $x\in D$, ${{\ensuremath{\mathrm{Im\,}}}}(\epsilon)={{\ensuremath{\mathrm{Im\,}}}}(\epsilon_p)=\frac{\sigma_p}{\omega}$ is known, we can reconstruct $$\big\vert u_0(z) \big\vert \, \big\vert e_{n}(z) \big\vert \, \Bigg\vert \int_{D} e_{n}(x) \, dx \Bigg\vert.$$
As $e_{n}$ and $D$ are in principle known, we can then recover the total field $\vert u_0(z) \vert$. Taking a sampling of points $z$ in $\Omega$, we obtain the phaseless internal total field $\vert u_0(z) \vert$, $z \in \Omega$.
2. Now, we inject a dimer, meaning a couple of close nanoparticles, instead of only single particles, with prescribed high contrasts of the relative permittivity and/or conductivity. Sending incident plane waves at frequencies close to the dielectric resonances, we also recover the amplitude of the field generated by the first interaction of the two nanoparticles. Indeed, based on point-approximation expansions, this field can be approximated by the Foldy-Lax field, which describes the multiple interactions between the nanoparticles. We show that the acoustic inversion approximately reconstructs the first multiple-interaction field (i.e. the Neumann series cut at the first, and not the zeroth, order term). From this last field, we recover the value of $\vert \varepsilon_{0}(z) \vert, z \in \Omega$.\
Both steps are justified using the incident frequencies close to the dielectric resonance of the nanoparticles. This wouldn’t be possible using incident frequencies away from these resonances.
Recall that $\displaystyle\epsilon_0=\epsilon_{r} +i \frac{\sigma_{\Omega}}{\omega}$, then $\displaystyle\vert \epsilon_{0} \vert^2=\epsilon_{r}^2 + \frac{\sigma_{\Omega}^2}{\omega^2}$. Hence using two different dielectric resonances, we can reconstruct both the permittivity $\epsilon_r$ and the conductivity $\sigma_{\Omega}$.
Statement of the results
------------------------
We recall that the mathematical model of the photoacoustic imaging modality is $(\ref{helmotzequa}), (\ref{defvareps})$ and $(\ref{Waveequa})$.\
Next, we set $u:=u_j, j=0, 1, 2$, for the solution of $(\ref{helmotzequa})$ and $(\ref{defvareps})$ when no nanoparticle is injected, or when one or two nanoparticles are injected, respectively (i.e. take $M=0,1$ or $2$ in $(\ref{defvareps})$).\
To keep the technicality to a minimum, we deal only with the case when the electromagnetic properties of the injected particles are the same, i.e. $\epsilon_{1} = \cdots = \epsilon_{M}$.
### Imaging using dielectric nanoparticles with permittivity contrast only {#only-contrasted-permittivity}
Let the permittivity $\epsilon$, of the medium, be $W^{1, \infty}-$smooth in $\Omega$ and the permeability $\mu_0$ be constant and positive. Let also the injected nanoparticles $D$ satisfy [**[Hypotheses]{}**]{} \[hyp\]. We assume these nanoparticles to have moderate magnetic permeability, while their permittivity and conductivity are such that $\epsilon_{m,r} \sim a^{-2} \, \vert \log(a) \vert^{-1}$ while $\sigma_{m} \sim 1$ as $a << 1$. The frequency of the incidence $\omega$ is chosen close to the dielectric resonance $\omega_{n_0}$ $$\label{resoance-n-0}
\omega^2_{n_0}:= \left( \mu_0 \, \tau \, \lambda_{n_0} \right)^{-1},$$ as follows $$\label{resoance-n-0}
\omega^2=\omega^2_{n_0}(1\pm \vert \log(a)\vert^{-h}),\;~~ 0<h<1.\;\, \footnote{Choosing + or - does not make a difference for the results in Theorem \ref{Using-only-permittivity-contrast}.}$$
\[Using-only-permittivity-contrast\] We assume that the acoustic inversion is already performed using one of the methods given in section \[photo-acoustic-inversion-known-results\]. Hence, we have at hand $$\vert u_j\vert(x), x \in D, j=1, 2.$$
1. [*[Injecting one nanoparticle.]{}*]{} In this case, we use the data $\vert u_1\vert(x), x \in D$. We have the following approximation $$\label{one-particle-reconst-permittivity-only}
\int_D\vert u_1\vert^2(x) dx =\frac{\vert u_0\vert^2(z) (\int_D e_{n_0}(x) dx)^2}{\vert 1-\omega^2 \mu_0 \tau \lambda_{n_0}\vert^2} + \mathcal{O}\big( a^{2} \big).$$
2. [*[Injecting two closely spaced nanoparticles.]{}*]{} These two particles are separated as $$\vert z_1-z_2\vert \geq \exp(-\vert \log(a)\vert^{1-h}), a<<1,$$ where $z_1$ and $z_2$ are the locations of the particles. In this case, we use as data $\vert u_j\vert(x), x \in D$, $j=1, 2$, where $D$ is any one of the two particles. The following expansion is valid
$$\label{reconstruction-k-using-contras-permittivity-only}
\log (\vert k\vert)(z)=2\pi \gamma -
\frac{A_1-(1-C\Phi_0)^2}{A_1-2(1-C \Phi_0)}+O(\vert \log(a)\vert^{\max\{h-1, 1-2h\}}),$$
where $\gamma$ is the Euler constant, $$A_1:=\frac{\int_D\vert u_2\vert^2(x) dx}{\int_D\vert u_1\vert^2(x) dx},\;~~ \Phi_0:=\frac{-1}{2\pi} \log \vert z_1-z_2\vert$$ and $$\textbf{C}:=\int_D[\frac{1}{\omega^2 \mu_0 \tau}I-A_0]^{-1}(1)(x)dx=\frac{\omega^2 \mu_0 \tau}{1-\omega^2 \mu_0 \tau \lambda_{n_0}}\Bigg(\int_D e_{n_0}(x) dx\Bigg)^2 +O(\vert \log(a) \vert^{-1}).$$
From the formula (\[one-particle-reconst-permittivity-only\]), we can derive an estimate of the total field in the absence of the nanoparticles, i.e. $\vert u_0\vert(x), x \in \Omega$, by repeating the same experiment scanning the targeted tissue located in $\Omega$ by injecting single nanoparticles. Hence, we transform the photoacoustic inverse problem to the reconstruction of $\epsilon_0$ in the equation $(\Delta +\omega^2 \mu_0 \epsilon_0)u_0=0,$ in $\mathbb{R}^2$, from the phaseless internal data $\vert u_0\vert(x), x\in \Omega$.
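The way $(\ref{one-particle-reconst-permittivity-only})$ is inverted for $\vert u_0(z)\vert$ can be sketched as a round trip on its dominant term. Every numerical value below (contrast, eigenvalue, $\int_D e_{n_0}$, detuning) is invented for the illustration and does not come from the paper.

```python
import math

# Assumed (hypothetical) known quantities: contrast tau, eigenvalue lambda_{n0},
# the integral of the eigenfunction, and a detuned incident frequency.
mu0, tau, lam, int_e = 1.0, 4.0e5, 2.0e-6, 3.0e-3
omega2 = (1.0 + 0.05) / (mu0 * tau * lam)  # omega^2 = omega_{n0}^2 * (1 + delta)

u0_true = 1.7  # the value |u_0(z)| we pretend to recover
# "Measured" data: dominant term of (one-particle-reconst-permittivity-only)
data = u0_true ** 2 * int_e ** 2 / abs(1.0 - omega2 * mu0 * tau * lam) ** 2

# Inversion: solve the dominant term for |u_0(z)|
u0_rec = math.sqrt(data) * abs(1.0 - omega2 * mu0 * tau * lam) / abs(int_e)
```

Note how the resonant denominator $\vert 1-\omega^2\mu_0\tau\lambda_{n_0}\vert$ amplifies the data before the division undoes it, which is why working near the resonance is advantageous.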
From the formula (\[reconstruction-k-using-contras-permittivity-only\]), we can reconstruct $\vert k\vert (z)$ using the data $\vert u_1\vert(x)$ and $\vert u_2\vert(x)$, with $x \in D$. Indeed, $$\vert k\vert^2 (z)=\omega^2 \vert \epsilon_0\vert \mu_0=\omega^2 \Big(\vert \epsilon_{r}\vert^2 +\frac{\vert \sigma_{\Omega} \vert^2}{\omega^2}\Big)^{1/2} \, \mu_{0},$$ so using two different resonances $\omega_{n_0}$ and $\omega_{n_1}$, we can reconstruct both the permittivity $\epsilon_{r}(z)$ and the conductivity $\sigma_{\Omega}(z)$.
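Once $\vert \epsilon_0(z)\vert$ is known at two distinct resonant frequencies, the splitting into $\epsilon_r$ and $\sigma_{\Omega}$ is a linear solve in the squares $(\epsilon_r^2, \sigma_{\Omega}^2)$. A hedged sketch of this last step (the frequencies and target values below are invented for the round trip):

```python
import math

def split_permittivity_conductivity(m1, m2, w1, w2):
    """Given m_i = |eps_0|^2 at two frequencies w_i, solve
    eps_r^2 + sigma^2 / w_i^2 = m_i for (eps_r, sigma); linear in the squares."""
    # Subtract the two equations to eliminate eps_r^2.
    sigma2 = (m1 - m2) / (1.0 / w1 ** 2 - 1.0 / w2 ** 2)
    eps_r2 = m1 - sigma2 / w1 ** 2
    return math.sqrt(eps_r2), math.sqrt(sigma2)

# Round trip with assumed values (illustrative only).
eps_r, sigma, w1, w2 = 3.0, 0.5, 1.0, 2.0
m1 = eps_r ** 2 + sigma ** 2 / w1 ** 2
m2 = eps_r ** 2 + sigma ** 2 / w2 ** 2
rec = split_permittivity_conductivity(m1, m2, w1, w2)
```

The system is well conditioned as long as the two resonant frequencies are well separated, which is the practical reason for using two distinct resonances.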
### Imaging using dielectric nanoparticles with both permittivity and conductivity contrasts {#both-contrasted-permittivity-conductivity}
As in section \[only-contrasted-permittivity\], let the permittivity $\epsilon$, of the medium, be $W^{1, \infty}-$smooth in $\Omega$ and the permeability $\mu_0$ be constant and positive. Let also the injected nanoparticles $D$ satisfy [**[Hypotheses]{}**]{} \[hyp\]. Here, we assume that $\epsilon_{m,r} \sim a^{-2} \vert \log(a) \vert^{-1}$ and $\sigma_m\sim a^{-2} \vert \log(a) \vert^{-1-h-s}$, $s \geq 0$. The frequency of the incidence $\omega$ is chosen close to the dielectric resonance $\omega_{n_0}$ $$\label{resoance-n-0}
\omega^2_{n_0} := \left( \mu_0 \, \tau \, \lambda_{n_0} \right)^{-1}$$ as follows $$\label{resoance-n-0-2}
\omega^2=\omega^2_{n_0}(1\pm \vert \log(a)\vert^{-h}),\;~~ 0<h<1.$$
\[Using-both-permittivity-and-conductivity-contrasts\]
Let $x \in \partial \Omega$ and $t \geq diam(\Omega)$. We have the following expansions of the pressure:
1. [*[Injecting one nanoparticle]{}*]{}. In this case, we have the expansion
$$\label{pressure-to-v_0}
(p^{+} + p^{-} - 2p_{0})(t,x) = \frac{-t \, \omega \, \beta_{0}}{c_{p} \, (t^{2}-\vert x-z \vert^{2})^{3/2}} \frac{2{{\ensuremath{\mathrm{Im\,}}}}(\tau) \vert u_0(z) \vert^{2}}{\vert 1- \omega^{2} \mu_{0} \lambda_{n_{0}} \tau \vert^{2}} \Big(\int_{D} e_{n_{0}} dx \Big)^{2} + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big),$$
under the condition $0\leq s < \max\{h,\; 1-h\}$, where $p^+$ and $p^-$ correspond to the pressure after injecting the nanoparticles and exciting with frequencies of incidence (\[resoance-n-0-2\]) while $p_0$ is the pressure in the absence of the nanoparticles.
2. [*[Injecting two close dielectric nanoparticles.]{}*]{} These two particles are separated as $$\vert z_1-z_2\vert \geq \exp(-\vert \log(a)\vert^{1-h}), a<<1,$$ where $z_1$ and $z_2$ are the locations of the particles. We set $$\label{pressure-tilde}
\tilde{p}(t, x):=(p^+-p_{0})(t,x) +\frac{1-\omega_{n_0}^2}{1+\omega_{n_0}^2}(p^--p_0)(t,x)$$ then we have the following expansion[^4] $$\label{pressure-tilde-expansion}
\tilde{p}(t, x) = \frac{\omega \, \beta_{0}}{c_{p}} \, \frac{-t \, }{(t^{2}-\vert x-z_{2} \vert^{2})^{\frac{3}{2}}} \, \, \frac{4 \; {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{1+ \omega_{n_0}^{2}} \, \, \Big(\int_{D} u_2(x)e_{n_{0}}(x) dx \Big)^{2} + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big),$$ where $D$ is any one of the two nanoparticles.
\
The formula (\[pressure-to-v\_0\]) means that if we measure before and after injecting one nanoparticle, then we can reconstruct the phaseless data $\vert u_0 \vert(x), x \in \Omega$. Hence, we transform the photoacoustic inverse problem to the inverse scattering using phaseless internal data.
The formula (\[pressure-tilde-expansion\]) can be expressed using $u_0$ instead of $u_2$ under the condition $0\leq s < \max\{h,\; 1-h\}$ as for (\[pressure-to-v\_0\]). The formula (\[pressure-tilde-expansion\]) means that if we measure before and after injecting two closely spaced nanoparticles, then we can reconstruct $\int_{D} u_2(x)e_{n_{0}}(x) dx $ and hence $\int_{D} \vert u_2(x) \vert^2 dx$. In addition, a slightly different form of formula (\[pressure-to-v\_0\]), see $(\ref{abxyz})$, $$(p^{+} + p^{-} - 2p_{0})(t,x) = \frac{-2 \, t \; {{\ensuremath{\mathrm{Im\,}}}}(\tau) \; \vert \int_{D} u_{1}(x) e_{n_{0}}(x) \, dx \vert^{2}}{(t^{2}-\vert x-z \vert^{2})^{3/2}} + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big),$$ shows that if we measure before and after injecting one nanoparticle, we can reconstruct $\int_{D} \vert u_1(x) \vert^2 dx$. Using these two last data, i.e. $\int_{D} \vert u_1(x) \vert^2 dx$ and $\int_{D} \vert u_2(x) \vert^2 dx$, we can reconstruct, via (\[reconstruction-k-using-contras-permittivity-only\]), $\vert \epsilon_0\vert$. Hence, using two different resonances, we reconstruct both the permittivity $\epsilon$ and the conductivity $\sigma_{\Omega}$.
Let us show how we can use (\[pressure-to-v\_0\]) to localize the position $z$ of the injected nanoparticles and estimate $\vert u_{0}(z) \vert$. The corresponding results can also be shown using $(\ref{pressure-tilde-expansion})$. For this, we use the notations $$\begin{aligned}
\tilde{p} &:=& (p^{+} + p^{-} - 2p_{0}), \; A := \frac{- 2 \; \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \; \; \vert u_0(z) \vert^{2} }{\vert 1 - \omega^{2} \, \mu_{0} \, \lambda_{n_{0}} \, \tau \vert^{2}} \, \Bigg( \int_{D} e_{n_{0}} \, dx \Bigg)^{2} \; and \;
Err := \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).\end{aligned}$$ Let $t_{1} \neq t_{2}$ then we have $$\label{p/p}
\frac{\tilde{p}(t_{1},x)}{\tilde{p}(t_{2},x)} = \frac{ A \displaystyle\frac{t_{1}}{(t_{1}^{2}-\vert x-z \vert^{2})^{3/2}}+Err}{ A \displaystyle\frac{t_{2}}{(t_{2}^{2}-\vert x-z \vert^{2})^{3/2}}+Err} = \frac{t_{1}}{t_{2}} \Bigg( \frac{t_{2}^{2}-\vert x-z \vert^{2}}{t_{1}^{2}-\vert x-z \vert^{2}} \Bigg)^{3/2} + \mathcal{O}\Big(\vert \log(a) \vert^{s+\max(-h,h-1)}\Big),$$ where $$\label{cdtons}
0 \leq s < \min(h;1-h).$$ From $(\ref{p/p})$ we derive the formula $$\label{positionza}
\vert x - z \vert = \Bigg[ \frac{t_{1}^{2} \, (t_{2} \tilde{p}(t_{1},x))^{2/3} - t_{2}^{2} \, (t_{1} \tilde{p}(t_{2},x))^{2/3}}{(t_{2} \tilde{p}(t_{1},x))^{2/3} - (t_{1} \tilde{p}(t_{2},x))^{2/3}} \Bigg]^{\frac{1}{2}} + \mathcal{O}\Big(\vert \log(a) \vert^{s+\max(-h,h-1)}\Big).$$ The expression $(\ref{positionza})$ tells that, for $x \in \partial \Omega$, the point $z$ is in the arc given by the intersection of $\Omega$ and the circle $S$ with center $x$ and radius computable as $$\label{radius}
\Bigg[ \frac{t_{1}^{2} \, (t_{2} \tilde{p}(t_{1},x))^{2/3} - t_{2}^{2} \, (t_{1} \tilde{p}(t_{2},x))^{2/3}}{(t_{2} \tilde{p}(t_{1},x))^{2/3} - (t_{1} \tilde{p}(t_{2},x))^{2/3}} \Bigg]^{\frac{1}{2}}.$$ Then, in order to localize $z$, we repeat the same experiment with another point $x^{\star} \neq x$, and take the intersection of the two arcs, see Figure $\ref{fig1}$.
![Localization of the particles. The blue curve represents $\partial \Omega$ while the red and yellow ones the circles of center $x_{1}:=x$ and $x_{2}:=x^*$ and radius $(\ref{radius})$, with $x$ and $x^{\star}$ respectively.[]{data-label="fig1"}](./fig-ConvertImage.png)
Assume that $z$ is obtained, then from the equation $(\ref{pressure-to-v_0})$, we get $$\vert u_{0}(z) \vert^{2} = - \, \displaystyle \frac{\vert 1- \omega^{2} \, \mu_{0} \, \lambda_{n_{0}} \, \tau \vert^{2} (t^{2}-\vert x-z \vert^{2})^{3/2} \; \tilde{p}(t,x)}{2 \,t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \Big(\int_{D} e_{n_{0}}\Big)^{2}} + \mathcal{O}\Big(\vert \log(a) \vert^{s+\max(-h,h-1)}\Big).$$ with $$\label{cdt2s}
0\leq s < \min \{h ; 1-h \}.$$
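The localization step above lends itself to a direct numerical sketch: compute the radius $(\ref{radius})$ from two time samples at each of two boundary points, then intersect the two circles. The source point, boundary points, times and amplitude $A$ below are all invented, and the error terms of $(\ref{p/p})$ are ignored.

```python
import math

def radius_from_pressure(p1, p2, t1, t2):
    """Distance |x - z| from two samples p_i = ptilde(t_i, x), using the
    dominant term p ~ A * t / (t^2 - |x-z|^2)^(3/2), as in formula (radius)."""
    q1 = (t2 * abs(p1)) ** (2.0 / 3.0)
    q2 = (t1 * abs(p2)) ** (2.0 / 3.0)
    return math.sqrt((t1 ** 2 * q1 - t2 ** 2 * q2) / (q1 - q2))

def circle_intersections(c1, r1, c2, r2):
    """Both intersection points of two circles (assumed to intersect)."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(r1 ** 2 - a ** 2)
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

# Synthetic check with an assumed source point z and boundary points x1, x2.
z, x1, x2 = (0.2, 0.1), (1.0, 0.0), (0.0, 1.0)
t1, t2, A = 2.0, 3.0, -1.0
p = lambda t, x: A * t / (t**2 - (x[0]-z[0])**2 - (x[1]-z[1])**2) ** 1.5
r1 = radius_from_pressure(p(t1, x1), p(t2, x1), t1, t2)
r2 = radius_from_pressure(p(t1, x2), p(t2, x2), t1, t2)
candidates = circle_intersections(x1, r1, x2, r2)  # one of them is (close to) z
```

Of the two intersection points, the one lying inside $\Omega$ is the location of the injected nanoparticle, which is how the ambiguity is resolved in practice.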
Let us finish this introduction by comparing our findings with previous results. To our knowledge, the only work published to analyse the photo-acoustic imaging modality using contrast agents is the recent work [@Triki-Vauthrin:2017]. The authors propose to use plasmonic resonances instead of dielectric ones. Assuming the acoustic inversion to be known and done, as described in section \[photo-acoustic-inversion-known-results\], they perform the electromagnetic inversion. They state the 2D electromagnetic model where the magnetic field satisfies a divergence-form equation. Performing asymptotic expansions close to these resonances, they derive the dominant part of the magnetic field and reconstruct the permittivity by an optimization step applied to this dominating term. This result can be compared to Theorem $\ref{Using-only-permittivity-contrast}$, i.e. formula $(\ref{one-particle-reconst-permittivity-only})$.
The rest of the paper is organized as follows. In section \[Theo-Using-only-permittivity-contrast\] and section \[Theo-using-both-contrasted-permit-conduct\], we prove Theorem \[Using-only-permittivity-contrast\] and Theorem \[Using-both-permittivity-and-conductivity-contrasts\] respectively. In section \[appendixlemma\], we derive the needed estimates on the electric fields, used in section \[Theo-Using-only-permittivity-contrast\] and section \[Theo-using-both-contrasted-permit-conduct\] in terms of the contrast of the permittivity, conductivity and for frequencies close to the dielectric resonances. Finally, in section \[the hypotheses-justification\], we discuss the validity of the conditions in [**[Hypotheses]{}\[hyp\]**]{}.\
**Notations:** Only $\mathbb{L}^{2}$-norms on domains are involved in the text. Therefore, unless indicated, we use $\Vert \cdot \Vert$ without specifying the domain. In addition, we use $<\cdot,\cdot>$ for the corresponding scalar product. For a given function $f$ defined on $\overset{M}{\underset{j=1}{\cup}} D_{j}$, we denote by $f_{j} := f_{|D_{j}}$, $j = 1, \cdots,M$.\
The eigenfunctions $\left( e^{(i)}_{n} \right)_{n \in \mathbb{N}}$ of the Newtonian operator stated on $D_{i}$ depend, of course, on $D_{i}$. Nevertheless, unless specified, we use the notation $\left( e_{n} \right)$ even when dealing with multiple particles located in different positions.
Proof of Theorem \[Using-only-permittivity-contrast\] {#Theo-Using-only-permittivity-contrast}
=====================================================
We split the proof into two subsections. In the first one, we derive the Foldy-Lax algebraic system, see $(\ref{as})$ in proposition $(\ref{propositionas})$, as an approximation of the continuous L.S.E satisfied by the electric field. In the second subsection, we invert the algebraic system and extract the needed formulas, see $(\ref{Kds})$.
Approximation of the L.S.E {#motref}
--------------------------
In the following, we denote by $G_{k}$ the Green kernel of the Helmholtz equation in dimension two. This means that $G_{k}$ is a solution of $$(\Delta + \omega^{2} \, n_{0}^{2}(\cdot) ) G_{k(\cdot)} (\cdot,\cdot) = - \delta_{\cdot}(\cdot), \qquad in \qquad \mathbb{R}^{2}$$ satisfying the S.R.C.
\[GreenKernel\] The Green kernel $G_{k}$ admits the following asymptotic expansion $$\begin{aligned}
\label{Gkexpansion}
\nonumber
G_{k}(\vert x - y \vert) & = & \frac{-1}{2\pi} \log(\vert x-y \vert) - \frac{1}{2\pi} \log(k(y)) + \frac{i}{4}+\frac{1}{2 \pi}\Bigg[\log(2)-\displaystyle\lim_{p \rightarrow +\infty}\Big( \sum_{m=1}^{p} \frac{1}{m} - \log(p) \Big)\Bigg] \\
& + & \mathcal{O}\big(\vert x-y\vert \; \log(\vert x-y \vert)\big), \quad x \; \text{near} \; y. \end{aligned}$$
Following the same steps as in , pages 10-12, and taking into account the logarithmic singularity of the fundamental solution of the 2D Helmholtz equation, we can deduce the expansion $(\ref{Gkexpansion})$.
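Note that the bracketed limit in the constant term of $(\ref{Gkexpansion})$ is the Euler-Mascheroni constant $\gamma \approx 0.5772$, so that constant term reads $\frac{i}{4}+\frac{1}{2\pi}(\log 2-\gamma)$. A quick numerical confirmation of the limit (the cutoff `p` is an arbitrary choice):

```python
import math

# The limit  lim_{p -> inf} ( sum_{m=1}^{p} 1/m - log p )  appearing in the
# constant term of the expansion is the Euler-Mascheroni constant gamma.
p = 100_000
gamma_est = sum(1.0 / m for m in range(1, p + 1)) - math.log(p)
# Real part of the constant term of G_k, besides -(1/(2*pi)) log k:
const_re = (math.log(2.0) - gamma_est) / (2.0 * math.pi)
```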
We define $$a := \frac{1}{2} \underset{1 \leq m \leq M}{diam}(D_{m}),\quad d_{mj} := dist(D_{m},D_{j}),\quad d := \underset{m \neq j \atop 1\leq m ,j \leq M}{ \min} d_{mj},$$ where $D_{m} = z_{m} + a\, B$ and $B$ is a bounded domain containing the origin.
The unique solution of the problem $(\ref{prblm})$, with $D :=
\overset{M}{\underset{j=1}{\cup}} D_{j}$, satisfies the L.S.E $$\label{O}
u(x) - \omega^{2} \, \mu_{0} \, \int_{D} G_{k}(x,y) \, (\varepsilon_{p}-\varepsilon_{0})(y) \, u(y) \, dy = u_{0}(x) \quad in \; D.$$ We set[^5] $v_{m} := u_{|_{D_{m}}}, m=1,\cdots,M$. Then $(\ref{O})$, for $x \in D_{m}$, rewrites as $$v_{m}(x) - \omega^{2} \mu_{0} \int_{D_{m}} G_{k}(x,y) (\varepsilon_{p}-\varepsilon_{0}(y)) v_{m}(y) dy - \omega^{2} \mu_{0} \sum_{j \neq m} \int_{D_{j}} G_{k}(x,y) (\varepsilon_{p}-\varepsilon_{0}(y)) v_{j}(y) dy = u_{0}(x).$$ We set: $\tau_{j} = (\varepsilon_{p}-\varepsilon_{0}(z_{j}))$. Assuming $\varepsilon_{0_{|_{\Omega}}}$ to be $\mathcal{W}^{1,\infty}(\Omega)$, the solution $u$ of the scattering problem $$\begin{aligned}
\Big( \Delta + \omega^{2} \, n_{0}(x) \Big) u &=& 0 \quad \text{in} \; \mathbb{R}^{2} \\
u &:=& u^{i} + u^{s} \; \mbox{and} \; u^{s} \; S.R.C,\end{aligned}$$ has a $\mathcal{C}^{1}$-regularity in $\Omega$. Set[^6] $$\begin{aligned}
\label{phigamma}
\Phi_{0}(x,y) := \frac{-1}{2\pi} \log(\vert x-y \vert) \;\; \text{and} \;\; \Gamma := \frac{i}{4}+\frac{1}{2 \pi}\Bigg[\log(2)-\displaystyle\lim_{p \rightarrow +\infty}\Big( \sum_{m=1}^{p} \frac{1}{m} - \log(p) \Big)\Bigg]. \end{aligned}$$ Expanding $(\varepsilon_{p}-\varepsilon_{0}(.))$, $u_{0}$ and $G_{k}(\cdot,\cdot)$ near the center $z_{m}$, we obtain $$\begin{aligned}
v_{m}(x) & - & \omega^{2} \, \mu_{0} \, \tau_{m} \int_{D_{m}} \Big(\Phi_{0}(x,y) - \frac{1}{2 \pi} \log(k)(y) + \Gamma \Big) \, v_{m}(y) \, dy \\
&-& \omega^{2} \, \mu_{0} \, \tau_{m} \; \mathcal{O}\left( \int_{D_{m}} \vert x-y \vert \, \log\vert x-y \vert \, v_{m}(y) \, dy \right) \\ & + & \omega^{2} \, \mu_{0} \, \int_{D_{m}} G_{k}(x,y) \, \int_{0}^{1} (y-z_{m}) \centerdot \nabla \varepsilon_{0}(z_{m}+t(y-z_{m})) \, dt \, v_{m}(y) \, dy \\ & - & \omega^{2} \, \mu_{0} \, \sum_{j \neq m} \, \tau_{j} \, \int_{D_{j}} \Bigg[G_{k}(z_{m};z_{j}) + \int_{0}^{1} \nabla_{x} G_{k}(z_{m}+t(x-z_{m});z_{j})\centerdot(x-z_{m}) \; dt \\ &+& \int_{0}^{1} \nabla_{y}G_{k}(z_{m};z_{j}+t(y-z_{j}))\centerdot(y-z_{j}) \, dt \\ &+& \int_{0}^{1} \nabla_{x} \int_{0}^{1} \nabla_{y} G_{k}(z_{m}+t(x-z_{m});z_{j}+t(y-z_{j}))\centerdot(y-z_{j}) \, dt\centerdot \, (x-z_{m}) \, dt \Bigg] \, v_{j}(y) \, dy \\ &+& \omega^{2} \, \mu_{0} \, \sum_{j \neq m} \, \int_{D_{j}} G_{k}(x,y) \, \int_{0}^{1} (y-z_{j}) \centerdot \nabla \varepsilon_{0}(z_{j}+t(y-z_{j})) \,dt \, v_{j}(y) \, dy \\ &=& u_{0}(z_{m}) + \int_{0}^{1} \nabla u_{0}(z_{m}+t(x-z_{m})) \centerdot (x-z_{m}) \, dt.\end{aligned}$$ We assumed that all nano-particles have the same electromagnetic properties, then $\tau_{j}$ is the same for every $j$ and let us denote it by $\tau$. Define $$\label{A}
w = \omega^{2} \, \mu_{0} \, \tau \Big[I - \omega^{2} \, \mu_{0} \, \tau \, A_{0} \Big]^{-1}(1) = \Big[\frac{1}{\omega^{2} \, \mu_{0} \, \tau} I - A_{0} \Big]^{-1}(1),$$ and set the following notations $$\label{R}
\textbf{C}_{m} = \int_{D_{m}} w dx \quad \& \quad \textbf{C}_{m}^{\star} = \textbf{C}_{m} \Bigg[1 - \Big(- \frac{1}{2 \pi} \log(k)(z_{m}) + \Gamma \Big) \textbf{C}_{m}\Bigg]^{-1} \quad \& \quad Q_{m} = \omega^{2} \, \mu_{0} \tau (\textbf{C}_{m}^{\star})^{-1} \, \int_{D_{m}} v_{m} dx.$$
Using the definition of $w$, integrating with respect to $y$ over $D_{m}$, exploiting the self-adjointness of the operator $(\lambda I - A_{0})$, and multiplying both sides of this equation by $\omega^{2} \, \mu_{0} \, \tau \, C_{m}^{-1}$, we obtain $$\begin{aligned}
Q_{m} &-& \sum_{j \neq m} G_{k}(z_{m};z_{j}) \; \textbf{C}_{j}^{\star} \; Q_{j} = u_{0}(z_{m}) + \omega^{2} \, \mu_{0} \; \tau \, \textbf{C}_{m}^{-1} \; \Bigg[ \\
&+& \int_{D_{m}}\,w \, \int_{D_{m}} \vert x-y \vert \, \log\vert x-y \vert \, v_{m}(y) \, dy \, dx \\
& - & \tau^{-1} \, \int_{D_{m}} \, w \int_{D_{m}} G_{k}(x,y) \, \int_{0}^{1} (y-z_{m}) \centerdot \nabla \varepsilon_{0}(z_{m}+t(y-z_{m})) \,dt \; v_{m}(y) \, dy \, dx \\
& + & \sum_{j \neq m} \, \int_{D_{m}} \, w \, \int_{0}^{1} \nabla_{x}G_{k}(z_{m}+t(x-z_{m});z_{j})\centerdot(x-z_{m})\,dt \,dx \, \int_{D_{j}} v_{j} \, dy \\
& + & \textbf{C}_{m} \sum_{j \neq m} \, \int_{D_{j}} \, \int_{0}^{1} \, \nabla_{y} G_{k}(z_{m};z_{j}+t(y-z_{j}))\centerdot(y-z_{j})dt \, v_{j}(y) \, dy\\
& + & \sum_{j \neq m} \int_{D_{m}} w \int_{D_{j}} \int_{0}^{1} \nabla \int_{0}^{1} \nabla G_{k}(z_{m}+t(x-z_{m});z_{j}+t(y-z_{j}))\centerdot(y-z_{j}) dt \centerdot (x-z_{m}) dt v_{j}(y) dy dx\\
&+& \tau^{-1} \, \, \sum_{j \neq m} \, \int_{D_{m}} \, w \, \int_{D_{j}} G_{k}(x,y) \, \int_{0}^{1} (y-z_{j}) \centerdot \nabla \varepsilon_{0}(z_{j}+t(y-z_{j})) \,dt v_{j}(y) \, dy \, dx \\
&+& \big( \omega^{2} \, \mu_{0} \, \tau \big)^{-1} \, \int_{D_{m}} \, w \, \int_{0}^{1} (x-z_{m}) \centerdot \nabla u_{0}(z_{m}+t(x-z_{m})) dt \, dx + \mathcal{O}\Bigg(\textbf{C}_{m} \, a \, \int_{D_{m}} v_{m} dx \Bigg)\Bigg].\end{aligned}$$ On the right-hand side, we keep $u_{0}(z_{m})$ as the dominant term and estimate the remaining terms as an error. To achieve this goal, we need the following proposition.
\[abc\] We have: $$\label{prioriest}
\Vert u \Vert_{\mathbb{L}^{2}(D)} \leq \vert \log(a) \vert^{h} \, \Vert u_{0} \Vert_{\mathbb{L}^{2}(D)},$$ and $$\textbf{C}_{m} = \mathcal{O}( \vert \log(a) \vert^{h-1}).$$
See Section \[appendixlemma\].
As the incident wave is smooth and independent of $a$, thanks to $(\ref{prioriest})$, we get $$\Vert w \Vert_{\mathbb{L}^{2}(D)} \leq a^{-1} \, \vert \log(a) \vert^{h-1}.$$ We recall that $$\tau \sim 1 / a^{2} \, \vert \log(a) \vert.$$ The error part contains eight terms. Next we define and estimate each term, and then sum them up. More precisely, we have
- $S_{1} := \tau \; \textbf{C}_{m}^{-1} \int_{D_{m}} \,w \, \int_{D_{m}} \vert x-y \vert \log(\vert x-y \vert) v_{m}(y) dy \; dx$ $$\begin{aligned}
\vert S_{1} \vert & \precsim & a^{-2} \, \vert \log(a) \vert^{-1} \; \vert \log(a) \vert^{1-h} \; \Vert w \Vert \, \Bigg[\int_{D_{m}} \Bigg\vert \int_{D_{m}} \vert x-y \vert \log(\vert x-y \vert) v_{m}(y) dy \Bigg\vert^{2} \; dx \Bigg]^{\frac{1}{2}} \\
& \precsim & a^{-2} \, \vert \log(a) \vert^{-h} \; a^{-1} \; \vert \log(a) \vert^{h-1} \; \Bigg[\int_{D_{m}} \Bigg( \int_{D_{m}} \vert v_{m}\vert(y) dy \Bigg)^{2} \; dx \Bigg]^{\frac{1}{2}} \, a \, \vert \log(a) \vert\\
& = & \mathcal{O}\Big( a^{-2} \, \vert \log(a) \vert^{-1} \, a \, \Vert v_{m} \Vert \, a \, \vert \log(a) \vert \Big), \end{aligned}$$ and then $$S_{1} = \mathcal{O}\Big(a \, \vert \log(a) \vert^{h} \, M^{\frac{1}{2}} \Big).$$
- $S_{2} := \textbf{C}_{m}^{-1} \, \int_{D_{m}} w(x) \int_{D_{m}} G_{k}(x,y) \int_{0}^{1} (y-z_{m})\centerdot \nabla \varepsilon_{0}(z_{m}+t(y-z_{m})) dt \, v_{m}(y) \, dy \, dx $ $$\vert S_{2} \vert \lesssim a^{-1} \, \Bigg[ \int_{D_{m}} \Bigg(\int_{D_{m}} \vert G_{k}\vert(x;y) \; \Big\vert \int_{0}^{1} (y-z_{m})\centerdot\nabla\varepsilon_{0}(z_{m}+t(y-z_{m})) dt \Big\vert \; \vert v_{m}\vert(y) dy \Bigg)^{2} dx \Bigg]^{\frac{1}{2}}.$$ The smoothness of $\varepsilon_{0}$ implies $\left\vert \int_{0}^{1} (y-z_{m})\centerdot\nabla\varepsilon_{0}(z_{m}+t(y-z_{m})) dt \right\vert \lesssim \mathcal{O}(a),$ hence $$\begin{aligned}
\vert S_{2} \vert & \lesssim & \Vert v_{m} \Vert \; \Bigg[\int_{D_{m}} \int_{D_{m}} \vert G_{k} \vert^{2} (x;y) \; dy \; dx \Bigg]^{\frac{1}{2}} \lesssim \Vert u \Vert \; a^{2} \; \vert \log(a) \vert, \end{aligned}$$ and then $$S_{2} = \mathcal{O}\Big(a^{3} \; \vert \log(a) \vert^{1+h} \; M^{\frac{1}{2}}\Big).$$
- $ S_{3} := \ \tau \, \textbf{C}_{m}^{-1} \, \underset{j \neq m}{\sum} \, \int_{D_{m}} \, w \, \int_{0}^{1} \nabla_{x}G_{k}(z_{m}+t(x-z_{m});z_{j})\centerdot(x-z_{m})\,dt \,dx \, \int_{D_{j}} v_{j} \, dy $ $$\vert S_{3} \vert \lesssim \frac{1}{a \, \vert \log(a) \vert^{h}} \sum_{j \neq m} \Vert w \Vert \; \Vert v_{j} \Vert \; \Bigg[\int_{D_{m}} \Bigg\vert \int_{0}^{1} \underset{x}{\nabla} G_{k}(z_{m}+t(x-z_{m});z_{j})\centerdot(x-z_{m}) \, dt\, \Bigg\vert^{2} dx \Bigg]^{\frac{1}{2}}.$$ One can easily check that $$\Bigg[\int_{D_{m}} \Bigg\vert \int_{0}^{1} \underset{x}{\nabla} G_{k}(z_{m}+t(x-z_{m});z_{j})\centerdot(x-z_{m}) \, dt \Bigg\vert^{2} \, dx \Bigg]^{\frac{1}{2}} \lesssim \frac{a^{2}}{d_{mj}},$$ then we plug this into the previous inequality and use the Cauchy-Schwarz inequality to get $$\vert S_{3} \vert \lesssim \vert \log(a) \vert^{-1} \; \Vert v \Vert \; \Bigg( \sum_{j \neq m} \frac{1}{d_{mj}^{2}} \Bigg)^{\frac{1}{2}} \lesssim \vert \log(a) \vert^{h-1} \; a \; M \; d^{-1}.$$ Set $$S_{4} := \tau \, \, \sum_{j \neq m} \, \int_{D_{j}} \, \int_{0}^{1} \, \underset{y}{\nabla} G_{k}(z_{m};z_{j}+t(y-z_{j}))\centerdot(y-z_{j})dt \, v_{j}(y) \, dy,$$ and remark that $S_{4}$ has an expression similar to that of $S_{3}$; then we obtain $$S_{3} = \mathcal{O}(\vert \log(a) \vert^{h-1} \; a \; M \; d^{-1}) \quad \text{and} \quad S_{4} = \mathcal{O}(\vert \log(a) \vert^{h-1} \; a \; M \; d^{-1}).$$
- $$S_{5} :=
\frac{\tau}{\textbf{C}_{m}} \, \sum_{j \neq m} \int_{D_{m}} w \int_{D_{j}} \int_{0}^{1} \nabla \int_{0}^{1} \nabla G_{k}(z_{m}+t(x-z_{m});z_{j}+l(y-z_{j}))\centerdot(y-z_{j}) dl \centerdot (x-z_{m}) dt v_{j}(y) dy dx$$ $$\begin{aligned}
\vert S_{5} \vert & \lesssim & \frac{\Vert w \Vert}{a^{2} \, \vert \log(a) \vert^{h}} \sum_{j \neq m} \, \Bigg\Vert \int_{D_{j}} \int_{0}^{1} \nabla \int_{0}^{1} \nabla G_{k}(z_{m}+t(\centerdot-z_{m});z_{j}+l(y-z_{j}))\centerdot(y-z_{j}) dl \centerdot (\centerdot-z_{m}) dt \, v_{j}(y) \, dy \Bigg\Vert \\
& \lesssim & \frac{a^{-3}}{\vert \log(a) \vert} \sum_{j \neq m} \Vert v_{j} \Vert \Bigg[\int_{D_{m}} \Bigg\vert \int_{D_{j}} \int_{0}^{1} \nabla \int_{0}^{1} \nabla G_{k}(z_{m}+t(x-z_{m});z_{j}+l(y-z_{j}))\centerdot(y-z_{j}) dl \centerdot (x-z_{m}) dt dy \Bigg\vert^{2} dx\Bigg]^{\frac{1}{2}}. \end{aligned}$$ we have $$\int_{D_{m}} \Bigg\vert \int_{D_{j}} \int_{0}^{1} \nabla \int_{0}^{1} \nabla G_{k}(z_{m}+t(x-z_{m});z_{j}+l(y-z_{j}))\centerdot(y-z_{j}) dl \centerdot (x-z_{m}) dt dy \Bigg\vert^{2} dx \lesssim \mathcal{O}\Big( \frac{a^{8}}{d_{mj}^{4}} \Big),$$ hence $$\begin{aligned}
\vert S_{5} \vert & \lesssim & a \, \vert \log(a) \vert^{-1} \, \Vert u \Vert \, \Bigg(\sum_{j \neq m} \frac{1}{d_{mj}^{4}} \Bigg)^{\frac{1}{2}} \lesssim a^{2} \; \vert \log(a) \vert^{h-1} \; M \; d^{-2}, \end{aligned}$$ then $$S_{5} = \mathcal{O}(a^{2} \; \vert \log(a) \vert^{h-1} \; M \, d^{-2}).$$
- $S_{6} := \textbf{C}_{m}^{-1} \, \underset{j \neq m}{\sum} \int_{D_{m}} w \int_{D_{j}} G_{k}(x,y) \int_{0}^{1} (y-z_{j})\centerdot \nabla \varepsilon_{0}(z_{j}+t(y-z_{j})) \, dt \, v_{j}(y) \, dy \, dx$ $$\begin{aligned}
\vert S_{6} \vert & \lesssim & \sum_{j \neq m} \Vert v_{j} \Vert \; \Bigg[ \int_{D_{m}} \; \int_{D_{j}} \vert G_{k} \vert^{2}(x;y) \, dy \; dx \, \Bigg]^{\frac{1}{2}} \lesssim a^{2} \; \Vert v \Vert \; M^{\frac{1}{2}} \leq a^{3} \; \vert \log(a) \vert^{h} \; M.\end{aligned}$$ Then $$S_{6} = \mathcal{O}\Big(a^{3} \; \vert \log(a) \vert^{h} \; M\Big).$$
- $S_{7} := \textbf{C}_{m}^{-1} \int_{D_{m}} w \int_{0}^{1} (x-z_{m})\centerdot \nabla u_{0}(z_{m}+t(x-z_{m})) \, dt \, dx$ $$\begin{aligned}
\vert S_{7} \vert & \lesssim & \vert \log(a) \vert^{1-h} \; \Vert w \Vert \; \Bigg[ \int_{D_{m}} \Bigg\vert \int_{0}^{1} (x-z_{m}) \centerdot \nabla u_{0}(z_{m}+t(x-z_{m})) dt \Bigg\vert^{2} dx \Bigg]^{\frac{1}{2}}.\end{aligned}$$ As $u_{0}$ is smooth, we have $$\Bigg[ \int_{D_{m}} \Bigg\vert \int_{0}^{1} (x-z_{m}) \centerdot \nabla u_{0}(z_{m}+t(x-z_{m})) dt \Bigg\vert^{2} dx \Bigg]^{\frac{1}{2}} = \mathcal{O}(a).$$ Hence $$S_{7} = \mathcal{O}(a).$$
- $ S_{8} := a \, \tau \, \int_{D_{m}} v_{m}$ $$\vert S_{8} \vert \leq \vert \tau \vert \, a \, \Vert 1 \Vert_{\mathbb{L}^{2}(D_{m})} \, \Vert v_{m} \Vert_{\mathbb{L}^{2}(D_{m})} = \mathcal{O}(a \, \vert \log(a) \vert^{h-1}).$$
Finally, the error part is $$Error = S_{1}+\cdots+S_{8} = \mathcal{O}(a \; d^{-1} \; \vert \log(a) \vert^{h-1} \; M ).$$
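As a quick numeric sanity check of this bookkeeping, one can verify that, for hypothetical sample values of $a$, $h$, $M$ (none of which come from the text, and with all constants set to one) and $d \backsim a^{\vert \log(a) \vert^{-h}}$, every one of the eight term scales $S_{1},\cdots,S_{8}$ stays within the announced total order $\mathcal{O}(a \, d^{-1} \, \vert \log(a) \vert^{h-1} \, M)$:

```python
import math

# Hypothetical sample values (not from the text): a, h, M, and d ~ a^{|log a|^{-h}}
a, h, M = 1e-4, 0.3, 2
L = abs(math.log(a))
d = a ** (L ** -h)

S = [
    a * L**h * math.sqrt(M),              # S1
    a**3 * L**(1 + h) * math.sqrt(M),     # S2
    L**(h - 1) * a * M / d,               # S3 (and S4, same order)
    a**2 * L**(h - 1) * M / d**2,         # S5
    a**3 * L**h * M,                      # S6
    a,                                    # S7
    a * L**(h - 1),                       # S8
]
error = a / d * L**(h - 1) * M            # announced total order of the error

assert all(s <= 2 * error for s in S)     # every term is within the stated order
```

Here the dominant contribution comes from the $S_{3}$/$S_{4}$ scale, which is exactly the announced $a \, d^{-1} \, \vert \log(a) \vert^{h-1} \, M$.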
\[propositionas\] The vector $\big( Q_{m} \big)_{m=1}^{M}$ satisfies the following algebraic system $$\label{as}
Q_{m} - \sum_{j \neq m} G_{k}(z_{m};z_{j}) \; \textbf{C}_{j}^{\star} \; Q_{j} = u_{0}(z_{m}) + \mathcal{O}(a \; d^{-1} \; \vert \log(a) \vert^{h-1} \; M ).$$
The algebraic system $(\ref{as})$ can be written, in matrix form, as $$\label{per}
(I-B) \, Q = V + Err$$ with $B=\Big(B_{mj}\Big)_{m,j=1}^{M}$ such that $B_{mj} := G_{k}(z_{m};z_{j}) \, \textbf{C}_{j}^{\star}$ and $V := (u_{0}(z_{1}), \cdots, u_{0}(z_{M}))^{\top}$.\
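To illustrate the structure of $(\ref{per})$, the following sketch assembles the matrix $B$ and solves $(I-B)Q = V$ by the fixed-point (Neumann-series) iteration $Q \leftarrow V + BQ$, which converges precisely when $\Vert B \Vert < 1$, the condition studied below. The kernel, the value of $\textbf{C}_{j}^{\star}$, the centers $z_{j}$ and the incident samples are all hypothetical stand-ins, not values taken from the text:

```python
import cmath
import math

# Hypothetical configuration (stand-ins, not from the text): M = 3 centers
z = [(0.0, 0.0), (0.5, 0.1), (0.2, 0.6)]
Cstar = 0.05 + 0.01j                       # plays the role of C*_j (equal for all j)

def G(p, q):
    # logarithmic kernel stand-in for G_k
    r = math.hypot(p[0] - q[0], p[1] - q[1])
    return -cmath.log(complex(r)) / (2 * math.pi)

M = len(z)
B = [[G(z[m], z[j]) * Cstar if j != m else 0.0 for j in range(M)] for m in range(M)]
V = [cmath.exp(1j * zm[0]) for zm in z]    # samples u_0(z_m) of a plane wave

# row-sum norm ||B||_inf < 1 guarantees invertibility of I - B
assert max(sum(abs(b) for b in row) for row in B) < 1.0

Q = V[:]                                   # fixed-point iteration Q <- V + B Q
for _ in range(200):
    Q = [V[m] + sum(B[m][j] * Q[j] for j in range(M)) for m in range(M)]

residual = max(abs(Q[m] - sum(B[m][j] * Q[j] for j in range(M)) - V[m]) for m in range(M))
assert residual < 1e-12
```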
In the next lemma, we give conditions under which the linear system $(\ref{per})$ is invertible.
\[d>exp\] The algebraic system $(\ref{per})$ is invertible if $$\label{Condinv}
d > \exp\Big(- \vert \log(a) \vert^{1-h} \Big),$$ where $d$ is the minimal distance between the particles.
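The threshold in $(\ref{Condinv})$ can equivalently be read as a fractional power of $a$: since $a = e^{-\vert \log(a) \vert}$ for $a < 1$, one has $\exp(-\vert \log(a) \vert^{1-h}) = a^{\vert \log(a) \vert^{-h}}$. A quick numeric check of this identity, with hypothetical sample values of $a$ and $h$:

```python
import math

# Hypothetical sample values of the radius a < 1 and the parameter h in (0, 1)
for a, h in [(1e-2, 0.25), (1e-4, 0.3), (1e-6, 0.7)]:
    L = abs(math.log(a))
    lhs = math.exp(-L ** (1 - h))          # exp(-|log a|^{1-h})
    rhs = a ** (L ** -h)                   # a^{|log a|^{-h}}
    assert abs(lhs - rhs) / lhs < 1e-12
```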
of $\textbf{Lemma} (\ref{d>exp})$. Let us evaluate the norm of $B$. For this we have: $$\begin{aligned}
\Vert B \Vert & = & \max_{m} \, \sum_{j \neq m} \vert B_{mj} \vert \stackrel{\text{def}} = \max_{m} \sum_{j \neq m} \Bigg\vert G_{k}(z_{m};z_{j}) \Bigg[\textbf{C}^{-1}_{j} - \Big( \frac{-1}{2\pi} \log(k)(z_{j}) + \Gamma \Big)\Bigg]^{-1} \Bigg\vert \\
& \leq & \vert \log(a) \vert^{h-1} \sum_{j \neq m} \log\Bigg(\frac{1}{d_{mj}}\Bigg). \end{aligned}$$ We need the following lemma.
\[P\] We have $$\label{lmjdmj}
\sum_{j \neq m} \log(1/d_{mj}) = \log(1/d).$$
of **Lemma \[P\]**\
We set $ \log(1/d_{mj}) = 1/l_{mj}$ and $l = \displaystyle\min_{j \neq m} l_{mj}$. Then $$\label{lmj}
\sum_{j \neq m} \log\Bigg(\frac{1}{d_{mj}}\Bigg) = \sum_{j \neq m} \frac{1}{l_{mj}} \stackrel{(\star)}= \frac{1}{l}.$$ At first we assume that $(\star)$ is checked. Then we have $$\label{dmj}
l = \min_{j \neq m} \frac{1}{\log(1/d_{mj})} = \frac{1}{\log(\displaystyle\max_{j \neq m} (1/d_{mj}))} = \frac{1}{\log(1/(\displaystyle\min_{j \neq m} d_{mj}))} = \frac{1}{\log(1/d)}.$$ Then $(\ref{lmj})$ combined with $(\ref{dmj})$ give a justification of $(\ref{lmjdmj})$.\
Now, in order to prove $(\star)$, we adapt to the two-dimensional case the proof, done for the three-dimensional case, given in ([@ASCD], page 13). We get $$\sum_{i=1 \atop i \neq j}^{M} \frac{1}{l^{k}_{ij}} = \left\{
\begin{array}{lll}
\mathcal{O}(l^{-k}) + \mathcal{O}(l^{-2\alpha}) & if & k < 2\\
\mathcal{O}(l^{-2}) + \mathcal{O}(l^{-2\alpha} \vert \log(l) \vert) & if & k=2 \\
\mathcal{O}(l^{-k}) + \mathcal{O}(l^{-\alpha k}) & if & k > 2.
\end{array}
\right.$$
Based on Lemma $\ref{P}$, the condition $\Vert B \Vert < 1$ is fulfilled if $$\label{cdtsurd}
\log(1/d) < \vert \log(a) \vert^{1-h} \quad \text{or} \quad d > \exp\Big(-\vert \log(a) \vert^{1-h}\Big).$$
Inversion of the derived Foldy-Lax algebraic system (\[as\])
------------------------------------------------------------
Here, we deal with the case of two particles, i.e. $M = 2$. In equation $(\ref{as})$ we use the condition $d \backsim a^{\vert \log(a) \vert^{-h}}$; then we get $$\left\{
\begin{array}{r c l}
Q_{1} - G_{k}(z_{1};z_{2}) \, \textbf{C}_{2}^{\star} Q_{2} &=& u_{0}(z_{1}) + \mathcal{O}(a^{1-\vert \log(a) \vert^{-h}} \, \, \vert \log(a) \vert^{h-1}),\\
&&\\
Q_{2} - G_{k}(z_{2};z_{1}) \, \textbf{C}_{1}^{\star} Q_{1} &=& u_{0}(z_{2}) + \mathcal{O}(a^{1-\vert \log(a) \vert^{-h}} \, \vert \log(a) \vert^{h-1}).
\end{array}
\right.$$ We check that the condition $d \backsim a^{\vert \log(a) \vert^{-h}}$ is sufficient for the invertibility of the last system. For this, we have from $(\ref{cdtsurd})$ $$d > \exp\Big(-\vert \log(a) \vert^{1-h}\Big) = \Big(e^{-\vert \log(a) \vert} \Big)^{\vert \log(a) \vert^{-h}} = a^{\vert \log(a) \vert^{-h}}.$$ Now, we assume that[^7] $\textbf{C}_{1} = \textbf{C}_{2} = \textbf{C}$ and use the expansion of $G_{k}(z_{m};z_{j})$, see $(\ref{Gkexpansion})$, to obtain $$\left\{
\begin{array}{r c l}
Q_{1} - \Big[\Phi_{0}(z_{1};z_{2}) - \frac{1}{2\pi}\log(k)(z_{1})+\Gamma \Big] \, \textbf{C}_{2}^{\star} Q_{2} &=& u_{0}(z_{1}) + \mathcal{O}\bigg(\displaystyle\frac{a^{1-\vert \log(a) \vert^{-h}}}{ \, \vert \log(a) \vert^{1-h}}\bigg) + \mathcal{O}\bigg(d \, \vert \log(d) \vert \, \textbf{C}_{2}^{\star} Q_{2} \bigg), \\
&&\\
Q_{2} - \Big[\Phi_{0}(z_{2};z_{1}) - \frac{1}{2\pi}\log(k)(z_{2})+\Gamma \Big] \, \textbf{C}_{1}^{\star} Q_{1} &=& u_{0}(z_{2}) + \mathcal{O}\bigg(\displaystyle\frac{a^{1-\vert \log(a) \vert^{-h}}}{ \, \vert \log(a) \vert^{1-h}}\bigg) + \mathcal{O}\bigg(d \, \vert \log(d) \vert \, \textbf{C}_{1}^{\star} Q_{1} \bigg).
\end{array}
\right.$$ We can estimate $$\label{Q}
d \, \vert \log(d) \vert \, \textbf{C}_{i}^{\star} Q_{i} = \mathcal{O}(a^{\vert \log(a) \vert^{-h}}), \quad \text{for} \quad i=1,2,$$ because, by the definition of $Q_{i}$, see $(\ref{R})$, we have $$d \, \vert \log(d) \vert \, \textbf{C}_{i}^{\star} Q_{i} = d \, \vert \log(d) \vert \, \textbf{C}_{i}^{\star} \, \omega^{2} \, \mu_{0} \, \tau \, \big(\textbf{C}_{i}^{\star}\big)^{-1} \int_{D_{i}} v \, dx \lesssim d \, \vert \log(d) \vert \, \vert \tau \vert \, \Vert 1 \Vert \, \Vert u \Vert.$$ The values of $d$ and $\vert \tau \vert$ are known, and we have an a priori estimate of $\Vert u \Vert$ given by $(\ref{prioriest})$; then $$\mathcal{O}(d \, \log(d)) \; \textbf{C}_{i}^{\star} \; Q_{i} \lesssim a^{\vert \log(a) \vert^{-h}} \, \vert \log(a) \vert^{1-h} \, a^{-2} \, \vert \log(a) \vert^{-1} \, a \, \vert \log(a) \vert^{h} \, a = a^{\vert \log(a) \vert^{-h}}.$$ This proves $(\ref{Q})$.\
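The chain of estimates proving $(\ref{Q})$ is pure power counting: in the product $a^{\vert \log(a) \vert^{-h}} \cdot \vert \log(a) \vert^{1-h} \cdot a^{-2}\vert \log(a) \vert^{-1} \cdot a \, \vert \log(a) \vert^{h} \cdot a$ the powers of $a$ and of $\vert \log(a) \vert$ cancel exactly, leaving $a^{\vert \log(a) \vert^{-h}}$. A numeric check with hypothetical sample values:

```python
import math

a, h = 1e-5, 0.4                           # hypothetical sample values
L = abs(math.log(a))
# d|log d| * |tau| * ||1|| * ||u||, with d ~ a^{L^{-h}} (so |log d| = L^{1-h}),
# |tau| ~ a^{-2} L^{-1}, ||1|| ~ a, ||u|| ~ a L^{h}, up to constants
prod = a ** (L ** -h) * L ** (1 - h) * a ** -2 * L ** -1 * a * L ** h * a
ref = a ** (L ** -h)
assert abs(prod - ref) / ref < 1e-9
```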
With these estimations the last system can be written as $$\label{ccequastar}
\left\{\begin{array}{lll}
Q_{1} - \Big[\Phi_{0}(z_{1};z_{2}) - \frac{1}{2\pi}\log(k)(z_{1})+\Gamma \Big] \, \textbf{C}_{2}^{\star} Q_{2} &=& u_{0}(z_{1}) + \mathcal{O}(d), \\
&&\\
Q_{2} - \Big[\Phi_{0}(z_{2};z_{1}) - \frac{1}{2\pi}\log(k)(z_{2})+\Gamma \Big] \, \textbf{C}_{1}^{\star} Q_{1} &=& u_{0}(z_{2}) + \mathcal{O}(d).
\end{array}\right.$$ We need the following lemma to simplify the last system.
Since $k$ is $\mathcal{C}^{1}$-smooth and $z_{1}$ is close to $z_{2}$ at a distance $d$, we obtain $$\log(k)(z_{2}) = \log(k)(z_{1}) + \mathcal{O}\big(d \big)
\quad \text{and} \quad
\textbf{C}_{2}^{\star} = \textbf{C}_{1}^{\star} + \mathcal{O}\big(d \; \textbf{C}^{\,2}\big).$$
We use the Taylor expansion of the function $k$ to get the first equality. Now that the first one is proved, we use the definition of $\textbf{C}_{1,2}^{\star}$ and the fact that $\textbf{C}_{1} = \textbf{C}_{2}$ to obtain the second equality.
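The second equality of the lemma can be checked numerically: in the definition $\textbf{C}^{\star} = \textbf{C}\big[1 - (\cdot)\,\textbf{C}\big]^{-1}$, an $\mathcal{O}(d)$ perturbation of the bracketed quantity shifts $\textbf{C}^{\star}$ by $\mathcal{O}(d \, \textbf{C}^{2})$ at leading order. A sketch with hypothetical sample values (here $\beta$ stands in for $-\frac{1}{2\pi}\log(k) + \Gamma$, taken real for simplicity):

```python
C, beta, d = 0.1, 0.5, 1e-3                # hypothetical: C, the factor beta, and d
C1s = C / (1 - beta * C)                   # C*_1 = C [1 - beta C]^{-1}
C2s = C / (1 - (beta + d) * C)             # beta shifted by O(d) (Taylor step on log k)
leading = d * C ** 2 / (1 - beta * C) ** 2 # predicted first-order shift, of size d C^2
assert abs((C2s - C1s) - leading) < 10 * d ** 2 * C ** 3
```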
We use the last lemma to write the system $(\ref{ccequastar})$ as $$\left\{\begin{array}{lll}
Q_{1} - \Big[\Phi_{0}(z_{1};z_{2}) - \frac{1}{2\pi}\log(k)(z_{1})+\Gamma \Big] \, \textbf{C}_{1}^{\star} Q_{2} &=& u_{0}(z_{1}) + \mathcal{O}(d), \\
&&\\
Q_{2} - \Big[\Phi_{0}(z_{2};z_{1}) - \frac{1}{2\pi}\log(k)(z_{1})+\Gamma \Big] \, \textbf{C}_{1}^{\star} Q_{1} &=& u_{0}(z_{2}) + \mathcal{O}(d).
\end{array}\right.$$
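The passage from this symmetric $2 \times 2$ system to $(\ref{usedpressure})$ is elementary linear algebra: writing $b$ for the common coefficient $\big[\Phi_{0}(z_{1};z_{2}) - \frac{1}{2\pi}\log(k)(z_{1}) + \Gamma\big]\textbf{C}_{1}^{\star}$ and taking equal right-hand sides (which holds up to $\mathcal{O}(d)$), Cramer's rule gives $Q_{1} = Q_{2} = u_{0}/(1-b)$. A numeric check with hypothetical sample values:

```python
b = 0.3 + 0.1j                             # hypothetical coupling coefficient
u = 1.0 + 0.5j                             # common right-hand side u_0(z_1) = u_0(z_2)
det = 1 - b * b                            # determinant of [[1, -b], [-b, 1]]
Q1 = (u + b * u) / det                     # Cramer's rule
Q2 = (b * u + u) / det
assert abs(Q1 - u / (1 - b)) < 1e-12       # (1+b)/(1-b^2) = 1/(1-b)
assert abs(Q1 - Q2) < 1e-12
```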
To simplify notations, we write $\Phi_{0}$ (respectively $\frac{-1}{2\pi}\log(k)$, $\textbf{C}^{\star}$) instead of $\Phi_{0}(z_{1};z_{2})$ (respectively $\frac{-1}{2 \pi} \, \log(k(z_{1}))$, $\textbf{C}_{1}^{\star}$).
After solving this algebraic system, we obtain $$\label{usedpressure}
\left\{\begin{array}{lll}
Q_{1} &=& \displaystyle\frac{u_{0}(z_{1})}{1-\big[\Phi_{0} + \frac{-1}{2\pi}\log(k)+\Gamma \big] \, \textbf{C}^{\star}} + \mathcal{O}(d), \\
&&\\
Q_{2} &=& \displaystyle\frac{u_{0}(z_{2})}{1-\big[\Phi_{0} + \frac{-1}{2\pi}\log(k)+\Gamma \big] \, \textbf{C}^{\star}} + \mathcal{O}(d).
\end{array}\right.$$ We use the definition of $Q_{1,2}$, see $(\ref{R})$, to get $$\label{equa53}
\int_{D_{1}} v \; dx = \displaystyle\frac{u_{0}(z_{1})}{\omega^{2} \, \mu_{0} \, \tau \big[(\textbf{C}^{\star})^{-1} - (\Phi_{0} + (\frac{-1}{2\pi}\log(k)+\Gamma))\big] \, } + \mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h}).$$ Then $$\begin{aligned}
\int_{D_{1}} v \; dx &=& \displaystyle\frac{u_{0}(z_{2})}{\omega^{2} \, \mu_{0} \, \tau \big[(\textbf{C}^{\star})^{-1} - (\Phi_{0} + (\frac{-1}{2\pi}\log(k)+\Gamma))\big] \, } \\ &+&
\displaystyle\frac{u_{0}(z_{1})-u_{0}(z_{2})}{\omega^{2} \, \mu_{0} \, \tau \big[(\textbf{C}^{\star})^{-1} - (\Phi_{0} + (\frac{-1}{2\pi}\log(k)+\Gamma))\big] \, } +
\mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h}), \end{aligned}$$ we estimate the term $$\frac{u_{0}(z_{1})-u_{0}(z_{2})}{\omega^{2} \, \mu_{0} \, \tau \big[(\textbf{C}^{\star})^{-1} - (\Phi_{0} + (\frac{-1}{2\pi}\log(k)+\Gamma))\big] \, }$$ as $\mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h})$, and use this to obtain $$\begin{aligned}
\int_{D_{1}} v \; dx &=& \displaystyle\frac{u_{0}(z_{2})}{\omega^{2} \, \mu_{0} \, \tau \big[(\textbf{C}^{\star})^{-1} - (\Phi_{0} + (\frac{-1}{2\pi}\log(k)+\Gamma))\big] \, } +
\mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h}) \\
&=& \int_{D_{2}} v \, dx + \mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h}),\end{aligned}$$ and finally $$\label{intv1intv2}
\int_{D_{1}} v \, dx = \int_{D_{2}} v \, dx + \mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h}).$$ By adding the two equations of system $(\ref{ccequastar})$, we get $$\label{equastar}
\textbf{C}^{-1} - \Big( \Phi_{0}+2(-\log(k) / 2 \pi +\Gamma)\Big) = \frac{u_{0}(z_{1}) + u_{0}(z_{2})}{\omega^{2} \; \mu_{0} \; \tau \; \Bigg[ \displaystyle\int_{D_{1}} v \, dy + \int_{D_{2}} v \, dy \Bigg]} + \frac{\mathcal{O}\big(d \, \tau^{-1}\big)}{ \displaystyle \int_{D_{1}} v \, dy + \int_{D_{2}} v \, dy}.$$ We use equation $(\ref{intv1intv2})$ to rewrite the denominator as $$\begin{aligned}
\omega^{2} \; \mu_{0} \; \tau \; \Bigg[ \int_{D_{1}} v \, dy + \int_{D_{2}} v \, dy \Bigg] & = & \omega^{2} \; \mu_{0} \; \tau \; \Bigg[2 \int_{D_{2}} v \, dy + \mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h}) \Bigg] \\
& = & 2\, \omega^{2} \; \mu_{0} \; \tau \; \int_{D_{2}} v \, dy \, \Bigg[1 + \frac{\mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h})}{\int_{D_{2}} v \, dy} \Bigg] \\
& = & 2\, \omega^{2} \; \mu_{0} \; \tau \; \int_{D_{2}} v \, dy \, \Big[1 + \mathcal{O}(d) \Big],\end{aligned}$$ then equation $(\ref{equastar})$ takes the form $$\textbf{C}^{-1} - \Big( \Phi_{0}+2(- \log(k) / 2 \pi +\Gamma)\Big) = \frac{u_{0}(z_{1}) + u_{0}(z_{2})}{2\, \omega^{2} \; \mu_{0} \; \tau \; \displaystyle\int_{D_{2}} v \, dy } \Big[1 + \mathcal{O}(d) \Big] + \frac{\mathcal{O}\big(d \, \tau^{-1} \big)}{ \; \displaystyle \int_{D_{2}} v \, dy \, \Big[1 + \mathcal{O}(d) \Big]},$$ We manage the errors $$\textbf{C}^{-1} - \Big( \Phi_{0}+2(- \log(k) / 2 \pi +\Gamma)\Big) = \frac{u_{0}(z_{1}) + u_{0}(z_{2})}{2\, \omega^{2} \; \mu_{0} \; \tau \; \displaystyle\int_{D_{2}} v \, dy } + \mathcal{O}(d \, \vert \log(a) \vert^{1-h})$$ $$\begin{aligned}
\phantom{Invisible text} \qquad \qquad &=& \frac{2\,u_{0}(z_{2})}{2\, \omega^{2} \; \mu_{0} \; \tau \; \displaystyle\int_{D_{2}} v \, dy } + \frac{\int_{0}^{1} (z_{1} -z_{2}) \centerdot \nabla u_{0}(z_{2}+t(z_{1}-z_{2})) dt}{2\, \omega^{2} \; \mu_{0} \; \tau \; \displaystyle\int_{D_{2}} v \, dy } +\mathcal{O}(d \, \vert \log(a) \vert^{1-h}) \\
&=& \frac{u_{0}(z_{2})}{\omega^{2} \; \mu_{0} \; \tau \; \displaystyle\int_{D_{2}} v \, dy } +\mathcal{O}(d \, \vert \log(a) \vert^{1-h}), \end{aligned}$$ and take the modulus, we derive the identity: $$\label{equa55}
\Bigg\vert \textbf{C}^{-1} - \Big( \Phi_{0}+2(- \log(k) / 2\pi +\Gamma)\Big) \Bigg\vert^{2} = \frac{\vert u_{0}(z_{2}) \vert^{2}}{\vert \omega^{2} \; \mu_{0} \; \tau \vert^{2} \; \Big\vert \displaystyle\int_{D_{2}} v \, dy \Big\vert^{2}} +\mathcal{O}(d \, \vert \log(a) \vert^{2(1-h)}).$$ Unfortunately, from the acoustic inversion, we get only data of the form $\int_{D_{1,2}} \vert v \vert^{2} dx$ and in the last equation we deal with $\vert \int_{D_{1,2}} v \, dx \vert^{2}$. The next lemma makes a link between these two quantities.
We have $$\label{equa56}
\Big\vert \int_{D_{i}} v \; dy \Big\vert^{2} = a^{2} \, \Big(\int_{B} \overline{e}_{n_{0}} \; dy \Big)^{2} \; \int_{D_{i}} \vert v \vert^{2} \; dy + \mathcal{O}(a^{4} \, \vert \log(a) \vert^{h}), \quad i=1,2.$$
We split the proof into two steps.\
Step 1: Estimation of $\vert \int_{D_{1}} v \, dy \vert^{2}$.\
We use the same techniques as in the proof of the a priori estimate, i.e. Proposition $\ref{abc}$. We have $$\begin{aligned}
\int_{D_{1}} v \, dy &=& < v ; e^{(1)}_{n_{0}} > \; \int_{D} e_{n_{0}} \, dx + a^{2} \; \sum_{n \neq n_{0} } < \tilde{v} ; \overline{e}_{n} > \; \int_{B} \overline{e}_{n} \, d\eta \\
& \stackrel{(\ref{W})}= & < v ; e^{(1)}_{n_{0}} > \int_{D} e_{n_{0}} dx + \mathcal{O}(a^{2}) \sum_{n \neq n_{0}} \Bigg[\frac{< \tilde{u_{0}} ; \overline{e}_{n} >}{(1-\omega^{2} \mu_{0} \tau \lambda_{n})} + \mathcal{O}(\vert \log(a) \vert^{-h}) <1,\overline{e}_{n}> \Bigg] \int_{B} \overline{e}_{n} d\eta. \end{aligned}$$ When the frequency used is not close to the resonance, the following estimate holds $$\sum_{n \neq n_{0}} \Bigg[\frac{< \tilde{u_{0}} ; \overline{e}_{n} >}{(1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n})} + \mathcal{O}(\vert \log(a) \vert^{-h}) <1,\overline{e}_{n}> \Bigg] \; \int_{B} \overline{e}_{n} \, d\eta \sim \mathcal{O}(1),$$ and we plug this into the previous equation to obtain $$\begin{aligned}
\int_{D_{1}} v \, dy & \stackrel{(\ref{equa825})}= & a^{2} \Bigg[ \frac{ < \tilde{u_{0}}; \overline{e}^{(1)}_{n_{0}} > }{(1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}})-\omega^{2} \, \mu_{0} \,\tau \, a^{2} \, \Phi_{0} \Big(\int_{B} \overline{e}_{n_{0}}\Big)^{2}} + \mathcal{O}(1) \Bigg] \; \int_{B} \overline{e}_{n_{0}} \, d\eta + \mathcal{O}(a^{2}). \end{aligned}$$ Then $$\label{equa57}
\Big\vert \int_{D_{1}} v \, dy \Big\vert^{2} = a^{4} \frac{ \vert < \tilde{u_{0}}; \overline{e}^{(1)}_{n_{0}} > \vert^{2}}{\Bigg\vert (1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}})-\omega^{2} \, \mu_{0} \,\tau \, a^{2} \, \Phi_{0} \Big(\int_{B} \overline{e}_{n_{0}}\Big)^{2} \Bigg\vert^{2}} \; \Big( \int_{B} \overline{e}_{n_{0}} \, d\eta \Big)^{2} + \mathcal{O}(a^{4} \, \vert \log(a) \vert^{h}).$$ Step 2: Estimation of $ \int_{D_{1}} \vert v \vert^{2} \, dy $.\
We have $$\begin{aligned}
\int_{D_{1}} \vert v \vert^{2} \, dx &=& \sum_{n} \vert <v,e^{(1)}_{n}> \vert^{2} = a^{2} \, \Big(\vert <\tilde{v_{1}},\overline{e}_{n_{0}}> \vert^{2} + \sum_{n \neq n_{0}} \vert <\tilde{v_{1}},\overline{e}_{n}> \vert^{2} \Big) \\
& = & a^{2} \, \vert <\tilde{v_{1}},\overline{e}_{n_{0}}> \vert^{2} + \mathcal{O}(a^{2}) \\
& \stackrel{(\ref{equa825})}= & a^{2} \Bigg[\frac{ \vert < \tilde{u_{0}}; \overline{e}^{(1)}_{n_{0}} > \vert^{2}}{\Big\vert (1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}})-\omega^{2} \, \mu_{0} \,\tau \, a^{2} \, \Phi_{0}(z_{1},z_{2}) \Big(\int_{B} \overline{e}_{n_{0}}\Big)^{2} \Big\vert^{2}} + \mathcal{O}(\vert \log(a) \vert^{h}) \Bigg] + \mathcal{O}(a^{2}). \end{aligned}$$ Then $$\label{equa58}
\int_{D_{1}} \vert v \vert^{2} \, dx = a^{2} \frac{ \vert < \tilde{u_{0}}; \overline{e}^{(1)}_{n_{0}} > \vert^{2}}{\Bigg\vert (1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}})-\omega^{2} \, \mu_{0} \,\tau \, a^{2} \, \Phi_{0}(z_{1},z_{2}) \Big(\int_{B} \overline{e}_{n_{0}}\Big)^{2} \Bigg\vert^{2}} + \mathcal{O}(a^{2} \, \vert \log(a) \vert^{h}).$$ Combining $(\ref{equa57})$ and $(\ref{equa58})$, we obtain $$\begin{aligned}
\Big\vert \int_{D_{1}} v \, dy \Big\vert^{2} &=& a^{4} \frac{ \vert < \tilde{u_{0}}; \overline{e}^{(1)}_{n_{0}} > \vert^{2}}{\Bigg\vert (1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}})-\omega^{2} \, \mu_{0} \,\tau \, a^{2} \, \Phi_{0}(z_{1},z_{2}) \Big(\int_{B} \overline{e}_{n_{0}}\Big)^{2} \Bigg\vert^{2}} \; \Big( \int_{B} \overline{e}_{n_{0}} \, d\eta \Big)^{2} + \mathcal{O}(a^{4} \, \vert \log(a) \vert^{h}) \\
&=& a^{2} \Bigg[ \int_{D_{1}} \vert v \vert^{2} \, dx + \mathcal{O}(a^{2} \, \vert \log(a) \vert^{h}) \Bigg] \; \Big( \int_{B} \overline{e}_{n_{0}} \, d\eta \Big)^{2} + \mathcal{O}(a^{4} \, \vert \log(a) \vert^{h}) \\
&=& a^{2} \, \int_{D_{1}} \vert v \vert^{2} \, dx \; \Big( \int_{B} \overline{e}_{n_{0}} \, d\eta \Big)^{2} + \mathcal{O}(a^{4} \, \vert \log(a) \vert^{h}),\end{aligned}$$ which proves the formula $(\ref{equa56})$.
We continue with equation $(\ref{equa55})$, then $$\Bigg\vert \textbf{C}^{-1} - \Big( \Phi_{0}+2(- \log(k) / 2\pi +\Gamma)\Big) \Bigg\vert^{2} = \frac{\vert u_{0}(z_{2}) \vert^{2}}{\vert \omega^{2} \; \mu_{0} \; \tau \vert^{2} \; \Big\vert \displaystyle\int_{D_{2}} v \, dy \Big\vert^{2}} +\mathcal{O}(d \, \vert \log(a) \vert^{2(1-h)})$$ $$\begin{aligned}
\phantom{Invisible text} \quad \quad &\stackrel{(\ref{equa56})}=& \frac{\vert u_{0}(z_{2}) \vert^{2}}{\vert \omega^{2} \; \mu_{0} \; \tau \vert^{2} \; \Big[ a^{2} \; \Big(\displaystyle \int_{B} \overline{e}_{n_{0}} dy \Big)^{2} \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dy + \mathcal{O}(a^{4} \vert \log(a) \vert^{h} ) \Big]} +\mathcal{O}(d \, \vert \log(a) \vert^{2(1-h)}) \\
&=& \frac{\vert u_{0}(z_{2}) \vert^{2}}{\vert \omega^{2} \; \mu_{0} \; \tau \vert^{2} \; a^{2} \; \Big(\displaystyle \int_{B} \overline{e}_{n_{0}} dy \Big)^{2} \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dy \Big[1 + \mathcal{O}(\vert \log(a) \vert^{-h} ) \Big]} +\mathcal{O}(d \, \vert \log(a) \vert^{2(1-h)}) \\
&=& \frac{\vert u_{0}(z_{2}) \vert^{2}}{\vert \omega^{2} \; \mu_{0} \; \tau \vert^{2} \; a^{2} \; \Big(\displaystyle \int_{B} \overline{e}_{n_{0}} dy \Big)^{2} \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dy } + \mathcal{O}(\vert \log(a) \vert^{2-3h}). \end{aligned}$$ In the following proposition, we write an estimation of $\vert u_{0}(z_{2}) \vert$ in the case of one particle inside the domain.
We have $$\label{vVonelemma}
\vert u_{0}(z_{2}) \vert^{2} = \frac{\Big\vert 1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} - \omega^{2} \mu_{0} \tau \, \Big( \frac{-1}{2\pi} \log(k)+\Gamma \Big) \,\Big( \int_{D} e_{n_{0}} \Big)^{2} \Big\vert^{2}}{ \big(\int_{D} e_{n_{0}} \big)^{2}} \; \int_{D} \vert u_{1} \vert^{2} \, dx + \mathcal{O}\Big(\vert \log(a) \vert^{\max(-2h,-1)}\Big).$$
To fix notation, recall the L.S.E. for one particle $$u_{1}(x) - \omega^{2} \, \mu_{0} \, \int_{D} G_{k}(x,y) \, (\varepsilon_{p}-\varepsilon_{0})(y) \, u_{1}(y) \, dy = u_{0}(x), \qquad x \in D.$$ With this notation, equation $(\ref{Z})$ takes the following form $$\label{equa511}
<u_{1};e_{n_{0}}> = \frac{ <u_{0};e_{n_{0}}>}{\Bigg[1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} - \omega^{2} \mu_{0} \tau \, \Big( \frac{-1}{2\pi} \log(k)+\Gamma \Big) \,\Big( \int_{D} e_{n_{0}} \Big)^{2}\Bigg]} + \mathcal{O}(a\,\vert \log(a) \vert^{h-1}).$$ Next, $$\begin{aligned}
\label{J}
\nonumber
\int_{D} \vert u_{1} \vert^{2} \, dx & = & \vert <u_{1} ;e_{n_{0}}> \vert^{2} + a^{2} \, \sum_{n \neq n_{0} } \, \vert <\tilde{u}_{1} ; \overline{e}_{n}> \vert^{2} \\
& \stackrel{(\ref{equa511})}= & \frac{\vert <u_{0};e_{n_{0}}> \vert^{2}}{\Big\vert 1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} - \omega^{2} \mu_{0} \tau \, \Big( \frac{-1}{2\pi} \log(k)+\Gamma \Big) \,\Big( \int_{D} e_{n_{0}} \Big)^{2} \Big\vert^{2}} \, + \mathcal{O}\Big(a^{2} \, \vert \log(a) \vert^{\max(0,2h-1)}\Big). \end{aligned}$$ We develop $u_{0}$ near the point $z$ to obtain $$\begin{aligned}
\int_{D} \vert u_{1} \vert^{2} \, dx & = & \frac{\Big[ \vert u_{0}(z_{2}) \vert^{2} \, \Big( \int_{D} e_{n_{0}} dx \Big)^{2} + \mathcal{O}(a^{3}) \Big]}{\Big\vert 1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} - \omega^{2} \mu_{0} \tau \, \Big( \frac{-1}{2\pi} \log(k)+\Gamma \Big) \,\Big( \int_{D} e_{n_{0}} \Big)^{2} \Big\vert^{2}} \, +\mathcal{O}\Big(a^{2} \, \vert \log(a) \vert^{\max(0,2h-1)}\Big) \\
& = & \frac{\vert u_{0}(z_{2}) \vert^{2} \, \Big( \int_{D} e_{n_{0}} dx \Big)^{2}}{\Big\vert 1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} - \omega^{2} \mu_{0} \tau \, \Big( \frac{-1}{2\pi} \log(k)+\Gamma \Big) \,\Big( \int_{D} e_{n_{0}} \Big)^{2} \Big\vert^{2}} \, + \mathcal{O}\Big(a^{2} \, \vert \log(a) \vert^{\max(0,2h-1)}\Big).\end{aligned}$$ This proves $(\ref{vVonelemma})$.
In $(\ref{vVonelemma})$, we use the following notation $$\Psi := \Bigg\vert 1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} - \omega^{2} \mu_{0} \tau \, \Big( \frac{-1}{2\pi} \log(k)+\Gamma \Big) \,\Big( \int_{D} e_{n_{0}} \Big)^{2} \Bigg\vert^{2}.$$ With this, we get $$\begin{aligned}
\left\vert \textbf{C}^{-1} - \bigg( \Phi_{0}+2 \bigg( \frac{-1}{2\pi} \log(k) +\Gamma \bigg)\bigg) \right\vert^{2} &=& \frac{\vert u_{0}(z_{2}) \vert^{2}}{\vert \omega^{2} \; \mu_{0} \; \tau \vert^{2} \; \Big(\displaystyle \int_{D} e_{n_{0}} dy \Big)^{2} \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dy } + \mathcal{O}(\vert \log(a) \vert^{2-3h}) \\
&\stackrel{(\ref{vVonelemma})}=& \frac{\Psi \; \displaystyle \int_{D} \vert u_{1} \vert^{2} \, dx}{\vert \omega^{2} \; \mu_{0} \; \tau \vert^{2} \; \Big(\displaystyle \int_{D} e_{n_{0}} dy \Big)^{4} \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dy } + \mathcal{O}(\vert \log(a) \vert^{2-3h}).\end{aligned}$$ We set $$\label{E}
B := \frac{\displaystyle \int_{D} \vert u_{1} \vert^{2} \, dx}{\vert \omega^{2} \; \mu_{0} \; \tau \vert^{2} \; \Big(\displaystyle \int_{D} e_{n_{0}} dy \Big)^{4} \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dy }.$$ Referring to $(\ref{phigamma})$, we set $\Gamma := \gamma + i/4 $. We develop the left side of the last equation as $$\begin{aligned}
\left\vert \textbf{C}^{-1} - \Bigg[ \Phi_{0}+2 \Big( \frac{-1}{2 \pi} \log(k) +\Gamma\Big)\Bigg] \right\vert^{2} &=& \Big( \textbf{C}^{-1} - \Phi_{0}\Big)^{2} - 4 \, \Big( \textbf{C}^{-1} - \Phi_{0}\Big) \Bigg(\frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg) \\
& + & 4 \, \Bigg(\frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg)^{2} + 4 \, \Bigg(\frac{-1}{2\pi} Arg(k) + \frac{1}{4} \Bigg)^{2},\end{aligned}$$ then, we have $$\begin{aligned}
\label{D}
\nonumber
\Big( \textbf{C}^{-1} - \Phi_{0}\Big)^{2} - 4 \, \Big( \textbf{C}^{-1} - \Phi_{0}\Big) \Bigg(\frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg)
&+& 4 \, \Bigg(\frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg)^{2} + 4 \, \Bigg(\frac{-1}{2\pi} Arg(k) + \frac{1}{4} \Bigg)^{2} \\ &=& \Psi \, B + \mathcal{O}(\vert \log(a) \vert^{2-3h}).\end{aligned}$$ Remark that $\Psi$ can be written as $$\begin{aligned}
\Psi &=& \Big\vert 1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} \Big\vert^{2} + (\omega^{2} \mu_{0})^{2} \, \vert \tau \vert^{2} \, \Big(\int_{D} e_{n_{0}}\Big)^{4} \,\left[ \Bigg( \frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg)^{2} + \Bigg( \frac{-1}{2\pi} Arg(k) + \frac{1}{4} \Bigg)^{2} \right] \\
&-& 2 \omega^{2} \mu_{0} \Big(\int_{D} e_{n_{0}}\Big)^{2} \Bigg( \frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg) \, {{\ensuremath{\mathrm{Re\,}}}}\Big[\overline{\tau} \; (1-\omega^{2}\mu_{0}\tau \lambda_{n_{0}}) \Big]+\mathcal{O}(a^{2}). $$ Hence using $(\ref{C})$, we have $$\begin{aligned}
\Psi &=& \textbf{C}^{-2} \, (\omega^{2} \, \mu_{0})^{2} \, \vert \tau \vert^{2} \, \Big( \int_{D} e_{n_{0}} \Big)^{4} + (\omega^{2} \mu_{0})^{2} \, \vert \tau \vert^{2} \, \Big(\int_{D} e_{n_{0}}\Big)^{4} \, \left[ \Bigg( \frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg)^{2} + \, \Bigg( \frac{-1}{2\pi} Arg(k) + \frac{1}{4} \Bigg)^{2} \right] \\
&-& 2 \, \textbf{C}^{-1} \, (\omega^{2} \mu_{0})^{2} \, \vert \tau \vert^{2} \, \Big(\int_{D} e_{n_{0}}\Big)^{4} \Bigg( \frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg) + \mathcal{O}\big(\vert \log(a) \vert^{-3h}\big). \end{aligned}$$ We replace $\Psi$ in $(\ref{D})$ and use the fact that $B = \mathcal{O}(\vert \log(a) \vert^{2})$ to cancel all the terms of order $\mathcal{O}(1)$. Then formula $(\ref{D})$ becomes $$\Big( \textbf{C}^{-1} - \Phi_{0}\Big)^{2} - 4 \, \Big( \textbf{C}^{-1} - \Phi_{0}\Big) \Bigg(\frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg) = - 2 \, \textbf{C}^{-1} \, (\omega^{2} \mu_{0})^{2} \, \vert \tau \vert^{2} \, \Big(\int_{D} e_{n_{0}}\Big)^{4} \Bigg( \frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg) \, B$$
\Bigg(\frac{-1}{2\pi} \log\vert k \vert + \gamma \Bigg) \Bigg[ - 4 \, \Big( \textbf{C}^{-1} - \Phi_{0}\Big) + 2 \, \textbf{C}^{-1} \, (\omega^{2} \mu_{0})^{2} \, \vert \tau \vert^{2} \, \Big(\int_{D} e_{n_{0}}\Big)^{4} \, B \Bigg] & = & \textbf{C}^{-2} \, (\omega^{2} \, \mu_{0})^{2} \, \vert \tau \vert^{2} \, \Big( \int_{D} e_{n_{0}} \Big)^{4} B \end{aligned}$$ $$\begin{aligned}
\phantom{invisible text} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad &-& \Big( \textbf{C}^{-1} - \Phi_{0}\Big)^{2} + \mathcal{O}\big(\vert \log(a) \vert^{\max(0,2-3h)}\big).\end{aligned}$$ Using $(\ref{E})$, we get an explicit expression $$\label{Kds}
\log\vert k \vert = 2 \pi \, \gamma - \frac{\pi}{\textbf{C}} \; \frac{\frac{\displaystyle \int_{D} \vert u_{1} \vert^{2} \, dx}{ \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dx } - (1 - \textbf{C} \, \Phi_{0})^{2}}{ \frac{\displaystyle \int_{D} \vert u_{1} \vert^{2} \, dx}{ \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dx } - 2 \, (1 - \textbf{C} \, \Phi_{0})} + \mathcal{O}\Big(\vert \log(a) \vert^{\max(h-1;1-2h)}\Big).$$
To justify that $(\ref{Kds})$ is well defined, we use $(\ref{J}), (\ref{equa58})$ and $(\ref{C})$ to obtain the following relation $$\frac{\displaystyle\int_{D} \vert u_{1} \vert^{2} \, dx}{\displaystyle \int_{D_{2}} \vert v \vert^{2} \, dx } = \frac{\Big( 1 - \textbf{C} \, \Phi_{0} \Big)^{2}}{\Bigg\vert 1 - \textbf{C} \, \Big(\frac{-1}{2\pi} \, \log(k) + \Gamma \Big) \Bigg\vert^{2}} + \; \mathcal{O}\Big(\vert \log(a) \vert^{\max(h-1;-h)}\Big).$$ Hence, $$\frac{\frac{\displaystyle \int_{D} \vert u_{1} \vert^{2} \, dx}{ \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dx } - (1 - \textbf{C} \, \Phi_{0})^{2}}{ \frac{\displaystyle \int_{D} \vert u_{1} \vert^{2} \, dx}{ \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dx } - 2 \, (1 - \textbf{C} \, \Phi_{0})} = \frac{\Big(1 - \textbf{C} \, \Phi_{0} \Big) \, \textbf{C} \, \Bigg\lbrace 2 \, {{\ensuremath{\mathrm{Re\,}}}}\Big[ \frac{-1}{2\pi} \, \log(k) + \Gamma \Big] - \textbf{C} \, \Big\vert \frac{-1}{2\pi} \, \log(k) + \Gamma \Big\vert^{2} \Bigg\rbrace}{- 1 - \textbf{C} \, \Phi_{0} +2 \, \textbf{C} \, \Bigg\lbrace 2 \, {{\ensuremath{\mathrm{Re\,}}}}\Big[ \frac{-1}{2\pi} \, \log(k) + \Gamma \Big] - \textbf{C} \, \Big\vert \frac{-1}{2\pi} \, \log(k) + \Gamma \Big\vert^{2} \Bigg\rbrace} \thicksim \mathcal{O}(C).$$
Therefore the error term in $(\ref{Kds})$ is indeed negligible as soon as $\frac{1}{2} < h <1$.\
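As a sanity check, outside the proof, the algebraic identity above can be verified numerically: writing $A := 1-\textbf{C}\,\Phi_{0}$, $w := \frac{-1}{2\pi}\log(k)+\Gamma$ and $R := A^{2}/\vert 1-\textbf{C}\,w\vert^{2}$, both sides of the identity coincide exactly. The sketch below uses arbitrary illustrative values for $\textbf{C}$, $\Phi_{0}$, $\Gamma$ and $k$; the function name is ours and not part of the text.

```python
import cmath

def check_ratio_identity(C, Phi0, Gamma, k, tol=1e-12):
    """Verify (R - A^2)/(R - 2A) = A*C*W / (-1 - C*Phi0 + 2*C*W),
    with A = 1 - C*Phi0, w = -log(k)/(2*pi) + Gamma,
    W = 2*Re(w) - C*|w|^2 and R = A^2 / |1 - C*w|^2."""
    A = 1.0 - C * Phi0
    w = -cmath.log(k) / (2 * cmath.pi) + Gamma
    W = 2 * w.real - C * abs(w) ** 2
    R = A ** 2 / abs(1 - C * w) ** 2
    lhs = (R - A ** 2) / (R - 2 * A)
    rhs = A * C * W / (-1 - C * Phi0 + 2 * C * W)
    return abs(lhs - rhs) < tol

# arbitrary test values, purely illustrative
print(check_ratio_identity(C=0.07, Phi0=0.3, Gamma=0.4 + 0.25j, k=0.02 + 0.01j))  # True
```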
Taking the exponential on both sides of $(\ref{Kds})$ and using the smallness of $\mathcal{O}\Big(\vert \log(a) \vert^{\max(1-2h,h-1)}\Big)$, we write $$\label{equa516}
\vert k \vert = \exp\left\lbrace 2 \pi \, \gamma - \frac{\pi}{\textbf{C}} \; \frac{\frac{\displaystyle \int_{D} \vert u_{1} \vert^{2} \, dx}{ \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dx } - (1 - \textbf{C} \, \Phi_{0})^{2}}{ \frac{\displaystyle \int_{D} \vert u_{1} \vert^{2} \, dx}{ \displaystyle\int_{D_{2}} \vert v \vert^{2} \, dx } - 2 \, (1 - \textbf{C} \, \Phi_{0})} \right\rbrace + \mathcal{O}\Big(\vert \log(a) \vert^{\max(h-1;1-2h)}\Big).$$
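For reference, the leading-order formula $(\ref{equa516})$ is an explicit map from the measured energy ratio $R = \int_{D}\vert u_{1}\vert^{2}\,dx \, / \int_{D_{2}}\vert v\vert^{2}\,dx$ to $\vert k \vert$. Below is a minimal numerical sketch, assuming the constants $\textbf{C}$, $\Phi_{0}$ and $\gamma$ are known and dropping the error term; the function name and the test values are ours.

```python
import math

def recover_modulus_k(R, C, Phi0, gamma):
    """Leading-order estimator of |k| from the energy ratio R:
    |k| = exp(2*pi*gamma - (pi/C) * (R - A^2) / (R - 2*A)),
    with A = 1 - C*Phi0; the O(|log a|^{max(h-1,1-2h)}) error is dropped."""
    A = 1.0 - C * Phi0
    log_k = 2 * math.pi * gamma - (math.pi / C) * (R - A ** 2) / (R - 2 * A)
    return math.exp(log_k)

# illustrative values: C = 1, Phi0 = 0, gamma = 0 and R = 0 give
# |k| = exp(-pi * (-1) / (-2)) = exp(-pi/2)
print(recover_modulus_k(R=0.0, C=1.0, Phi0=0.0, gamma=0.0))  # ≈ 0.20788
```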
Proof of Theorem \[Using-both-permittivity-and-conductivity-contrasts\] {#Theo-using-both-contrasted-permit-conduct}
=======================================================================
We recall the model problem for photo-acoustic imaging: $$\label{equa91}
\left\{
\begin{array}{rll}
\partial^{2}_{t} p(x,t) - \Delta_{x} p(x,t) &=& 0 \qquad in \quad \mathbb{R}^{2} \times \mathbb{R}^{+},\\
p(x,0) &=& \frac{\omega \, \beta_{0}}{c_{p}} {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon)(x) \, \vert u \vert^{2}(x) \,\chi_{\Omega} , \qquad in \quad \mathbb{R}^{2} \\
\partial_{t}p(x,0) &=& 0 \qquad in \quad \mathbb{R}^{2}.
\end{array}
\right.$$
\[FGH\] Next, when we solve the equation $(\ref{equa91})$, we omit the multiplicative term[^8] $\frac{\omega \, \beta_{0}}{2\pi \, c_{p}}$.
Photo-acoustic imaging using one particle. {#opsubsection}
-------------------------------------------
Proof of $(\ref{pressure-to-v_0})$.\
The next lemma gives an estimate of the total field for $x \in \Omega \setminus D$.
The total field behaves as $$\label{awayD}
\vert u_{1}(x) \vert^{2} = \mathcal{O}(1) + \mathcal{O}(\vert \log(a) \vert^{h-1} \, \vert \log(dist) \vert) \quad in \quad \Omega \setminus D,$$ where $dist = dist(x,D)$.
We use L.S.E $$\label{LSE1p}
u_{1}(x) = u_{0}(x) + \omega^{2} \, \mu_{0} \, \int_{D} (\varepsilon_{p}-\varepsilon_{0})(y) G_{k}(y,x) u_{1}(y) dy, \qquad \qquad x \in \mathbb{R}^{2}.$$ Now, for $x$ away from $D$ $$\begin{aligned}
\vert u_{1}(x) \vert & \leq & \vert u_{0}(x) \vert + \mathcal{O}\Bigg( \frac{1}{a^{2}\, \vert \log(a) \vert} \int_{D} \vert G_{0} \vert (y,x) \; \vert u_{1}(y) \vert \; dy \Bigg) \\
& = & \mathcal{O}(1) + \mathcal{O}\Bigg( \frac{1}{a^{2}\, \vert \log(a) \vert} \; \Vert u_{1} \Vert \Big[ \int_{D} \vert G_{0} \vert^{2} (y,x) \; dy\Big]^{1/2} \Bigg) = \mathcal{O}(1) + \mathcal{O}\Big( \vert \log(a) \vert^{h-1} \, \vert \log(dist) \vert \Big). \end{aligned}$$ This proves $(\ref{awayD})$.
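In the pressure expansions below, the time derivative of the Poisson kernel is used repeatedly in the form $\partial_{t}\big(t^{2}-r^{2}\big)^{-1/2} = -t\,\big(t^{2}-r^{2}\big)^{-3/2}$ for $r < t$. A quick finite-difference check of this identity, outside the proof (the values of $t$ and $r$ are arbitrary):

```python
import math

def kernel(t, r):
    """Poisson kernel factor 1/sqrt(t^2 - r^2), valid for r < t."""
    return 1.0 / math.sqrt(t * t - r * r)

def kernel_dt_exact(t, r):
    """Closed-form time derivative: -t / (t^2 - r^2)^{3/2}."""
    return -t / (t * t - r * r) ** 1.5

def kernel_dt_numeric(t, r, h=1e-6):
    """Central finite difference in t."""
    return (kernel(t + h, r) - kernel(t - h, r)) / (2 * h)

t, r = 2.0, 1.0
print(abs(kernel_dt_exact(t, r) - kernel_dt_numeric(t, r)) < 1e-6)  # True
```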
Let us recall from Proposition $(\ref{Y})$ the following relation $$\label{ve=Ve}
<u_{1},e_{n_{0}}> = \frac{1}{[1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}}]} \; <u_{0};e_{n_{0}}> + \mathcal{O}(a \, \vert \log(a) \vert^{2h-1}).$$ We use Poisson’s formula to solve the system $(\ref{equa91})$, see ([@PinchoverRubinstein], Chapter 9), to represent the pressure as follows $$\begin{aligned}
p(t,x) &=& \partial_{t} \int_{\vert x-y \vert <t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{p}) \vert u_{1} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \chi_{D} dy + \partial_{t} \int_{\vert x-y \vert <t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0}) \vert u_{1} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \chi_{\Omega \setminus D} dy \\
& = & \partial_{t} \int_{\vert x-y \vert <t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{p} - \varepsilon_{0}) \vert u_{1} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \chi_{D} dy + \partial_{t} \int_{\vert x-y \vert <t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0}) \vert u_{1} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \chi_{\Omega} dy.\end{aligned}$$ Let $t > diam(\Omega)$. For $x \in \partial \Omega$, the representation above reduces to: $$p(t,x) = \int_{D(z,a)} \partial_{t} \, \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{p} - \varepsilon_{0}) \vert u_{1} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} dy + \int_{\Omega} \partial_{t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0}) \vert u_{1} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} dy.$$ Set $T_{4}$ to be $$T_{4} := \int_{\Omega} \partial_{t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0}) \vert u_{1} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} dy.$$ Recalling that $\tau := \varepsilon_{p} - \varepsilon_{0}(z)$, we have $$p(t,x) = -t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \int_{D(z,a)} \frac{\, \vert u_{1} \vert^{2}(y)}{(t^{2}-\vert x-y \vert^{2})^{3/2}} dy + T_{4} + \int_{D(z,a)}\vert u_{1} \vert^{2}(y) \;\; \partial_{t} \frac{{{\ensuremath{\mathrm{Im\,}}}}\Big(\int_{0}^{1}(y-z) \centerdot \nabla \varepsilon_{0}(z+s(y-z))ds\Big)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} dy$$ We estimate the remainder term as follows $$\label{lb}
\Bigg\vert \int_{D(z,a)}\vert u_{1} \vert^{2}(y) \;\; \partial_{t} \frac{{{\ensuremath{\mathrm{Im\,}}}}\Big(\int_{0}^{1}(y-z) \centerdot \nabla \varepsilon_{0}(z+s(y-z))ds\Big)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} dy \Bigg\vert \\
\leq a \, \Vert u_{1} \Vert^{2}_{\mathbb{L}^{2}(D)} = \mathcal{O}(a^{3} \; \vert \log(a) \vert^{2h}),$$ then $$p(t,x) = -t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \int_{D(z,a)} \frac{\, \vert u_{1} \vert^{2}(y)}{(t^{2}-\vert x-y \vert^{2})^{3/2}} dy + T_{4} + \mathcal{O}(a^{3} \; \vert \log(a) \vert^{2h}).$$ By Taylor expansion of the function $\Big(t^{2} - \vert x-\centerdot \vert^{2} \Big)^{-3/2}$ near $z$, we have $$\begin{aligned}
p(t,x) &=& \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \int_{D(z,a)} \vert u_{1} \vert^{2}(y) dy + T_{4} \\
&+& \mathcal{O}(a^{3} \; \vert \log(a) \vert^{2h}) + \mathcal{O}\Big[ \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \int_{D(z,a)} (\vert y-z \vert^{2} + 2 <x-z;z-y>) \vert u_{1} \vert^{2}(y) dy \Big]. \end{aligned}$$ We estimate the remainder term as $$\label{hdd}
\bigg\vert {{\ensuremath{\mathrm{Im\,}}}}(\tau) \int_{D} (\vert y-z \vert^{2} + 2 <x-z;z-y>) \vert u_{1} \vert^{2}(y) dy \bigg\vert \leq {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, a \, \Vert u_{1} \Vert^{2} = \mathcal{O}({{\ensuremath{\mathrm{Im\,}}}}(\tau) a^{3} \vert \log(a) \vert^{2h}),$$ and then $$p(t,x) = \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \int_{D(z,a)} \vert u_{1} \vert^{2}(y) dy + T_{4} + \mathcal{O}\big(a^{3} \; {{\ensuremath{\mathrm{Im\,}}}}(\tau) \; \vert \log(a) \vert^{2h}\big).$$ Writing $u_{1}$ as a Fourier series over the basis $\big(e_{n}\big)_{n \in \mathbb{N}}$, we obtain $$p(t,x) = \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \vert <u_{1};e_{n_{0}}> \vert^{2}}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, - \frac{t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \sum_{n \neq n_{0}} \vert <u_{1};e_{n}> \vert^{2} + T_{4} + \mathcal{O}\big(a^{3} \; {{\ensuremath{\mathrm{Im\,}}}}(\tau) \; \vert \log(a) \vert^{2h}\big),$$ since $n \neq n_{0}$ we estimate the series as $$\label{oyaya}
\mathcal{O}\big( {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \sum_{n \neq n_{0}} \vert <u_{1};e_{n}> \vert^{2} \big) \sim \mathcal{O}\big( {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \Vert u_{0} \Vert_{\mathbb{L}^{2}(D)}^{2} \big) = \mathcal{O}\big( {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, a^{2} \big).$$ Next, $$\begin{aligned}
p(t,x) & \stackrel{(\ref{ve=Ve})}= & \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \Bigg[ \frac{\vert <u_{0};e_{n_{0}}> \vert^{2}}{\vert 1- \omega^{2} \mu_{0} \lambda_{n_{0}} \tau \vert^{2}} \, + \mathcal{O}(a^{2} \, \vert \log(a) \vert^{3h-1}) \Bigg] + T_{4} + \mathcal{O}\big( {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, a^{2} \big),\end{aligned}$$ hence $$\label{P(t,x)}
p(t,x) = \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \frac{\vert <u_{0};e_{n_{0}}> \vert^{2}}{\vert 1- \omega^{2} \mu_{0} \lambda_{n_{0}} \tau \vert^{2}} \, + T_{4} + \mathcal{O}({{\ensuremath{\mathrm{Im\,}}}}(\tau) \, a^{2}) + \mathcal{O}({{\ensuremath{\mathrm{Im\,}}}}(\tau) \, a^{2} \, \vert \log(a) \vert^{3h-1}).$$ In order to calculate the term $T_{4}$, we use the L.S.E $$u_{1}(x) - \omega^{2} \, \mu_{0} \int_{D} (\varepsilon_{p}-\varepsilon_{0}(\eta)) G_{k}(x,\eta) \, u_{1}(\eta) d\eta = u_{0}(x) \; \quad in \quad \Omega,$$ and define $$\label{p0np}
p_{0}(t,x) := \int_{\Omega} \, \partial_{t} \, \frac{1}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \; {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \; \vert u_{0} \vert^{2}(y) \, dy.$$ Observe that $p_{0}(t,x)$ is the measured pressure at point $x \in \partial \Omega$ and time $t$ when no particle is inside $\Omega$.\
We set $$f := \partial_{t} \, \frac{1}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \; {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y).$$ With this, we get $$T_{4} = \int_{\Omega} \, \partial_{t} \, \frac{1}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \; {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \; \vert u_{1} \vert^{2}(y) \, dy = \int_{\Omega \setminus D} \, f \, \vert u_{1} \vert^{2}(y) \, dy + \int_{D} \, f \, \vert u_{1} \vert^{2}(y) \, dy.$$ If we compare $(\ref{aprioriestimation})$ to $(\ref{awayD})$, we deduce that the term $(\star) := \int_{D} \, f \, \vert u_{1} \vert^{2}(y) \, dy$ is dominated by the one on $\Omega \setminus D$. Now, since $f$ is smooth, we can estimate $(\star)$, with the help of the a priori estimate, as $$\vert (\star) \vert = \Big\vert \int_{D} \, f \, \vert u_{1} \vert^{2}(y) \, dy \Big\vert \leq \Vert u_{1} \Vert^{2} = \mathcal{O}\big(a^{2} \, \vert \log(a) \vert^{2h} \big),$$ and, from the L.S.E, see for instance $(\ref{LSE1p})$, we can rewrite $T_{4}$ as $$\begin{aligned}
\label{defS3}
T_{4} & = & p_{0}(t,x) - \int_{D} f \vert u_{0} \vert^{2}(y) dy + (\omega^{2} \mu_{0} )^{2} \int_{\Omega \setminus D} f \Big\vert \int_{D}(\varepsilon_{p}-\varepsilon_{0}(\eta)) G_{k}(\eta,y) u_{1}(\eta) d\eta \Big\vert^{2} dy \\
&+& 2 \; \omega^{2} \mu_{0} {{\ensuremath{\mathrm{Re\,}}}}\Bigg[\int_{\Omega \setminus D} f \overline{u}_{0}(y) \int_{D} (\varepsilon_{p}-\varepsilon_{0}(\eta)) G_{k}(\eta,y) u_{1}(\eta) d\eta dy \Bigg] + \mathcal{O}\big( a^{2} \, \vert \log(a) \vert^{2h} \big). \end{aligned}$$ The smoothness of $u_{0}$ is enough to justify the following estimate $$\Big\vert \int_{D} f \vert u_{0} \vert^{2}(y) dy \Big\vert \sim \mathcal{O}\big( a^{2} \big).$$ To finish the estimation of $T_{4}$ we still have to deal with two terms. More precisely, we set $$\label{S3}
S_{3} := \int_{\Omega \setminus D} \, f \; \Big\vert \int_{D}(\varepsilon_{p}-\varepsilon_{0}(\eta)) G_{k}(\eta,y) \, u_{1}(\eta) d\eta \Big\vert^{2} dy.$$ Expanding $(\varepsilon_{p}-\varepsilon_{0}(.))$ near $z$, we obtain $$\begin{aligned}
\vert S_{3} \vert & \leq & \vert \tau \vert^{2} \, \int_{\Omega \setminus D} \, \big\vert f \big\vert \; \bigg( \int_{D} \big\vert G_{k}(\eta,y) \, u_{1}(\eta) \big\vert d\eta \bigg)^{2} dy \\
&+& \int_{\Omega \setminus D} \, \big\vert f \big\vert \; \bigg( \int_{D} \bigg\vert \int_{0}^{1} (z - \eta )\centerdot \nabla \varepsilon_{0}(z+s(\eta-z))ds G_{k}(\eta,y) \, u_{1}(\eta) \bigg\vert d\eta \bigg)^{2} dy \\
& + & 2 \; \int_{\Omega \setminus D} \, \big\vert f \;\big\vert \; \bigg\vert {{\ensuremath{\mathrm{Re\,}}}}\bigg[\overline{\tau} \, \int_{D} \overline{G_{k}}(\eta,y) \, \overline{u_{1}}(\eta) \, d\eta \int_{D} \int_{0}^{1} (z-\eta)\centerdot \nabla \varepsilon_{0}(z+s(\eta - z))ds G_{k}(\eta,y) \, u_{1}(\eta) d\eta \bigg]\bigg\vert dy, \end{aligned}$$ then we apply the Cauchy–Schwarz inequality and exchange the integration variables to obtain $$\vert S_{3} \vert \leq \vert \tau \vert^{2} \, \Vert u_{1} \Vert^{2} \int_{D} J(\eta) \; d\eta + \mathcal{O}(a^{2}) \; \Vert u_{1} \Vert^{2} \int_{D} J(\eta) \; d\eta + \mathcal{O}(a \; \tau) \; \Vert u_{1} \Vert^{2} \int_{D} J(\eta) \; d\eta
\lesssim \vert \tau \vert^{2} \, \Vert u_{1} \Vert^{2} \int_{D} J(\eta) \; d\eta ,$$ where $J$ is the function given by $J(\eta) := \int_{\Omega \setminus D} \big\vert f \big\vert \; \big\vert G_{k}(\eta,y) \big\vert^{2} \, \, dy.$\
Remark that $J$ is a smooth function because $f$ is smooth and $\eta$ and $y$ lie in two disjoint domains. Then $$S_{3} = \mathcal{O}(\vert \log(a) \vert^{2h-2}).$$ The last term to estimate, which we denote by $S_{4}$, is more delicate. We split it as: $$\begin{aligned}
S_{4} &:=& 2 \, \omega^{2} \mu_{0} \, {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \int_{\Omega \setminus D} \, f \; \; \overline{u}_{0}(y) \, \int_{D} (\varepsilon_{p}-\varepsilon_{0}(\eta)) G_{k}(\eta,y) \, u_{1}(\eta) \, d\eta\, dy \Bigg] \\
& = & 2 \, \omega^{2} \mu_{0} \, \sum_{n} {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ <u_{1};e_{n}> \, \tau \, \int_{\Omega \setminus D} \, f \; \; \overline{u}_{0}(y) \, \int_{D} G_{k}(\eta,y) \, e_{n}(\eta) \, d\eta\, dy \Bigg] \\
& - & 2 \, \omega^{2} \mu_{0} \, {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \, \int_{\Omega \setminus D} \, f \; \; \overline{u}_{0}(y) \, \, \int_{D} \, \int_{0}^{1} (z-\eta ) \centerdot \nabla \varepsilon_{0}(z+s(\eta -z)) ds\, G_{k}(\eta,y) \, u_{1}(\eta) \, d\eta\, dy \Bigg]. \end{aligned}$$ The same techniques as previously allow us to estimate the second term of $S_{4}$ as $\mathcal{O}\Big(a^{4} \, \vert \log(a) \vert^{h}\Big)$. Then $$\begin{aligned}
S_{4} & = & 2 \, \omega^{2} \mu_{0} \; {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ <u_{1};e_{n_{0}}> \, \tau \, \int_{\Omega \setminus D} \, f \; \; \overline{u}_{0}(y) \, \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] \\
& + & \mathcal{O}\left( \sum_{n \neq n_{0}} {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ <u_{1};e_{n}> \, \tau \, \int_{\Omega \setminus D} \, f \; \; \overline{u}_{0}(y) \, \int_{D} G_{k}(\eta,y) \, e_{n}(\eta) \, d\eta\, dy \Bigg]\right) + \mathcal{O}\Big(a^{4} \, \vert \log(a) \vert^{h}\Big). \end{aligned}$$ We keep the term with index $n_{0}$ and estimate the series as $$\begin{aligned}
\label{mla}
\nonumber
\vert \mathcal{O}(\cdots) \vert & \lesssim & \vert \tau \vert \, \, \int_{\Omega \setminus D} \,\big\vert f \big\vert \; \big\vert \overline{u}_{0}(y)\big\vert \, \sum_{n \neq n_{0}} \Big\vert <u_{1};e_{n}> \,\Big\vert \, \bigg\vert \int_{D} G_{k}(\eta,y) \, e_{n}(\eta) \, d\eta\, \bigg\vert dy\\
& \lesssim & \vert \tau \vert \; \Vert u_{1} \Vert \; \Vert f \, \overline{u}_{0} \Vert_{\mathbb{L}^{2}(\Omega \setminus D)} \; \Bigg( \int_{D} \int_{\Omega \setminus D} \, \big\vert G_{k}(\eta,y) \, \big\vert^{2} dy d\eta \bigg)^{\frac{1}{2}} =\mathcal{O}\Big(\vert \log(a) \vert^{-1}\Big).\end{aligned}$$ Plug this in the last equation to obtain $$\begin{aligned}
S_{4} & \stackrel{(\ref{ve=Ve})}= & 2 \, \omega^{2} \mu_{0} \; {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \frac{<u_{0};e_{n_{0}}>}{[1-\omega^{2}\, \mu_{0} \, \tau \, \lambda_{n_{0}}]} \, \tau \, \int_{\Omega \setminus D} \, f \; \; \overline{u}_{0}(y) \, \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] \\
& + & \mathcal{O}\left(a \, \vert \log(a) \vert^{2h-1} \; {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \tau \, \int_{\Omega \setminus D} \, f \; \; \overline{u}_{0}(y) \, \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg]\right) + \mathcal{O}\big( \vert \log(a) \vert^{-1} \big).\end{aligned}$$ The same technique as previously, see $(\ref{mla})$, allows us to deduce that $$a \, \vert \log(a) \vert^{2h-1} \; {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \tau \, \int_{\Omega \setminus D} \, f \; \; \overline{u}_{0}(y) \, \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] = \mathcal{O}\big( \vert \log(a) \vert^{2h-2} \big).$$ The last step is to use a Taylor expansion to write $<u_{0};e_{n_{0}}>$ as a function of the center $z$. We have $$\begin{aligned}
S_{4} & = & 2 \, \omega^{2} \mu_{0} \; \int_{D} e_{n_{0}} \, dx \, {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \frac{u_{0}(z)}{[1-\omega^{2}\, \mu_{0} \, \tau \, \lambda_{n_{0}}]} \, \tau \, \int_{\Omega \setminus D} \, \; f \; \; \overline{u}_{0}(y) \, \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] \\
& + & \mathcal{O}\left( \frac{\int_{D} \int_{0}^{1} (z-\eta)\centerdot \nabla u_{0}(z+s(\eta - z)) ds \, e_{n_{0}}(\eta) d\eta}{[1-\omega^{2}\, \mu_{0} \, \tau \, \lambda_{n_{0}}]} \, \tau \, \int_{\Omega \setminus D} \, \; f \; \overline{u}_{0}(y) \; \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \right) \\
& + & \mathcal{O}\Big(\vert \log(a) \vert^{2h-2} \Big) + \mathcal{O}\Big(\vert \log(a) \vert^{-1} \Big).\end{aligned}$$ Then we estimate the remainder term coming from the Taylor expansion. More precisely, we have $$\begin{aligned}
\vert \mathcal{O}(\cdots) \vert & \lesssim & a \, \vert \log(a) \vert^{h} \, \Bigg\vert \int_{D} e_{n_{0}} \, dx \Bigg\vert \, \vert \tau \vert \, \Bigg\vert \int_{\Omega \setminus D} \, \; f \; \overline{u}_{0}(y) \; \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg\vert \\
& \leq & \vert \log(a) \vert^{h-1} \; \int_{D} \; \int_{\Omega \setminus D} \, \big\vert f \; \overline{u}_{0}(y) \; G_{k}(\eta,y) \big\vert \, dy \, \big\vert e_{n_{0}}(\eta) \big\vert \, d\eta\, = \mathcal{O}\big(a \, \vert \log(a) \vert^{h-1}\big).\end{aligned}$$ Finally, $$\begin{aligned}
S_{4} &=& 2 \, \omega^{2} \mu_{0} \; \int_{D} e_{n_{0}} \, dx \, {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \frac{u_{0}(z)}{[1-\omega^{2}\, \mu_{0} \, \tau \, \lambda_{n_{0}}]} \, \tau \, \int_{\Omega \setminus D} \, \partial_{t} \, \frac{\, {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \; \overline{u}_{0}(y) \,}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] \\
&+& \mathcal{O}\Big(\vert \log(a) \vert^{-1}\Big) + \mathcal{O}\Big(\vert \log(a) \vert^{2h-2}\Big).\end{aligned}$$ Hence $$\begin{aligned}
T_{4} &=& p_{0}(t,x) + S_{3} + S_{4} \\
&=& p_{0}(t,x) + 2 \, \omega^{2} \mu_{0} \; \int_{D} e_{n_{0}} \, dx \, {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \frac{u_{0}(z)}{[1-\omega^{2}\, \mu_{0} \, \tau \, \lambda_{n_{0}}]} \, \tau \, \int_{\Omega \setminus D} \, \partial_{t} \, \frac{\, {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \; \overline{u}_{0}(y) \,}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] \\ &+& \mathcal{O}\Big(\vert \log(a) \vert^{-1}\Big) + \mathcal{O}\Big(\vert \log(a) \vert^{2h-2}\Big).\end{aligned}$$ The equation $(\ref{P(t,x)})$ takes the form $$\begin{aligned}
\label{(p-p0)(t,x)}
\nonumber
(p - p_{0})(t,x) &=& 2 \, \omega^{2} \mu_{0} \; \int_{D} e_{n_{0}} \, dx \, {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \frac{u_{0}(z)}{[1-\omega^{2}\, \mu_{0} \, \tau \, \lambda_{n_{0}}]} \, \tau \, \int_{\Omega \setminus D} \, \partial_{t} \, \frac{\, {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \; \overline{u}_{0}(y) \,}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] \\
&+& \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \frac{\vert <u_{0};e_{n_{0}}> \vert^{2}}{\vert 1- \omega^{2} \mu_{0} \lambda_{n_{0}} \tau \vert^{2}} \,
+ \mathcal{O}( \vert \log(a) \vert^{\max(2h-2,-1)}) + \mathcal{O}({{\ensuremath{\mathrm{Im\,}}}}(\tau) \, a^{2} \, \vert \log(a) \vert^{\max(0,3h-1)}).\end{aligned}$$ Recall that we take $${{\ensuremath{\mathrm{Im\,}}}}(\tau) = \frac{1}{a^{2} \, \vert \log(a) \vert^{1+h+s}}$$ with $$\label{cdt1s}
0 \leq s < \min(h,1-h).$$ With this choice, the error part of $(\ref{(p-p0)(t,x)})$ will be of order $\mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).$ Hence $$\begin{aligned}
(p - p_{0})(t,x) & = & 2 \omega^{2} \mu_{0} \int_{D} e_{n_{0}} dx {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \frac{u_{0}(z)}{[1-\omega^{2} \mu_{0} \tau \lambda_{n_{0}}]} \tau \int_{\Omega \setminus D} \partial_{t} \frac{{{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \overline{u}_{0}(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \int_{D} G_{k}(\eta,y) e_{n_{0}}(\eta) d\eta dy \Bigg] \\
&+& \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \frac{1}{\vert 1- \omega^{2} \, \mu_{0} \, \lambda_{n_{0}} \, \tau \vert^{2}} \, \vert <u_{0},e_{n_{0}}> \vert^{2} + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).\end{aligned}$$ Using again the estimate $$\vert <u_{0},e_{n_{0}}> \vert^{2} = \vert u_{0}(z) \vert^{2} \, \Big(\int_{D} e_{n_{0}} dx \Big)^{2} + \mathcal{O}(a^{3}),$$ we get $$\begin{aligned}
\big(p - p_{0}\big)(t,x) & = & 2 \omega^{2} \mu_{0} \int_{D} e_{n_{0}} dx {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \frac{u_{0}(z) \tau}{[1-\omega^{2} \mu_{0} \tau \lambda_{n_{0}}]} \int_{\Omega \setminus D} \partial_{t} \frac{{{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \overline{u}_{0}(y) }{\sqrt{t^{2}-\vert x-y \vert^{2}}} \int_{D} G_{k}(\eta,y) e_{n_{0}}(\eta) d\eta dy \Bigg] \\
&+& \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \frac{\vert u_{0}(z) \vert^{2}}{\vert 1- \omega^{2} \, \mu_{0} \, \lambda_{n_{0}} \, \tau \vert^{2}} \, \, \Big(\int_{D} e_{n_{0}} dx \Big)^{2} + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).\end{aligned}$$ Now, if we take two frequencies $\omega^{2}_{\pm}$, such that $\omega^{2}_{\pm} = \omega^{2}_{n_{0}} \pm \vert \log(a) \vert^{-h}$, we obtain $$\begin{aligned}
\big(p^{\pm} - p_{0}\big)(t,x) & = & 2 \, \omega_{\pm}^{2} \mu_{0} \; \int_{D} e_{n_{0}} \, dx \, {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \frac{u_{0}(z)}{[1-\omega_{\pm}^{2}\, \mu_{0} \, \tau \, \lambda_{n_{0}}]} \, \tau \, \int_{\Omega \setminus D} \, \partial_{t} \, \frac{\, {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \; \overline{u}_{0}(y) \,}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] \\
&+& \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \frac{1}{\vert 1- \omega^{2}_{\pm} \, \mu_{0} \, \lambda_{n_{0}} \, \tau \vert^{2}} \, \vert u_{0}(z) \vert^{2} \, \Big(\int_{D} e_{n_{0}} dx \Big)^{2} + \mathcal{O}(\vert \log(a) \vert^{\max(-1,2h-2)}).\end{aligned}$$ We use $ 1-\omega^{2}_{n_{0}} \, \mu_{0} \lambda_{n_{0}} \, \tau = 0$ to deduce that $\vert 1-\omega^{2}_{\pm} \, \mu_{0} \lambda_{n_{0}} \, \tau \vert = \mathcal{O}( \vert \log(a) \vert^{-h})$.\
After some simplifications we get $$\begin{aligned}
\big(p^{\pm} - p_{0}\big)(t,x) & = & 2 \, \omega_{n_{0}}^{2} \mu_{0} \; \int_{D} e_{n_{0}} \, dx \, {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[ \frac{u_{0}(z)}{[1-\omega_{\pm}^{2}\, \mu_{0} \, \tau \, \lambda_{n_{0}}]} \, \tau \, \int_{\Omega \setminus D} \, \partial_{t} \, \frac{\, {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \; \overline{u}_{0}(y) \,}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] \\
&+& \frac{-t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \frac{1}{\vert 1- \omega^{2} \, \mu_{0} \, \lambda_{n_{0}} \, \tau \vert^{2}} \, \vert u_{0}(z) \vert^{2} \, \Big(\int_{D} e_{n_{0}} dx \Big)^{2} + \mathcal{O}(\vert \log(a) \vert^{\max(-1,2h-2)}).\end{aligned}$$ Next, $$\begin{aligned}
(p^{+} + p^{-} - 2p_{0})(t,x) & = & 4 \, \omega^{2}_{n_{0}} \mu_{0} \, \int_{D} e_{n_{0}} dx\; {{\ensuremath{\mathrm{Re\,}}}}\,\Bigg[\frac{u_{0}(z) \; \tau \; (1-\omega^{2}_{n_{0}} \, \mu_{0} \lambda_{n_{0}} \, \tau)}{(1-\omega^{2}_{+} \, \mu_{0} \lambda_{n_{0}} \, \tau)\, (1-\omega^{2}_{-} \, \mu_{0} \lambda_{n_{0}} \, \tau)} \\&& \int_{\Omega \setminus D} \, \partial_{t} \, \frac{ {{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y) \; \overline{u}_{0}(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \int_{D} G_{k}(\eta,y) \, e_{n_{0}}(\eta) \, d\eta\, dy \Bigg] \\
& + & \frac{-2 \, t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \frac{\vert u_{0}(z) \vert^{2}}{\vert 1- \omega^{2} \, \mu_{0} \, \lambda_{n_{0}} \, \tau \vert^{2}} \, \, \Big(\int_{D} e_{n_{0}} dx \Big)^{2} + \mathcal{O}(\vert \log(a) \vert^{\max(-1,2h-2)}).\end{aligned}$$ Thanks to $(\ref{exactMieresonance})$, we know that $(1-\omega^{2}_{n_{0}} \, \mu_{0} \lambda_{n_{0}} \, \tau)=0$; hence the right-hand side of this equation reduces to its dominant term. Finally, we obtain $$\label{valueofV}
(p^{+} + p^{-} - 2p_{0})(t,x) = \frac{-t \, }{(t^{2}-\vert x-z \vert^{2})^{3/2}} \, \, \frac{2 \; {{\ensuremath{\mathrm{Im\,}}}}(\tau) \; \, \vert u_{0}(z) \vert^{2}}{\vert 1- \omega^{2} \, \mu_{0} \, \lambda_{n_{0}} \, \tau \vert^{2}} \Big(\int_{D} e_{n_{0}} dx \Big)^{2} + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big),$$ or, with help of $(\ref{ve=Ve})$, $$\label{abxyz}
(p^{+} + p^{-} - 2p_{0})(t,x) = \frac{-2 \, t \; {{\ensuremath{\mathrm{Im\,}}}}(\tau) \; \vert <u_{1};e_{n_{0}}> \vert^{2}}{(t^{2}-\vert x-z \vert^{2})^{3/2}} + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).$$
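The cancellation leading to $(\ref{valueofV})$ can also be checked numerically: at the exact resonance $1-\omega^{2}_{n_{0}}\mu_{0}\lambda_{n_{0}}\tau = 0$, the detunings $\omega^{2}_{\pm} = \omega^{2}_{n_{0}} \pm \delta$ give $\tau/(1-\omega^{2}_{\pm}\mu_{0}\lambda_{n_{0}}\tau) = \mp 1/(\delta\mu_{0}\lambda_{n_{0}})$, so the two cross terms have opposite signs and cancel in the sum $p^{+}+p^{-}-2p_{0}$. A minimal sketch, with $\tau$ taken real so that the resonance holds exactly, and arbitrary illustrative values (the symbol $X$ stands for $u_{0}(z)$ times the remaining integral factor):

```python
def cross_term(X, tau, omega_sq, mu0, lam):
    """Cross term Re[ X * tau / (1 - omega^2 * mu0 * lam * tau) ]."""
    return (X * tau / (1 - omega_sq * mu0 * lam * tau)).real

# illustrative values; tau is chosen so that 1 - omega0^2*mu0*lam*tau = 0
mu0, lam, omega0_sq, delta = 1.3, 0.8, 2.5, 1e-3
tau = 1.0 / (omega0_sq * mu0 * lam)   # exact resonance (real in this sketch)
X = 0.7 - 0.4j                        # u0(z) times the integral factor (arbitrary)

s = cross_term(X, tau, omega0_sq + delta, mu0, lam) \
    + cross_term(X, tau, omega0_sq - delta, mu0, lam)
print(abs(s) < 1e-6)  # True: the two detuned cross terms cancel
```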
Photo-acoustic imaging using two close particles (Dimers)
---------------------------------------------------------
Proof of $(\ref{pressure-tilde-expansion})$.\
To avoid introducing additional notation in the proof, we keep the same notation as in the case of one particle whenever possible.
\[lemmad1d2\] We have $$\label{xd1d2}
u_{2}(x) = \mathcal{O}(1) + \mathcal{O}(\vert \log(a) \vert^{h-1} \; \vert \log(dist(x,D_{1} \cup D_{2})) \vert), \qquad \qquad x \notin D_{1} \cup D_{2}.$$
We skip the proof since it is similar to that of one particle (see the proof of Lemma $\ref{awayD}$).
Now, from Poisson’s formula, the solution can be written as $$\begin{aligned}
p(t,x) & = & \sum_{i=1}^{2} \partial_{t} \int_{\vert x-y \vert < t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{p}) \vert u_{2} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \chi_{D_{i}} \, dy + \partial_{t} \int_{\vert x-y \vert < t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0}) \vert u_{2} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \chi_{\Omega \setminus D} \, dy \\
& = & \sum_{i=1}^{2} \partial_{t} \int_{\vert x-y \vert < t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{p}-\varepsilon_{0}) \vert u_{2} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \chi_{D_{i}} \, dy + \partial_{t} \int_{\vert x-y \vert < t} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0}) \vert u_{2} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \chi_{\Omega} \, dy. \end{aligned}$$ For $t > diam (\Omega)$ we have $$p(t,x) = \sum_{i=1}^{2} \partial_{t} \int_{D_{i}} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{p}-\varepsilon_{0}) \vert u_{2} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \, dy + \partial_{t} \int_{\Omega} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0}) \vert u_{2} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \, dy.$$ As before set $$T_{4}^{\star} := \partial_{t} \int_{\Omega} \frac{({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0}) \vert u_{2} \vert^{2})(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \, dy.$$ Next, we assume that $\tau_{1} = \tau_{2} = \tau$ and we use Taylor expansion of $(\varepsilon_{p}-\varepsilon_{0})(\centerdot)$ and $\big(t^{2} - \vert x - \centerdot \vert^{2} \big)^{-3/2}$ near $z_{1,2}$ to obtain $$p(t,x) = -t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \; \sum_{i=1}^{2} \int_{D_{i}} \frac{ \vert u_{2} \vert^{2}(y)}{\big(t^{2}-\vert x-y \vert^{2}\big)^{3/2}} \, dy +T_{4}^{\star} + \mathcal{O}\left(\sum_{i=1}^{2} \int_{D_{i}} \frac{ \int_{0}^{1} (y-z_{i})\centerdot \nabla \varepsilon_{0}(z_{i}+t(y-z_{i})) \, dt \, \vert u_{2} \vert^{2}(y)}{\big(t^{2}-\vert x-y \vert^{2}\big)^{3/2}} \, dy \right).$$ The remainder term, as done in $(\ref{lb})$, is of order $\mathcal{O}\big(a^{3} \, \vert \log(a) \vert^{2h} \big)$. Then, as in the case of one particle, we have $$\begin{aligned}
p(t,x) & = & \sum_{i=1}^{2} \frac{-t {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{\big(t^{2}-\vert x-z_{i} \vert^{2}\big)^{3/2}} \int_{D_{i}} \vert u_{2} \vert^{2} dy + T_{4}^{\star} \\
& +& \mathcal{O}\left( \sum_{i=1}^{2} {{\ensuremath{\mathrm{Im\,}}}}(\tau) \int_{D_{i}} (\vert y - z_{i} \vert^{2} + 2 <x-z_{i};z_{i}-y> ) \vert u_{2} \vert^{2}(y) dy \right) + \mathcal{O}\big(a^{3} \, \vert \log(a) \vert^{2h} \big).\end{aligned}$$ We deduce as in $(\ref{hdd})$ that the remainder term can be estimated as $\mathcal{O}\big( {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, a^{3} \, \vert \log(a) \vert^{2h} \big)$. Next, we develop $u_{2}$ over the basis and we use $(\ref{oyaya})$ to estimate the remainder term to obtain $$\begin{aligned}
p(t,x) & = & -t {{\ensuremath{\mathrm{Im\,}}}}(\tau) \sum_{i=1}^{2} \frac{\vert <u_{2};e^{(i)}_{n_{0}}> \vert^{2}}{\big(t^{2}-\vert x-z_{i} \vert^{2}\big)^{3/2}} + T_{4}^{\star} + \mathcal{O}\left({{\ensuremath{\mathrm{Im\,}}}}(\tau) \sum_{i=1 \atop n \neq n_{0}}^{2} \vert <u_{2};e^{(i)}_{n}> \vert^{2} \right) + \mathcal{O}\big({{\ensuremath{\mathrm{Im\,}}}}(\tau) \, a^{3} \, \vert \log(a) \vert^{2h} \big).\end{aligned}$$ Then we get $$\label{T4p}
p(t,x) = -t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \sum_{i=1}^{2} \, \frac{\vert <u_{2};e^{(i)}_{n_{0}}> \vert^{2}}{\big(t^{2}-\vert x-z_{i} \vert^{2}\big)^{3/2}} + T^{\star}_{4} + \mathcal{O}\big({{\ensuremath{\mathrm{Im\,}}}}(\tau) \, a^{2} \big).$$ Set $\Omega_{1,2} := \Omega \setminus \big( D_{1} \cup D_{2} \big)$ and write $T^{\star}_{4}$ as: $$T^{\star}_{4} = \int_{\Omega} ({{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0}) \vert u_{2} \vert^{2})(y) \; \partial_{t} \, \frac{1}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \, dy = \int_{\Omega_{1,2}} \vert u_{2} \vert^{2} \; f \; dy + \int_{D_{1} \cup D_{2}} \vert u_{2} \vert^{2} \; f \; dy.$$ From the a priori estimate, see $(\ref{apmp})$, and Lemma $(\ref{lemmad1d2})$, we deduce that the first integral dominates the second one. Now, since $f$ is smooth, the a priori estimate allows us to estimate the integral over $D_{1} \cup D_{2}$ as follows $$\Big\vert \int_{D_{1} \cup D_{2}} \vert u_{2} \vert^{2} \; f \; dy \Big\vert \lesssim \Vert u_{2} \Vert^{2} = \mathcal{O}\big(a^{2} \vert \log(a) \vert^{2h}\big).$$ Then we use the L.S.E to obtain $$\begin{aligned}
T^{\star}_{4} & = & \int_{\Omega} \, f \, \vert u_{0} \vert^{2} dy - \int_{D_{1} \cup D_{2}} \, f \, \vert u_{0} \vert^{2} dy + 2 \,\omega^{2} \, \mu_{0} \, \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \int_{\Omega_{1,2}} \; f \; \overline{u}_{0}(y) \, \int_{D_{i}} (\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \, d y \Bigg] \\
& + & ( \omega^{2} \, \mu_{0} )^{2} \, \sum_{i=1}^{2} \, \int_{\Omega_{1,2}} f \,\Big\vert \int_{D_{i}} (\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \Big\vert^{2} \, dy. \\
& + & 2 \, {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \int_{\Omega_{1,2}} f \; \int_{D_{1}} \overline{(\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta)} \, d\eta \; \int_{D_{2}} (\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta dy \Bigg] + \mathcal{O}\big(a^{2} \vert \log(a) \vert^{2h}\big).\end{aligned}$$ Clearly, by the smoothness of $f$ and $\vert u_{0} \vert$, we have $$\Big\vert \int_{D_{1} \cup D_{2}} \, f \, \vert u_{0} \vert^{2} dy \Big\vert = \mathcal{O}\big( a^{2} \big).$$ Then, we obtain[^9] $$\begin{aligned}
T^{\star}_{4} & = & p_{0}(t,x) + 2 \,\omega^{2} \, \mu_{0} \, \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \int_{\Omega_{1,2}} f \; \overline{u}_{0}(y) \, \int_{D_{i}} (\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \, d y \Bigg]\\
& + & 2 \, {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \int_{\Omega_{1,2}} f \; \int_{D_{1}} \overline{(\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta)} \, d\eta \; \int_{D_{2}} (\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta dy \Bigg] \\
&+& (\omega^{2} \, \mu_{0} )^{2} \, \sum_{i=1}^{2} \, \int_{\Omega_{1,2}} f\, \,\Big\vert \int_{D_{i}} (\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \Big\vert^{2} \, dy + \mathcal{O}\big( a^{2} \, \vert \log(a) \vert^{2h} \big).\end{aligned}$$ We remark that the terms $$(\omega^{2} \, \mu_{0} )^{2} \, \, \int_{\Omega_{1,2}} f\, \,\Big\vert \int_{D_{i}} (\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \Big\vert^{2} \, dy \quad \text{for} \; i=1,2$$ have the same expression as $S_{3}$ given in section $(\ref{opsubsection})$ (more precisely, see $(\ref{S3})$). We therefore estimate them as $\mathcal{O}\big( \vert \log(a) \vert^{2h-2} \big)$. Similarly, regardless of whether the position of $y$ is in $D_{1}$ or $D_{2}$, the same estimation holds for $$2 \, {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \int_{\Omega_{1,2}} f \; \int_{D_{1}} \overline{(\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta)} \, d\eta \; \int_{D_{2}} (\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta dy \Bigg].$$ Combining the above estimates, we get $$T^{\star}_{4} = p_{0}(t,x) + 2 \,\omega^{2} \, \mu_{0} \, \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \int_{\Omega_{1,2}} f \; \overline{u}_{0}(y) \, \int_{D_{i}} (\varepsilon_{p}-\varepsilon_{0})(\eta) \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \, d y \Bigg]\\
+ \mathcal{O}\big( \vert \log(a) \vert^{2h-2} \big).$$ Next, we develop $u_{2}$ over the basis and use the Taylor expansion of $(\varepsilon_{p}-\varepsilon_{0})(\centerdot)$ to obtain $$\begin{aligned}
T^{\star}_{4} & = & p_{0}(t,x) + 2 \,\omega^{2} \, \mu_{0} \, \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[\tau \, <u_{2};e^{(i)}_{n_{0}}> \int_{\Omega_{1,2}} f \; \overline{u}_{0}(y) \, \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n_{0}}(\eta) \, d\eta \, d y \Bigg] \\
& - & 2 \,\omega^{2} \, \mu_{0} \, \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \int_{\Omega_{1,2}} \; f \; \overline{u}_{0}(y) \, \int_{D_{i}} \, \int_{0}^{1} (z_{i}- \eta) \centerdot \nabla \varepsilon_{0}(z_{i}+s(\eta - z_{i})) \, ds \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \, d y \Bigg] \\
& + & 2 \,\omega^{2} \, \mu_{0} \, \sum_{i=1,2 \atop n \neq n_{0}} {{\ensuremath{\mathrm{Re\,}}}}\Bigg[\tau \, <u_{2};e^{(i)}_{n}> \int_{\Omega_{1,2}} f \; \overline{u}_{0}(y) \, \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n}(\eta) \, d\eta \, d y \Bigg] + \mathcal{O}\big( \vert \log(a) \vert^{2h-2} \big)
\end{aligned}$$ To make the error term precise, we need to estimate $$\begin{aligned}
&& \Bigg\vert \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \int_{\Omega_{1,2}} \; f \; \overline{u}_{0}(y) \, \int_{D_{i}} \, \int_{0}^{1} (z_{i}- \eta) \centerdot \nabla \varepsilon_{0}(z_{i}+s(\eta - z_{i})) \, ds \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \, d y \Bigg] \Bigg\vert \\
& \leq & a \, \sum_{i=1}^{2} \, \int_{\Omega_{1,2}} \, \Big\vert \; f \; \overline{u}_{0}(y) \, \Big\vert \,\Bigg\vert \int_{D_{i}} \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \, \Bigg\vert dy \lesssim a \,\sum_{i=1}^{2} \, \Bigg( \int_{\Omega_{1,2}} \, \Bigg\vert \int_{D_{i}} \, G_{k}(\eta,y) \, u_{2}(\eta) \, d\eta \, \Bigg\vert^{2} dy \Bigg)^{\frac{1}{2}} \\
& \lesssim & a \, \Vert u_{2} \Vert \, \sum_{i=1}^{2} \, \Bigg( \int_{D_{i}} \int_{\Omega_{1,2}} \, \vert G_{k}\vert^{2} (\eta,y) \, dy \, d\eta \Bigg)^{\frac{1}{2}} = \mathcal{O}\big( a^{3} \, \vert \log(a) \vert^{h} \big)\end{aligned}$$ and $$\begin{aligned}
&& \Bigg\vert \sum_{i=1,2 \atop n \neq n_{0}} {{\ensuremath{\mathrm{Re\,}}}}\Bigg[\tau \, <u_{2};e^{(i)}_{n}> \int_{\Omega_{1,2}} f \; \overline{u}_{0}(y) \, \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n}(\eta) \, d\eta \, d y \Bigg] \Bigg\vert \\
& \leq & \vert \tau \vert \; \Big( \sum_{i=1,2 \atop n \neq n_{0}} \,\big\vert <u_{2};e^{(i)}_{n}> \, \big\vert^{2} \Big)^{\frac{1}{2}} \, \Bigg( \int_{\Omega_{1,2}} \Big\vert f \; \overline{u}_{0}(y) \,\Big\vert^{2} dy \Bigg)^{\frac{1}{2}} \, \Bigg( \int_{\Omega_{1,2}} \sum_{i=1,2 \atop n \neq n_{0}} \Big\vert \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n}(\eta) \, d\eta \, \Big\vert^{2} \, d y \Bigg)^{\frac{1}{2}} \\
& \leq & \vert \tau \vert \; \Vert u_{0} \Vert \, \Bigg( \int_{D} \, \int_{\Omega_{1,2}} \vert G_{k}\vert^{2}(\eta,y) dy \, d\eta \Bigg)^{\frac{1}{2}} = \mathcal{O}\big( \vert \log(a) \vert^{-1} \big).\end{aligned}$$ We keep the dominant term and absorb the others into the error term to obtain $$T^{\star}_{4} = p_{0}(t,x) + 2 \,\omega^{2} \, \mu_{0} \, \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[\tau \, <u_{2};e^{(i)}_{n_{0}}> \int_{\Omega_{1,2}} f \; \overline{u}_{0}(y) \, \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n_{0}}(\eta) \, d\eta \, d y \Bigg] + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).$$
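For the reader's convenience, we recall the schematic form of the identity $(\ref{v&V})$ invoked in the next step (with $det^{\star}$ as defined just below); it expresses the resonant Fourier coefficient of $u_{2}$ through that of $u_{0}$: $$<u_{2};e^{(i)}_{n_{0}}> \,=\, \frac{<u_{0};e^{(i)}_{n_{0}}>}{det^{\star}} + \mathcal{O}(a), \qquad i=1,2,$$ which is exactly the substitution performed to pass to $(\ref{ved53})$.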
We use $(\ref{v&V})$ to obtain $$\begin{aligned}
\label{ved53}
\nonumber
T^{\star}_{4} & = & 2 \,\omega^{2} \, \mu_{0} \, \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \tau \; \frac{ <u_{0};e^{(i)}_{n_{0}}>}{det^{\star}} \int_{\Omega_{1,2}} f \;\, \overline{u}_{0}(y) \, \int_{D_{i}} \, G_{k}(\eta;y) \, e^{(i)}_{n_{0}}(\eta) \, d\eta \, d y \Bigg] + p_{0}(t,x) \\
&+& \mathcal{O}\Bigg(\tau \, a \, \sum_{i=1}^{2} \, \int_{\Omega_{1,2}} f \; \, \overline{u}_{0}(y) \, \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n_{0}}(\eta) \, d\eta \, d y \Bigg) + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big),\end{aligned}$$ where $$det^{\star} := (1-\omega^{2}\,\mu_{0}\,\tau\,\lambda_{n_{0}})-\omega^{2}\, \mu_{0}\, \tau \, a^{2} \, \Phi_{0}(z_{1};z_{2}) \, \Big( \int_{B} \overline{e}_{n_{0}} \Big)^{2},$$ and $$\tau a \sum_{i=1}^{2} \int_{\Omega_{1,2}} f \overline{u}_{0}(y) \int_{D_{i}} G_{k}(\eta,y) e^{(i)}_{n_{0}}(\eta) d\eta d y = \tau a \sum_{i=1}^{2} \int_{D_{i}} \int_{\Omega_{1,2}} f \overline{u}_{0}(y) G_{k}(\eta,y) dy e^{(i)}_{n_{0}}(\eta) d \eta = \mathcal{O}\big( \vert \log(a) \vert^{-1} \big).$$ The last equality is justified by the fact that we integrate a smooth function over $\Omega_{1,2}$ and we know that the integral over $D$ of an eigenfunction is of the order $a$.\
We can also write $(\ref{ved53})$ as $$\begin{aligned}
T^{\star}_{4} & = & 2 \,\omega^{2} \, \mu_{0} \, \int_{D} e_{n_{0}} dx \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[\tau \; \frac{ u_{0}(z_{i}) \; }{det^{\star}} \int_{\Omega_{1,2}} \partial_{t} \frac{{{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y)\, \overline{u}_{0}(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \, \int_{D_{i}} \, G_{k}(\eta;y) \, e^{(i)}_{n_{0}}(\eta) \, d\eta \, d y \Bigg] + p_{0}(t,x)\\
&+& \mathcal{O}\Bigg( \sum_{i=1}^{2} \, \frac{\tau \; a^{2} }{det^{\star}} \int_{\Omega_{1,2}} f \, \overline{u}_{0}(y) \, \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n_{0}}(\eta) \, d\eta \, d y \Bigg) + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).\end{aligned}$$ We have $$\mathcal{O}\Bigg( \sum_{i=1}^{2} \, \frac{\tau \; a^{2} }{det^{\star}} \int_{\Omega_{1,2}} f \, \overline{u}_{0}(y) \, \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n_{0}}(\eta) \, d\eta \, d y \Bigg) = \mathcal{O}(a \, \vert \log(a) \vert^{h-1}),$$ since, comparing it with the error term given in equation $(\ref{ved53})$, we see that the two differ by a factor of order $a / det^{\star}$. Finally: $$\begin{aligned}
T^{\star}_{4} & = & 2 \,\omega^{2} \, \mu_{0} \, \int_{D} e_{n_{0}} dx \sum_{i=1}^{2} \,{{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \displaystyle\frac{\tau \; u_{0}(z_{i}) }{det^{\star}} \; \int_{\Omega_{1,2}} \partial_{t} \frac{{{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y)\, \overline{u}_{0}(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \, \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n_{0}}(\eta) \, d\eta \, d y \Bigg] + p_{0}(t,x) \\ &+& \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).\end{aligned}$$ We set $I_{i}$ to be $$I_{i} := \int_{\Omega_{1,2}} \; \partial_{t} \, \frac{{{\ensuremath{\mathrm{Im\,}}}}(\varepsilon_{0})(y)\, \overline{u}_{0}(y)}{\sqrt{t^{2}-\vert x-y \vert^{2}}} \int_{D_{i}} \, G_{k}(\eta,y) \, e^{(i)}_{n_{0}}(\eta) \, d\eta \, d y,$$ and use the estimation of $T^{\star}_{4}$ in the equation $(\ref{T4p})$ to obtain: $$\begin{aligned}
(p-p_{0})(t,x) &=& 2 \,\omega^{2} \, \mu_{0} \,\,\Big( \int_{D} e_{n_{0}} dx \Big)\, \sum_{i=1}^{2} {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \frac{\tau \, u_{0}(z_{i})}{det^{\star}} I_{i} \Bigg] - t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \sum_{i=1}^{2} \, \frac{\vert <u_{2};e^{(i)}_{n_{0}}> \vert^{2}}{\big(t^{2}-\vert x-z_{i} \vert^{2}\big)^{3/2}} \\
&+& \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).\end{aligned}$$ We use the following lemma to simplify the expression of $p(t,x)$.
\[lemma35\] We have $$\label{v1n0=v2n0}
<u_{2};e^{(1)}_{n_{0}}> = <u_{2};e^{(2)}_{n_{0}}> + \mathcal{O}(a).$$
Remember, from $(\ref{intv1intv2})$, that we have: $$\int_{D_{1}} u_{2} \, dx = \int_{D_{2}} u_{2} \, dx + \mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h}).$$ We expand each integral over the basis: $$<u_{2},e^{(1)}_{n_{0}}> \, \int_{D_{1}} e_{n_{0}} \, dx = <u_{2},e^{(2)}_{n_{0}}> \, \int_{D_{1}} e_{n_{0}} \, dx + \mathcal{O}\Bigg( \sum_{i=1 \atop n \neq n_{0}}^{2} <u_{2},e^{(i)}_{n}> \, \int_{D_{i}} e^{(i)}_{n} \, dx \Bigg) + \mathcal{O}(d \, a^{2} \, \vert \log(a) \vert^{h}).$$ Clearly, by Hölder's inequality, we have $$\Bigg\vert \sum_{i=1 \atop n \neq n_{0}}^{2} <u_{2},e^{(i)}_{n}> \, \int_{D_{i}} e^{(i)}_{n} \, dx \Bigg\vert \lesssim \Vert u_{0} \Vert \; \Vert 1 \Vert = \mathcal{O}\big( a^{2} \big),$$ and it follows that $$<u_{2};e^{(1)}_{n_{0}}> = <u_{2};e^{(2)}_{n_{0}}> + \mathcal{O}(a).$$ From $(\ref{v1n0=v2n0})$ we deduce: $$\vert <u_{2};e^{(1)}_{n_{0}}> \vert^{2} = \vert <u_{2};e^{(2)}_{n_{0}}> \vert^{2} + \mathcal{O}(a^{2} \; \vert \log(a) \vert^{h}).$$
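Indeed, the passage from $(\ref{v1n0=v2n0})$ to its squared version is a short verification: writing $A := <u_{2};e^{(1)}_{n_{0}}>$ and $B := <u_{2};e^{(2)}_{n_{0}}>$, and using only $\vert A - B \vert = \mathcal{O}(a)$ together with the a priori bound $\vert A \vert, \vert B \vert \leq \Vert u_{2} \Vert = \mathcal{O}\big( a \, \vert \log(a) \vert^{h} \big)$, we get $$\big\vert \, \vert A \vert^{2} - \vert B \vert^{2} \, \big\vert = \big\vert \vert A \vert - \vert B \vert \big\vert \, \big( \vert A \vert + \vert B \vert \big) \leq \vert A - B \vert \, \big( \vert A \vert + \vert B \vert \big) \lesssim a \cdot a \, \vert \log(a) \vert^{h} = \mathcal{O}\big( a^{2} \, \vert \log(a) \vert^{h} \big).$$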
By lemma $\ref{lemma35}$, we have $$\begin{aligned}
\nonumber
\big(p - p_{0}\big)(t,x) &=& 2 \,\omega^{2} \, \mu_{0} \, \Big[ \int_{D_{1}} e_{n_{0}} \Big]\sum_{i=1}^{2} \, {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \frac{\tau \, u_{0}(z_{i})}{det^{\star}} I_{i} \Bigg]- t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \vert <u_{2};e^{(2)}_{n_{0}}> \vert^{2} \, \sum_{i=1}^{2} \, \frac{1}{\big(t^{2}-\vert x-z_{i} \vert^{2}\big)^{3/2}} \\
&+& \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).\end{aligned}$$ We also have $$\frac{1}{\big(t^{2}-\vert x-z_{1} \vert^{2}\big)^{3/2}} = \frac{1}{\big(t^{2}-\vert x-z_{2} \vert^{2}\big)^{3/2}} \Big( 1 + \mathcal{O}(d) \Big).$$ Then $$(p-p_{0})(t,x) = 2 \,\omega^{2} \, \mu_{0} \, \Big[ \int_{D_{1}} e_{n_{0}} \Big]\sum_{i=1}^{2} \, {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \frac{\tau \, u_{0}(z_{i})}{det^{\star}} I_{i} \Bigg] - 2 t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \, \frac{\vert <u_{2};e^{(2)}_{n_{0}}> \vert^{2}}{\big(t^{2}-\vert x-z_{2} \vert^{2}\big)^{3/2}}
+ \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).$$ Next, we use the same technique as before: taking the two frequencies $\omega_{\pm}^{2} = \omega_{n_{0}}^{2} \pm \vert \log(a) \vert^{-h}$, we get $$\begin{aligned}
(p^{\pm}-p_{0})(t,x) &=& 2 \,\omega_{n_{0}}^{2} \, \mu_{0} \, \Big[ \int_{D_{1}} e_{n_{0}} \Big]\sum_{i=1}^{2} \, {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \frac{\tau \, u_{0}(z_{i}) \; \; I_{i}}{(1-\omega_{\pm}^{2} \, \mu_{0} \tau \lambda_{n_{0}})-\omega_{\pm}^{2} \, \mu_{0} \tau a^{2} \Phi_{0} (\int_{B} \overline{e}_{n_{0}})^{2}} \Bigg] \\
&+& \mathcal{O}\Bigg( \vert \log(a) \vert^{-h} \, \Big[ \int_{D_{1}} e_{n_{0}} \Big]\sum_{i=1}^{2} \, {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \frac{\tau \, u_{0}(z_{i}) \; \; I_{i}}{(1-\omega_{\pm}^{2} \, \mu_{0} \tau \lambda_{n_{0}})-\omega_{\pm}^{2} \, \mu_{0} \tau a^{2} \Phi_{0} (\int_{B} \overline{e}_{n_{0}})^{2}} \Bigg] \Bigg) \\
&-& 2 t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \vert <u_{2};e^{(2)}_{n_{0}}> \vert^{2} \, \frac{1}{\big(t^{2}-\vert x-z_{2} \vert^{2}\big)^{3/2}} + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).\end{aligned}$$ We estimate the error part as $$\vert \log(a) \vert^{-h} \, \Big[ \int_{D_{1}} e_{n_{0}} \Big]\sum_{i=1}^{2} \, {{\ensuremath{\mathrm{Re\,}}}}\Bigg[ \frac{\tau \, u_{0}(z_{i}) \; \; I_{i}}{(1-\omega_{\pm}^{2} \, \mu_{0} \tau \lambda_{n_{0}})-\omega_{\pm}^{2} \, \mu_{0} \tau a^{2} \Phi_{0} (\int_{B} \overline{e}_{n_{0}})^{2}} \Bigg] \sim {\mathcal{O}(\vert \log(a) \vert^{-1})}.$$ Define $\tilde{p}(t,x)$ as $$\tilde{p}(t,x) := (p^{+}-p_{0})(t,x) + \frac{1-\omega_{n_{0}}^{2}}{1+\omega_{n_{0}}^{2}} (p^{-}-p_{0})(t,x),$$ hence $$\begin{aligned}
\tilde{p}(t,x) &=& \frac{-4}{1+\omega_{n_{0}}^{2}} t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \frac{\vert <u_{2};e^{(2)}_{n_{0}}> \vert^{2}}{\big(t^{2}-\vert x-z_{2} \vert^{2}\big)^{3/2}} + 2 \,\omega_{n_{0}}^{2} \, \mu_{0} \, \Big[ \int_{D_{1}} e_{n_{0}} \Big]\sum_{i=1}^{2} \, {{\ensuremath{\mathrm{Re\,}}}}\Bigg[\tau \, u_{0}(z_{i}) \; I_{i} \\&& \Bigg( \frac{1}{(1-\omega_{+}^{2} \, \mu_{0} \tau \lambda_{n_{0}})-\omega_{+}^{2} \, \mu_{0} \tau a^{2} \Phi_{0} (\int_{B} \overline{e}_{n_{0}})^{2}} + \frac{1-\omega_{n_{0}}^{2}}{1+\omega_{n_{0}}^{2}} \frac{1}{(1-\omega_{-}^{2} \, \mu_{0} \tau \lambda_{n_{0}})-\omega_{-}^{2} \, \mu_{0} \tau a^{2} \Phi_{0} (\int_{B} \overline{e}_{n_{0}})^{2}} \Bigg) \Bigg] \\
&+& \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).\end{aligned}$$ We compute the following quantity $$\begin{aligned}
J &:=& \frac{1}{(1-\omega_{+}^{2} \, \mu_{0} \tau \lambda_{n_{0}})-\omega_{+}^{2} \, \mu_{0} \tau a^{2} \Phi_{0} (\int_{B} \overline{e}_{n_{0}})^{2}} + \frac{1-\omega_{n_{0}}^{2}}{1+\omega_{n_{0}}^{2}} \frac{1}{(1-\omega_{-}^{2} \, \mu_{0} \tau \lambda_{n_{0}})-\omega_{-}^{2} \, \mu_{0} \tau a^{2} \Phi_{0} (\int_{B} \overline{e}_{n_{0}})^{2}} \\
&=& \frac{2 \, \omega_{n_{0}}^{2} \, \mu_{0} \, \tau \, \vert \log(a) \vert^{-h} \, a^{2}\, \Big[ \tilde{\lambda_{n_{0}}} + \Phi_{0} \, \Big( \int_{B} \overline{e}_{n_{0}} \Big)^{2} \Big]}{(1+\omega_{n_{0}}^{2})^{2}\,\Big[(1-\omega_{+}^{2} \, \mu_{0} \tau \lambda_{n_{0}})-\omega_{+}^{2} \, \mu_{0} \tau a^{2} \Phi_{0} (\int_{B} \overline{e}_{n_{0}})^{2}\Big]\, \Big[(1-\omega_{-}^{2} \, \mu_{0} \tau \lambda_{n_{0}})-\omega_{-}^{2} \, \mu_{0} \tau a^{2} \Phi_{0} (\int_{B} \overline{e}_{n_{0}})^{2}\Big]},\end{aligned}$$ hence $J = \mathcal{O}(1)$. Going back to the formula of $\tilde{p}(t,x)$, we obtain: $$\tilde{p}(t,x) = \frac{-4 \, t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau)}{1+\omega_{n_{0}}^{2}} \frac{\vert <u_{2};e^{(2)}_{n_{0}}> \vert^{2}}{\big(t^{2}-\vert x-z_{2} \vert^{2}\big)^{3/2}} + 2 \,\omega_{n_{0}}^{2} \, \mu_{0} \, \Big[ \int_{D_{1}} e_{n_{0}} \Big]\sum_{i=1}^{2} \, {{\ensuremath{\mathrm{Re\,}}}}\big[\tau \, u_{0}(z_{i}) \; I_{i} J \big] + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big),$$ and $$\bigg\vert 2 \,\omega_{n_{0}}^{2} \, \mu_{0} \, \big[ \int_{D_{1}} e_{n_{0}} \big]\sum_{i=1}^{2} \, {{\ensuremath{\mathrm{Re\,}}}}\big[\tau \, u_{0}(z_{i}) \; I_{i} J \big] \bigg\vert \leq a \, \vert \tau \vert \, \vert I_{i} \vert = \mathcal{O}\big( \vert \log(a) \vert^{-1} \big).$$ Finally, we have the desired approximation formula $$\tilde{p}(t,x) = \frac{-4}{1+\omega_{n_{0}}^{2}} t \, {{\ensuremath{\mathrm{Im\,}}}}(\tau) \, \vert <u_{2};e^{(2)}_{n_{0}}> \vert^{2} \, \frac{1}{\big(t^{2}-\vert x-z_{2} \vert^{2}\big)^{3/2}} + \mathcal{O}\big(\vert \log(a) \vert^{\max(-1,2h-2)}\big).$$
A priori estimates {#appendixlemma}
====================
A priori estimates on the electric field
----------------------------------------
**Proof of Proposition \[abc\].**\
In order to prove the a priori estimation $(\ref{prioriest})$, we proceed in two steps: first for a single particle and then for multiple particles.
- **The case of a single particle.**\
Remember that the eigenvalues and eigenfunctions of the logarithmic operator satisfy $$\int_{D} \Phi_{0}(x,y) \, e_{n}(y) \, dy \,=\, \lambda_{n} \, e_{n}(x) \qquad \quad in \quad D,$$ and after scaling we get, with $\tilde{e}_{n}(\cdot) := e_{n}\left(\frac{\cdot-z}{a}\right),$ $$\label{eigevfB}
a^{2} \, \Bigg[ \int_{B} \Phi_{0}(\eta,\xi) \, \tilde{e}_{n}(\xi) \, d\xi - \, \frac{1}{2\pi} \, \log(a) \int_{B} \tilde{e}_{n}(\xi) d\xi \Bigg]\,=\, \lambda_{n} \, \tilde{e}_{n}(\eta) \qquad \quad in \quad B.$$ Integrating the equation $(\ref{eigevfB})$ over $B$ we obtain $$\int_{B} \int_{B} \Phi_{0}(\eta,\xi) \, \tilde{e_{n}}(\xi) \, d\eta d\xi = \Bigg[\frac{1}{2\pi} \log(a) \vert B \vert \,+\, \frac{\lambda_{n}}{a^{2}} \Bigg] \,\int_{B} \tilde{e_{n}} d\eta.$$ Multiplying $(\ref{eigevfB})$ by $\tilde{e_{m}}$ and integrating over $B$ we get: $$\int_{B} \int_{B} \Phi_{0}(\eta,\xi) \, \tilde{e_{n}}(\xi) \,\tilde{e_{m}}(\eta) \, d\eta d\xi - \frac{1}{2\pi} \, \log(a) \int_{B} \tilde{e_{n}}(\xi) d\xi \,\int_{B} \tilde{e_{m}}(\eta) d\eta \, = \, \frac{\lambda_{n}}{a^{2}} \, \int_{B} \tilde{e_{n}} \, \tilde{e_{m}} d\eta.$$ Remark that when $m \neq n$, thanks to the fact that $\big\lbrace \tilde{e_{n}} \big\rbrace_{n \in \mathbb{N}} $ forms an orthogonal basis in $\mathbb{L}^{2}(B)$, we obtain $$\label{whennneqm}
\int_{B} \int_{B} \Phi_{0}(\eta,\xi) \, \tilde{e_{n}}(\xi) \,\tilde{e_{m}}(\eta) \, d\eta d\xi = \frac{1}{2\pi} \, \log(a) \, \int_{B} \tilde{e_{n}} d\xi \, \int_{B} \tilde{e_{m}} d\xi$$ and when $m = n$, we get $$\label{whenn=m}
\int_{B} \int_{B} \Phi_{0}(\eta,\xi) \, \tilde{e_{n}}(\xi) \,\tilde{e_{n}}(\eta) \, d\eta d\xi - \frac{1}{2\pi} \, \log(a) \Bigg( \int_{B} \tilde{e_{n}} d\xi \Bigg)^{2} \, = \, \frac{\lambda_{n}}{a^{2}} \, \Vert \tilde{e_{n}} \Vert^{2}.$$ After normalisation $$\label{ha}
\int_{B} \int_{B} \Phi_{0}(\eta,\xi) \, \frac{\tilde{e_{n}}(\xi)}{\Vert \tilde{e_{n}}\Vert} \, \frac{\tilde{e_{n}}(\eta)}{\Vert \tilde{e_{n}}\Vert} \, d\eta d\xi = \frac{1}{\Vert \tilde{e_{n}} \Vert^{2}} \Bigg[\frac{1}{2\pi} \, \log(a) \Bigg( \int_{B} \tilde{e_{n}} d\xi \Bigg)^{2} \, + \, \frac{\lambda_{n}}{a^{2}} \, \Vert \tilde{e_{n}} \Vert^{2} \Bigg].$$ We denote $\overline{e}_{n} := \tilde{e}_{n} / \Vert \tilde{e}_{n}\Vert$ the orthonormalized basis in $\mathbb{L}^{2}(B)$, and we set $$\label{lambdatilde}
\tilde{\lambda_{n}} := \int_{B} \int_{B} \Phi_{0}(\eta,\xi) \, \overline{e}_{n}(\eta) \, \overline{e}_{n}(\xi) \, \, d\eta \, d\xi,$$ from $(\ref{ha})$ and $(\ref{lambdatilde})$ we deduce that $$\label{G}
\tilde{\lambda}_{n} = \frac{\lambda_{n}}{a^{2}} +\frac{1}{2\pi} \log(a) \Bigg(\int_{B} \overline{e}_{n} d\xi\Bigg)^{2}.$$ Thanks to L.S.E and Green kernel expansion $(\ref{Gkexpansion})$, we have $$\begin{aligned}
u_{1}(x) &-& \omega^{2} \mu_{0} \tau \, \int_{D} \Big(\Phi_{0}(x,y) -\frac{1}{2\pi} \log(k)(y)+\Gamma \Big) \, u_{1}(y) \, dy \\ &=& u_{0}(x) + \omega^{2} \mu_{0} \tau \, \mathcal{O}\left( \int_{D} \vert x-y \vert \, \log(\vert x-y \vert) \, u_{1}(y) \, dy \right) \quad in \quad D.\end{aligned}$$ After Taylor expansion of the function $\log(k)$ near the point $z$, we obtain $$\begin{aligned}
u_{1}(x) &-& \omega^{2} \mu_{0} \tau \, \int_{D} \Big(\Phi_{0}(x,y) -\frac{1}{2\pi} \log(k)(z)+\Gamma \Big) \, u_{1}(y) \, dy \\ &=& u_{0}(x) + \mathcal{O}\left(\tau \, \int_{D} \vert x-y \vert \, \log(\vert x-y \vert) \, u_{1}(y) \, dy \right) \\ &+& \mathcal{O}\Big(\tau \, \int_{D} \int_{0}^{1} (y-z)\centerdot \nabla \log(k)(z+t(y-z)) dt \, u_{1}(y) \, dy \Big).\end{aligned}$$ Now scaling, we have $$\begin{aligned}
\tilde{u}_{1}(\eta) & - & \omega^{2} \mu_{0} \tau \, a^{2} \, \int_{B} \Big(\Phi_{0}(\eta,\xi) - \frac{1}{2\pi} \log(k)(z) + \Gamma \Big) \, \tilde{u}_{1}(\xi) \, d\xi + \omega^{2} \mu_{0} \tau \, a^{2} \, \frac{1}{2\pi} \; \log(a) \, \int_{B} \tilde{u}_{1} \; d\xi \\ & = & \tilde{u_{0}}(\eta) + \mathcal{O}\bigg( \tau \, a^{3} \, \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi
\bigg) + \mathcal{O}\bigg( \tau \, a^{3} \, \log(a) \, \int_{B} \vert \eta -\xi \vert \, \tilde{u}_{1}(\xi) \, d\xi \bigg) \\
& +& \mathcal{O}\Big(a^{3} \, \tau \, \int_{B} \tilde{u}_{1} \, d\xi \Big).\end{aligned}$$ Using the basis, we obtain \[technics\] $$\begin{aligned}
<\tilde{u}_{1};\overline{e}_{n_{0}}> && \left[1 - \omega^{2} \mu_{0} \tau \, a^{2} \, \int_{B} \, \int_{B} \Phi_{0}(\eta,\xi) \, \overline{e}_{n_{0}}(\xi) \, d\xi \overline{e}_{n_{0}}(\eta) \, d\eta + \frac{\omega^{2} \mu_{0} \tau \, a^{2}}{2\pi} \, \log(a) \, \left[ \int_{B} \overline{e}_{n_{0}} d\xi \right]^{2} \right] \\
& = & <\tilde{u_{0}};\overline{e}_{n_{0}}> +\mathcal{O}\bigg( \tau \, a^{3} \, \int_{B} \,\overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi d\eta \bigg) \\
& + & \mathcal{O}\bigg(\tau \, a^{3} \, \log(a) \,\int_{B} \overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \tilde{u}_{1}(\xi) \, d\xi \, d\eta \bigg) \\
&-& \omega^{2} \mu_{0} \tau \, a^{2} \, \frac{1}{2\pi} \, \log(a) \, <1;\overline{e}_{n_{0}}> \, \sum_{n \neq n_{0}} <\tilde{u}_{1};\overline{e}_{n}> \, \int_{B} \overline{e}_{n} d\xi \\ &+& \omega^{2} \mu_{0} \tau \, a^{2} \Big(- \frac{1}{2\pi}\log(k)(z)+\Gamma \Big) \, \int_{B}\tilde{u}_{1} d\xi \,\int_{B} \overline{e}_{n_{0}} d\eta \\
& + & \omega^{2} \mu_{0} \tau \, a^{2} \, \sum_{n \neq n_{0}} <\tilde{u}_{1};\overline{e}_{n}> \int_{B} \, \int_{B} \Phi_{0}(\eta,\xi) \, \overline{e}_{n}(\xi) \, d\xi \overline{e}_{n_{0}}(\eta) \, d\eta + \mathcal{O}\Big(a^{3} \, \tau \, \int_{B} \tilde{u}_{1} \, d\xi \Big).\end{aligned}$$ After simplifications and using $(\ref{whennneqm})$ and $(\ref{ha})$ we get $$\begin{aligned}
\label{vVone}
\nonumber
<\tilde{u}_{1};\overline{e}_{n_{0}}> &=& \frac{1}{\Big[1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} \Big]} \Bigg[ <\tilde{u_{0}};\overline{e}_{n_{0}}> + \omega^{2} \mu_{0} \tau \, a^{2} \Big( -\frac{1}{2\pi} \log(k)(z)+\Gamma \Big) \, \int_{B}\tilde{u}_{1} d\xi \,\int_{B} \overline{e}_{n_{0}} d\eta \\ \nonumber & + & \mathcal{O}\bigg( \tau \, a^{3} \, \int_{B} \,\overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi d\eta \bigg) \\
& + & \mathcal{O}\bigg(\tau \, a^{3} \, \log(a) \,\int_{B} \overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \tilde{u}_{1}(\xi) \, d\xi \, d\eta \bigg) + \mathcal{O}\Big(a^{3} \, \tau \, \int_{B} \tilde{u}_{1} \, d\xi \Big) \Bigg]. \end{aligned}$$ We take[^10] $\tau$ and $\omega$ so that $$\label{H}
\tau \simeq \frac{1}{a^{2} \, \vert \log(a) \vert} \quad \text{and} \quad \omega^{2} = \frac{\Big(1 \pm \vert \log(a) \vert^{-h}\Big)}{\mu_{0} \, \lambda_{n_{0}} \, a^{-2} \, \vert \log(a) \vert^{-1}}.$$ With this choice we have the estimation $$\frac{1}{\Big\vert 1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} \,\Big\vert} = \mathcal{O}(\vert \log(a) \vert^{h}).$$ Then $$\begin{aligned}
\vert <\tilde{u}_{1};\overline{e}_{n_{0}}> \vert & \leq & \vert \log(a) \vert^{h} \, \Bigg[ \vert <\tilde{u_{0}};\overline{e}_{n_{0}}> \vert + a \,\vert \log(a) \vert^{-1} \, \Bigg\vert \int_{B} \,\overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi d\eta \Bigg\vert \\ & + & a \Bigg\vert \int_{B} \overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \tilde{u}_{1}(\xi) \, d\xi \, d\eta \Bigg\vert + \vert \log(a) \vert^{-1} \, \Bigg\vert \int_{B}\tilde{u}_{1} d\xi \,\Bigg\vert \, \Bigg\vert \int_{B} \overline{e}_{n_{0}} d\eta \Bigg\vert + a^{2} \, \vert \log(a) \vert^{-1} \; \Vert \tilde{u}_{1} \Vert \Bigg]. \end{aligned}$$ Obviously the term $\vert <\tilde{u_{0}};\overline{e}_{n_{0}}> \vert$ dominates the others, but we need to check this mathematically by estimating the error part, which we split into three terms. We have
- $s_{1} := \int_{B} \,\overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi d\eta$. $$\begin{aligned}
\vert s_{1} \vert & \leq & \Vert \overline{e}_{n_{0}} \Vert \; \Bigg( \int_{B} \Big\vert\int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi \Big\vert^{2} d\eta \Bigg)^{\frac{1}{2}} = \mathcal{O}(\Vert \tilde{u}_{1} \Vert). \end{aligned}$$
- $s_{2} := \int_{B} \overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \tilde{u}_{1}(\xi) d\xi d\eta$. $$\vert s_{2} \vert \leq \Vert \overline{e}_{n_{0}} \Vert \Bigg( \int_{B} \Big\vert\int_{B} \vert \eta -\xi \vert \tilde{u}_{1}(\xi) d\xi \Big\vert^{2} d\eta \Bigg)^{\frac{1}{2}} = \mathcal{O}(\Vert \tilde{u}_{1} \Vert).$$
- $s_{3} := \int_{B}\tilde{u}_{1} d\xi \, \int_{B} \overline{e}_{n_{0}} d\eta$. $$\vert s_{3} \vert := \Bigg\vert \int_{B}\tilde{u}_{1} d\xi \,\Bigg\vert \, \Bigg\vert \int_{B} \overline{e}_{n_{0}} d\eta \Bigg\vert = \mathcal{O}(\Vert \tilde{u}_{1} \Vert).$$
Then $$\vert <\tilde{u}_{1};\overline{e}_{n_{0}}> \vert \leq \vert \log(a) \vert^{h} \, \Bigg[ \vert <\tilde{u_{0}};\overline{e}_{n_{0}}> \vert + \Vert \tilde{u}_{1} \Vert\; \Big( a \,\vert \log(a) \vert^{-1} + a + \vert \log(a) \vert^{-1}+a^{2} \, \vert \log(a) \vert^{-1} \; \Big) \Bigg],$$ and then $$\label{!n0}
\vert <\tilde{u}_{1};\overline{e}_{n_{0}}> \vert^{2} \leq \vert \log(a) \vert^{2h} \, \Bigg[ \vert <\tilde{u_{0}};\overline{e}_{n_{0}}> \vert^{2} + \Vert \tilde{u}_{1} \Vert^{2} \; \vert \log(a) \vert^{-2} \Bigg].$$ In what follows, we estimate $\underset{n \neq n_{0}}{\sum} \vert <\tilde{u}_{1};\overline{e}_{n}> \vert^{2}$. We start from equation $(\ref{vVone})$, since the other steps are the same, and obtain: $$\begin{aligned}
<\tilde{u}_{1};\overline{e}_{n}> &=& \frac{1}{\Big[1-\omega^{2} \mu_{0} \tau \, \lambda_{n} \Big]} \Bigg[ <\tilde{u_{0}};\overline{e}_{n}> + \mathcal{O}\bigg(\tau \, a^{3} \, \int_{B} \,\overline{e}_{n}(\eta) \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi d\eta \bigg) \\
& + & \mathcal{O}\bigg( \tau \, a^{3} \, \log(a) \,\int_{B} \overline{e}_{n}(\eta) \int_{B} \vert \eta -\xi \vert \, \tilde{u}_{1}(\xi) \, d\xi \, d\eta \bigg) + \omega^{2} \mu_{0} \tau \, a^{2} \Big(-\frac{1}{2\pi} \log(k)(z)+\Gamma \Big) \, \int_{B}\tilde{u}_{1} d\xi \,\int_{B} \overline{e}_{n} d\eta \\
&+& \mathcal{O}\Big(a^{3} \, \tau \, \int_{B} \tilde{u}_{1} \; d\xi \int_{B} \overline{e}_{n} \; d\eta \Big) \Bigg]. \end{aligned}$$ Then $$\begin{aligned}
\sum_{n \neq n_{0}} \vert <\tilde{u}_{1};\overline{e}_{n}> \vert^{2} &\leq& C^{te} \Bigg[ \sum_{n \neq n_{0}} \vert <\tilde{u_{0}};\overline{e}_{n}> \vert^{2} + a^{2} \, \vert \log(a) \vert^{-2} \sum_{n \neq n_{0}} \Bigg\vert \int_{B} \,\overline{e}_{n}(\eta) \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi d\eta \Bigg\vert^{2} \\
& + & a^{2} \,\sum_{n \neq n_{0}} \Bigg\vert \int_{B} \overline{e}_{n}(\eta) \int_{B} \vert \eta -\xi \vert \, \tilde{u}_{1}(\xi) \, d\xi \, d\eta \Bigg\vert^{2} + \vert \log(a) \vert^{-2} \, \Vert \tilde{u}_{1} \Vert^{2} \; \sum_{n \neq n_{0}} \, \Bigg\vert \int_{B} \overline{e}_{n}\, d\eta \Bigg\vert^{2} \\
&+& a^{2} \; \vert \log(a) \vert^{-2} \, \Vert \tilde{u}_{1} \Vert^{2} \; \sum_{n \neq n_{0}} \, \Bigg\vert \int_{B} \overline{e}_{n}\, d\eta \Bigg\vert^{2} \Bigg]. \end{aligned}$$ On the right-hand side, all terms except the first contain series that we need to estimate. For this, we have $$\sum_{n \neq n_{0}} \Bigg\vert \int_{B} \,\overline{e}_{n}(\eta) \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi d\eta \Bigg\vert^{2} \leq \int_{B} \Big\vert \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi \Big\vert^{2} \, d\eta,$$ since the function $\vert \centerdot \vert \, \log(\vert \centerdot \vert)$ is bounded on $B$, we get $$\int_{B} \Big\vert \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi \Big\vert^{2} \, d\eta \leq \mathcal{O}(\Vert \tilde{u}_{1} \Vert^{2}).$$ The same argument as before allows us to deduce that $$\sum_{n \neq n_{0}} \Bigg\vert \int_{B} \overline{e}_{n}(\eta) \int_{B} \vert \eta -\xi \vert \, \tilde{u}_{1}(\xi) \, d\xi \, d\eta \Bigg\vert^{2} \leq \mathcal{O}(\Vert \tilde{u}_{1} \Vert^{2}).$$ Obviously we also have $$\sum_{n \neq n_{0}} \, \Bigg\vert \int_{B} \overline{e}_{n}\, d\eta \Bigg\vert^{2} \leq \Vert 1 \Vert^{2}.$$ Hence $$\label{serienneqn0}
\sum_{n \neq n_{0}} \vert <\tilde{u}_{1};\overline{e}_{n}> \vert^{2} \leq C^{te} \Bigg[ \sum_{n \neq n_{0}} \vert <\tilde{u_{0}};\overline{e}_{n}> \vert^{2} + \vert \log(a) \vert^{-2} \Vert \tilde{u}_{1} \Vert^{2} \Bigg].$$ By adding $(\ref{!n0})$ and $(\ref{serienneqn0})$, we get $$\begin{aligned}
\Vert \tilde{u}_{1} \Vert^{2} & = & \vert <\tilde{u}_{1};\overline{e}_{n_{0}}> \vert^{2} + \sum_{n \neq n_{0}} \vert <\tilde{u}_{1};\overline{e}_{n}> \vert^{2} \\
& \leq & \vert \log(a) \vert^{2h} \, \Bigg[ \vert <\tilde{u_{0}};\overline{e}_{n_{0}}> \vert^{2} + \Vert \tilde{u}_{1} \Vert^{2} \; \vert \log(a) \vert^{-2} \Bigg] \\
&+& C^{te} \Bigg[ \sum_{n \neq n_{0}} \vert <\tilde{u_{0}};\overline{e}_{n}> \vert^{2} + \vert \log(a) \vert^{-2} \Vert \tilde{u}_{1} \Vert^{2} \Bigg] \end{aligned}$$ hence $$\begin{aligned}
\Vert \tilde{u}_{1} \Vert^{2} & \leq & \vert \log(a) \vert^{2h} \, \Vert \tilde{u_{0}} \Vert^{2} + \vert \log(a) \vert^{2h-2} \, \Vert \tilde{u}_{1} \Vert^{2}, \\
\Vert \tilde{u}_{1} \Vert^{2} (1-\vert \log(a) \vert^{2h-2}) & \leq & \vert \log(a) \vert^{2h} \, \Vert \tilde{u_{0}} \Vert^{2} \end{aligned}$$ and, as $h < 1$, $$\Vert \tilde{u}_{1} \Vert^{2} \leq (1-\vert \log(a) \vert^{2h-2})^{-1} \, \vert \log(a) \vert^{2h} \, \Vert \tilde{u_{0}} \Vert^{2} \leq \vert \log(a) \vert^{2h} \, \Vert \tilde{u_{0}} \Vert^{2},$$ or $$\label{aprioriestimation}
\Vert u_{1} \Vert_{\mathbb{L}^{2}(D)} \leq \vert \log(a) \vert^{h} \, \Vert u_{0} \Vert_{\mathbb{L}^{2}(D)}.$$ The following proposition relates the Fourier coefficient of the generated total field to that of the source field.
\[Y\] We have $$\label{equa611}
<u_{1};e_{n_{0}}> = \frac{ <u_{0};e_{n_{0}}>}{(1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}})} + \mathcal{O}(a\,\vert \log(a) \vert^{2h-1}).$$
We write $$\int_{B} \tilde{u}_{1} d\xi = <\tilde{u}_{1};\overline{e}_{n_{0}}> \int_{B} \overline{e}_{n_{0}} d\xi + \sum_{n \neq n_{0}} <\tilde{u}_{1};\overline{e}_{n}> \int_{B} \overline{e}_{n} d\xi.$$ Use this representation in $(\ref{vVone})$ and rearrange the equation to get $$\begin{aligned}
<\tilde{u}_{1};\overline{e}_{n_{0}}> &=& \frac{1}{\Bigg[1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} - \omega^{2} \mu_{0} \tau \, a^{2} \Big(-\frac{1}{2\pi} \log(k)(z)+\Gamma \Big) \,\Big( \int_{B} \overline{e}_{n_{0}} d\eta \Big)^{2}\Bigg]} \Bigg[ <\tilde{u_{0}};\overline{e}_{n_{0}}> \\
&+&\omega^{2} \mu_{0} \tau \, a^{2} \Big( \frac{-1}{2\pi} \log(k)+\Gamma \Big) \, \,\int_{B} \overline{e}_{n_{0}} d\eta \; \sum_{n \neq n_{0}} <\tilde{u}_{1};\overline{e}_{n} > \, \int_{B} \overline{e}_{n} d\xi + \mathcal{O}\Big(a^{3} \, \tau \, \int_{B} \tilde{u}_{1} \, d\xi \Big) \\
&+& \omega^{2} \mu_{0} \tau \, a^{3} \, \int_{B} \,\overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi d\eta \\ &+& \omega^{2} \mu_{0} \tau \, a^{3} \, \log(a) \,\int_{B} \overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \tilde{u}_{1}(\xi) \, d\xi \, d\eta\Bigg]. \end{aligned}$$ We need to estimate the last four terms between the brackets. We have $$\Big\vert \omega^{2} \mu_{0} \tau \, a^{2} \Big( \frac{-1}{2\pi} \log(k)+\Gamma \Big) \, \,\int_{B} \overline{e}_{n_{0}} d\eta \; \sum_{n \neq n_{0}} <\tilde{u}_{1};\overline{e}_{n} > \, \int_{B} \overline{e}_{n} d\xi \Big\vert \lesssim \tau \, a^{2} \; \Vert \tilde{u_{0}} \Vert \; \Vert 1 \Vert = \mathcal{O}(\vert \log(a) \vert^{-1}).$$ Next, use Hölder’s inequality and the a priori estimate to obtain $$\label{up}
\Big\vert \omega^{2} \mu_{0} \tau \, a^{3} \, \int_{B} \,\overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{u}_{1}(\xi) \, d\xi d\eta \Big\vert \lesssim \mathcal{O}(a \vert \log(a) \vert^{h-1}).$$ Remark that the following term $$\omega^{2} \mu_{0} \tau \, a^{3} \, \log(a) \,\int_{B} \overline{e}_{n_{0}}(\eta) \int_{B} \vert \eta -\xi \vert \, \tilde{u}_{1}(\xi) \, d\xi \, d\eta ,$$ behaves, up to a multiplicative factor $\vert \log(a) \vert$, as $(\ref{up})$; we therefore estimate it as $\mathcal{O}(a \vert \log(a) \vert^{h})$. Obviously, we also have $$a^{3} \, \tau \, \int_{B} \tilde{u}_{1} \, d\xi \sim \mathcal{O}\big( a \, \vert \log(a) \vert^{h-1} \big).$$ Finally, we obtain $$\label{Z}
<u_{1};e_{n_{0}}> = \frac{ <u_{0};e_{n_{0}}>}{\Bigg[1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}} - \omega^{2} \mu_{0} \tau \, \Big( -\frac{1}{2\pi} \log(k)(z)+\Gamma \Big) \,\Big( \int_{D} e_{n_{0}} d\eta \Big)^{2}\Bigg]} + \mathcal{O}(a\,\vert \log(a) \vert^{h-1}),$$ or in the following form $$\begin{aligned}
<u_{1};e_{n_{0}}> &=& \frac{ <u_{0};e_{n_{0}}>}{(1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}}) \, \Bigg[1 - \displaystyle\frac{\omega^{2} \mu_{0} \tau \, a^{2} \Big( -\frac{1}{2\pi} \log(k)(z)+\Gamma \Big) \,\Big( \int_{B} \overline{e}_{n_{0}} d\eta \Big)^{2}}{(1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}})}\Bigg]} + \mathcal{O}(a\,\vert \log(a) \vert^{h-1})\\
&=& \frac{ <u_{0};e_{n_{0}}>}{(1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}}) \, \Big[1 + \mathcal{O}(\vert \log(a) \vert^{h-1}) \Big]} + \mathcal{O}(a\,\vert \log(a) \vert^{h-1}) \\ &=& \frac{ <u_{0};e_{n_{0}}>}{(1-\omega^{2} \mu_{0} \tau \, \lambda_{n_{0}})} + \mathcal{O}(a\,\vert \log(a) \vert^{2h-1}) \end{aligned}$$ which ends the proof.
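The last simplification step, replacing $\big[(1-\omega^{2}\mu_{0}\tau\lambda_{n_{0}})(1+\mathcal{O}(\vert\log(a)\vert^{h-1}))\big]^{-1}$ by $(1-\omega^{2}\mu_{0}\tau\lambda_{n_{0}})^{-1}$ at the price of an $\mathcal{O}(\vert\log(a)\vert^{2h-1})$ correction, can be checked numerically. The sketch below is illustrative only and not part of the paper: it takes $h=1/2$ and sets the implicit constants in $1-\omega^{2}\mu_{0}\tau\lambda_{n_{0}} \sim \vert\log(a)\vert^{-h}$ and in the $\mathcal{O}(\vert\log(a)\vert^{h-1})$ correction equal to $1$.

```python
import math

def correction_ratio(a, h=0.5):
    # Near the resonance: 1 - w^2 mu0 tau lambda_{n0} ~ |log a|^{-h} and the
    # bracket equals 1 + O(|log a|^{h-1}).  Compare the exact inverse with its
    # leading term against the claimed O(|log a|^{2h-1}) correction size.
    L = abs(math.log(a))
    one_minus = L ** (-h)            # 1 - w^2 mu0 tau lambda_{n0}
    eps = L ** (h - 1.0)             # the O(|log a|^{h-1}) correction
    exact = 1.0 / (one_minus * (1.0 + eps))
    leading = 1.0 / one_minus
    return abs(exact - leading) / L ** (2.0 * h - 1.0)

for a in (1e-3, 1e-6, 1e-9):
    print(a, correction_ratio(a))    # stays bounded (in fact < 1)
```

The ratio stays bounded as $a \to 0$, which is exactly what the $\mathcal{O}(a\,\vert\log(a)\vert^{2h-1})$ error term (after restoring the factor $a$) asserts.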
- Consider the L.S.E. for multiple particles $$\label{LSED}
v_{i}(x) - \omega^{2} \, \mu_{0} \, \tau \int_{D_{i}} G_{k}(x;y) v_{i}(y) \, dy - \omega^{2} \, \mu_{0} \, \tau \, \sum_{m \neq i}^{M} \int_{D_{m}} G_{k}(x;y) v_{m}(y) \, dy = u_{0}(x), \quad x \in D_{i}.$$ We use the expansion formula $(\ref{Gkexpansion})$ of $G_{k}(x;y)$ to write $$\begin{aligned}
v_{i}(x) & - & \omega^{2} \, \mu_{0} \, \tau \int_{D_{i}} \Phi_{0}(x,y) v_{i}(y) \, dy = u_{0}(x) + \omega^{2} \, \mu_{0} \, \tau \,\Big(-\frac{1}{2\pi} \log(k)(z_{i})+\Gamma \Big) \int_{D_{i}} v_{i} dy + \mathcal{O}\Big( \tau \, a \, \int_{D_{i}} v \; dy \Big) \\ & + & \omega^{2} \, \mu_{0} \, \tau \, \int_{D_{i}} \vert x-y \vert \, \log
(\vert x-y \vert) \, v_{i}(y) dy + \omega^{2} \, \mu_{0} \, \tau \, \sum_{m \neq i}^{M} \int_{D_{m}} \Phi_{0}(x,y) v_{m}(y) \, dy + \mathcal{O}\Big( \tau \, a \, \sum_{m \neq i}^{M} \int_{D_{m}} v \; dy \Big) \\ &+& \omega^{2} \, \mu_{0} \, \tau \, \sum_{m \neq i}^{M} \Big(-\frac{1}{2\pi} \log(k)(z_{m})+\Gamma \Big) \, \int_{D_{m}} v_{m} dy + \omega^{2} \, \mu_{0} \, \tau \, \sum_{m \neq i}^{M} \int_{D_{m}} \vert x-y \vert \, \log(\vert x-y \vert) v_{m}(y) \, dy.\end{aligned}$$ Scaling, we obtain $$\begin{aligned}
\label{equa710}
\nonumber
\tilde{v}_{i}(\eta) & - & \omega^{2} \, \mu_{0} \, \tau \, a^{2} \int_{B} \Phi_{0}(\eta,\xi) \tilde{v}_{i}(\xi) \, d\xi = u_{0}(z_{i}+a\, \eta) - \omega^{2} \, \mu_{0} \, \tau \, a^{2} \, \frac{1}{2\pi} \, \log(a) \, \int_{B} \tilde{v}_{i} \, d\xi + \mathcal{O}\Big(\tau \, a^{3} \, \int_{B} \tilde{v}_{i} \; d\xi \Big) \\ \nonumber
&+& \omega^{2} \, \mu_{0} \, \tau \,\Big(-\frac{1}{2\pi} \log(k)(z_{i})+\Gamma \Big) \, a^{2} \, \int_{B} \tilde{v}_{i} d\xi + \omega^{2} \, \mu_{0} \, \tau \,a^{3} \, \int_{B} \vert \eta-\xi \vert \, \log
(\vert \eta - \xi \vert) \, \tilde{v}_{i}(\xi) d\xi \\ \nonumber
& + & \omega^{2} \, \mu_{0} \, \tau \,a^{3} \, \log(a) \int_{B} \vert \eta-\xi \vert \,\tilde{v}_{i}(\xi) d\xi - \frac{1}{2 \pi} \; \omega^{2} \, \mu_{0} \, \tau \,a^{2} \, \sum_{m \neq i}^{M} \int_{B} \log \vert (z_{i}-z_{m})+a(\eta - \xi) \vert \, \tilde{v}_{m}(\xi) \, d\xi \\ \nonumber
&+& \omega^{2} \, \mu_{0} \, \tau \,\,a^{2} \, \sum_{m \neq i}^{M} \Big( \frac{-1}{2\pi}\log(k)(z_{m})+\Gamma \Big) \int_{B} \tilde{v}_{m}\, d\xi + \mathcal{O}\Big( \tau \, a^{3} \, \sum_{m \neq i}^{M} \int_{B} \tilde{v}_{m} \; d\xi \Big) \\
&+& \omega^{2} \, \mu_{0} \, \tau \,a^{2} \, \sum_{m \neq i}^{M} \int_{B} \vert (z_{i}-z_{m})+a(\eta - \xi)\vert \, \log(\vert (z_{i}-z_{m})+a(\eta - \xi)\vert) \tilde{v}_{m}(\xi) \, d\xi.\end{aligned}$$ We recall that $$A_{0} \, v(x) = \int_{B} \Phi_{0}(x,y) \; v(y) \, dy,$$ and denote $$T \, v(x) := \int_{B} v(y) \, dy, \quad x \in B.$$ Then: $$\Big[I - \omega^{2} \, \mu_{0} \, \tau \, a^{2} \, A_{0} + \omega^{2} \, \mu_{0} \, \tau \, a^{2} \, \frac{1}{2\pi} \, \log(a) \, T \Big] \tilde{v}_{i} = u_{0}(z_{i}+ a\, \cdot) + \omega^{2} \, \mu_{0} \, \tau \,\Big( -\frac{1}{2\pi} \log(k)(z_{i})+\Gamma \Big) \, a^{2} \, \int_{B} \tilde{v}_{i} d\xi$$ $$\begin{aligned}
\phantom \qquad \qquad \qquad \qquad &+& \omega^{2} \, \mu_{0} \, \tau \,a^{3} \, \int_{B} \vert \eta-\xi \vert \, \log
(\vert \eta - \xi \vert) \, \tilde{v}_{i}(\xi) d\xi + \omega^{2} \, \mu_{0} \, \tau \,a^{3} \, \log(a) \int_{B} \vert \eta-\xi \vert \,\tilde{v}_{i}(\xi) d\xi \\
&-&\frac{1}{2\pi} \, \omega^{2} \, \mu_{0} \, \tau \,a^{2} \, \sum_{m \neq i}^{M} \int_{B} \log \vert (z_{i}-z_{m})+a(\eta - \xi) \vert \, \tilde{v}_{m}(\xi) \, d\xi
\\ &+& \omega^{2} \, \mu_{0} \, \tau \,a^{2} \, \sum_{m \neq i}^{M} \int_{B} \vert (z_{i}-z_{m})+a(\eta - \xi)\vert \, \log(\vert (z_{i}-z_{m})+a(\eta - \xi)\vert) \tilde{v}_{m}(\xi) \, d\xi \\
&+& \omega^{2} \, \mu_{0} \, \tau \,a^{2} \, \sum_{m \neq i}^{M} \Big(-\frac{1}{2\pi}\log(k)(z_{m})+\Gamma\Big) \, \int_{B} \tilde{v}_{m}\, d\xi + \mathcal{O}\Big( \tau \, a^{3} \, \sum_{m = 1}^{M} \int_{B} \tilde{v}_{m} \; d\xi \Big). \end{aligned}$$ Also, we denote by $$\mathfrak{Res}(A_{0};T) := \Big[I - \omega^{2} \, \mu_{0} \, \tau \, a^{2} \, A_{0} + \omega^{2} \, \mu_{0} \, \tau \, a^{2} \, \frac{1}{2\pi} \, \log(a) \, T \Big]^{-1}.$$
In the definition of the operator $\mathfrak{Res}(A_{0};T)$ we cannot neglect the operator $T$ since it scales with the same order as $A_{0}$.
Then $$\begin{aligned}
\tilde{v}_{i} &=& \mathfrak{Res}(A_{0};T) \, (u_{0}(z_{i}+a \; \cdot)) + \omega^{2} \, \mu_{0} \, \tau \,\Big( \frac{-1}{2\pi}\log(k)(z_{i})+\Gamma \Big) \, a^{2} \, \int_{B} \tilde{v}_{i} d\xi \; \mathfrak{Res}(A_{0};T)(1)\\
&+& \omega^{2} \, \mu_{0} \, \tau \,a^{3} \,\mathfrak{Res}(A_{0};T) \Big( \int_{B} \vert \eta-\xi \vert \, \log
(\vert \eta - \xi \vert) \, \tilde{v}_{i}(\xi) d\xi \Big) \\
&+& \omega^{2} \, \mu_{0} \, \tau \,a^{3} \, \log(a) \; \mathfrak{Res}(A_{0};T) \Big( \int_{B} \vert \eta-\xi \vert \,\tilde{v}_{i}(\xi) d\xi \Big) \\
&-& \frac{1}{2\pi} \omega^{2} \, \mu_{0} \, \tau \,a^{2} \, \sum_{m \neq i}^{M} \; \mathfrak{Res}(A_{0};T) \Big( \int_{B} \log \vert (z_{i}-z_{m})+a(\eta - \xi) \vert \, \tilde{v}_{m}(\xi) \, d\xi \Big) \\
&+& \omega^{2} \, \mu_{0} \, \tau \,a^{2} \, \sum_{m \neq i}^{M} \Big(-\frac{1}{2\pi}\log(k)(z_{m})+\Gamma\Big) \, \int_{B} \tilde{v}_{m}\, d\xi \;\; \mathfrak{Res}(A_{0};T)(1) \\
&+& \omega^{2} \, \mu_{0} \, \tau \,a^{2} \, \sum_{m \neq i}^{M} \; \mathfrak{Res}(A_{0};T) \Big( \int_{B} \vert (z_{i}-z_{m})+a(\eta - \xi)\vert \, \log(\vert (z_{i}-z_{m})+a(\eta - \xi)\vert) \tilde{v}_{m}(\xi) \, d\xi \Big) \\
&+& \mathcal{O}\Big( \tau \, a^{3} \, \sum_{m = 1}^{M} \int_{B} \tilde{v}_{m} \; d\xi \Big) \; \mathfrak{Res}(A_{0};T)(1).\end{aligned}$$ Using the a priori estimate $(\ref{aprioriestimation})$, we obtain $$\begin{aligned}
\Vert \tilde{v}_{i} \Vert & \leq & \vert \log(a) \vert^{h} \Vert u_{0}(z_{i}+a \; \cdot) \Vert + a \, \vert \log(a) \vert^{h-1} \Vert \tilde{v}_{i} \Vert + \, a^{2} \, \Vert \tilde{v}_{i} \Vert \; \vert \log(a) \vert^{h} \; \Vert 1 \Vert \\ &+& \,a^{3} \, \vert \log(a) \vert^{h} \; \Big\Vert \int_{B} \vert \eta-\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{v}_{i}(\xi) d\xi \Big\Vert + \,a^{3} \,\vert \log(a) \vert \; \vert \log(a) \vert^{h} \Big\Vert \int_{B} \vert \eta-\xi \vert \,\tilde{v}_{i}(\xi) d\xi \Big\Vert \\ &+& \,a^{2} \, \vert \log(a) \vert^{h} \; \sum_{m \neq i}^{M} \; \Big\Vert \int_{B} \log \vert (z_{i}-z_{m})+a(\eta - \xi) \vert \, \tilde{v}_{m}(\xi) \, d\xi \Big\Vert + \,a^{2} \, \vert \log(a) \vert^{h} \; \Vert 1 \Vert \; \sum_{m \neq i}^{M} \Vert \tilde{v}_{m}\, \Vert \\ &+& \,a^{2} \,\vert \log(a) \vert^{h} \sum_{m \neq i}^{M} \Big\Vert \int_{B} \vert (z_{i}-z_{m})+a(\eta - \xi)\vert \, \log(\vert (z_{i}-z_{m})+a(\eta - \xi)\vert) \tilde{v}_{m}(\xi) \, d\xi \Big\Vert \\
&+& a \, \vert \log(a) \vert^{h-1} \, \sum_{m = 1}^{M} \Vert \tilde{v}_{m} \Vert ,\end{aligned}$$ and $$\begin{aligned}
\Big\Vert \int_{B} \log \vert (z_{i}-z_{m})+a(\eta - \xi) \vert \, \tilde{v}_{m}(\xi) \, d\xi \Big\Vert^{2} &=& \int_{B} \, \Big\vert \int_{B} \log \vert (z_{i}-z_{m})+a(\eta - \xi) \vert \, \tilde{v}_{m}(\xi) \, d\xi \Big\vert^{2} \, d\eta \\
& \leq & \int_{B} \, \int_{B} \Big\vert \log \vert (z_{i}-z_{m})+a(\eta - \xi) \vert \,\Big\vert^{2} d\xi \, d\eta \, \; \Vert \tilde{v}_{m} \Vert^{2}. \end{aligned}$$ Hence $$\Big\Vert \int_{B} \log \vert (z_{i}-z_{m})+a(\eta - \xi) \vert \, \tilde{v}_{m}(\xi) \, d\xi \Big\Vert \lesssim \log(1/d_{im}) \; \Vert \tilde{v}_{m} \Vert.$$ The same computation allows us to obtain $$\Big\Vert \int_{B} \vert (z_{i}-z_{m})+a(\eta - \xi)\vert \, \log(\vert (z_{i}-z_{m})+a(\eta - \xi)\vert) \tilde{v}_{m}(\xi) \, d\xi \Big\Vert \lesssim d_{im} \, \log(1/d_{im}) \; \Vert \tilde{v}_{m} \Vert.$$ Gathering these estimates, we have $$\begin{aligned}
\Vert \tilde{v}_{i} \Vert & \leq & \vert \log(a) \vert^{h} \Vert u_{0}(z_{i}+a \; \cdot) \Vert + \Big[ a^{3} \; \vert \log(a) \vert^{h} \; + \,a^{3} \, \vert \log(a) \vert^{h} + \,a^{3} \; \vert \log(a) \vert^{1+h} + \,a \; \vert \log(a) \vert^{h-1} \Big] \Vert \tilde{v}_{i} \Vert \\
&+& \Big[ a^{2} \; \vert \log(a) \vert^{h} \; \log(1/d) + \,a^{2} \, \vert \log(a) \vert^{h} \; \Vert 1 \Vert \; + a^{2} \, \vert \log(a) \vert^{h} + a \, \vert \log(a) \vert^{h-1} \Big] \sum_{m \neq i}^{M} \Vert \tilde{v}_{m}\, \Vert. \end{aligned}$$ Then $$\Vert \tilde{v}_{i} \Vert \leq \vert \log(a) \vert^{h} \Vert u_{0}(z_{i}+a \; \cdot) \Vert + \,a \; \vert \log(a) \vert^{h-1} \; \Vert \tilde{v}_{i} \Vert + \,a \,\vert \log(a) \vert^{h-1} \; \sum_{m \neq i}^{M} \Vert \tilde{v}_{m} \Vert,$$ or $$\Vert \tilde{v}_{i} \Vert \, (1-\,a \; \vert \log(a) \vert^{h-1}) \leq \vert \log(a) \vert^{h} \Vert u_{0}(z_{i}+a \; \cdot) \Vert + \,a \,\vert \log(a) \vert^{h-1} \; \sum_{m \neq i}^{M} \Vert \tilde{v}_{m} \Vert,$$ hence $$\begin{aligned}
\Vert \tilde{v}_{i} \Vert_{\mathbb{L}^{2}(B)} & \leq & \vert \log(a) \vert^{h} \Vert u_{0}(z_{i}+a \; \cdot) \Vert_{\mathbb{L}^{2}(B)} + \,a \,\vert \log(a) \vert^{h-1} \; \sum_{m \neq i}^{M} \Vert \tilde{v}_{m} \Vert_{\mathbb{L}^{2}(B)} \\
\Vert \tilde{v}_{i} \Vert_{\mathbb{L}^{2}(B)}^{2} & \leq & \vert \log(a) \vert^{2h} \Vert u_{0}(z_{i}+a \; \cdot) \Vert_{\mathbb{L}^{2}(B)}^{2} + \,a^{2} \,\vert \log(a) \vert^{2h-2} \; \;M \; \sum_{m \neq i}^{M} \Vert \tilde{v}_{m} \Vert_{\mathbb{L}^{2}(B)}^{2} \\
\Vert \tilde{v}_{i} \Vert_{\mathbb{L}^{2}(B)}^{2} & \leq & \vert \log(a) \vert^{2h} \Vert u_{0}(z_{i}+a \; \cdot) \Vert_{\mathbb{L}^{2}(B)}^{2} + \,a^{2} \,\vert \log(a) \vert^{2h-2} \; M \, \Vert \tilde{u} \Vert_{(\Pi \; \mathbb{L}^{2}(B))}^{2}, \end{aligned}$$ we sum up to $M$, to obtain $$\begin{aligned}
\Vert \tilde{u} \Vert_{(\Pi \; \mathbb{L}^{2}(B))}^{2} & \leq & \vert \log(a) \vert^{2h} \Vert \tilde{u}_{0} \Vert_{(\Pi \; \mathbb{L}^{2}(B))}^{2} + M^{2} \,a^{2} \,\vert \log(a) \vert^{2h-2} \; \Vert \tilde{u} \Vert_{(\Pi \; \mathbb{L}^{2}(B))}^{2} \\
(1-M^{2} \,a^{2} \,\vert \log(a) \vert^{2h-2} ) \, \Vert \tilde{u} \Vert_{(\Pi \; \mathbb{L}^{2}(B))}^{2} & \leq & \vert \log(a) \vert^{2h} \Vert \tilde{u}_{0} \Vert_{(\Pi \; \mathbb{L}^{2}(B))}^{2} \\
\Vert \tilde{u} \Vert_{(\Pi \; \mathbb{L}^{2}(B))}^{2} & \leq & \vert \log(a) \vert^{2h} \Vert \tilde{u}_{0} \Vert_{(\Pi \; \mathbb{L}^{2}(B))}^{2}. \end{aligned}$$ We obtain after scaling back $$\label{apmp}
\Vert u \Vert_{\mathbb{L}^{2}(D)} \leq \vert \log(a) \vert^{h} \Vert u_{0} \Vert_{\mathbb{L}^{2}(D)}.$$
In the next proposition, which is analogous to proposition $(\ref{Y})$, we estimate the Fourier coefficient of the total field for dimer particles when $n \neq n_{0}$.
\[X\] For $n \neq n_{0}$, we have $$\label{W}
<u_{2};e^{(i)}_{n}> = \frac{1}{(1 - \omega^{2} \, \mu_{0} \, \tau \, \lambda_{n})} \Big[ < u_{0},e^{(i)}_{n}> + \mathcal{O}(\vert \log(a) \vert^{-h}) \,\vert <1,e^{(i)}_{n}> \vert \Big], \; i=1,2.$$
First of all, recall that $v_{m} = u_{|_{D_{m}}}, m=1,2$ and let $n \neq n_{0}$. Take the scalar product of $(\ref{equa710})$ with respect to $\overline{e}^{(i)}_{n}$, $i=1,2$, to obtain $$\begin{aligned}
<\tilde{v}_{1};\overline{e}_{n}> & - & \omega^{2} \, \mu_{0} \, \tau \, a^{2} \, \int_{B} \overline{e}_{n}(\eta) \int_{B} \Phi_{0}(\eta,\xi) \tilde{v}_{1}(\xi) \, d\xi \, d\eta \\ &=& < \tilde{u}_{0},\overline{e}_{n}> - \omega^{2} \, \mu_{0} \, \tau \, a^{2} \, \frac{1}{2\pi} \, \log(a) \, \int_{B} \tilde{v}_{1} \, d\xi \, \int_{B} \overline{e}_{n} d\eta \\
&+& \omega^{2} \, \mu_{0} \, \tau \,a^{2} \, \Bigg[ a \,\int_{B} \, \overline{e}_{n}(\eta) \int_{B} \vert \eta-\xi \vert \, \log(\vert \eta - \xi \vert) \, \tilde{v}_{1}(\xi) d\xi d\eta \\
&+& a \, \log(a) \int_{B} \, \overline{e}_{n}(\eta) \int_{B} \vert \eta-\xi \vert \,\tilde{v}_{1}(\xi) d\xi \, d\eta + \Big( \frac{-1}{2\pi}\log(k)+\Gamma \Big) \, \int_{B} \big(\tilde{v}_{1}+\tilde{v}_{2} \big) \, d\xi \, \int_{B} \overline{e}_{n} d\eta \\
&-& \frac{1}{2 \pi} \,\int_{B} \, \overline{e}_{n}(\eta) \int_{B} \log \vert (z_{1}-z_{2})+a(\eta - \xi) \vert \, \tilde{v}_{2}(\xi) \, d\xi \, d\eta + \mathcal{O}\Big( a \, \int_{B} \big( \tilde{v}_{1} + \tilde{v}_{2} \big) \, d\xi \Big) \\
&+& \int_{B} \overline{e}_{n}(\eta) \int_{B} \vert (z_{1}-z_{2})+a(\eta - \xi)\vert \, \log(\vert (z_{1}-z_{2})+a(\eta - \xi)\vert) \tilde{v}_{2}(\xi) \, d\xi \, d\eta \Bigg]_{:=\textit{error}}.\end{aligned}$$ The $\textit{error}$ part, with the help of Taylor’s formula, behaves as $\mathcal{O}\big( \vert \log(a) \vert^{1-h} \big) \vert <1,\overline{e}_{n}> \vert $ and we can write $$\begin{aligned}
\int_{B} \overline{e}_{n} \int_{B} \Phi_{0} \tilde{v}_{1} \, d\xi \, d\eta &=& <\tilde{v}_{1};\overline{e}_{n}> \, \int_{B} \overline{e}_{n} \int_{B} \Phi_{0} \overline{e}_{n} \, d\xi \, d\eta + \sum_{j \neq n} <\tilde{v}_{1};\overline{e}_{j}> \, \int_{B} \overline{e}_{n} \int_{B} \Phi_{0} \overline{e}_{j} d\xi \, d\eta \end{aligned}$$ $$\begin{aligned}
\phantom \quad &\stackrel{(\ref{ha})}=& <\tilde{v}_{1};\overline{e}_{n}> \,\bigg[ \frac{\lambda_{n}}{a^{2}} + \boldsymbol{\frac{1}{2\pi} \log(a) \Big(\int_{B} \overline{e}_{n} d\eta\Big)^{2}} \bigg] \stackrel{(\ref{whennneqm})}{+} \, \frac{1}{2\pi} \, \log(a) \, \int_{B} \overline{e}_{n} d\eta \sum_{j \neq n} <\tilde{v}_{1};\overline{e}_{j}> \int_{B} \overline{e}_{j} d\eta.\end{aligned}$$ We plug all this into the previous equation to obtain $$\begin{aligned}
<\tilde{v}_{1};\overline{e}_{n}> & - & \boldsymbol{\omega^{2} \, \mu_{0} \, \tau \, a^{2}} \, \Bigg[\boldsymbol{<\tilde{v}_{1};\overline{e}_{n}>} \, \int_{B} \overline{e}_{n}(\eta) \int_{B} \Phi_{0}(\eta,\xi) \overline{e}_{n}(\xi) \, d\xi \, d\eta \\
&& \qquad \qquad \quad + \sum_{j \neq n} <\tilde{v}_{1};\overline{e}_{j}> \, \int_{B} \overline{e}_{n}(\eta) \int_{B} \Phi_{0}(\eta,\xi) \overline{e}_{j}(\xi) \, d\xi \, d\eta \Bigg] = < \tilde{u}_{0},\overline{e}_{n}> \\ &-& \boldsymbol{\omega^{2} \, \mu_{0} \, \tau \, a^{2} \, \frac{1}{2\pi} \, \log(a)} \, \Bigg[\boldsymbol{<\tilde{v}_{1};\overline{e}_{n}> \,\int_{B} \overline{e}_{n} d\eta} + \sum_{j \neq n} <\tilde{v}_{1};\overline{e}_{j}> \, \int_{B} \overline{e}_{j} \, d\xi \, \Bigg] \, \boldsymbol{\int_{B} \overline{e}_{n} d\eta} \\ &+& \mathcal{O}(\vert \log(a) \vert^{-h}) \, \vert <1,\overline{e}_{n}> \vert.\end{aligned}$$ Next, we cancel the two terms given by the series together with those written in **bold**, and scale back the obtained formula to get $(\ref{W})$.
The result in $(\ref{W})$ also applies to the case $n = n_{0}$ with an error term of order $\mathcal{O}\big(\vert \log(a) \vert^{-h}\big)$.\
The next proposition improves the error term by keeping a more precise denominator.
We have $$\label{v&V}
<u_{2};e^{(i)}_{n_{0}}> = \frac{<u_{0};e^{(i)}_{n_{0}}>}{(1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}}) - \omega^{2} \, \mu_{0} \, \tau \, a^{2} \, \Phi_{0}(z_{1};z_{2}) \Big( \int_{B} \overline{e}_{n_{0}} \Big)^{2} } \; + \mathcal{O}(a), \qquad i=1,2.$$
In order to prove equality $(\ref{v&V})$, we take the scalar product of equation $(\ref{equa710})$ with $\overline{e}_{n_{0}}$ and, after simplifications, we get: $$\label{algsystm}
\begin{bmatrix}
(1-\omega^{2}\,\mu_{0}\,\lambda_{n_{0}}\, \tau) & -\omega^{2}\,\mu_{0}\,\tau\,a^{2} \, \Phi_{0}\, \Big( \int_{B} \overline{e}_{n_{0}} \Big)^{2} \\
-\omega^{2}\,\mu_{0}\,\tau\,a^{2} \, \Phi_{0} \, \Big( \int_{B} \overline{e}_{n_{0}} \Big)^{2} & (1-\omega^{2}\,\mu_{0}\,\lambda_{n_{0}}\, \tau)
\end{bmatrix}
\begin{bmatrix}
<\tilde{u}_{2};\overline{e}^{(1)}_{n_{0}}> \\
\\
<\tilde{u}_{2};\overline{e}^{(2)}_{n_{0}}>
\end{bmatrix}
=
\begin{bmatrix}
<\tilde{u}_{0};\overline{e}^{(1)}_{n_{0}}> + \mathcal{O}(\vert \log(a) \vert^{-h}) \\
\\
<\tilde{u}_{0};\overline{e}^{(2)}_{n_{0}}> + \mathcal{O}(\vert \log(a) \vert^{-h})
\end{bmatrix}$$ We denote by $det$ the determinant of the last matrix, i.e. $$\label{defdet}
det = \Big(1-\omega^{2}\,\mu_{0}\,\lambda_{n_{0}}\, \tau\Big)^{2} - \Big( \omega^{2}\,\mu_{0}\,\tau\,a^{2} \, \Phi_{0} \, \Big( \int_{B} \overline{e}_{n_{0}} \Big)^{2} \Big)^{2}, \quad \text{where} \quad \Phi_{0} = \Phi_{0}(z_{1};z_{2}).$$ Next, we check that, when we are close to the resonance, the determinant satisfies $det \neq 0$. For this, and by construction of $\omega^{2}$, we have $$1 - \omega^{2} \mu_{0} \tau \lambda_{n_{0}} = \mp \vert \log(a) \vert^{-h},$$ and the fact that $d \sim a^{\vert \log(a) \vert^{-h}}$ implies that $ \tau a^{2} \Phi_{0}(z_{1},z_{2}) \sim \frac{1}{2\pi} \, \vert \log(a) \vert^{-h}$. Plug this into $(\ref{defdet})$ to obtain $$det = \vert \log(a) \vert^{-2h} \, \Bigg[1 - \bigg( \omega^{2} \mu_{0} \, \frac{1}{2\pi} \, \big( \int \overline{e}_{n_{0}} \big)^{2} \bigg)^{2} \Bigg]
\stackrel{(\ref{H})}{=} \vert \log(a) \vert^{-2h} \, \left[1 - \frac{\left( 1 \pm \vert \log(a) \vert^{-h} \right)}{\left( 1+\frac{\tilde{\lambda}_{n_{0}} \vert \log(a) \vert^{-1}}{\frac{1}{2\pi} \, \big( \int \overline{e}_{n_{0}} \big)^{2}}\right)^{2}} \, \right]$$ From [**[Hypotheses]{}\[hyp\]**]{}, we deduce that $$\left(\frac{\tilde{\lambda}_{n_{0}} \vert \log(a) \vert^{-1}}{\frac{1}{2\pi} \, \big( \int \overline{e}_{n_{0}} \big)^{2}}\right) \sim \vert \log(a) \vert^{-1},$$ and then $$det = \vert \log(a) \vert^{-2h} \left[1 - \left( 1 \pm \vert \log(a) \vert^{-h} \right) \left( 1 + \vert \log(a) \vert^{-1} \right) \right] \sim \vert \log(a) \vert^{-3h}.$$ Since $det \neq 0$, the algebraic system $(\ref{algsystm})$ is invertible. We invert it and use the fact that $$<\tilde{u}_{0};\overline{e}^{(2)}_{n_{0}}> = <\tilde{u}_{0};\overline{e}^{(1)}_{n_{0}}> + \mathcal{O}(d),$$ to obtain $$\label{equa825}
<\tilde{u}_{2};\overline{e}^{(1)}_{n_{0}}> = \frac{<\tilde{u}_{0};\overline{e}^{(1)}_{n_{0}}>}{(1-\omega^{2}\,\mu_{0}\,\tau\,\lambda_{n_{0}})-\omega^{2}\, \mu_{0}\, \tau \, a^{2} \, \Phi_{0}(z_{1};z_{2}) \, \Big( \int_{B} \overline{e}_{n_{0}} \Big)^{2}}
+ \mathcal{O}\big(1\big),$$ and, after scaling, we get $(\ref{v&V})$.
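The inversion step leading to $(\ref{equa825})$ can be illustrated numerically: for the symmetric $2\times 2$ matrix of $(\ref{algsystm})$ with (nearly) equal right-hand sides, Cramer’s rule collapses to the single denominator $A-B$. The magnitudes below are illustrative assumptions only ($h=1/2$, $a=10^{-3}$, unit right-hand side, implicit constants set to $1$), not values from the paper.

```python
import math

# Sketch: invert the 2x2 system (algsystm) by Cramer's rule with
# illustrative near-resonance magnitudes.
a, h = 1e-3, 0.5
L = abs(math.log(a))
A = -L ** (-h)                   # 1 - w^2 mu0 tau lambda_{n0} (the "-" branch)
B = L ** (-h) / (2.0 * math.pi)  # w^2 mu0 tau a^2 Phi_0 (int_B e_{n0})^2
f = 1.0                          # common right-hand side <u_0; e_{n0}>

det = A * A - B * B              # as in (defdet); nonzero near the resonance
x1 = (A * f + B * f) / det       # first component by Cramer's rule
print(x1, f / (A - B))           # equal right-hand sides collapse to f/(A - B)
```

The printed values agree: with $f_{1}=f_{2}=f$, the first component of the solution is exactly $f/(A-B)$, which is the single-denominator form appearing in $(\ref{equa825})$.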
Estimation of the scattering coefficient $\textbf{C}$
-----------------------------------------------------
From $(\ref{A})$ we have: $$w = \omega^{2} \, \mu_{0} \, \tau \Big[I - \omega^{2} \, \mu_{0} \, \tau \, A_{0} \Big]^{-1}(1) \quad \text{or} \quad \frac{1}{\omega^{2} \, \mu_{0} \, \tau} \; \Big[I - \omega^{2} \, \mu_{0} \, \tau \, A_{0} \Big](w) = 1.$$ Hence $$<1,e_{n}> = \frac{1}{\omega^{2} \, \mu_{0} \, \tau} <e_{n} ; \big[I - \omega^{2} \, \mu_{0} \, \tau \, A_{0} \big](w)> = \frac{1}{\omega^{2} \, \mu_{0} \, \tau} \; \Big[<e_{n},w> - \omega^{2} \, \mu_{0} \, \tau \; \lambda_{n} <e_{n},w> \Big]$$ and then $$\label{A1}
<w,e_{n}> = \frac{\omega^{2} \, \mu_{0} \, \tau}{1-\omega^{2} \, \mu_{0} \, \tau \; \lambda_{n}} <1,e_{n}>.$$ The next lemma uses $(\ref{A1})$ to give a more precise value of $\textbf{C}$.
The coefficient $\textbf{C}$ can be approximated as $$\label{F}
\textbf{C} = \frac{\omega^{2} \, \mu_{0} \, \tau}{(1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}})} \, \Big( \int_{D} e_{n_{0}} \Big)^{2} + \mathcal{O}(\vert \log(a) \vert^{-1}).$$
We use the definition of $\textbf{C}$, given by $(\ref{R})$, to write $$\textbf{C} := \int_{D} w \, dx = \sum_{n} <w,e_{n}> \, <1,e_{n}>,$$ apply $(\ref{A1})$ to obtain $$\textbf{C} = \omega^{2} \, \mu_{0} \, \tau \, \Bigg[ \frac{1}{(1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}})} \, \Big( \int_{D} e_{n_{0}} \Big)^{2} + \sum_{n \neq n_{0}} \frac{1}{(1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n})} \, \Big( \int_{D} e_{n} \Big)^{2} \Bigg],$$ and, since the frequency $\omega$ is near $\omega_{n_{0}}$, and hence away from the other resonances, we have $$\Big\vert \sum_{n \neq n_{0}} \frac{1}{(1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n})} \, \Big( \int_{D} e_{n} \Big)^{2} \Big\vert \leq \sum_{n} \vert <1,e_{n}> \vert^{2} = \Vert 1 \Vert^{2}_{\mathbb{L}^{2}(D)} = \mathcal{O}\big(a^{2}\big).$$
From $(\ref{F})$, we see that $$\label{B}
\textbf{C} \sim \vert \log(a) \vert^{h-1}.$$ We deduce also the following formula: $$\label{C}
(1-\omega^{2} \, \mu_{0} \, \tau \, \lambda_{n_{0}}) = \textbf{C}^{-1} \, \omega^{2} \, \mu_{0} \, \tau \, \Big( \int_{D} e_{n_{0}} \Big)^{2} + \mathcal{O}(\vert \log(a) \vert^{-2h}).$$
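The scaling $(\ref{B})$ can be sanity-checked by combining the two magnitudes entering the leading term of $(\ref{F})$. The sketch below is illustrative only: it assumes $h=1/2$ and replaces the relations $\omega^{2}\mu_{0}\tau\,(\int_{D}e_{n_{0}})^{2} \sim \vert\log(a)\vert^{-1}$ and $1-\omega^{2}\mu_{0}\tau\lambda_{n_{0}} \sim \vert\log(a)\vert^{-h}$ by equalities.

```python
import math

def leading_C(a, h=0.5):
    # Leading term of (F) with the assumed near-resonance magnitudes:
    #   w^2 mu0 tau (int_D e_{n0})^2 ~ |log a|^{-1},
    #   1 - w^2 mu0 tau lambda_{n0} ~ |log a|^{-h}   (constants set to 1).
    L = abs(math.log(a))
    return (L ** -1.0) / (L ** -h)

for a in (1e-2, 1e-4, 1e-8):
    L = abs(math.log(a))
    print(a, leading_C(a), L ** -0.5)  # matches |log a|^{h-1} with h = 1/2
```

The two printed columns coincide, reflecting that the quotient of the two magnitudes is exactly $\vert\log(a)\vert^{h-1}$, as stated in $(\ref{B})$.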
Appendix {#the-hypotheses-justification}
========
To motivate the natural character of the hypotheses stated in [**[Hypotheses]{}**]{} \[hyp\], let us make the following observations:
1. We prove that $\lambda_{n}$ is bounded above by a term of order $a^{2} \, \vert \log(a) \vert$. For this, recalling and rescaling $(\ref{U})$ (see section \[appendixlemma\], in particular $(\ref{G})$), we obtain, for $a \ll 1$, $$\label{T}
\lambda_{n} = a^{2} \, \Big( {\tilde{\lambda}_{n}} + \frac{1}{2} \vert \log(a) \vert (\int_{B}\bar{e_n}(\xi) d\xi)^2 \Big),$$ where $${\tilde{\lambda}_{n}}:=\frac{1}{\Vert\tilde{e_n}\Vert_{\mathbb{L}^2(B)}^2}\int_B LP(\tilde{e_n})(\eta)~\tilde{e_n}(\eta) d\eta$$ and $\tilde{e}_{n}$ is the rescaling of an eigenfunction $e_n$ corresponding to $\lambda_n$. Take the absolute value in $(\ref{T})$ to obtain $$\vert \lambda_{n} \vert \leq a^{2} \, \left( \vert {\tilde{\lambda}}_{n} \vert + \frac{1}{2} \vert \log(a) \vert \, \vert <1;\bar{e}_{n}> \vert^{2} \right).$$ From the definition of $\tilde{\lambda}_{n}$, see $(\ref{lambdatilde})$, we have $\vert \tilde{\lambda}_{n} \vert \leq \Vert \Phi_{0} \Vert_{\mathbb{L}^{2}(B \times B)} < \infty $ and we use the Cauchy–Schwarz inequality to obtain $$\vert \lambda_{n} \vert \leq a^{2} \, \left( \Vert \Phi_{0} \Vert_{\mathbb{L}^{2}(B \times B)} + \frac{1}{2} \vert \log(a) \vert \, \vert B \vert^{2} \right) \lesssim a^{2} \, \vert \log(a) \vert.$$
2. For the lower bound, the situation is less clear. Nevertheless, we have the following results:
1. When the shape is a disc of radius $a$, we refer to (Theorem 4.1, [@RG]) for the existence of a sequence of eigenvalues given by $$\lambda_{k,j} = a^{2} \, \left[\mu_{j}^{(k)}\right]^{-2}, \quad k=0,1,2,\cdots \; j=1,2,\cdots$$ and the corresponding eigenfunctions given by $$u_{k,j}(r,\varphi) = \LARGE{\text{J}}_{k} \left( \mu_{j}^{(k)} \; r \; a^{-1} \right) \, e^{i \, k \, \varphi},$$ where $\LARGE{\text{J}}_{k}$ is the Bessel function of the first kind of order $k$ and $\mu_{j}^{(k)}$ are the roots of the following transcendental equation $$\begin{aligned}
\label{our equa}
\nonumber
k \LARGE{\text{J}}_{k}\left( \mu_{j}^{(k)} \right) + \frac{\mu_{j}^{(k)}}{2} \left( \LARGE{\text{J}}_{k-1}\left( \mu_{j}^{(k)} \right) - \LARGE{\text{J}}_{k+1}\left( \mu_{j}^{(k)} \right)\right) &=& 0, \qquad k=1,2,\cdots \\
\LARGE{\text{J}}_{0}\left( \mu_{j}^{(0)} \right) - \mu_{j}^{(0)} \, \log(a) \, \left( \LARGE{\text{J}}_{-1}\left( \mu_{j}^{(0)} \right)-\LARGE{\text{J}}_{1}\left( \mu_{j}^{(0)} \right) \right) &=& 0. \end{aligned}$$ We remark that (only) for $k = 0$, the associated eigenfunctions have a nonzero average[^11].\
Next, in order to make precise the behaviour of $\{ \lambda_{0,j} \}_{j \geq 1}$ with respect to $a$, we need to investigate the behaviour of the solutions $\mu_{j}^{(0)}$ of $(\ref{our equa})$. For this, we use the following properties of Bessel functions $$\LARGE{\text{J}}_{-1}(x)-\LARGE{\text{J}}_{1}(x) = 2 \LARGE{\text{J}}^{\prime}_{0}(x) = - 2 \LARGE{\text{J}}_{1}(x),$$ to write $(\ref{our equa})$ as $$\LARGE{\text{J}}_{0} \Big(\mu_{j}^{(0)}\Big) + 2 \, \log(a) \, \mu_{j}^{(0)} \LARGE{\text{J}}_{1} \Big(\mu_{j}^{(0)} \Big) = 0.$$ Set $\Psi(x) := \LARGE{\text{J}}_{0} ( x ) + 2 \, \log(a) \, x \, \LARGE{\text{J}}_{1} ( x )$ and use *Dixon’s* theorem, see [@Watson] page 480, to deduce that the roots of $\Psi$ are interlaced with those of $\LARGE{\text{J}}_{0}$, denoted by $\{ x_{0,j} \}_{j \geq 1}$, and those of $\LARGE{\text{J}}_{1}$, denoted by $\{ x_{1,j} \}_{j \geq 1}$. At this stage, we distinguish two cases:
- The roots of $\Psi$ exceeding $x_{0,1}$:
For this case, a direct application of *Dixon’s* theorem allows us to deduce that $$\forall j \geq 2, \; x_{k,j-1} < \mu_{j}^{(0)} < x_{k,j}, \quad k=0,1$$ and $$\forall j \geq 2, \; a^{2} \, x^{-2}_{k,j} < \lambda_{0,j} < a^{2} \, x^{-2}_{k,j-1}, \quad k=0,1,$$ since $\big\lbrace x_{k,j} \big\rbrace_{j \geq 1 \atop k=0,1}$ are independent of $a$, we deduce that $\lambda_{0,j}$ behaves as $a^{2}$.
- The root of $\Psi$ less than $x_{0,1}$:
The analysis of this case is more delicate. First, we observe that if, for a certain $x$, $\Psi(x) = 0$, then $\LARGE{\text{J}}_{0}(x) \neq 0$. Otherwise, we would also have $\LARGE{\text{J}}_{1}(x) = 0$, which is impossible as the zeros of $\LARGE{\text{J}}_{0}$ and $\LARGE{\text{J}}_{1}$ are disjoint; see *Bourget’s Hypothesis*, page 484, section 15.28 in [@Watson]. Hence the equation $\Psi(x) = 0$ can be rewritten as $$\label{log=F0}
\frac{1}{2 \, \log(a)} = \frac{-x \, \LARGE{\text{J}}_{1} ( x )}{\LARGE{\text{J}}_{0} ( x )} := \textbf{F}_{0}(x).$$ Clearly, $\textbf{F}_{0}$ is a smooth function on each interval not containing a zero of $\LARGE{\text{J}}_{0}$, and from [@landau], see equation 27, we deduce that it is also a decreasing function (see Figure $\ref{F0}$ for a schematic picture).
![Solving, for $x \in (0,\nu)$, $\textbf{F}_{0}(x)=1/(-4 \, \log(10))$.[]{data-label="F0"}](./F0.png)
![Solving, for $x \in (0,\nu)$, $\textbf{F}_{0}(x)=1/(-4 \, \log(10))$.[]{data-label="F0=alpha"}](./solequa.png)
So, if we restrict our study to $(0 , \nu)$ with $\nu < x_{0,1}$, we deduce that $\textbf{F}_{0}^{-1}$ exists and is continuous; then the equation $(\ref{log=F0})$ is solvable and the solution that we obtain is also small (see Figure $\ref{F0=alpha}$ for a numerical demonstration).\
Now, since $x$ is small, we use the asymptotic behaviour of $\textbf{F}_{0}$, see for instance (equation 25 in [@landau]), namely $\textbf{F}_{0}(x) \sim -x^{2}/2$, to write $(\ref{log=F0})$ as $$\frac{1}{2 \, \log(a)} \sim \frac{-x^{2}}{2},$$ and this implies that $ x \sim \Big( \log(1/a) \Big)^{\frac{-1}{2}}$. Finally $$\lambda_{0,1} \sim a^{2} \, \vert \log(a) \vert.$$
2. For the case of an arbitrary shape $D$, with $\vert D \vert = \vert B_a \vert$ where $B_a$ is the disc of radius $a$, and referring to (Theorem 2.5, [@MRDS]), we have $\Vert LP_{D} \Vert \leq \Vert LP_{B_a} \Vert$. From the definition of $\Vert LP_{D} \Vert$, we write this inequality as a Faber-Krahn type inequality $$\frac{1}{\lambda_{0,1}^2(D)} = \Vert LP_{D} \Vert \leq \Vert LP_{B_a} \Vert = \frac{1}{\lambda_1^2(B_a)} \quad \text{or equivalently} \quad \lambda_{0,1}(D) \geq \lambda_{1}(B_a).$$ We deduce the lower bound, and hence the behaviour, of the first eigenvalue $$\lambda_{0,1}(D) \sim a^2 \vert \log(a)\vert, \quad \forall \, a \ll 1.$$ In addition from $(\ref{T})$, we see that $$\left( \int_{D} e_{1} \right)^{2} = \frac{\lambda_{0,1}}{a^{2} \, \vert \log(a) \vert} + \mathcal{O}\big(\vert \log(a) \vert^{-1}\big)$$ and hence $$\left( \int_{D} e_{1} \right)^{2} \sim 1 \quad \text{for} \quad a \ll 1.$$
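The smallest root of $\Psi$ in $(\ref{log=F0})$ and the resulting behaviour of $\lambda_{0,1}$ for the disc can be reproduced numerically. The sketch below is an illustration, not the paper’s computation: it evaluates $J_{0}$, $J_{1}$ by truncated power series (our choice, adequate for small arguments) and locates the root by bisection, for the sample value $a=10^{-2}$.

```python
import math

def j0(x):
    # Bessel J_0 by its power series (accurate for 0 <= x <~ 5)
    s, term = 0.0, 1.0
    for m in range(40):
        s += term
        term *= -(x * x / 4.0) / ((m + 1.0) ** 2)
    return s

def j1(x):
    # Bessel J_1 by its power series
    s, term = 0.0, x / 2.0
    for m in range(40):
        s += term
        term *= -(x * x / 4.0) / ((m + 1.0) * (m + 2.0))
    return s

def first_root(a, lo=1e-6, hi=2.0):
    # Psi(x) = J0(x) + 2 log(a) x J1(x) is decreasing on (0, x_{0,1})
    # (since (x J1)' = x J0), so bisection finds its unique small root.
    psi = lambda x: j0(x) + 2.0 * math.log(a) * x * j1(x)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

a = 1e-2
x0 = first_root(a)
print(x0, math.log(1.0 / a) ** -0.5)             # root vs. asymptotic prediction
print(1.0 / (x0 * x0 * abs(math.log(a))))        # lambda_{0,1} / (a^2 |log a|)
```

The computed root agrees with the predicted $(\log(1/a))^{-1/2}$ to within a few percent, and the second printed ratio, $\lambda_{0,1}/(a^{2}\vert\log(a)\vert) = 1/(x_{0}^{2}\vert\log(a)\vert)$, is of order one, consistent with the stated behaviour $\lambda_{0,1} \sim a^{2}\vert\log(a)\vert$.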
[10]{}
M. Agranovsky and P. Kuchment,
A. Alsaedi, F. Alzahrani, D. P. Challa, M. Kirane and M. Sini, H. Ammari, D. Challa, M. Sini and P. A. Choudhury,
H. Ammari, A. Dabrowski, B. Fitzpatrick, P. Millien and M. Sini Volume 42, Issue 18, December 2019, Pages 6567-6579.
H. Ammari and J. Garnier and H. Kang and L. H. Nguyen and L. Seppecher
J. M. Anderson, D. Khavinson and V. Lomonosov, ,
G. Belizzi and O. M. Bucci. Microwave cancer imaging exploiting magnetic nanoparticles as contrast agent. , Vol. 58, N: 9, September 2011.
Y. Chen, I. J. Craddock, P. Kosmas. Volume: 57, Issue: 5, May 2010.
B. T. Cox, S. R. Arridge, and P. C. Beard, 23 (2007), pp. S95-S112
E. C. Fear, P. M. Meaney, M. A. Stuchly, IEEE Potentials, Vol. 22, n: 1, pp.12-18, 2003.
D. Finch, M. Haltmeier and Rakesh,
T. Kalmenov and D. Suragan,
P. Kuchment and L. Kunyansky, 2010, pp. 817-866.
P. Kuchment and L. Kunyansky,
L. J. Landau, .
W. Li and X. Chen, (2015) 10(2), 299-320.
W. Naetar and O. Scherzer, 7 (2014), pp. 1755-1774.
F. Natterer, .
A. Prost, F. Poisson and E. Bossy, arXiv:1501.04871v4
S. Qin, C. F. Caskey and K. W. Ferrara, Phys Med Biol. 2009 March 21; 54(6): R27
J. Rubinstein and Y. Pinchover, M. Ruzhansky, D. Suragan, .
O. Scherzer, , 2010.
J. D. Shea, P. Kosmas, B. D. Van Veen and S. C. Hagness, Volume 26, Number 7, pp 1-22, 2010.
P. Stefanov and G. Uhlmann, , 25 (2009). 075011.
F. Triki, M. Vauthrin. Mathematical modelization of the photoacoustic effect generated by the heating of metallic nanoparticles
G. N. Watson, , .
[^1]: Here, we describe the photoacoustic model assuming the TM-approximation of the electromagnetic field. The more realistic model is of course the full Maxwell system.
[^2]: We stated the model in the whole plane $\mathbb{R}^2$. However, we could also state it in a bounded domain supplemented with Dirichlet or Neumann boundary conditions.
[^3]: More exactly, using the expansion and the scales of the fundamental solution, we show that an eigenvalue of $A_{k}$ can be written as $$a^{2} \, \Big( {\tilde{\lambda}_{n}} + \frac{1}{2} \vert \log(a) \vert (\int_{B}\bar{e_n}(\xi) d\xi)^2 - \frac{1}{2} \log(k)(z) + \pi \, \Gamma \Big) + \mathcal{O}(a^{3}).$$
[^4]: Since $z_{1}$ and $z_{2}$ are sufficiently close, we make in $(\ref{pressure-tilde-expansion})$ an arbitrary choice of one of them, i.e. (\[pressure-tilde-expansion\]) does not distinguish between $z_{1}$ and $z_{2}$.
[^5]: We use the notation $v_{m} := u_{|_{D_{m}}}$ instead of $u_{m} := u_{|_{D_{m}}}$ to avoid confusion with $u_{0}, u_{1}$ and $u_{2}$ we defined before concerning the electric fields in the absence or the presence of one or two particles.
[^6]: The constant $\Gamma$ will be written as $\Gamma := \frac{i}{4}+\gamma$ where $\gamma$ is the Euler constant.
[^7]: Remember that we assumed that all nano-particles have the same electromagnetic properties.
[^8]: The constant 2$\pi$ in the denominator comes from the Poisson formula.
[^9]: For the definition of $p_{0}(t,x)$, see $(\ref{p0np})$.
[^10]: The dielectric-resonance that we want to excite is $\omega_{n_{0}}$ given by $$\label{exactMieresonance}
\omega_{n_{0}}^{2} = \frac{1}{\mu_{0} \, \lambda_{n_{0}} \, a^{-2} \, \vert \log(a) \vert^{-1}}.$$
[^11]: We can compute $\underset{D}{\int} u_{0,j} = \int_{0}^{2\pi} \int_{0}^{a} u_{0,j}(r,\varphi) \, r \, dr \, d\varphi = 2 \pi \, a^{2} \, \mathrm{J}_{1}\left(\mu_{j}^{(0)}\right) / \mu_{j}^{(0)}.$
[**Strong Traces Model of Self-Assembly Polypeptide Structures**]{}
[Gašper Fijavž]{}
\
e-mail: [[email protected]]{}
[Tomaž Pisanski]{}
\
e-mail: [[email protected]]{}
[Jernej Rus]{}\
\
e-mail: [[email protected]]{}
[**Abstract**]{}
A novel self-assembly strategy for polypeptide nanostructure design was presented in \[Design of a single-chain polypeptide tetrahedron assembled from coiled-coil segments, Nature Chemical Biology 9 (2013) 362–366\]. The first mathematical model (polypeptide nanostructure can naturally be presented as a skeleton graph of a polyhedron) from \[Stable traces as a model for self-assembly of polypeptide nanoscale polyhedrons, MATCH Commun. Math. Comput. Chem. 70 (2013) 317–330\] introduced stable traces as the appropriate mathematical description, yet we find them deficient in modeling graphs with either very small ($\le 2$) or large ($\ge 6$) degree vertices. We introduce *strong traces* which remedy both of the above mentioned drawbacks. We show that *every* connected graph admits a strong trace by studying a connection between strong traces and graph embeddings. Further we also characterize graphs which admit *parallel (resp. antiparallel) strong traces*.
Introduction {#sec:introd}
============
Recently Gradišar et al. [@gr-2013] presented a novel self-assembly strategy for polypeptide nanostructure design that represents a significant development in biotechnology. The main success of their research is the construction of a polypeptide self-assembling tetrahedron by concatenating $12$ coiled-coil-forming segments in a prescribed order. More precisely, a single polypeptide chain consisting of 12 segments was routed through $6$ edges of the tetrahedron in such a way that every edge was traversed exactly twice. In this way $6$ coiled-coil dimers were created and interlocked into a stable tetrahedral structure.
A polyhedron $P$ which is composed from a single polymer chain can be naturally represented with a graph $G(P)$ of the polyhedron. As in the self-assembly process every edge of $G(P)$ corresponds to a coiled-coil dimer, exactly two segments are associated with every edge of $G(P)$.
The first mathematical model was introduced in [@kl-2013], where the authors have shown that a polyhedral graph $P$ can be realized by interlocking pairs of polypeptide chains if its corresponding graph $G(P)$ contains a stable trace (to be defined later).
We find that the mathematical model introduced in [@kl-2013] has two important deficiencies:
1. it does not account for vertices of degree $\le 2$, and
2. it does not successfully model vertices of degree $\ge 6$.
The model proposed in this paper settles the above issues. On one hand it successfully extends to graphs with smaller vertex degrees. Even though for every polyhedron $P$ its graph $G(P)$ has minimum degree $\ge 3$, the model should also include graphs of smaller degrees. It is plausible that an application may require the construction of polypeptide nanostructures with reactive parts pendant to the main body of the polyhedral structure.
Now (2) touches the question of defining vertices in our structure. An edge in a graph is defined via identifying pairs of segments along a walk $W$. A vertex on the other hand is only defined implicitly: pairs of segments/edges that lie consecutively on this walk should meet in a common endvertex.
In the case where no vertex in $G$ has degree $\ge 6$ the procedure — (i) find a stable trace $W$ in $G$ and (ii) identify pairs of edges along $W$ and fold the resulting structure into a graph — shall produce the initial graph $G$. However if $G$ has a vertex of degree larger than $6$ this may not be the case. A stable trace in $G$ may fold to a graph different from $G$, as a vertex of degree $\ge 6$ may indeed split into a collection of independent vertices of degree $\ge 3$, see also Fig. \[fig:repetition\].
A *strong trace* in a graph, our key object (to be defined later), successfully resolves both above issues. In one sweep we can model graphs with vertices of both low and high degrees. What is more, strong traces admit a natural connection to embeddings of graphs in higher surfaces.
Our main results state that every connected graph admits a strong trace, and can therefore be correctly realized by folding its strong trace by edge identifications (Theorems \[thm:strong\] and \[thm:realize\]).
In what follows we use Section \[sec:double\] to describe some basic and necessary tools from graph theory. In Section \[sec:main\] we connect strong traces and embeddings of graphs and ultimately prove our main results. In Sections \[sec:anti\] and \[sec:parallel\] we generalize two additional concepts from [@kl-2013; @rus-2013] — antiparallel strong traces, parallel strong traces and also parallel $d$-stable traces.
Double traces {#sec:double}
=============
All graphs considered in this paper will be connected and finite. We denote the degree of a vertex $v$ by $d_G(v)$. The minimum and the maximum degree of $G$ will be denoted by $\delta(G)$ and $\Delta(G)$, respectively.
If $v$ is a vertex then $N(v)$ denotes set of vertices adjacent to $v$, and $E(v)$ is the set of edges incident with $v$. If $A$ is a set of vertices then $E(v,A)$ denotes the collection of edges incident with both $v$ and a vertex from $A$.
A *walk* in $G$ is an alternating sequence $$W=v_0 e_1 v_1 \ldots v_{\ell-1} e_\ell v_\ell,
\label{eq:walk}$$ so that for every $i=1,\ldots,\ell$, $e_i$ is an edge between vertices $v_{i-1}$ and $v_i$. We say that $W$ *passes through* or *traverses* edges and vertices contained in the sequence (\[eq:walk\]). The length of a walk is the number of edges in the sequence, and we call $v_0$ and $v_\ell$ the *endvertices* of $W$. A walk is *closed* if its endvertices coincide.
An *Euler tour* in $G$ is a closed walk which traverses every edge of $G$ exactly once. $G$ is an *Eulerian graph* if it admits an Euler tour. The fundamental Euler’s theorem asserts that a (connected) graph $G$ is Eulerian if and only if all of its vertices are of even degree.
A *double trace* in $G$ is a closed walk which traverses every edge of $G$ exactly twice. The next result essentially goes back to Euler and has since been observed by various authors.
\[prop:double\] Every connected graph $G$ has a double trace.
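Proposition \[prop:double\] is constructive: doubling every edge makes all degrees even, so the doubled multigraph is Eulerian, and an Euler tour of it is a double trace of $G$. The following Python sketch illustrates this by running Hierholzer's algorithm on the doubled multigraph; the function name and the edge-list representation are ours, not from the cited sources.

```python
from collections import defaultdict

def double_trace(edges):
    """Closed walk traversing every edge of a connected graph exactly twice.

    Double every edge, then run Hierholzer's algorithm on the resulting
    Eulerian multigraph; edge copies carry unique ids so that each copy
    is used exactly once."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        for copy in (0, 1):
            adj[u].append((v, (i, copy)))
            adj[v].append((u, (i, copy)))
    used = set()
    stack, walk = [edges[0][0]], []
    while stack:
        u = stack[-1]
        # discard edge copies already consumed from the other endpoint
        while adj[u] and adj[u][-1][1] in used:
            adj[u].pop()
        if adj[u]:
            v, eid = adj[u].pop()
            used.add(eid)
            stack.append(v)
        else:
            walk.append(stack.pop())
    return walk[::-1]
```

For the triangle $K_3$ the returned walk has $2 \cdot 3$ steps and traverses each of the three edges exactly twice.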
Let $W$ be a double trace of length $\ell$, let $v \in V(G)$, and let $N \subseteq N(v)$. We say that $W$ has an *$N$-repetition at $v$* if the following condition holds: $$\text{\emph{for every $i \in \{0,\ldots,\ell-1\}$: if $v=v_i$ then $v_{i+1} \in N$ if and only if $v_{i-1} \in N$.}}
\label{eq:repetition}$$ Intuitively $W$ has an $N$-repetition at $v$ if whenever $W$ visits $v$ coming from a vertex in $N$ it also returns to a vertex of $N$. Let us also note that we treat a double trace as a closed walk, with indices taken modulo $\ell$. This implies that $v_1$ is the vertex immediately following $v_\ell$.
An $N$-repetition (at $v$) is a *$d$-repetition* if $|N|=d$, and a $d$-repetition will also be called a repetition *of order $d$*. An $N$-repetition at $v$ is *trivial* if $N=\emptyset$ or $N=N(v)$. Clearly if $W$ has an $N$-repetition at $v$, then it also has an $N(v)\setminus N$-repetition at $v$. We shall call this observation *symmetry of repetitions*.
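Condition (\[eq:repetition\]) is easy to verify mechanically. A minimal Python sketch, under our own assumption that a double trace is given as a list of vertices with the initial vertex repeated at the end:

```python
def has_N_repetition(walk, v, N):
    """Does the closed walk have an N-repetition at v?

    walk is a list of vertices with walk[0] == walk[-1]; the condition
    requires that whenever the walk enters v from a vertex of N it also
    leaves towards a vertex of N (indices taken modulo the length)."""
    N = set(N)
    seq = walk[:-1]                     # drop the repeated initial vertex
    l = len(seq)
    return all(
        (seq[(i - 1) % l] in N) == (seq[(i + 1) % l] in N)
        for i, x in enumerate(seq) if x == v
    )
```

On the double trace $c\,a\,c\,b\,c\,d\,c$ of the star $K_{1,3}$, the repetitions at the centre $c$ are exactly the trivial ones.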
In [@kl-2013] a $1$-repetition at $v$ was named a *retracing (at a vertex $v$)*, and a $2$-repetition at $v$ was denoted as a *repetition at a vertex $v$*. Note that in this paper a *repetition at $v$* can be of order different than $2$.
We call a double trace without nontrivial repetitions of order $\le d$ a *$d$-stable trace*, extending the terms used in [@kl-2013] (where a $1$-stable trace was named a *proper trace* and the term *stable trace* was used to name $2$-stable traces). Graphs which admit $1$-stable traces were independently characterized by Sabidussi [@sa-1977] and later by Eggleton and Skilton [@eg-1984]. Graphs admitting $2$-stable traces were recently characterized in [@kl-2013]:
\[thm:1stable\] [[@sa-1977], [@eg-1984 Theorem $9$]]{} A connected graph $G$ admits a $1$-stable trace if and only if $\delta(G) > 1$.
\[thm:2stable\] [[@kl-2013 Theorem $3.1$]]{} A connected graph $G$ admits a $2$-stable trace if and only if $\delta(G) > 2$.
Note that a vertex $v$ of degree $d$ implies that every double trace $W$ in $G$ has a repetition of order $d$. A leaf in a graph necessarily implies no double trace is $1$-stable, and similarly, a vertex $v$ of degree $2$ implies that a repetition of order $2$ is present in every double trace.
Our key object in this paper is a *strong trace*, which is a double trace without nontrivial repetitions. Observe that in a graph $G$ with $\delta(G) \ge 3$ every strong trace is $2$-stable. If also $\Delta(G) \le 5$ then every $2$-stable trace is also a strong one.
However, if $v$ is a vertex of degree at least $6$, then a stable trace $W$ may have a $3$-repetition at $v$, see Fig. \[fig:repetition\].
*(Figure \[fig:repetition\]: a double trace at a vertex $v$ of degree $6$ with a $3$-repetition: whenever the trace enters $v$ from one of the three upper neighbours it returns to an upper neighbour, and likewise for the three lower neighbours.)*
Our main results are the following theorems:
\[thm:strong\] Every connected graph $G$ admits a strong trace.
Now Theorem \[thm:strong\] implies:
\[thm:realize\] Every connected graph $G$ can be (at least in theory) constructed from a single coiled-coil-forming segment.
A weaker version of Theorem \[thm:realize\] (limited to graphs of polyhedra) was stated in [@kl-2013]. The classical theorem of Steinitz [@ste-1922] states that every 3-connected planar graph $G$ is isomorphic to a graph of a polyhedron — $G=G(P)$ for some polyhedron $P$ — and vice versa, for every polyhedron $P$ its graph $G(P)$ is a planar $3$-connected graph. Now $3$-connectivity of $G$ implies $\delta(G) \ge 3$, a condition heavily used in [@kl-2013].
We shall prove Theorem \[thm:strong\] in the next section after establishing the connection between strong traces and embeddings of graphs in surfaces.
Graph embeddings and strong traces {#sec:main}
==================================
In this section we establish the duality between embeddings of graphs in surfaces and strong traces in graphs. We shall first cover the necessary material on combinatorial embeddings of graphs. For more detail on the topic see [@moh-2001].
A *(combinatorial) embedding* of a graph $G$ in a surface $\Sigma$ is a pair $(\Pi, \lambda)$, where $\Pi = \{ \pi_v \mid v \in V(G)\}$ so that for every vertex $v \in V(G)$ $\pi_v$ is a *cyclic permutation* of $E(v)$, and $\lambda : E(G) \rightarrow \{-1,1\}$. We shall call $\Pi$ the *rotation system* and $\pi_v \in \Pi$ the *local rotation around $v$*, whereas $\lambda$ is called the *signature (of edges)*.
The permutation $\pi_v$ describes the clock-wise ordering of edges emanating from $v$. For a pair of adjacent vertices $uv$ the signature $\lambda(uv)$ encodes the possible match of clockwise orientations $\pi_u$ and $\pi_v$: $\lambda(uv)=1$ if and only if the clockwise orientation around $u$ matches the one around $v$ when traversing the edge $uv$. A *facial walk* of $(\Pi, \lambda)$ is a closed walk in $G$ obtained by the following procedure. We start at an arbitrary vertex $u$, choose an arbitrary incident edge $uv$ and an initial signature value $\lambda_0 \in \{-1,1\}$. Now we repeat the following steps: move along the chosen edge $uv$, multiply the signature $\lambda_0$ by the signature of a traversed edge $\lambda(uv)$, and choose the next edge $vw$ so that either $\pi^{-1}_v(uv)=vw$ or $\pi_v(uv)=vw$ depending on whether $\lambda_0=1$ or $\lambda_0=-1$, respectively. We terminate the procedure when (i) we reach $u$, (ii) the next edge to travel is the initial edge $uv$, and (iii) $\lambda_0$ equals the initially chosen value. We consider two facial walks the same if they only differ in respective initial vertices and/or their orientations.
The surface $\Sigma$ is uniquely determined by the combinatorial embedding $(\Pi, \lambda)$. $\Sigma$ is orientable if $G$ contains no cycle with an odd number of edges having negative signature, and is nonorientable if there exists a cycle $C$ having an odd number of edges with negative signature. The genus of $\Sigma$ is determined by the number of facial walks of $(\Pi, \lambda)$.
The sense of orientation changes at every edge with negative signature when traveling along $C$. If the number of changes along $C$ is odd, a narrow strip around $C$ is homeomorphic to the Möbius band. As for faces, decreasing the number of facial walks yields surfaces of higher genera.
Two embeddings $(\Pi, \lambda)$ and $(\Pi', \lambda')$ are *equivalent* if one can be obtained from the other by repetitively replacing a single local rotation at $v$ by its inverse and at the same time altering signatures of every edge emanating from $v$.
Observe that every edge $uv$ of $G$ appears exactly twice in the collection of facial walks of $G$ in an embedding $(\Pi, \lambda)$.
An embedding $(\Pi, \lambda)$ determines the collection of facial walks, but the converse holds as well. A collection of closed walks ${\cal W}=\{W_1,\ldots,W_k\}$ such that every edge $uv \in E(G)$ appears exactly twice in $\cal W$ determines the embedding $(\Pi, \lambda)$ up to equivalence: a sequence $e v e'$ along a facial walk implies that $e$ and $e'$ are consecutive in $\pi_v$, and a sequence $e v e' v' e''$ along a facial walk determines the signature of $e'$: $\lambda(e')=1$ if and only if either $\pi_v(e)=e'$ and $\pi_{v'}(e')=e''$ or $\pi_{v'}(e'')=e'$ and $\pi_v(e')=e$.
An alternative way to represent the surface $\Sigma$ is by taking its *polygonal schema*: take a collection of disks, one per each facial walk in ${\cal W}=\{W_1,\ldots,W_k\}$, and make identification along their borders according to pairs of edges in $\cal W$.
It is known that $\Sigma$ is orientable if the facial walks in $\cal W$ can be chosen in such a way that every edge is traversed twice in opposite directions.
We proceed with a basic result on embeddings of connected graphs. A *$k$-face embedding* is an embedding with exactly $k$ faces (facial walks). The next theorem was proven independently by Edmonds [@ed-1965] and later by Pisanski [@pi-1978].
\[thm:1face\] [[@ed-1965], [@pi-1978]]{} Every connected graph $G$ admits a $1$-face embedding in some surface $\Sigma$.
[[**Proof. **]{}]{}Let $(\Pi,\lambda)$ be a combinatorial embedding of $G$ with the smallest number of facial walks. If the number of facial walks is at least $2$, then some edge $e=uv$ is contained in a pair of distinct facial walks $W_1$ and $W_2$. We claim that changing the signature of $e$ reduces the number of facial walks by one.
We may assume that $W_1$ and $W_2$ traverse $e$ in the same direction. Let us start walking along $W_1$. Once the signature of $e$ is changed, continuing along $e$ routes our walk along $W_2$, then back to $e$, where it continues along $W_1$. This implies that the walks $W_1$ and $W_2$ merge into a single facial walk in the adjusted embedding, see Fig. \[fig:construction1\]. The remaining facial walks clearly do not change.
[$\square$ ]{}
An easy consequence of Theorem \[thm:1face\] is the classical theorem of Ringel.
\[thm:1face:non\] [[@ri-1977 Theorem $13$], [@st-1978 Theorem $8$]]{} Every connected graph $G$ which is not a tree has a $1$-face embedding in some nonorientable surface.
[[**Proof. **]{}]{}Assume that $G$ is not a tree and let $(\Pi,\lambda)$ be a $1$-face embedding of $G$ in some surface $\Sigma$. Such an embedding exists by Theorem \[thm:1face\].
Assume that $\Sigma$ is orientable, and let $W$ be the (only) facial walk which traverses every edge twice, once in every direction. As $G$ is not a tree there exists an edge $e=u_1v_1$ which is not a cutedge. We claim that changing the signature of $e$ produces a $1$-face embedding $(\Pi',\lambda')$ of $G$ into a nonorientable surface $\Sigma'$.
Let us denote $$W = u_0 \ldots f_1 u_1 e v_1 e_2 v_2 \ldots v_k e_k v_1 e u_1 g_1 \ldots u_0.$$ Altering the signature of $e$ yields an alternative embedding whose only facial walk equals $$W' = u_0 \ldots f_1 u_1 e v_1 e_k v_k \ldots v_2 e_2 v_1 e u_1 g_1 \ldots u_0$$ obtained by reversing the subwalk between occurrences of $e$.
As $e$ is not a cutedge there exists a cycle $C$ passing through $e$, which contains an odd number of edges whose $\lambda'$ signature is negative. Hence $\Sigma'$ is a nonorientable surface. [$\square$ ]{}
Let $$W=v_0 e_1 v_1 \ldots v_{\ell-1} e_\ell v_\ell$$ be a double trace in $G$. Fix a vertex $v \in V(G)$ and let $E(v)$ be the set of edges emanating from $v$. Let us build a $2$-regular graph (a union of cycles) $F_{v,W}$, having $E(v)$ as its vertex set by making edges $e,e' \in E(v)$ adjacent if $e$ and $e'$ are consecutive edges along $W$ (where a $1$-repetition at $v$ constructs a loop and a $2$-repetition gives rise to a pair of parallel edges in $F_{v,W}$). The graph $F_{v,W}$ is also called the *vertex figure of $v$* (with respect to a double trace $W$).
The connection between vertex figures and graph embeddings is best explained via the following proposition.
\[prp:strong:vfigure\] Let $G$ be a connected graph and $W$ a double trace in $G$. Then $W$ is a strong trace if and only if every vertex figure $F_{v,W}$ is a single cycle.
[[**Proof. **]{}]{}Assume first that $W$ is not strong. Then there exists a nontrivial $N$-repetition at some vertex $v \in V(G)$. Let $N' = N(v) \setminus N$, which is also nonempty. We claim that the vertex figure $F_{v,W}$ contains at least two cycles.
Let $e$ be an edge incident with $v$ whose other endvertex lies in $N$. In the vertex figure $F_{v,W}$ the edge $e$ can only be adjacent to an edge from $E(v,N)$, and consequently none of the edges $e' \in E(v,N')$ lies in the same cycle of $F_{v,W}$ as $e$.
For the converse, let $W$ be a strong trace, and let us pick an arbitrary vertex $v$. Assume that $F_{v,W}$ contains a pair of disjoint cycles $C_1$ and $C_2$. Let $N$ be the set of endvertices of edges from $C_1$ different from $v$. Now $W$ contains an $N$-repetition at $v$, as entering $v$ from $N$ implies that $W$ also exits towards a vertex from $N$. As $N \ne \emptyset$ and $N \ne N(v)$ we have a nontrivial repetition at $v$ which is absurd. [$\square$ ]{}
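Proposition \[prp:strong:vfigure\] gives a direct computational test for strongness: build every vertex figure and check that it is connected (being $2$-regular, a connected vertex figure is a single cycle). A Python sketch of this test, assuming a simple graph so that an edge is determined by its two endpoints:

```python
def is_strong_trace(walk):
    """Test a closed double trace (walk[0] == walk[-1]) of a simple graph:
    the trace is strong iff every vertex figure F_{v,W} is a single cycle,
    i.e. connected (Proposition prp:strong:vfigure)."""
    seq = walk[:-1]
    l = len(seq)
    for v in set(seq):
        # adjacencies of the vertex figure: consecutive edge pairs at v
        pairs = [
            (frozenset({seq[(i - 1) % l], v}), frozenset({v, seq[(i + 1) % l]}))
            for i, x in enumerate(seq) if x == v
        ]
        verts = {e for p in pairs for e in p}
        comp = {next(iter(verts))}
        changed = True
        while changed:                  # grow one component of the figure
            changed = False
            for e, f in pairs:
                if (e in comp) != (f in comp):
                    comp |= {e, f}
                    changed = True
        if comp != verts:               # figure splits: nontrivial repetition
            return False
    return True
```

For instance, the double trace $0\,1\,2\,0\,2\,1\,0\,3\,4\,0\,4\,3\,0$ of two triangles glued at the vertex $0$ is not strong: it has a nontrivial repetition at $0$, and the vertex figure $F_{0,W}$ splits into several components.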
Assume that $F_{v,W}$ is a single cycle. An orientation of $F_{v,W}$ can be interpreted as a cyclic permutation $\pi_v$ of $E(v)$. What is more, if every vertex figure is a single cycle, the collection of cyclic permutations $\Pi=\{\pi_v \mid v \in V(G)\}$ is the first component of an embedding, whose only facial cycle equals $W$.
To sum it all up. By Theorem \[thm:1face\] every connected graph admits a $1$-face embedding $(\Pi,\lambda)$ into some closed surface $\Sigma$. The only facial cycle $W$ of this embedding is a double trace, and as every vertex figure $F_{v,W}$ is a single cycle, Proposition \[prp:strong:vfigure\] implies $W$ is strong. This completes the proof of Theorem \[thm:strong\].
Theorem \[thm:strong\] easily implies the following proposition, which in turn implies both Theorem \[thm:1stable\] and Theorem \[thm:2stable\].
\[prp:nstable\] Let $G$ be a connected graph. Then $G$ admits a $d$-stable trace if and only if $\delta(G)>d$.
[[**Proof. **]{}]{}It is enough to note that a strong trace in $G$ is $d$-stable, provided that no vertex in $G$ has degree $\le d$. [$\square$ ]{}
Antiparallel strong traces {#sec:anti}
==========================
Let $W$ be a double trace in $G$. As mentioned in Section \[sec:introd\] every edge $e=uv$ of graph $G$ corresponds to a coiled-coil dimer and is thus traversed exactly twice in strong trace and $d$-stable trace $W$. If $W$ traverses $e$ in the same direction twice (either both times from $u$ to $v$ or both times from $v$ to $u$) then we call $e$ a [*parallel edge*]{} (with respect to $W$), otherwise $e$ is an [*antiparallel edge*]{}. A double trace $W$ is a [*parallel double trace*]{} if every edge of $G$ is parallel and an [*antiparallel double trace*]{} if every edge of $G$ is antiparallel.
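Both notions are simple to check on a concrete double trace. An illustrative Python sketch (again under our assumption of a simple graph given by a closed vertex list):

```python
from collections import Counter

def classify_edges(walk):
    """Map each edge of a double trace to 'parallel' (both traversals in
    the same direction) or 'antiparallel' (once in each direction)."""
    steps = Counter(zip(walk, walk[1:]))    # directed edge traversals
    kinds = {}
    for (u, v), count in steps.items():
        e = frozenset({u, v})
        if e not in kinds:
            kinds[e] = 'parallel' if count == 2 else 'antiparallel'
    return kinds
```

The trace $0\,1\,2\,0\,1\,2\,0$ of $K_3$ is parallel, while the trace $0\,1\,0\,2\,0$ of a path is antiparallel.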
The motivation for this concept also comes from self-assembly nanostructure design [@gr-2013]. Parallel double traces represent polyhedra in which the two coiled-coil-forming segments on every edge are aligned in the same direction, while antiparallel double traces represent polyhedra in which the two coiled-coil-forming segments on every edge are aligned in opposite directions. Because of the apparent lack of polypeptide pairs which form antiparallel coiled-coil dimers [@gr-2013], an especially detailed study of the first type would be of great use. In this section we discuss antiparallel strong traces and turn to parallel strong traces in the next section.
The main result of this section can be read as follows.
\[thm:santi\] A graph $G$ admits an antiparallel strong trace strong trace if and only if $G$ has a spanning tree $T$ such that each connected component of $G - E(T)$ has an even number of edges.
In the rest of this section we shall prove Theorem \[thm:santi\].
A connection between antiparallel double traces and embeddings of graphs was (to some extent) already observed in [@sk-1990] and [@th-1990].
\[thm:anti-embedding\] A graph $G$ admits an antiparallel strong trace if and only if $G$ has an $1$-face embedding in some orientable surface.
[[**Proof. **]{}]{}Assume first that $G$ admits an antiparallel strong trace $W$. Now $W$ represents a unique facial walk of $G$ in an embedding $(\Pi, \lambda)$ and for every $v \in V(G)$ the vertex figure $F_{v,W}$ is a single cycle. Therefore $G$ has a $1$-face embedding in some surface $\Sigma$. Because every edge in $W$ is traversed twice in opposite directions, $\Sigma$ is orientable.
Conversely, let $(\Pi, \lambda)$ be a $1$-face embedding of $G$ in some orientable surface $\Sigma$. A $1$-face embedding $(\Pi, \lambda)$ determines a unique facial walk $W$. Clearly the vertex figure $F_{v,W}$ is a single cycle for every $v \in V(G)$ and Proposition \[prp:strong:vfigure\] implies that $W$ is a strong trace in $G$. Because $\Sigma$ is orientable, every edge in $W$ is traversed twice in opposite directions, and is therefore antiparallel. [$\square$ ]{}
Xuong characterized graphs which admit embeddings in an orientable surface with at most 2 faces [@xu-1979]. The [*Betti number*]{} of a graph $G$ is defined as $\beta(G) = |E(G)| - |V(G)| + 1$. Observe also, as orientable surfaces have even Euler characteristics, that the number of faces in an orientable embedding of a graph $G$ is of different parity than its Betti number $\beta(G)$.
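The parity observation follows from Euler's formula: an embedding in the orientable surface of genus $g$ satisfies $|V(G)| - |E(G)| + F = 2 - 2g$, hence $F = \beta(G) + 1 - 2g$, so $F$ and $\beta(G)$ indeed have different parities. A small illustrative sketch of this arithmetic:

```python
def betti(n_vertices, n_edges):
    """Betti number of a connected graph: beta(G) = |E| - |V| + 1."""
    return n_edges - n_vertices + 1

def faces(betti_number, genus):
    """Number of faces of an embedding in the orientable surface of the
    given genus, from Euler's formula: F = beta + 1 - 2g."""
    return betti_number + 1 - 2 * genus
```

For example, $K_3$ has $\beta = 1$ and its planar (sphere) embedding has $2$ faces, one parity apart.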
\[thm:upper\] [[@xu-1979 Theorem $2$]]{} A connected graph $G$ with even (odd) Betti number has an embedding in an orientable surface with at most 2 faces if and only if it contains a spanning tree $T$ such that all (all but one) of the connected components of $G - E(T)$ have an even number of edges.
A special case of Theorem \[thm:upper\] was later presented in [@beh-1979] and [@th-1995]:
\[thm:strictly-upper\] A connected graph $G$ has a $1$-face embedding in an orientable surface if and only if $G$ has a spanning tree $T$ such that each connected component of $G - E(T)$ has an even number of edges.
To sum it all up. By Theorem \[thm:anti-embedding\] a connected graph $G$ admits an antiparallel strong trace if and only if $G$ has a $1$-face embedding in some orientable surface. By Theorem \[thm:strictly-upper\] the latter is true if and only if $G$ has a spanning tree $T$ such that each connected component of $G - E(T)$ has an even number of edges. This completes the proof of Theorem \[thm:santi\].
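For small graphs the spanning-tree condition of Theorem \[thm:santi\] can be checked by brute force over all candidate spanning trees. The following Python sketch is ours and exponential in $|E(G)|$, so it is meant only for small examples:

```python
import itertools
from collections import defaultdict

def admits_antiparallel_strong_trace(vertices, edges):
    """Brute-force test of Theorem thm:santi: does the connected graph
    have a spanning tree T such that every component of G - E(T) has an
    even number of edges?"""
    n = len(vertices)

    def find(parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for tree in itertools.combinations(edges, n - 1):
        # a set of n-1 acyclic edges on n vertices is a spanning tree
        parent = {v: v for v in vertices}
        acyclic = True
        for u, v in tree:
            ru, rv = find(parent, u), find(parent, v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if not acyclic:
            continue
        # group the co-tree edges into components of G - E(T)
        cotree = [e for e in edges if e not in tree]
        parent = {v: v for v in vertices}
        for u, v in cotree:
            parent[find(parent, u)] = find(parent, v)
        count = defaultdict(int)
        for u, v in cotree:
            count[find(parent, u)] += 1
        if all(c % 2 == 0 for c in count.values()):
            return True
    return False
```

For instance, $K_3$ has Betti number $1$, so every spanning tree leaves a single co-tree edge (an odd component), and $K_3$ admits no antiparallel strong trace; $K_4$ minus an edge does.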
Already in $1895$ Tarry [@tarry-1895] observed that every graph admits an antiparallel double trace. Almost a hundred years later Thomassen [@th-1990] characterized graphs that admit antiparallel $1$-stable traces (thus solving a problem posed by Ore [@ore-1951]):
\[thm:1anti\] [[@th-1990 Theorem $3.3$]]{} A graph $G$ admits an antiparallel $1$-stable trace if and only if $\delta(G) > 1$ and $G$ has a spanning tree $T$ such that each component of $G-E(T)$ either has an even number of edges or contains a vertex $v$ with $d_G(v)\ge 4$.
It would be interesting to characterize graphs which admit antiparallel $d$-stable traces for every integer $d$. It has been observed that a connection exists between graphs which admit antiparallel $d$-stable traces and pseudo-surfaces (for more on pseudo-surfaces, see [@pot-2003]). Therefore the same approach as used for the characterization of graphs which admit antiparallel strong traces cannot be applied. Thus we pose:
Characterize graphs that admit an antiparallel $d$-stable trace for $d > 1$.
Parallel strong traces {#sec:parallel}
======================
We conclude with a characterization of graphs admitting parallel strong traces and parallel $d$-stable traces. The next proposition, observed in [@kl-2013], easily follows by traversing an Euler tour of the graph twice.
\[prp:parallel\] [[@kl-2013 Proposition $5.4$]]{} A graph $G$ admits a parallel $1$-stable trace if and only if $G$ is Eulerian.
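The construction behind Proposition \[prp:parallel\] can be sketched in one line: traverse an Euler tour twice, so that every edge is used exactly twice and both times in the same direction (the Euler tour itself is assumed to be supplied as input):

```python
from collections import Counter

def parallel_trace_from_euler_tour(tour):
    """Traverse a closed Euler tour twice; every edge is then used exactly
    twice and both times in the same direction: a parallel double trace."""
    return tour + tour[1:]
```

Applied to the Euler tour $0\,1\,2\,0$ of $K_3$ this yields the parallel double trace $0\,1\,2\,0\,1\,2\,0$.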
In [@rus-2013] a similar theorem for parallel $2$-stable traces was proven. Observe that in an Eulerian graph $G$ the condition $d_G(v) \ge 3$ is equivalent to $d_G(v) \ge 4$.
\[thm:parallel\] [[@rus-2013 Theorem $2.2$]]{} A graph $G$ admits a parallel $2$-stable trace if and only if $G$ is Eulerian and $\delta(G) \ge 3$.
Our main result in this section is the following theorem, whose immediate corollary Theorem \[thm:nparallel\] easily implies both Proposition \[prp:parallel\] and Theorem \[thm:parallel\].
\[thm:parallel:strong\] Let $G$ be a connected graph. $G$ admits a parallel strong trace if and only if $G$ is Eulerian.
[[**Proof. **]{}]{}If $G$ is not Eulerian then $G$ does not admit a parallel double trace — the number of times a double trace enters a vertex $v$ of odd degree is on one hand odd (as it is equal to the number of times a double trace leaves) and even (as every edge incident with $v$ is either used twice or $0$ times for entering $v$), which is absurd.
For the converse direction let us consider a parallel double trace $W$ so that the collection of vertex figures $\{ F_{u,W} \mid u \in V(G)\}$ cumulatively has as few cycles as possible. If every vertex figure contains exactly one cycle, then by Proposition \[prp:strong:vfigure\] $W$ is a strong trace.
If on the other hand there exists a vertex whose vertex figure contains at least two cycles we shall be able to construct an alternative parallel double trace $W'$, so that the collection of alternative vertex figures $\{F_{u,W'} \mid u \in V(G)\}$ contains strictly fewer cycles. This will be the final contradiction in the proof.
Let $v$ be a vertex so that its vertex figure $F_{v,W}$ splits $E(v)$ into (at least) two cycles $C_1$ and $C_2$. Choose an edge $e_1 = u_1v \in C_1$ so that $W$ uses $e_1$ in the direction towards $v$. Let $e_2=vu_2$ and $e_3=vu_3$ be the edges from $C_1$ that immediately succeed both occurrences of $e_1$ along $W$ (note that $e_2$ may be equal to $e_3$). Next let $f_4=u_4v$ and $f_5=vu_5$ be edges from $C_2$ so that $u_4 f_4 v f_5 u_5$ is a subsequence of $W$.
Without loss of generality (by choosing an alternative initial vertex along $W$) we may assume that $$W = \ldots u_1 e_1 v e_2 u_2 \ldots u_1 e_1 v e_3 u_3 \ldots u_4 f_4 v f_5 u_5 \ldots .$$ Observe the following walk $$W'= \ldots u_1 e_1 v e_3 u_3 \ldots u_4 f_4 v e_2 u_2 \ldots u_1 e_1 v f_5 u_5 \ldots$$ obtained by interchanging the two “interior” subwalks of $W$ between the three shown occurrences of $v$ in $W$, also see Fig. \[fig:construction2\].
As $W'$ traverses the same collection of edges (in the same direction) as $W$, the walk $W'$ is indeed a parallel double trace. If $u \ne v$ then the new vertex figure $F_{u,W'}$ equals the original vertex figure $F_{u,W}$, since every pair $e,e'$ of edges meeting at $u$ are consecutive along $W'$ if and only if they are consecutive along $W$.
Now $W'$ only changes pairs of consecutive edges from $C_1 \cup C_2$, hence the only possible cycles of $F_{v,W'}$ which are not present in $F_{v,W}$ consist of edges from $C_1 \cup C_2$. Now the adjacencies $e_1-e_2$ and $f_4-f_5$ were replaced by $e_2-f_4$ and $e_1-f_5$ which implies that $C_1$ and $C_2$ merge into exactly one new cycle in $F_{v,W'}$ containing all edges from $C_1 \cup C_2$. Hence the total number of cycles in vertex figures has decreased by exactly one, which concludes the proof. [$\square$ ]{}
The next theorem easily follows:
\[thm:nparallel\] A connected graph $G$ admits a parallel $d$-stable trace if and only if $G$ is Eulerian and $\delta(G) > d$.
Note that the construction used in [@rus-2013] for the proof of Theorem \[thm:parallel\] could be extended to yield an alternate proof of Theorem \[thm:parallel:strong\].
Conclusion
==========
Let us finish with a pair of open problems. We have provided a model for constructing a polypeptide nanostructure using a strong trace in the corresponding graph. A strong trace is a particular kind of closed walk, but is nevertheless encoded by a sequence which pins down its initial and terminal vertex.
- Should we care which vertex of a nanostructure is the initial vertex of an encoding of a double trace? How do the physical and chemical properties of the structure relate to the initial node of the polypeptide chain?
A fixed graph $G$ may contain many strong traces; for example, [@gr-2013] quotes 40 strong traces for the cube graph. Putting the initial vertex and the orientation aside, there are still many options and criteria along which one may choose a supposedly better strong trace.
- Given a nanostructure, which strong trace should be chosen in order to maximize the probability that an appropriate polypeptide chain will self-assemble into the desired structure?
Acknowledgements {#acknowledgements .unnumbered}
================
We are grateful to Michal Kotrbčik and Thomas W. Tucker for several remarks and suggestions which were of great help. Work supported in part by the ARSS of Slovenia, Research Grants P1-0294, P1-0297 and N1-0011: GReGAS, supported in part by the European Science Foundation.
[99]{}
M. Behzad, G. Chartrand, L. Lesniak-Foster, [*Graphs and Digraphs*]{}, Prindle, Weber and Schmidt, Boston, 1979.
J. Edmonds, On the surface duality of linear graphs, [*J. Res. Natl. Bur. Stand., Sec. B: Math.$\&$ Math. Phys.*]{} [**Vol. 69B**]{}, No. 1–2 (1965) p. 121.
R.B. Eggleton, D.K. Skilton, Double tracings of graphs, [*Ars Combin.*]{} [**17A**]{} (1984) 307–323.
H. Gradišar, S. Božič, T. Doles, D. Vengust, I. Hafner Bratkovič, A. Mertelj, B. Webb, A. Šali, S. Klavžar, R. Jerala, Design of a single-chain polypeptide tetrahedron assembled from coiled-coil segments, [*Nature Chemical Biology*]{} [**9**]{} (2013) 362–366.
S. Klavžar, J. Rus, Stable traces as a model for self-assembly of polypeptide nanoscale polyhedrons, [*MATCH Commun. Math. Comput. Chem.*]{} [**70**]{} (2013) 317–330.
B. Mohar, C. Thomassen [*Graphs on Surfaces*]{}, The Johns Hopkins University Press, 2001.
O. Ore, A problem regarding the tracing of graphs, [*Elemente der Math.*]{} [**6**]{} (1951) 49–53.
T. Pisanski, Vložitve grafov v sklenjene ploskve, MSc thesis (Slovene), University of Ljubljana (1978).
P. Potočnik, T. Pisanski, [*Graphs on surfaces*]{}, Handbook of Graph Theory (2003), CRC Press LLC 611–624.
G. Ringel, The combinatorial map theorem, [*J. Graph Theory*]{} [**1**]{} (1977) 141–155.
J. Rus, Parallelism of strong traces, submitted to [*Discrete Appl. Math.*]{}, 2013.
G. Sabidussi, Tracing graphs without backtracking, [*Operations Research Verfahren XXV*]{}, Symp. Heidelberg, Teil 1 (1977) 314–332.
S. Stahl, Generalized embedding schemes, [*J. Graph Theory*]{} [**2**]{} (1978) 41–52.
E. Steinitz, Polyeder und Raumeinteilungen, [*Encyclopädie der mathematischen Wissenschaften*]{}, [**Band 3**]{} (Geometries) (1922) pp. 1–139.
M. Škoviera, R. Nedela, The maximum genus of a graph and doubly eulerian trails, [*Bollettino U. M. I.*]{} [**(7) 4-B**]{} (1990) 541–551.
G. Tarry, Le problème des labyrinthes, [*Nouv. Ann.*]{} [**(3) XIV**]{} (1895) 187–190.
C. Thomassen, Bidirectional retracting-free double tracings and upper embeddability of graphs, [*J. Combin. Theory Ser.*]{} [**B50**]{} (1990) 198–207.
C. Thomassen, [*Embeddings and minors*]{}, Handbook of Combinatorics (1995), North-Holland 302–349.
N. H. Xuong, How to determine the maximum genus of a graph, [*J. Combin. Theory Ser.*]{} [**B26**]{} (1979) 217–225.
---
abstract: 'It was believed, with little theoretical basis, that the accretion disc (AD) is destroyed in nova outbursts, and recovers only a few decades later. We looked for observational evidence for the presence of ADs in young novae. We discuss two cases: 1. Nova V1974 Cyg 1992 - we found permanent superhumps in its light curve - very strong evidence for the presence of the AD according to the disc-instability model. 2. Nova V1425 Aql 1995 - its possible classification as an Intermediate Polar system suggests that it is most likely that the accretion is maintained through an AD.'
author:
- Alon Retter
- Elia Leibowitz
title: The Presence of Accretion Disks in Novae Shortly After their Outbursts
---
Introduction
============
It is not known what the fate of the AD in a classical nova system is immediately following the outburst event. It was assumed that it is destroyed by this cataclysmic eruption. In addition, there are no theoretical calculations concerning the question of when the AD is rebuilt in the remnant system. Leibowitz et al. (1992) discovered an eclipse three weeks after maximum light in Nova V838 Herculis 1991. They interpreted it as the occultation of the AD by the secondary star. A major aim of A.R.'s Ph.D. thesis was to look for further evidence for the presence of ADs in young (months- to years-old) novae.
Observational results
=====================
We describe the photometric results of two objects:
Nova V1974 Cygni 1992
---------------------
Two distinct periodicities in the light curve of V1974 Cyg were independently discovered by Semeniuk et al. (1995) and by Retter, Ofek & Leibowitz (1995). Semeniuk et al. suggested that the 2.04 hr period, which is larger than the second period by about 5%, is the spin period of the rotating white dwarf, and predicted that it would continue to decrease towards the shorter assumed orbital period. Retter et al. suggested a connection between the second periodicity of the nova and the superhump phenomenon in the SU UMa stars, based on the fact that the two periods of V1974 Cyg fit well within the Stolz & Schoembs (1984) relation for the two periods of SU UMa systems.
Retter, Leibowitz & Ofek (1997) and Skillman et al. (1997) showed that the longer period stopped the trend of decrease in 1995, and began to increase during that year. They also listed many photometric features in the light curve of the nova, that resemble the properties of systems that are in a state of permanent superhumps. They, therefore, concluded that V1974 Cyg is also exhibiting the permanent superhumps phenomenon. Superhumps characterize the SU UMa class of CVs that are known to have an AD in their underlying stellar system. Thus the observations in V1974 Cyg indicate the presence of an AD in that system 30 months after its eruption.
Nova V1425 Aquilae 1995
-----------------------
Retter, Leibowitz & Kovo-Kariti (1997) found three periodicities in the power spectrum of Nova Aql 1995. They interpreted them as the orbital period of the binary system, the spin period of a magnetic white dwarf and the beat period between them. This suggests that the system belongs to the Intermediate Polar group. Only one object out of the 13 Intermediate Polars listed by Hellier (1996; see his Fig. 2) is believed to be a disc-less system. Based on these statistics, we may regard it as very likely that no later than 1996 May, 15 months after its outburst, Nova Aql 95 already possessed an AD within its binary system.
Summary
=======
Our observations of these two novae support the notion that ADs do exist in young novae, already a few months after the outburst.
Hellier, C., 1996, in Evans, N., Wood, J.H., eds., Proc. IAU Colloq. 158, Kluwer, Dordrecht.
Leibowitz, E.M., Mendelson, H., Mashal, E., Prialnik, D., Seitter, W., 1992, , 385, L49.
Retter, A., Leibowitz, E.M., Kovo-Kariti, O., 1997, , submitted.
Retter, A., Leibowitz, E.M., Ofek, E.O., 1997, , 283, 745.
Retter, A., Ofek, E.O., Leibowitz, E.M., 1995, IAU Circ. 6158.
Semeniuk, I., DeYoung, J.A., Pych, W., Olech, A., Ruszkowski, M., Schmidt, R.E., 1995, Acta Astron., 45, 365.
Skillman, D.R., Harvey, D., Patterson, J., Vanmunster, T., 1997, , 109, 114.
Stolz, B., Schoembs, R., 1984, , 132, 187.
---
abstract: 'We introduce a geometrically natural probability measure on the group of all Möbius transformations of the circle. Our aim is to study “random” groups of Möbius transformations, and in particular random two-generator groups. By this we mean groups where the generators are selected randomly. The probability measure in effect establishes an isomorphism between random $n$-generator groups and collections of $n$ random pairs of arcs on the circle. Our aim is to estimate the likelihood that such a random group is discrete, calculate the expectation of their associated parameters, geometry and topology, and to test the effectiveness of tests for discreteness such as Jørgensen’s inequality.'
author:
- 'Gaven Martin and Graeme O’Brien [^1]'
title: |
[**Random Kleinian Groups**]{}, [**I**]{}\
Random Fuchsian Groups.
---
Introduction.
=============
In this paper we introduce the notion of a random Fuchsian group. For us this will mean a finitely generated Fuchsian group where the generators are selected from a geometrically natural probability measure on the space of Möbius transformations of the circle. Our ultimate aim is to study random Kleinian groups, but the Fuchsian case is quite distinct in many ways - for instance the set of precompact cyclic subgroups (generated by elliptic elements) has nonempty interior in the Fuchsian case, and therefore will have positive measure in any reasonable probability measure we might seek to impose, whereas for Kleinian groups this is not the case. However, the motivation for the probability measure we chose is similar in both cases. We seek something “geometrically natural” and with which we can compute. We should expect that almost surely (that is, with probability one) a finitely generated subgroup of the Möbius group is free. We shall see that the probability a random two-generator group is discrete is greater than $\frac{1}{20}$, a value we conjecture to be close to optimal, and this value is certainly less than $\frac{1}{4}$. If we condition by choosing only hyperbolic elements, this probability becomes $\frac{1}{5}$. The case where we condition by choosing two parabolic elements, in both the Fuchsian and Kleinian settings, is discussed in a sequel [@MOY] as rather more theory is required to get precise answers. Here we give a bound of $\frac{1}{6}$ in the Fuchsian case, the actual value being approximately $0.3148\ldots$.
Here we also consider such things as the probability that the axes of hyperbolic generators cross. This allows us to get some understanding of the likelihood of different topologies arising. For instance, if we choose two random hyperbolic elements with pairwise disjoint isometric circles, the quotient space is either the two-sphere with three holes or a torus with one hole, the latter occurring with probability $\frac{1}{3}$.
To study these questions of discreteness we set up a topological isomorphism between $n$ pairs of random arcs on the circle and $n$-generator Fuchsian groups. We determine the statistics of a random cyclic group completely, however, the statistics of commutators of generators is an important challenge with topological consequences and which we only partially resolve.
Random Fuchsian Groups.
=======================
We introduce specific definitions in the context of Fuchsian groups. These will naturally motivate more general definitions for the case of Kleinian groups in later work.
If $A\in PSL(2,\IC)$ has the form $$\label{Fspace}
A =\pm \left( \begin{array}{cc} a & c \\ \bar c & \bar a \end{array} \right), \hskip15pt |a|^2-|c|^2 = 1,$$ then the associated linear fractional transformation $f:\oC\to\oC$ defined by $$\label{fdef}
f(z) = \frac{az + c}{\bar c z + \bar a}$$ preserves the unit circle since $\left| \frac{az + c}{\bar c z + \bar a} \right| = |\zbar| \left| \frac{az + c}{\bar a \zbar+\bar c |z|^2} \right|$, with the implication that $|z|=1$ implies $|f(z)|=1$.
The rotation subgroup ${\bf K}$ of the disk, $z\mapsto \zeta^2 z$, $|\zeta|=1$, and the nilpotent or parabolic subgroup ${\bf P}$ (conjugate to the translations) have the respective representations $$\left( \begin{array}{cc} \zeta & 0 \\ 0 & \bar \zeta \end{array} \right), \;\;\;|\zeta|=1, \hskip15pt \left( \begin{array}{cc} 1+it& t \\ t & 1-i t\end{array} \right),\;\;\; t\in \IR .$$ The group of all matrices satisfying (\[Fspace\]) will be denoted ${\cal F}$. It is not difficult to construct an algebraic isomorphism ${\cal F}\equiv PSL(2,\IR)\equiv Isom^+(\IH^2)$, the isometry group of two-dimensional hyperbolic space (see [@Beardon]) and we will often abuse notation by moving between $A$ and $f$ interchangeably. Despite some efforts to use $PSL(2,\IR)$, we feel the approach we take is geometrically more natural when working in ${\cal F}$. In particular, our measures are obviously invariant under the action of the compact group ${\bf K}$. We also seek distributions from which we can make explicit calculations and are geometrically natural (see in particular Lemma \[3.2\]).
We therefore impose the following distributions on the entries of this space of matrices ${\cal F}$. We select
- \(i) $\zeta=a/|a|$ and $\eta=c/|c|$ are chosen uniformly in the circle $\IS$, with arclength measure, and
- \(ii) $t=|a|\geq 1$ is chosen so that $$2\arcsin(1/t) \in [0,\pi]$$ is uniformly distributed.
Notice that the product $\zeta\eta$ is uniformly distributed on the circle as a simple consequence of the rotational invariance of arclength measure. Further, this measure is equivalent to the uniform probability measure $\arg(a)\in [0,2\pi]$. It is thus clear that this selection process is invariant under the rotation subgroup of the circle.
Next, if $\theta$ is uniformly distributed in $[0,\pi]$, then the probability density function (henceforth p.d.f.) for $\sin \theta$ is $\frac{2}{\pi} \frac{1}{\sqrt{1-y^2}}$ for $y\in [0,1]$. Since $t\mapsto1/t$, for $t>0$, is strictly decreasing, we can use the change of variables formula for distribution functions to deduce the p.d.f. for $|a|$.
The random variable $|a|\in [1,\infty)$ has the p.d.f. $$F_{|a|}(x)= \frac{2}{\pi} \; \frac{1}{x\sqrt{x^2-1}}$$
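As a numerical sanity check (our own sketch, not part of the original argument), the sampling rule (ii) can be compared against this p.d.f.; all helper names below are ours:

```python
import math
import random

def sample_abs_a(rng):
    # Rule (ii): arcsin(1/|a|) is uniform in [0, pi/2]; invert to get |a|.
    u = rng.uniform(1e-12, math.pi / 2)
    return 1.0 / math.sin(u)

def cdf_abs_a(x):
    # Integrating F_|a|(s) = (2/pi)/(s*sqrt(s^2-1)) from 1 to x gives
    # the CDF (2/pi)*arccos(1/x); at x = 2 this equals 2/3.
    return (2.0 / math.pi) * math.acos(1.0 / x)

rng = random.Random(1)
n = 200_000
empirical = sum(sample_abs_a(rng) <= 2.0 for _ in range(n)) / n
print(empirical, cdf_abs_a(2.0))  # both close to 2/3
```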
Next notice that the equation $1+|c|^2=|a|^2$ tells us that $\arctan(\frac{1}{|c|})$ is also uniformly distributed in $[0,\pi]$.
Another equivalent formulation is the following. We require that the matrix entries $a$ and $c$ have arguments $\arg(a)$ and $\arg(c)$ which are uniformly distributed on $\IR \mod 2\pi$. We write this as $\arg(a)\in_u [0,2\pi]_\IR$ and $\arg(c)\in_u [0,2\pi]_\IR$. We illustrate with a lemma.
If $\arg(a),\arg(b)\in_u [0,2\pi]_\IR$, then $\arg(ab), \arg(a/b) \in_u [0,2\pi]_\IR$.
[**Proof.**]{} The usual method of calculating probability distributions for combinations of random variables via characteristic functions shows that if $\theta,\eta$ are selected from a uniformly distributed probability measure on $[0,2\pi]$, then the p.d.f. for $\theta+\eta\in [0,4\pi]$ is given by $$\label{f93i} g(\zeta)=\left\{ \begin{array}{lllll}
\frac{\zeta}{4\pi^2}&0\leq \zeta< 2\pi\\
\\
\frac{4\pi-\zeta}{4\pi^2}&2\pi \leq \zeta \leq 4\pi.
\end{array}\right.$$ We reduce mod $2\pi$ and observe $$g(\zeta) + g(\zeta+2\pi) = \frac{\zeta}{4\pi^2} + \frac{2\pi-\zeta}{4\pi^2} = \frac{1}{2\pi}, \hskip10pt 0\leq \zeta < 2\pi,$$ and this gives us once again the uniform probability density on $[0,2\pi]$. The result also follows for $a-b$ as clearly $-b \in_u [0,2\pi]_\IR$ and $a-b=a+(-b)$. $\Box$
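A quick numerical illustration of the lemma (our sketch): the sum of two independent uniform arguments, reduced mod $2\pi$, is again uniform.

```python
import math
import random

rng = random.Random(7)
two_pi = 2 * math.pi
n = 200_000
# arg(a) + arg(b) reduced mod 2*pi, with both arguments uniform on [0, 2*pi)
s = [(rng.uniform(0, two_pi) + rng.uniform(0, two_pi)) % two_pi
     for _ in range(n)]
below_pi = sum(x < math.pi for x in s) / n
mean = sum(s) / n
print(below_pi, mean)  # close to 0.5 and pi
```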
\[cor2.6\] If $a\in_u [0,2\pi]_\IR$ and $k\in \IZ$, then $ka \in_u [0,2\pi]_\IR$.
It now follows that any finite integral linear combination of variables $a_i\in_u [0,2\pi]_\IR$ has the same distribution.
In what follows we will also need to consider variables supported in $[0,\pi]$ or smaller subintervals and as above we will write this as $a\in_u [0,\pi]_\IR$ and so forth.
In a moment we will calculate some distributions naturally associated with Möbius transformations such as traces and translation lengths. Every Möbius transformation of the unit disk $\ID$ can be written in the form $$\label{mob}
z\mapsto \zeta^2 \, \frac{z-w}{1-\bar w z}, \hskip10pt |\zeta|=1, w\in \ID$$ Thus one could consider another approach by choosing distributions for $\zeta\in \IS$ and $w\in\ID$. It seems clear one would want $\zeta$ uniformly distributed in $\IS$. The real question is by what probability measure should $w$ be chosen on $\ID$ ? If $w$ is chosen rotationally invariant, then the choice boils down to probability measures on radii. The choices we have made turn out as follows. The matrix representation of (\[mob\]) in the form (\[Fspace\]) is $$\zeta^2 \, \frac{z-w}{1-\bar w z} \leftrightarrow \left(\begin{array}{cc} \frac{\zeta}{\sqrt{1-|w|^2} } & -\frac{\zeta w}{\sqrt{1-|w|^2}} \\- \frac{\zetabar \bar w}{\sqrt{1-|w|^2} }& \frac{\zetabar}{\sqrt{1-|w|^2} }
\end{array} \right)$$ Hence $\zeta$ and $w/|w|$ will be uniformly distributed in $\IS$. Then, $|w|<1$ necessarily and $$\arccos(|w|)=\arcsin(\sqrt{1-|w|^2} ) \in [0,\pi/2]$$ is uniformly distributed and we find $|w| = |f(0)|$ has the p.d.f. $\frac{2}{\pi \sqrt{1-y^2}}$, $y\in [0,1]$.
Let $f\in {\cal F}$ be a random Möbius transformation. Then the p.d.f. for $y=|f(0)|$ is $\frac{2}{\pi \sqrt{1-y^2}}$. The expected value of $|f(0)|$ is $$E[\;|f(0)|\; ]=\frac{2}{\pi} \int_{0}^{1} \frac{ y}{\sqrt{1-y^2}}\, dy = \frac{2}{\pi} = 0.63662\ldots$$
The hyperbolic distance between $0$ and a point at this expected Euclidean distance $2/\pi$ is $\log \frac{1+2/\pi}{1-2/\pi} = \log \frac{\pi+2}{\pi-2} = 1.50494 \ldots $.
Fixed points
============
The fixed points of a random $f\in {\cal F}$ are solutions to the same quadratic equation and one should therefore expect some correlation. From (\[fdef\]) we see the fixed points are the solutions to $az+c=z(\bar c z+ \bar a)$. That is $$\label{fp}
z_\pm =\frac{1}{\bar c}\left( i \Im m(a) \pm \sqrt{\Re e(a)^2-1}\right), \hskip15pt |a|^2=1+|c|^2.$$ We consider two cases and will soon establish that ${\rm Pr}\{|\Re e(a)|\leq 1\}=\frac{1}{2}$ so each case occurs with equal probability.
[**Case 1.** ]{} $f$ elliptic or parabolic. Then $|\Re e(a)|\leq 1$ and so $\arg(z_\pm) = \frac{\pi}{2}+\arg(c)$. Thus the argument of both fixed points is the same and that angle is uniformly distributed in $[0,\pi]$.
[**Case 2.** ]{} $f$ hyperbolic. Then $\Re e(a) >1$ and $|z_\pm|=1$. We calculate that the derivative $$|f'(z_\pm)| = \frac{1}{|\bar c z_\pm +\bar a|^2} = \frac{1}{|i \Im m(a) \pm \sqrt{\Re e(a)^2-1}+\bar a|^2} = \frac{1}{|\Re e(a) \pm \sqrt{\Re e(a)^2-1}|^2}$$ Hence $|f'(z_+)|<1$ and $z_+$ is an [*attracting*]{} fixed point, with $z_-$ being [*repelling*]{}.
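The fixed-point formula (\[fp\]) and the attracting/repelling classification can be checked directly. The following is our illustrative sketch; it normalizes $\Re e(a)>1$ by replacing $A$ with $-A$ when necessary (the two matrices give the same $f$):

```python
import cmath
import math
import random

def random_hyperbolic(rng):
    # Sample a, c as in (i)-(ii); retry until |Re a| > 1 (hyperbolic case),
    # then replace (a, c) by (-a, -c) if needed so that Re a > 1.
    while True:
        mod_a = 1.0 / math.sin(rng.uniform(1e-12, math.pi / 2))
        a = mod_a * cmath.exp(1j * rng.uniform(0, 2 * math.pi))
        if abs(a.real) > 1:
            c = math.sqrt(mod_a ** 2 - 1) * cmath.exp(1j * rng.uniform(0, 2 * math.pi))
            if a.real < 0:
                a, c = -a, -c
            return a, c

def f(a, c, z):
    return (a * z + c) / (c.conjugate() * z + a.conjugate())

rng = random.Random(3)
a, c = random_hyperbolic(rng)
root = math.sqrt(a.real ** 2 - 1)
zp = (1j * a.imag + root) / c.conjugate()  # attracting fixed point
zm = (1j * a.imag - root) / c.conjugate()  # repelling fixed point
err = max(abs(f(a, c, zp) - zp), abs(f(a, c, zm) - zm))
mult = 1.0 / abs(c.conjugate() * zp + a.conjugate()) ** 2  # |f'(z+)|
print(err, abs(zp), mult)
```

Both points are fixed to machine precision, lie on the unit circle, and the multiplier at $z_+$ is less than one.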
We have chosen $\arg(c)$ to be uniformly distributed and so the argument of either fixed point, say $z_+$, is uniformly distributed. The interesting question is the distribution of the angle (at $0$) between the fixed points. That is the argument of $z_+\overline{z_-}$. This will reflect the correlation we are looking for. This angle is easily seen to be the angle $\phi\in [0,\pi]$ where $\cos(\phi/2)= \Im m(a)/|c|$. Then $$\begin{aligned}
\cos(\phi/2) & = & \Im m(a)/|c| = \frac{|a| \sin \theta}{\sqrt{|a|^2-1}} = \frac{\sin \theta}{\cos \alpha}\end{aligned}$$ where we are able to assume that both $\theta$ and $\alpha$ are uniformly distributed in $[0,\pi/2]$ and we are conditioned by $ {\sin \theta}\leq {\cos \alpha}.$
We will calculate the distribution of ${\sin \theta}/{\cos \alpha}$ carefully when we come to the calculation of the parameters determining a Möbius group. We report the p.d.f. here as follows.
\[p.d.f.1\] The distribution of the random variable $$X=\frac{\sin(\theta)}{\cos(\alpha)} ,$$ for $\theta$ and $\alpha$ uniformly distributed in $[0,\pi/2]$ is given by the formula $$\label{dist1}
h_X(x)=\frac{4}{\pi^2x}\;
\log{\frac{1+x}{1-x}}, \hskip10pt 0\leq x< 1.$$
We can now use the change of variables formula to compute the p.d.f. for $\phi/2$. That is, we want the distribution of $Y=\cos^{-1}(X)$, given $X\leq 1$. We can compute this distribution to be $$h_Y(y) = \frac{4}{\pi^2}\; \tan(y)\;
\log{\frac{1+\cos(y)}{1-\cos(y)}}$$
\[p.d.f.2\] Let $\phi \in [0,\pi]$ be the angle subtended at $0$ by the fixed points of a random hyperbolic element in ${\cal F}$. Then the p.d.f. for $\eta = \phi/2$ is given by $$\label{dist2}
H_Y(\eta) = \frac{4}{\pi^2}\; \tan(\eta)\;
\log{\frac{1+\cos(\eta)}{1-\cos(\eta)}}$$
Some hyperbolic trigonometry reveals that the hyperbolic line between a pair of points $z_\pm\in \IS$ meets the closed disk of hyperbolic radius $r$ (denoted $\ID_\rho(r)$) when the angle $\phi$ formed at $0$ satisfies $$\cosh(r) \geq \frac{1}{\sin(\phi/2)}.$$ If $z_\pm$ are the fixed points of a hyperbolic element $f$, then this hyperbolic line joining them is called the axis of $f$, denoted ${\rm axis}(f)$. We can therefore compute the probability that the axis of a random hyperbolic element meets $\ID_\rho(r)$ by setting $\delta=\sin^{-1}(1/\cosh(r))$, so that the axis meets the disk precisely when $\eta=\phi/2\geq \delta$, and computing (with the substitution $x=\cos\eta$, noting $\cos\delta=\tanh r$) $$\begin{aligned}
{\cal P}({\rm axis}(f)\cap \ID_{\rho}(r) \neq \emptyset ) & = & \frac{4}{\pi^2} \; \int_{\delta}^{\pi/2} \tan(\eta)\; \log{\frac{1+\cos(\eta)}{1-\cos(\eta)}} \; d\eta \\
{\cal P}({\rm axis}(f)\cap \ID_{\rho}(r) \neq \emptyset ) & = & \frac{4}{\pi^2} \; \int_{0}^{\delta} \tan(\eta)\; \log{\frac{1+\cos(\eta)}{1-\cos(\eta)}} \; d\eta \\
& = & \frac{4}{\pi^2} \; \int_{0}^{\tanh(r)} \frac{1}{x} \; \log{\frac{1+x}{1-x}} \; dx \\
& = & \frac{4}{\pi^2} \big[ \text{Li}_2(\tanh (r))-\text{Li}_2(-\tanh (r))\big]\end{aligned}$$ Here $\text{Li}_2(s) = \sum_{1}^{\infty} n^{-2} s^n $ is a polylog function. Thus, for instance, this probability exceeds $\frac{1}{2}$ as soon as $r>0.678\ldots$ and exceeds $0.95$ as soon as $r>2.24419$.
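The polylog expression is easy to evaluate; the sketch below (our code, with the series truncation as an assumption) reproduces the two thresholds quoted above.

```python
import math

def li2(x, terms=2000):
    # Dilogarithm Li_2(x) = sum_{n>=1} x^n / n^2, valid for |x| <= 1.
    return sum(x ** n / n ** 2 for n in range(1, terms + 1))

def p_axis_meets_disk(r):
    # (4/pi^2) * [Li_2(tanh r) - Li_2(-tanh r)]
    t = math.tanh(r)
    return (4 / math.pi ** 2) * (li2(t) - li2(-t))

print(p_axis_meets_disk(0.678), p_axis_meets_disk(2.24419))
```

The two printed values are close to $0.5$ and $0.95$ respectively.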
The p.d.f. $H_Y$ for the angle $\phi/2$ between fixed points of a random hyperbolic $f\in {\cal F}$ and the convolution $H_Y*H_Y$.
Now, the bisector $\zeta_f$ of the smaller circular arc between the fixed points of a random hyperbolic element $f$ is uniformly distributed on the circle. Given random hyperbolic elements $f$ and $g$ of ${\cal F}$ with angles $\phi_f$ and $\phi_g$ between their fixed points, the p.d.f. for $\phi_f/2+\phi_g/2$ is the convolution $H_Y*H_Y$. We note that $e^{i\theta}= \xi = \zeta_f\overline{\zeta_g}$ is uniformly distributed as well. Given $\xi$, the fixed points of $f$ and of $g$ intertwine (so that the axes cross) if both $\phi_f+\phi_g \geq 2\theta$ and $|\phi_f-\phi_g|<2\theta$. We can use the distributions above to calculate these probabilities, but it is quite complicated and we will find another route to this probability a bit later.
Isometric Circles and Traces.
=============================
The isometric circles of the Möbius transformation $f$ defined at (\[fdef\]) are defined to be the two circles $$C_+ =\Big\{|z+\frac{\bar a}{\bar c} | = \frac{1}{|c|}\Big\}, \hskip10pt C_-=\Big\{z:|z-\frac{a}{\bar c}|=\frac{1}{|c|} \Big\}$$ which are paired by the action of $f$ and $f^{-1}$, $f^{\pm1}(C_{\pm})=C_{\mp}$. The [*isometric disks*]{} are the finite regions bounded by these two circles.
Since $|a|^2=1+|c|^2\geq 1 $, both these circles meet the unit circle in an arc of angle $\theta\in [0,\pi]$. Some elementary trigonometry reveals that $$\label{a}
\sin \frac{\theta}{2} = \frac{1}{|a|}$$ Thus by our choice of distribution for $|a|$ we obtain the following key result.
\[3.2\] The arcs determined by the intersections of the finite disks bounded by the isometric circles of $f$, where $f$ is chosen according to distributions (i) and (ii), are centred on uniformly distributed points of $\IS$ and have arc length uniformly distributed in $[0,\pi]$.
It is this lemma which supports our claim that the p.d.f. on ${\cal F}$ is natural and suggests the way forward for an analysis of random Kleinian groups.
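Relation (\[a\]) can also be verified by brute force: measure the arc of $\IS$ inside an isometric disk for one concrete choice of $a$ and $c$ (our sketch; the numerical values below are arbitrary test data):

```python
import cmath
import math

abs_a = 1.7                                   # arbitrary test value, |a| > 1
a = abs_a * cmath.exp(0.4j)
c = math.sqrt(abs_a ** 2 - 1) * cmath.exp(1.1j)

center = a / c.conjugate()                    # center of isometric circle C_-
radius = 1.0 / abs(c)

# fraction of a fine grid on the unit circle lying inside the isometric disk
m = 400_000
inside = sum(abs(cmath.exp(2j * math.pi * k / m) - center) < radius
             for k in range(m))
arc = 2 * math.pi * inside / m
print(arc, 2 * math.asin(1 / abs_a))          # the two values agree
```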
The isometric circles of $f$ are disjoint if $\left|\frac{ a}{\bar c}+\frac{\bar a}{\bar c} \right| \geq \frac{2}{|c|} $. This occurs if $$|\tr(f)| = |a+\bar a| = 2 |\Re e(a)| \geq 2$$ Since the disjointness of isometric circles has important geometric consequences we will need to find the p.d.f. for the random variable $t=|\tr(f)|$.
As $|\Re e(a)| = |a||\cos(\theta)|$, for a fixed $\theta\in [0,\pi/2]$, the probability $$\begin{aligned}
\label{tracedistn}
Pr[ \{ |a|\geq 1/\cos \theta \}] =1 - \frac{2}{\pi} \int_{1}^{1/\cos \theta} \frac{dx}{x\sqrt{x^2-1}} = 1-\frac{2}{\pi}\theta\end{aligned}$$ As $a/|a|$ is uniformly distributed on the circle, the restriction of $\theta$ to $[0,\pi/2]$ is uniformly distributed in $[0,\pi/2]$. Therefore using the obvious symmetries we may calculate that $$\begin{aligned}
Pr[ \{ |a+\bar a| \geq 2 \}] = \frac{2}{\pi} \; \int_{0}^{\pi/2} \; 1-\frac{2}{\pi}\theta \; d\theta = \frac{1}{2}.\end{aligned}$$
Let $f\in {\cal F}$ be a Möbius transformation chosen randomly from the distribution described in (i) and (ii). Then the probability that the isometric circles of $f$ are disjoint is equal to $\frac{1}{2}$.
Therefore we have the following simple consequence concerning random cyclic groups.
Let $f\in {\cal F}$ be a Möbius transformation chosen randomly from the distribution described in (i) and (ii). Then the probability that the cyclic group $\langle f \rangle$ is discrete is equal to $\frac{1}{2}$.
[**Proof.**]{} The matrix $A\in SL(2,\IC)$ represents a hyperbolic Möbius transformation $f$ if and only if $|\tr A| > 2$. This occurs with probability $\frac{1}{2}$. The matrix $A$ represents an elliptic transformation of finite order, or a parabolic transformation, if and only if $\tr(A) = \pm 2\cos(p\pi/q)$, $p,q\in \IZ$, and this set of traces is countable and therefore has measure zero. The result follows. $\Box$
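A Monte Carlo check of this probability (our sketch; the sampler follows (i)-(ii)):

```python
import math
import random

rng = random.Random(11)
n = 400_000
hyperbolic = 0
for _ in range(n):
    mod_a = 1.0 / math.sin(rng.uniform(1e-12, math.pi / 2))
    re_a = mod_a * math.cos(rng.uniform(0, 2 * math.pi))
    if abs(2 * re_a) > 2:      # |tr f| > 2: f hyperbolic, so <f> is discrete
        hyperbolic += 1
frac = hyperbolic / n
print(frac)  # close to 1/2
```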
We now note the following trivial consequence.
Let $f,g\in {\cal F}$ be Möbius transformations chosen randomly from the distribution described in (i) and (ii). Then the probability that the group $\langle f,g \rangle$ is discrete is no more than $\frac{1}{4}$.
Actually we can use (\[tracedistn\]) to determine the p.d.f. for $|\tr(A)|$. We will do this two ways.
First, for $s\geq 2$, $$\begin{aligned}
\label{tracedist}
Pr[ \{ |\tr(A)| \geq s \}] & = & Pr[ \{ 2|a|\cos \theta \geq s \}] = Pr[ \{ |a|\geq s/(2\cos \theta) \}] \\
& = & 1 - \frac{4}{\pi^2} \int_{0}^{\pi/2} \; \int_{1}^{s/2\cos \theta} \frac{dx}{x\sqrt{x^2-1}} \; d\theta \\
& = & 1 - \frac{4}{\pi^2} \int_{0}^{\pi/2} \cos^{-1} \Big( \frac{2\cos \theta}{s} \Big)\; d\theta \end{aligned}$$ We can now differentiate this function of $s$ under the integral, integrate with respect to $\theta$ (using the symmetry to reduce it to being over $[0,\pi/2]$), to obtain the probability density function for $|\tr(A)|$ (for $|\tr(A)|\geq 2$), $$\begin{aligned}
F[s] = \frac{4}{\pi^2\, s} \cosh^{-1}\Big( \frac{s}{\sqrt{s^2-4}} \Big) , \hskip10pt s\geq 2.
\end{aligned}$$ This gives the distribution for $\tr^2 A$ as $$\begin{aligned}
G[t] = \frac{2}{\pi^2\, t } \cosh^{-1}\Big( \frac{\sqrt{t}}{\sqrt{t-4}} \Big) = \frac{2}{\pi^2 \, t} \log \frac{\sqrt{t}+2}{\sqrt{t-4}} , \hskip10pt t\geq 4.
\end{aligned}$$ Then the random variable $\beta = \tr^2A-4\geq 0$ has distribution $$\begin{aligned}
\label{beta1}
G[\beta] = \frac{1}{\pi^2 (\beta+4)} \log \left( 1+\frac{8+4\sqrt{\beta+4}}{\beta} \right), \hskip10pt \beta \geq 0.
\end{aligned}$$ We could now follow through a similar, but more difficult, calculation to determine the distribution for $\beta$ in the interval $-4 \leq \beta \leq 0$. It turns out to be $$\begin{aligned}
G[\beta] = \frac{1}{\pi^2(\beta+4) }\log \left( \frac{2+\sqrt{\beta + 4}}{2 - \sqrt{\beta + 4}} \right), \hskip15pt \beta\in [-4,0].\end{aligned}$$ We will return to this in a moment through a different approach as we can immediately use (\[beta1\]) to find the distribution of the translation length of hyperbolic elements.
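As a numerical sanity check of (\[beta1\]) and the formula just displayed (our computation), the substitution $u=\sqrt{\beta+4}/2$ turns each branch of the density into $\frac{2}{\pi^2}\int \frac{1}{u}\log\frac{1+u}{1-u}\,du$, and expanding the logarithm term by term shows each branch carries total mass $\frac{1}{2}$:

```python
import math

# For beta in [-4, 0], u = sqrt(beta + 4)/2 in [0, 1] gives mass
# (2/pi^2) * I with I = integral_0^1 (1/u) log((1+u)/(1-u)) du.
# Expanding log((1+u)/(1-u)) = 2 * sum_{k odd} u^k / k and integrating
# term by term gives I = 2 * sum_{k odd} 1/k^2 = pi^2 / 4.
# The substitution u -> 1/u maps the beta >= 0 branch onto the same integral.
I = 2 * sum(1.0 / k ** 2 for k in range(1, 200_001, 2))
mass_per_branch = (2 / math.pi ** 2) * I
print(mass_per_branch)  # close to 1/2, so the two branches sum to 1
```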
As we have seen, every element $f\in{\cal F}$ which is not elliptic (conjugate to a rotation, equivalently $\beta(f)\in [-4,0)$) or parabolic (conjugate to a translation, equivalently $\beta(f)=0$) fixes two points on the circle and the hyperbolic line ${\rm axis}(f)$ with those points as endpoints. The transformation acts as a translation by constant hyperbolic distance $\tau(f)$ along its axis. This number $\tau(f)$ is called the [*translation length*]{} and is related to the trace via the formula [@GehMar] $$\beta(f) = 4 \sinh^2 \; \frac{\tau}{2}, \hskip10pt \tau = \cosh^{-1}\left(1+\frac{\beta}{2}\right)$$ We obtain the distribution for $\tau = \tau(f)$ from the change of variables formula for p.d.f. using (\[beta1\]) $$\begin{aligned}
H[\tau] & = & \frac{2}{\pi^2} \, \tanh \frac{\tau}{2} \;\log \left(\frac{ \cosh \, \frac{\tau}{2} +1}{ \cosh\, \frac{\tau}{2} -1} \right) \\
& = & - \frac{4}{\pi^2} \; \tanh \frac{\tau}{2} \; \log \tanh\, \frac{\tau}{4} \\\end{aligned}$$ Unlike our earlier distribution $G$, the p.d.f for $\tau$ has all moments. In particular once we observe $$\int_{0}^{\infty} t \,\tanh \,\frac{t}{2} \log\Big[\tanh \, \frac{t}{4}\Big] dt = -\pi^2 \log 2$$ we have the following theorem.
For randomly selected hyperbolic $f\in {\cal F}$ the p.d.f. for the translation length $\tau=\tau(f)$ is $$H[\tau] = - \frac{4}{\pi^2} \; \tanh \frac{\tau}{2} \; \log \tanh\, \frac{\tau}{4}$$ (illustrated below) and the expected value of the translation length is $$E[\tau] = 4\log 2 \approx 2.77259 \ldots$$
The p.d.f for the translation length $\tau$ of a random hyperbolic element of ${\cal F}$.
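A Monte Carlo estimate of the mean translation length (our sketch, using $\cosh(\tau/2)=|\tr f|/2$ and rejecting non-hyperbolic samples):

```python
import math
import random

rng = random.Random(5)
taus = []
while len(taus) < 300_000:
    mod_a = 1.0 / math.sin(rng.uniform(1e-12, math.pi / 2))
    re_a = abs(mod_a * math.cos(rng.uniform(0, 2 * math.pi)))
    if re_a > 1.0:                          # keep only hyperbolic samples
        taus.append(2 * math.acosh(re_a))   # cosh(tau/2) = |tr f|/2 = |Re a|
mean_tau = sum(taus) / len(taus)
print(mean_tau, 4 * math.log(2))            # both close to 2.7726
```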
However there is another way to see these results, one which is more useful in what follows in that it relates more clearly to the geometry.
The parameter $\beta=\tr^2(A)-4$
================================
We begin with the following theorem.
\[th\_beta\] If a Möbius transformation $f$ is randomly chosen in ${\cal F}$, then $$\beta(f)=4\left(\frac{\cos^2(\theta)}{\sin^2(\alpha)}-1\right)\;\;\;\;\;\;\;\;\theta\in_u[0,2\pi],\;\alpha\in_u\left[0,\frac{\pi}{2}\right]\label{beta1*}$$ where $2\alpha$ is the arc length intersection of the isometric circles of $f$ with $\IS$ and $\theta$ is the argument of the leading entry of $A$, the matrix representative for $f$.
[**Proof.**]{} Let $A=\left(\begin{array}{cc} a & c \\ \bar c & \bar a \end{array}\right)$. Then $$\beta = \tr^2 A -4 = [2 \Re e(a)]^2-4 = 4 |a|^2 \cos^2(\theta)-4$$ and the result follows by (\[a\]) and Lemma \[3.2\]. $\Box$
\[th\_c2\_s2\] The distribution of the random variable $$w=\frac{\cos^2(\theta)}{\sin^2(\alpha)} , \hskip10pt{\rm for} \;\;\;\; \theta\in_u[0,2\pi] \;\;\; {\rm and} \;\;\; \alpha\in_u[0,\frac{\pi}{2}]$$ is given by the formula
$$\label{dist1}
h(w)=\frac{1}{\pi^2w}
\log \left| \frac{\sqrt{w}+1}{\sqrt{w}-1}\right|, \hskip20pt w\geq 0.$$
[**Proof.**]{} The p.d.f’s of $x=\cos^2(\theta)$ and $y=\sin^2(\alpha)$ are $$\label{c2f2} f(x) =\frac{1}{\pi\sqrt{x(1-x)}}\;{\rm for} \;\cos^2(\theta), \;\; {\rm and} \;\;
f(y)=\frac{1}{\pi\sqrt{y(1-y)}}\; {\rm for} \;\sin^2(\alpha).$$ and these are identically distributed when both $\theta$ and $\alpha$ are identically distributed. They are also monotonic for $x,y\in[0,\frac{1}{2})$ and for $x,y\in(\frac{1}{2},1]$, and the distributions are symmetric about $\frac{1}{2}$. Therefore we can use the change of variables formula and the Mellin convolution to compute the p.d.f. Write $x=\cos^2(\theta)$, $y=\sin^2(\alpha)$ and $w=\frac{\cos^2(\theta)}{\sin^2(\alpha)}$. We use the Mellin convolution for quotients as in [@Springer], noting that the distributions $f(x)$ and $f(y)$ are identical. For $x,y\in(0,1)$ the upper integration limit for the convolution integral is $y=\frac{1}{w}$ whenever $w>1$ and $y=1$ otherwise; accordingly the Mellin convolution for the quotient of the p.d.f’s over $(0,\infty)$ is calculated as follows, where we have ensured the piecewise differentiability of the integrand. $$\label{hw1}
h(w)=\left\{\begin{array}{ll}
\int_{0}^{1} y\;f(x)f(y) dy\;\;\;\;\;\;&w<1\\
\\
\int_0^{\frac{1}{w}} y\;f(x)f(y) dy&w>1
\end{array}\right.$$ and the indefinite integral embedded in both components of is given as $$\begin{aligned}
\label{p.d.f_c}
\int y\;f(yw)f(y)dy&= & \int y\;\frac{1}{\pi\sqrt{yw(1-yw))}}\frac{1}{\pi\sqrt{y(1-y)}}dy \nonumber \\
& = & \frac{1}{\pi^2 \sqrt{w}}\int\frac{1}{\sqrt{(1-y)(1-yw)}}dy \nonumber \\
&=& \frac{2}{\pi^2w} \; \log\left(w\sqrt{y-1}+\sqrt{w(yw-1)}\right) .\end{aligned}$$ Simplification of the $\log$ term in (\[p.d.f\_c\]) yields $$\begin{aligned}
\lefteqn{\log\left(w\left(w(2y-1)-1+2\sqrt{w(y-1)(yw-1)}\right)\right)} \\
& = &
\left\{ \begin{array}{lll}
e_0&=\log(-w(w+1-2\sqrt{w}))\;\;\;\;&{\rm at} \;y=0\\
e_1&=\log(w(w-1))&{\rm at} \;y=1\\
e_{\frac{1}{w}}&=\log(-w(w-1))&{\rm at} \;y=\frac{1}{w}.
\end{array}\right.\end{aligned}$$ and accordingly the definite integrals in (\[hw1\]) evaluate to $$\int_0^1 y\;f(yw)f(y)dy = \frac{1}{\pi^2w}(e_1-e_0), \;\;\;
\int_0^{\frac{1}{w}} y\;f(yw)f(y)dy= \frac{1}{\pi^2w}(e_{1/w}-e_0).$$ If we now let $v=\sqrt{w}$, then $$\begin{aligned}
e_1-e_0&=& \log(w(w-1))-\log(-w(w+1-2\sqrt{w}))=\log\left(\frac{w(w-1)}{-w(w+1-2\sqrt{w})}\right)\\
&=&\log\left(\frac{v^2-1}{-(v^2+1-2v)}\right)=\log\left(\frac{(v-1)(v+1)}{-(v-1)^2}\right)= \log\left(\frac{1+\sqrt{w}}{1-\sqrt{w}}\right)\end{aligned}$$ and $$\begin{aligned}
e_{1/w}-e_0&=& \log(-w(w-1))-\log(-w(w+1-2\sqrt{w}))\\ & = & \log\left[\frac{-w(w-1)}{-w(w+1-2\sqrt{w})}\right]\\
&=& \log\left(\frac{-(v^2-1)}{-(v^2+1-2v)}\right)= \log\left(\frac{\sqrt{w}+1}{\sqrt{w}-1}\right)\end{aligned}$$ We therefore deduce that the distribution of $w=\frac{\cos^2(\theta) }{\sin^2(\alpha)}$ is given by (\[dist1\]) as claimed. $\Box$
From this, after a little manipulation to see that these formulas agree with those obtained earlier, we obtain the result we were looking for.
\[p.d.f.\_beta\] The distribution of $\beta(f)$ for $f$ randomly chosen from ${\cal F}$ is given by $$G[\beta] =\frac{1}{\pi^2(\beta+4)} \;
\log \Big|\frac{\sqrt{\beta+4}+2}{\sqrt{\beta+4}-2} \Big|, \hskip10pt \beta\geq -4$$
The p.d.f for the parameter $\beta(f)$ for a random element $f\in {\cal F}$.
The topology of the quotient space.
===================================
Topologically there are two surfaces whose fundamental group is isomorphic to $F_2$, the free group on two generators. These are the $2$-sphere with three holes $\IS^{2}_{3}$, and the torus with one hole $T^{2}_{1}$. Thus we can expect that a group $\Gamma=\langle f,g \rangle$ generated by two random hyperbolic elements of ${\cal F}$, if discrete, has quotient space $$\ID^2/\Gamma \in\{ \IS^{2}_{3}, T^{2}_{1} \}$$ We would like to understand the likelihood of one of these topologies over the other. The topology is determined by whether the axes of $f$ and $g$ cross (giving $ T^{2}_{1} $) or not (giving $\IS^{2}_{3}$). This is the same thing as asking if the hyperbolic lines between the fixed points of $f$ and the fixed points of $g$ cross or not, and this in turn is determined by a suitable cross ratio of the fixed points. In fact, the geometry of the commutator $\gamma(f,g)=\tr [f,g]-2$ determines not only the topology of the quotient, but also the hyperbolic length of the shortest geodesic - it is represented by either $f$, $g$ or $[f,g]=fgf^{-1}g^{-1}$ and their Nielsen equivalents. In fact the three numbers $\beta(f),\beta(g)$ and $\gamma(f,g)$ determine the group $\langle f,g \rangle$ uniquely up to conjugacy. Since we have already determined the natural probability densities for $\beta(f)$ and $\beta(g)$ we need only identify the p.d.f. for $\gamma=\gamma(f,g)$ to find a conjugacy invariant way to identify random discrete groups. Unfortunately this is not so straightforward and we do not know this distribution. However important aspects of this distribution can be determined.
Commutators and cross ratios.
-----------------------------
We follow Beardon [@Beardon] and define the cross ratio of four points $z_1,z_2,z_3,z_4\in \IC$ to be $$[z_1,z_2,z_3,z_4] = \frac{(z_1-z_3)(z_2-z_4)}{(z_1-z_2)(z_3-z_4)}$$ In order to address the distribution of $\gamma(f,g) = \tr [f,g]-2$ we need to understand the cross ratio distribution. This is because of the following result from §7.23 & §7.24 [@Beardon] together with a little manipulation.
Let $\ell_1$, with endpoints $z_1,z_2$, and $\ell_2$, with endpoints $w_1,w_2$, be hyperbolic lines in the unit disk model of hyperbolic space. So $z_1,z_2,w_1,w_2\in \IS$, the circle at infinity. Let $\delta$ be the hyperbolic distance between $\ell_1$ and $\ell_2$, and should they cross, let $\theta\in [0,\pi/2]$ be the angle at the intersection. Then $$\label{cxdist}
\sinh^2 \Big[ \frac{1}{2}(\delta+i\theta)\Big] \times [z_1,w_1,z_2,w_2] = -1$$
The number $\delta+i\theta$ is called the [*complex distance*]{} between the lines $\ell_1$ and $\ell_2$ where we put $\theta=0$ if the lines do not meet. The proof of this theorem is simply to use Möbius invariance of the cross ratio and the two different models of the hyperbolic plane. If the two lines do not intersect, we choose the Möbius transformation which sends the disk to the upper half-plane and $\{z_1,z_2\}$ to $\{-1,+1\}$ and $\{w_1,w_2\}$ to $\{-s,s\}$ for some $s>1$. Then $\delta=\log s$ and $$[-1,-s,1,s]=\frac{-4s}{(1-s)^2 } = \frac{-4}{(e^{\delta/2}-e^{-\delta/2})^2 } = - \frac{1}{\sinh^2(\delta/2)}$$ while if the axes meet at a finite point, we choose a Möbius transformation of the disk so the line endpoints are $\pm 1$ and $\pm e^{i\theta}$ and the result follows similarly.
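Both model computations in the proof are easy to check numerically. The sketch below (in Python, with a hypothetical helper `cross_ratio` implementing Beardon's convention above) verifies the relation (\[cxdist\]) for a disjoint pair in the half-plane model and for a pair of diameters of the disk crossing at angle $\theta$.

```python
import cmath
import math

def cross_ratio(z1, z2, z3, z4):
    # [z1, z2, z3, z4] = (z1-z3)(z2-z4) / ((z1-z2)(z3-z4))
    return (z1 - z3) * (z2 - z4) / ((z1 - z2) * (z3 - z4))

# Disjoint lines in the upper half-plane: endpoints {-1,1} and {-s,s}, s > 1.
s = 3.0
delta = math.log(s)                    # hyperbolic distance between the lines
cr = cross_ratio(-1, -s, 1, s)         # [z1, w1, z2, w2]
lhs = cmath.sinh(delta / 2) ** 2 * cr  # should be -1

# Crossing diameters of the disk at angle theta: endpoints +-1 and +-e^{i theta}.
theta = 0.7
w = cmath.exp(1j * theta)
cr2 = cross_ratio(1, w, -1, -w)        # equals 1/sin^2(theta/2) >= 1
lhs2 = cmath.sinh(1j * theta / 2) ** 2 * cr2

print(lhs, lhs2)  # both -1 up to rounding
```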
We next recall Lemma 4.2 of [@GehMar] which relates the parameters and cross ratios.
Let $f$ and $g$ be Möbius transformations and let $\delta+i\theta$ be the complex distance between their axes. Then $$4 \gamma (f,g) = \beta(f) \, \beta(g) \, \sinh^2(\delta+i\theta).$$
We note from (\[cxdist\]) that $$\sinh^2(\delta+i\theta) = \left(1-\frac{2}{[z_1,w_1,z_2,w_2]}\right)^2-1$$ For a pair of hyperbolics $f$ and $g$ we have $\beta(f),\beta(g)\geq 0$ with $\delta=0$ if the axes meet. Thus the axes cross if and only if $\gamma<0$, or equivalently $$\label{cr1} [z_1,w_1,z_2,w_2] > 1.$$ Actually to see the latter point, we choose the Möbius transformation which sends $z_1\mapsto 0$, $z_2\mapsto\infty$, $w_1\mapsto 1$. Then $w_2\mapsto z$, say, and $$[z_1,w_1,z_2,w_2] = \frac{(0-\infty)(1-z)}{(0-1)(\infty-z)}=1-z$$ The image of the axes (and therefore the axes themselves) cross when $z<0$, equivalently when (\[cr1\]) holds.
Cross ratio of fixed points.
----------------------------
Supposing that $f$ and $g$ are randomly chosen hyperbolic elements, we want to discuss the probability of their axes crossing. Suppose $f$ has fixed points $z_1,z_2$ and $g$ has fixed points $w_1,w_2$. We identified the formula for the fixed points above at (\[fp\]), and if we denote the random variables (matrix entries) by $a,c$ for $f$ and $\alpha,\beta$ for $g$ we have $$\begin{aligned}
z_1,z_2 & = & \frac{1}{\bar c}\left( i \Im m(a) \pm \sqrt{\Re e(a)^2-1}\right), \hskip15pt |a|^2=1+|c|^2\\
w_1,w_2 & = & \frac{1}{\bar \beta}\left( i \Im m(\alpha) \pm \sqrt{\Re e(\alpha)^2-1}\right), \hskip15pt |\alpha|^2=1+|\beta|^2\end{aligned}$$ and as both elements are hyperbolic we have $\Re e(a)\geq1$ and $\Re e(\alpha)\geq 1$. We put $U= i \Im m(a) + \sqrt{\Re e(a)^2-1}$ and $V= i \Im m(\alpha) + \sqrt{\Re e(\alpha)^2-1}$ Then $$\begin{aligned}
[z_1,w_1,z_2,w_2]
& = & \frac{4 \sqrt{\Re e(a)^2-1} \sqrt{\Re e(\alpha)^2-1}}{\,\bar c \, \bar\beta\,\left(\frac{U}{\bar c} -\frac{V}{\bar \beta} \right) \left(\frac{-\bar U}{\bar c}-\frac{-\bar V}{\bar \beta}\right)} \\
& = & \frac{4 \sqrt{\Re e(a)^2-1} \sqrt{\Re e(\alpha)^2-1}}{2 \Re e [U\bar V] -c\bar \beta \frac{|U|^2}{|c|^2}-\bar c \beta \frac{|V|^2}{|\beta|^2} } = \frac{2 \sqrt{\Re e(a)^2-1} \sqrt{\Re e(\alpha)^2-1}}{ \Re e [U\bar V] - \Re e[c\bar \beta] } \end{aligned}$$ as we recall $1=|z_i|=|U|/|c|$ and similarly $|V|/|\beta|=1$. Thus we want to understand the statistics of the cross ratio, and in particular to determine when $$[z_1,w_1,z_2,w_2] = \frac{2 \sqrt{\Re e(a)^2-1} \sqrt{\Re e(\alpha)^2-1}}{ \Re e [U\bar V] - \Re e[c\bar \beta] } \geq 1$$ We have $$\begin{aligned}
a=\frac{1}{\sin\theta}e^{i\phi}, \;\; \theta\in_u[0,\pi/2], \phi\in_u[0,2\pi], &&
c=\cot \theta e^{i\delta}, \;\; \delta \in_u[0,2\pi] \\
\alpha =\frac{1}{\sin \eta }e^{i\psi}, \;\; \eta \in_u[0,\pi/2], \psi\in_u[0,2\pi] &&
\beta =\cot \eta e^{i\zeta}, \;\; \zeta \in_u[0,2\pi] \end{aligned}$$ Then $ \sqrt{\Re e(a)^2-1} = \sqrt{\frac{\cos^2\phi}{\sin^2\theta}-1}, \hskip10pt \sqrt{\Re e(\alpha)^2-1} =\sqrt{\frac{\cos^2\psi}{\sin^2\eta}-1}$, $\Phi=\arg c\bar\beta$ is uniformly distributed in $[0,2\pi]$ and $$\Re e [U\bar V] - \Re e[c\bar \beta] = \frac{\sin \phi}{\sin\theta}\; \frac{\sin \psi}{\sin\eta} +\sqrt{\frac{\cos^2\phi}{\sin^2\theta}-1}\sqrt{\frac{\cos^2\psi}{\sin^2\eta}-1} -\cot \eta\cot \theta \cos \Phi$$ This gives $$\begin{aligned}
\lefteqn{ \frac{2 \sqrt{\Re e(a)^2-1} \sqrt{\Re e(\alpha)^2-1}}{ \Re e [U\bar V] - \Re e[c\bar \beta] } }\\ & = &\frac{2 \sqrt{ \cos^2\phi -\sin^2\theta}\sqrt{\cos^2\psi-\sin^2\eta }}{ \sin \phi \; \sin \psi +\sqrt{ \cos^2\phi -\sin^2\theta}\sqrt{\cos^2\psi-\sin^2\eta }-\cos \eta\cos \theta \cos \Phi} \\
& = &\frac{2 \sqrt{ 1-X^2 }\sqrt{1-Y^2}}{ XY+\sqrt{ 1-X^2 }\sqrt{1-Y^2} - \cos \Phi} = Z\end{aligned}$$ where we define the random variables $$X =\frac{\sin \phi}{\cos \theta}, \;\;\;\; {\rm and} \;\;\;\; Y= \frac{\sin \psi}{ \cos\eta}$$ In order for $Z\geq 1$ we need $|X|\leq 1$, $|Y|\leq 1$ and $$\sqrt{ 1-X^2 }\sqrt{1-Y^2} \geq \cos \Phi - XY$$ If this last condition holds, then $ [z_1,w_1,z_2,w_2] \geq 1$ requires $$\sqrt{ 1-X^2 }\sqrt{1-Y^2} \geq XY - \cos \Phi$$ Notice that $X$, $Y$ and $\Phi\in_u[0,2\pi]$ are independent, with $X$ and $Y$ identically distributed. Unfortunately $\sqrt{ 1-X^2 }\sqrt{1-Y^2} \pm XY $ is difficult to find directly as $\sqrt{ 1-X^2 }\sqrt{1-Y^2}$ and $XY$ are not independent. We therefore write $$X = \sin S, \;\;\;\ S\in [-\frac{\pi}{2},\frac{\pi}{2}], \hskip10pt Y = \sin T, \;\;\;\ T\in [-\frac{\pi}{2},\frac{\pi}{2}]$$ so that $$\sqrt{ 1-X^2 }\sqrt{1-Y^2} \pm XY = \cos(S\mp T)$$ and we have the two requirements $$\label{2conds} \cos(S\mp T) \geq \pm \cos(\Phi)$$ Following the arguments of §5 we have the p.d.fs $$\begin{aligned}
X & {\rm with \;\; p.d.f.} &F_X(x) = \frac{2}{\pi^2 x} \log \Big| \frac{1+x}{1-x} \Big|, \hskip10pt -1\leq x \leq 1 \\
S& {\rm with \;\; p.d.f.} & F_S(\theta) = \frac{2}{\pi^2} \cot(\theta) \log \Big| \frac{1+\sin(\theta)}{1-\sin(\theta)} \Big|, \hskip10pt -\frac{\pi}{2}\leq \theta \leq \frac{\pi}{2} \end{aligned}$$ We can remove various symmetries and redundancies for the situation to simplify. For instance we may assume $S\geq 0$ and reduce to ranges where $\cos$ is either increasing or decreasing so we can remove it. We quickly come to the following conditions equivalent to (\[2conds\]) with $S$ and $T$ identically distributed as above and $\Phi\in_u[0,\pi/2]$, $$0\leq S, \;\;\;\;-\Phi \leq S-T \leq \Phi, \;\;\;\; {\rm and}\;\;\;\; S+T+\Phi \leq \pi$$ This now sets up an integral which we implemented on Mathematica numerically and which returned the value $0.429\ldots$. We also ran an experiment using random numbers generated by Mathematica to construct the associated matrices $$A=\left(\begin{array}{cc} a & c \\ \bar c & \bar a \end{array} \right), \;\;\;B=\left(\begin{array}{cc} \alpha & \beta\\
\bar \beta & \bar \alpha \end{array} \right),$$ where $a=\frac{e^{i\theta_1}}{\sin(\eta_1)}$, $c=\cot(\eta_1)\; e^{i\theta_2}$, $\alpha=\frac{e^{i\psi_1}}{\sin(\eta_2)}$, $\beta=\cot(\eta_2)\; e^{i\psi_2}$ and distributed $$\theta_1,\theta_2,\psi_1,\psi_2 \in_u[0,2\pi], \;\;\;\; \eta_1,\eta_2\in_u[0,\pi/2]$$ We put $\gamma = \gamma(A,B)=\tr [A,B]-2$.
Left: histogram of $\gamma(A,B)$ values. Right: histogram of $\gamma(A,B)$ values conditioned on $A$ and $B$ being hyperbolic.
We ran through about $10^7$ random matrix pairs of hyperbolic generators and found the probability that $\gamma<0$ to be about $0.429601$.
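The experiment is easy to reproduce outside Mathematica. The sketch below is a minimal Python re-implementation under the same sampling assumptions; matrices are stored by their first row $(a,c)$, since the second row is determined by conjugation.

```python
import cmath
import math
import random

def rand_elem(rng):
    # a = e^{i t}/sin(e), c = cot(e) e^{i d}; then |a|^2 - |c|^2 = 1
    t = rng.uniform(0, 2 * math.pi)
    d = rng.uniform(0, 2 * math.pi)
    e = rng.uniform(0, math.pi / 2)
    return (cmath.exp(1j * t) / math.sin(e),
            (math.cos(e) / math.sin(e)) * cmath.exp(1j * d))

def mul(A, B):
    # product of matrices [[a, c], [conj(c), conj(a)]], kept in (a, c) form
    a, c = A
    b, d = B
    return (a * b + c * d.conjugate(), a * d + c * b.conjugate())

def inv(A):
    a, c = A
    return (a.conjugate(), -c)  # valid since det = 1

def gamma(A, B):
    # gamma(A, B) = tr [A, B] - 2; the trace 2 Re(a) is automatically real
    C = mul(mul(A, B), mul(inv(A), inv(B)))
    return 2 * C[0].real - 2

rng = random.Random(7)
hyper = neg = 0
while hyper < 20_000:
    A, B = rand_elem(rng), rand_elem(rng)
    if abs(A[0].real) > 1 and abs(B[0].real) > 1:  # |tr| > 2: both hyperbolic
        hyper += 1
        neg += gamma(A, B) < 0
p_cross = neg / hyper
print(round(p_cross, 3))  # about 0.43, matching the value quoted above
```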
Let $f,g$ be randomly chosen hyperbolic elements of ${\cal F}$. Then the probability that the axes of $f$ and $g$ cross is $\approx 0.429$.
In contrast, we have the following theorem.
Let $\zeta_1,\zeta_2$ and $\eta_1,\eta_2$ be two pairs of points, each randomly and uniformly chosen on the circle. Let $\alpha$ be the hyperbolic line between $\zeta_1$ and $\zeta_2$ and $\beta$ the hyperbolic line between $\eta_1$ and $\eta_2$. Then the probability that $\alpha$ and $\beta$ cross is $\frac{1}{3}$.
[**Proof.**]{} We can forget the points come in pairs and label them $z_i$, $i=1,2,3,4$ in order around the circle. There are three different cases all with the same probability. [**1.**]{} $z_1$ connects to $z_2$, hence $z_3$ to $z_4$ and the lines are disjoint. [**2.**]{} $z_1$ connects to $z_3$, hence $z_2$ to $z_4$ and the lines intersect. [**3.**]{} $z_1$ connects to $z_4$, hence $z_2$ to $z_3$ and the lines are disjoint.
The result now follows. $\Box$
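A short simulation confirms the count (a Python sketch; four independent uniform points, tested for interleaving):

```python
import math
import random

def chords_cross(t1, t2, s1, s2):
    # the chord {t1, t2} splits the circle into two arcs; the chords
    # cross exactly when s1 and s2 land on different arcs
    span = (t2 - t1) % (2 * math.pi)
    a = (s1 - t1) % (2 * math.pi) < span
    b = (s2 - t1) % (2 * math.pi) < span
    return a != b

rng = random.Random(3)
N = 100_000
hits = sum(
    chords_cross(*(rng.uniform(0, 2 * math.pi) for _ in range(4)))
    for _ in range(N)
)
p_third = hits / N
print(round(p_third, 3))  # about 1/3
```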
Together these theorems quantify the degree to which the fixed points are correlated on the circle. However what we would like to understand is the probability $$\Pr\{\gamma<0| f,g \; \mbox{hyperbolic and $\langle f,g\rangle$ is discrete} \}.$$ Notice that $\gamma(A,B)\in [-4,0]$ implies $\tr^2[A,B]-4\in[-4,0]$ and $[A,B]$ is elliptic and of finite order on a countable subset of $[-4,0]$.
If $f,g\in{\cal F}$ are randomly chosen and if $\gamma(f,g)\in [-4,0]$, then $\langle f,g\rangle$ is almost surely not discrete.
We found that $0.266818$ ($\frac{4}{15}$ ?) of our $10^7$ pairs of hyperbolic elements had $\gamma<-4$ while $0.162394$ of the pairs had $-4<\gamma<0$ and so were not discrete and free with probability one. About $\frac{1}{9}$ of our pairs failed Jørgensen’s test for discreteness, [@Jorg].
Histogram of the cross ratio of the fixed points of a randomly chosen pair of hyperbolic elements.
In the histogram above the singularities are at $0$ and $1$. We make the observation that it seems quite likely that ${\rm Pr}\{[z_1,w_1,z_2,w_2]\geq 1\}=\frac{1}{5}$. It is somewhat of a chore to calculate the cross ratio distribution $X_{cr}$ of four randomly selected points on the circle. This is done in [@GJM] and the distribution is similar to that above, with singularities at $0$ and $1$. However for that distribution $${\rm Pr}\{X_{cr}<0\}={\rm Pr}\{0<X_{cr}<1\}={\rm Pr}\{X_{cr}>1\}=\frac{1}{3}$$ (as can be seen from the action of the group $S_4$ on the cross ratio, [@Beardon]). This shows the distributions are definitely different.
We next turn to a discussion of positive results for discreteness.
Discreteness
============
We now have an easy lower bound for the probability a group generated by two random elements of ${\cal F}$ is discrete based on the following Klein combination theorem (or “ping pong” lemma).
\[pingpong\] Let $f_i$, $i=1,2,\ldots, n$, be hyperbolic transformations of the disk whose isometric disks are all disjoint. Then the group $\langle f_1,f_2,\ldots,f_n \rangle$ generated by these transformations is discrete and isomorphic to the free group $F_n$.
We have already seen that the probability that the isometric disks of a randomly chosen $f\in {\cal F}$ are disjoint is $\frac{1}{2}$. We can slightly generalise this using Corollary \[cor2.6\].
Let $\alpha$ and $\beta$ be arcs on $\IS^1$ with uniformly randomly chosen midpoints $\zeta_\alpha$ and $\zeta_\beta$ and subtending angles $\theta_\alpha$ and $\theta_\beta$ uniformly chosen from $[0,\pi]$. Then the probability that $\alpha$ and $\beta$ meet is $\frac{1}{2}$.
[**Proof.**]{} The smaller arc subtended between $\zeta_\alpha$ and $\zeta_\beta$ has length $\Theta = \arg(\zeta_\alpha \overline{\zeta_\beta})$ and is uniformly distributed in $[0,\pi]$. Then $\alpha$ and $\beta$ are disjoint if $\Theta-\theta_\alpha/2-\theta_\beta/2 \geq 0$. Since Corollary \[cor2.6\] tells us that $2\Theta - \theta_\alpha-\theta_\beta$ is uniformly distributed in $[-2\pi,2\pi]$ the probability this number is positive is $\frac{1}{2}$. $\Box$.
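The $\frac{1}{2}$ can be checked by simulation (a minimal Python sketch, with midpoints uniform on the circle and subtended angles uniform in $[0,\pi]$):

```python
import math
import random

rng = random.Random(5)
N = 100_000
disjoint = 0
for _ in range(N):
    # angular distance between the two midpoints, reduced to [0, pi]
    gap = abs((rng.uniform(0, 2 * math.pi) - rng.uniform(0, 2 * math.pi)
               + math.pi) % (2 * math.pi) - math.pi)
    # the arcs are disjoint iff the gap exceeds the sum of the half-angles
    if gap >= (rng.uniform(0, math.pi) + rng.uniform(0, math.pi)) / 2:
        disjoint += 1
p_disjoint = disjoint / N
print(round(p_disjoint, 3))  # about 1/2
```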
Using Lemma \[pingpong\] this quickly gives us the obvious bound that if $f,g\in {\cal F}$ are randomly chosen, then the probability that $\langle f,g \rangle$ is discrete is at least $\frac{1}{64}$. For $n$ generator groups this number is at least $2^{-(2n-1)!}$. However we are going to have to build a bit more theory to prove the following substantial improvements of these estimates.
\[thma\] The probability that randomly chosen $f,g\in {\cal F}$ generate a discrete group $\langle f,g\rangle$ is at least $\frac{1}{20}$.
\[thmb\] The probability that two randomly chosen hyperbolic transformation $f,g\in {\cal F}$ generate a discrete group $\langle f,g\rangle$ is at least $\frac{1}{5}$.
Let $f,g$ be randomly chosen parabolic elements in ${\cal F}$. Then the probability $\langle f,g\rangle$ is discrete is at least $\frac{1}{6}$, $$\Pr\{ \langle f,g \rangle \;\; \mbox{is discrete given $ f,g\in {\cal F}$ are parabolic} \} \geq \frac{1}{6}$$
Notice that $f$ is parabolic or the identity if and only if $\Re e(a)\in \{\pm1\}$. Theorem \[thma\] follows from Theorem \[thmb\] and the fact that the probability of choosing two hyperbolic elements is $\frac{1}{4}$.
Random arcs on a circle.
========================
Let $\alpha$ be an arc on the circle $\IS$. We denote its midpoint by $m_\alpha\in \IS$ and its arclength by $\ell_\alpha\in [0,2\pi]$. Conversely, given $m_\alpha\in \IS$ and $\ell_\alpha\in [0,2\pi]$ we determine a unique arc $\alpha = \alpha(m_\alpha,\ell_\alpha)$ with this data.
A random arc $\alpha$ is the arc uniquely determined when we choose $m_\alpha\in \IS$ uniformly (equivalently $\arg(m_\alpha)\in_u[0,2\pi])$ and $\ell_\alpha \in_u[0,2\pi]$. We will abuse notation and also refer to random arcs when we restrict to $\ell_\alpha\in_u [0,\pi]$ as for the case of isometric disk intersections. We will make the distinction clear in context.
A simple consequence of our earlier result is the following corollary.
If $m_\alpha,m_\beta \in_u\IS$ and $\ell_\alpha,\ell_\beta\in_u[0,\pi]$, then $$\Pr \{\alpha \cap \beta = \emptyset\} = \frac{1}{2}$$
We need to observe the following lemma.
\[parabolic\] If $m_\alpha,m_\beta \in_u\IS$ and $\ell_\alpha,\ell_\beta\in_u[0,2\pi]$, then $$\Pr \{\alpha \cap \beta = \emptyset\} = \frac{1}{6}$$
[**Proof.**]{} We need to calculate the probability that the argument of $\zeta = m_\alpha \overline{m_\beta}$ is greater than $(\ell_\alpha+\ell_\beta)/2$. Now $\theta = \arg(\zeta)$ is uniformly distributed in $[0,\pi]$, as are the half-lengths $\alpha=\ell_\alpha/2$ and $\beta=\ell_\beta/2$. The joint distribution is uniform, and so we calculate $$\begin{aligned}
\Pr\{\theta \geq \alpha+\beta \} & = & \frac{1}{\pi^3} \int\int\int_{\{\theta\geq \alpha + \beta\}} 1 \; d\theta\,d\alpha\, d\beta \\
& = & \frac{1}{\pi^3} \int_{0}^{\pi} \int_{0}^{\theta} \int_{0}^{\theta-\alpha} d\, \beta \,d\alpha \, d\theta = \frac{1}{6} \end{aligned}$$ and the result follows. $\Box$
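The same simulation as before, now with arclengths drawn uniformly from $[0,2\pi]$, reproduces the $\frac{1}{6}$ (a Python sketch):

```python
import math
import random

rng = random.Random(6)
N = 100_000
disjoint = 0
for _ in range(N):
    # angular distance between the midpoints, reduced to [0, pi]
    gap = abs((rng.uniform(0, 2 * math.pi) - rng.uniform(0, 2 * math.pi)
               + math.pi) % (2 * math.pi) - math.pi)
    # arcs of lengths la, lb ~ U[0, 2pi] are disjoint iff gap >= (la + lb)/2
    la = rng.uniform(0, 2 * math.pi)
    lb = rng.uniform(0, 2 * math.pi)
    if gap >= (la + lb) / 2:
        disjoint += 1
p_sixth = disjoint / N
print(round(p_sixth, 3))  # about 1/6
```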
Next we consider the probability of disjoint pairs of arcs.
Let $m_{\alpha_1},m_{\alpha_2},m_{\beta_1},m_{\beta_2} \in_u\IS$ and $\ell_\alpha,\ell_\beta\in_u[0,\pi]$. Set $$\alpha_i=\alpha(m_{\alpha_i},\ell_{\alpha_i}), \hskip10pt \beta_i=\alpha(m_{\beta_i},\ell_{\beta_i})$$ Then the probability that all the arcs $\alpha_i,\beta_i$, $i=1,2$ are disjoint is $1/20$, $$\Pr \{(\alpha_1\cap\alpha_2)\cup(\beta_1\cap\beta_2)\cup (\alpha_1\cap\beta_1) \cup (\alpha_1\cap\beta_2) \cup (\alpha_2\cap\beta_1) \cup (\alpha_2\cap\beta_2) = \emptyset\} = \frac{1}{20}$$
[**Proof.**]{} We first observe that the events $$(\alpha_1\cap\beta_1) =\emptyset, \;\; (\alpha_1\cap\beta_2)=\emptyset, \;\; (\alpha_2\cap\beta_1) =\emptyset, \;\; (\alpha_2\cap\beta_2) =\emptyset$$ are not independent as (among other reasons) $\alpha_1$ and $\alpha_2$, and similarly $\beta_1$ and $\beta_2$, may overlap. The probability that $(\alpha_1\cap\alpha_2) =\emptyset$ and $(\beta_1\cap\beta_2) =\emptyset$ we have already determined to be equal to $\frac{1}{4}=\frac{1}{2}\times\frac{1}{2}$. The result now follows from the next lemma. $\Box$
\[1/5\] Let $m_{\alpha_1},m_{\alpha_2},m_{\beta_1},m_{\beta_2} \in_u\IS$ and $\ell_\alpha,\ell_\beta\in_u[0,\pi]$. Set $$\alpha_i=\alpha(m_{\alpha_i},\ell_{\alpha_i}), \hskip10pt \beta_i=\alpha(m_{\beta_i},\ell_{\beta_i})$$ and suppose we are given that $(\alpha_1\cap\alpha_2)=(\beta_1\cap\beta_2)=\emptyset$. Then the probability that all the arcs $\alpha_i$ are disjoint from the arcs $\beta_j$, $i,j=1,2$ is $1/5$, $$\Pr \{(\alpha_1\cap\beta_1) \cup (\alpha_1\cap\beta_2) \cup (\alpha_2\cap\beta_1) \cup (\alpha_2\cap\beta_2) = \emptyset\} = \frac{1}{5}$$
[**Proof.**]{} Conditioned by the assumption that $\alpha_1$ and $\alpha_2$ are disjoint, and that $\beta_1$ and $\beta_2$ are disjoint, we note that the events $$(\alpha_1\cap\beta_1) =\emptyset, \;\; (\alpha_1\cap\beta_2)=\emptyset, \;\; (\alpha_2\cap\beta_1) =\emptyset, \;\; (\alpha_2\cap\beta_2) =\emptyset$$ are independent. A little trigonometry reveals that $$\alpha_i \cap \beta_j =\emptyset \leftrightarrow \frac{\ell_{\alpha}+\ell_{\beta}}{2} \leq 2\arcsin \frac{|m_{\alpha_i}-m_{\beta_j}|}{2} = \arg (m_{\alpha_i}\overline{m_{\beta_j}})$$ Now the four variables $\theta_{i,j} = \arg (m_{\alpha_i}\overline{m_{\beta_j}})$, $i,j=1,2$, are uniformly distributed in $[0,\pi]$ and independent. We are requiring $$\min_{i,j} \theta_{i,j} \geq \frac{\ell_{\alpha}+\ell_{\beta}}{2}$$ Now $\frac{\ell_{\alpha}+\ell_{\beta}}{2}=\psi$ is uniformly distributed in $[0,\pi]$ and $$\Pr\{ \min_{i,j} \theta_{i,j} \geq \psi \} = (1-\frac{\psi}{\pi})^4 \label{df}$$ Since $$\frac{1}{\pi} \; \int_{0}^{\pi} (1-\frac{\psi}{\pi})^4 \, d\psi =\frac{1}{5}$$ the result claimed follows. $\Box$
In passing we further note that equation (\[df\]) gives us a density function $\rho(\psi) = \frac{4}{\pi} (1-\frac{\psi}{\pi})^3$ and hence an expected value for $\psi/\pi$ of $$\begin{aligned}
\frac{4}{\pi^2} \int_{0}^{\pi} \psi (1-\frac{\psi}{\pi})^3 \; d\psi & = & 4 \int_{0}^{1} (1-t) t^3\; dt = \frac{1}{5}.\end{aligned}$$
Generalising this result to a greater number of disjoint pairs of arcs quickly gets quite complicated. We state the following without proof; it will not be used here.
Let $m_{\alpha_1},m_{\alpha_2},m_{\beta_1},m_{\beta_2},m_{\gamma_1},m_{\gamma_2} \in_u\IS$ and $\ell_\alpha,\ell_\beta,\ell_\gamma\in_u[0,\pi]$. Set $$\alpha_i=\alpha(m_{\alpha_i},\ell_{\alpha_i}), \hskip10pt \beta_i=\alpha(m_{\beta_i},\ell_{\beta_i}), \hskip10pt \gamma_i=\alpha(m_{\gamma_i},\ell_{\gamma_i})$$ Then the probability that all the arcs $\alpha_i,\beta_i,\gamma_i$, $i=1,2$ are all disjoint is $\frac{3}{1000}$.
One can get results if there is additional symmetry. For instance if the lengths of all the arcs are the same.
Let $m_{i_1},m_{i_2} \in_u\IS^1$, $i=1,\ldots,n$, and $\ell_\alpha\in_u[0,\pi]$. Then the probability that the arcs $\alpha_{ij}=\alpha(m_{i_j},\ell_\alpha)$ are disjoint is $$\frac{1}{(2n) n!} \int_{0}^{1} \sum_{k=0}^{[2-x]} (-1)^k \left(\begin{array}{c} n \\ k\end{array} \right) (2-x-k)^{n} \; dx$$
[**Proof.**]{} We cyclically order the set $\{m_{i_j}:i=1,\ldots, n,\ j=1,2\}$ and let $\theta_k$ be the angle between the $k^{th}$ and $(k+1)^{st}$ point (mod $2n$). Then $\sum_{k=1}^{2n} \theta_k = 2\pi$. The arcs are disjoint if $\theta_k \geq \ell_\alpha$ for every $k$. First we have $2n-1$ independent random variables $\{\theta_k\}_{k=1}^{2n-1}$ whose minimum must exceed $\ell_\alpha$, and second they also must satisfy $2\pi - \sum_{k=1}^{2n-1} \theta_k \geq \ell_\alpha$. The first gives us a factor $\frac{1}{2n}$, and for the second we note that the sum of $m$ uniformly distributed random variables in $[0,1]$ has the Irwin-Hall distribution, with density $$\label{IH}
F_m(x) = \frac{1}{(m-1)!} \sum_{k=0}^{[x]} (-1)^k \left(\begin{array}{c} m \\ k\end{array} \right) (x-k)^{m-1}$$ Thus, with $t=\ell_\alpha/\pi$, $$\Pr\left\{2-\frac{\ell_\alpha}{\pi} \geq \sum_{k=1}^{2n-1} \frac{\theta_k}{\pi}\right\} = \int_{0}^{2-t} F_{2n-1} (x)\; dx$$ The result follows. $\Box$
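The Irwin-Hall density (\[IH\]) is straightforward to evaluate numerically; the sketch below checks it against the piecewise form of $F_3$ used in the example that follows.

```python
import math

def irwin_hall_density(m, x):
    # density of the sum of m independent U[0,1] variables at x
    if x < 0 or x > m:
        return 0.0
    s = sum((-1) ** k * math.comb(m, k) * (x - k) ** (m - 1)
            for k in range(int(x) + 1))
    return s / math.factorial(m - 1)

# piecewise F_3: x^2/2 on [0,1] and (-2x^2 + 6x - 3)/2 on [1,2]
print(irwin_hall_density(3, 0.5), irwin_hall_density(3, 1.5))  # 0.125 0.75
```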
As an example, for two pairs of equi-length arcs we have $$F_3(x) = \left\{ \begin{array}{ll} x^2/2, & 0\leq x\leq 1 \\ (-2x^2+6x-3)/2, & 1\leq x \leq 2 \\ (x^2-6x+9)/2, & 2\leq x \leq 3 \end{array}\right.$$ We see that $$\begin{aligned}
\int_{0}^{2-t} F_3(x)\; dx & = & \int_{0}^{1} F_3(x)\; dx + \int_{1}^{2-t} F_3(x) \; dx = \frac{1}{6} + \frac{2}{3}-\frac{t}{2}-\frac{t^2}{2}+\frac{t^3}{3} \\
\int_{0}^{1} \int_{0}^{2-t} F_3(x) \,dx dt & = & \frac{1}{6} + \int_{0}^{1} \frac{2}{3}-\frac{t}{2}-\frac{t^2}{2}+\frac{t^3}{3} \, dt = \frac{1}{6}+\frac{1}{3} = \frac{1}{2} \end{aligned}$$ and so the probability that two pairs of random equi-arclength arcs with arclength uniformly distributed in $[0,\pi]$, are disjoint is $\frac{1}{8}$. Similarly for three pairs the probability is $\frac{9}{200}$.
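The $\frac{1}{8}$ for two pairs of equi-length arcs can be checked directly (a Python sketch; four uniform midpoints, all arcs carrying the same length $\ell\in_u[0,\pi]$):

```python
import math
import random

rng = random.Random(8)
N = 100_000
hits = 0
for _ in range(N):
    ell = rng.uniform(0, math.pi)  # the common arclength
    m = sorted(rng.uniform(0, 2 * math.pi) for _ in range(4))
    gaps = [m[1] - m[0], m[2] - m[1], m[3] - m[2], 2 * math.pi - m[3] + m[0]]
    # arcs of equal length ell are pairwise disjoint iff every gap
    # between consecutive midpoints is at least ell
    if min(gaps) >= ell:
        hits += 1
p_pairs = hits / N
print(round(p_pairs, 3))  # about 1/8
```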
Random arcs to Möbius groups.
=============================
Given data $m_{\alpha_1},m_{\alpha_2} \in \IS$ with arclength $\ell_\alpha\in [0,\pi]$ we see, just as above, that the arcs centered on the $m_{\alpha_i}$ and on length $\ell_\alpha$ determine a matrix which can be calculated by examination of the isometric circles. We have $$\label{Adef} A= \left(\begin{array}{cc} a & c\\ \bar c & \bar a \end{array} \right), \;\;\; c = i \sqrt{m_{\alpha_1}\, m_{\alpha_2}}\; \cot \frac{\ell_\alpha}{2},\;\;\;\; a=
i \sqrt{ \overline{m_{\alpha_1}} \,m_{\alpha_2}}\; {\rm cosec} \frac{\ell_\alpha}{2}$$ where we make a consistent choice of sign by ensuring $$\frac{c}{a} = m_{\alpha_1} \cos \frac{\ell_\alpha}{2}$$ Of course interchanging $m_{\alpha_1}$ and $m_{\alpha_2}$ sends $a$ to $-\bar a$, and so the data actually uniquely determines the cyclic group $\langle f \rangle$ generated by the associated Möbius transformation $$f(z) = - m_{\alpha_2 }\; \frac{z+m_{\alpha_1}\cos \frac{\ell_\alpha}{2}}{z\, \cos \frac{\ell_\alpha}{2}+m_{\alpha_1}}$$ and not necessarily $f$ itself.
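The correspondence (\[Adef\]) can be exercised numerically. The sketch below (with a hypothetical helper `arc_matrix`; the sign-fixing tolerance is an implementation choice) builds $(a,c)$ from the arc data, checks $\det A = |a|^2-|c|^2 = 1$, and confirms that tangent arcs, $\arg(m_{\alpha_1}\overline{m_{\alpha_2}})=\ell_\alpha$, give a parabolic element.

```python
import cmath
import math

def arc_matrix(m1, m2, ell):
    # c = i sqrt(m1 m2) cot(ell/2), a = i sqrt(conj(m1) m2) cosec(ell/2),
    # with the square-root branch fixed so that c/a = m1 cos(ell/2)
    a = 1j * cmath.sqrt(m1.conjugate() * m2) / math.sin(ell / 2)
    c = 1j * cmath.sqrt(m1 * m2) * math.cos(ell / 2) / math.sin(ell / 2)
    if abs(c / a - m1 * math.cos(ell / 2)) > 1e-9:
        a = -a  # the principal branches disagreed; flip the sign of a
    return a, c

m1, m2, ell = cmath.exp(0.5j), cmath.exp(2.3j), 1.1
a, c = arc_matrix(m1, m2, ell)
det = abs(a) ** 2 - abs(c) ** 2  # cosec^2 - cot^2 = 1

# tangent arcs: place the second midpoint at angular distance ell
ap, cp = arc_matrix(m1, m1 * cmath.exp(-1j * ell), ell)
beta_p = (2 * ap.real) ** 2 - 4  # tr^2 - 4, which vanishes for a parabolic
print(round(det, 6), round(abs(beta_p), 6))  # det ~ 1, beta_p ~ 0
```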
As a consequence we have the following theorem.
There is a one-to-one correspondence between collections of $n$ pairs of random arcs and $n$-generator Fuchsian groups. A randomly chosen $\langle f \rangle\subset {\cal F}$ corresponds uniquely to $m_{\alpha_1}, m_{\alpha_2} \in_u \IS^1$ and $\ell_\alpha\in_u [0,\pi]$.
Notice also that if we recognise the association of cyclic groups with the data and say two cyclic groups are close if they have close generators, then this association is continuous.
We have already seen that for a pair of hyperbolic elements if all the isometric disks are disjoint then the “ping pong” lemma implies discreteness of the groups in question. Then the association between Fuchsian groups and random arcs quickly establishes Theorems \[thma\] and \[thmb\] via Lemma \[1/5\].
If $f$ is a parabolic element of ${\cal F}$, then the isometric circles are adjacent and meet at the fixed point. Conversely, if two random arcs of arclength $\ell_\alpha$ are adjacent we have $\arg(m_{\alpha_1}\overline{m_{\alpha_2}})=\ell_\alpha$, and from (\[Adef\]) $$a=
i (\cos\frac{\ell_\alpha}{2} +i\sin \frac{\ell_\alpha}{2} ) {\rm cosec} \frac{\ell_\alpha}{2} = -1+ i \cot\frac{\ell_\alpha}{2}$$ and $\tr^2(A)-4=0$ so that $A$ represents a parabolic transformation. Similarly if the arcs overlap, then $\tr^2(A)< 4$ and $A$ represents an elliptic transformation.
Let $f,g$ be randomly chosen parabolic elements in ${\cal F}$. Then the probability $\langle f,g\rangle$ is discrete is at least $\frac{1}{6}$.
[**Proof.**]{} As $f$ and $g$ are parabolic, their isometric disks are tangent and the point of intersection lies in a random arc of arclength uniformly distributed in $[0,2\pi]$. Discreteness follows from the “ping pong” lemma and Lemma \[parabolic\]. $\Box$
[99]{}
A.F. Beardon, *The geometry of discrete groups*, Graduate Texts in Mathematics [**91**]{}, Springer-Verlag, 1983.
F.W. Gehring and G.J. Martin, *Commutators, collars and the geometry of Möbius groups*, Journal d'Analyse Mathématique, [**63**]{}, 1994, 175–219.
T. Jørgensen, *On discrete groups of Möbius transformations*, Amer. J. Math., [**98**]{}, No. 3 (1976), 739–749.
G. Martin, *The cross ratio distribution and random punctured tori*, to appear.
G. Martin and G. O'Brien, *Random Fuchsian groups and the dimension of their limit sets*, to appear.
G. Martin, G. O'Brien and Y. Yamashita, *Random Kleinian Groups, [**II**]{}: Two parabolic generators*, to appear.
M.D. Springer, *The algebra of random variables*, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York-Chichester-Brisbane, 1979. ISBN: 0-471-01406-0.
[^1]: Research supported in part by grants from the N.Z. Marsden Fund. This work forms part of G. O’Brien’s Thesis. AMS (1991) Classification. Primary 30C60, 30F40, 30D50, 20H10, 22E40, 53A35, 57N13, 57M60
[P. Piirola$^a$, E. Pietarinen$^{a,b}$, and M.E. Sainio$^{a,b}$]{}[$^a$Department of Physics,\
$^b$Helsinki Institute of Physics,\
P.O. Box 64, 00014 University of Helsinki, Finland]{} [Most of the data from the meson factories were available only after the $\pi N$ partial wave analysis of Koch and Pietarinen [@kochpie] was published over 20 years ago. Since then, both the experimental precision and the theoretical framework have evolved a lot as well as the computing technology. Both the new and the earlier data are to be analysed by a highly modernised version of the earlier approach. Especially the propagation of the measurement errors in the analysis will be considered in detail, visualisation tools will be developed using the Python/Tkinter combination [@python], and the huge data base of experiments will be handled by MySQL [@mysql].]{}
Introduction
============
About 20 years ago Koch and Pietarinen performed an energy-independent partial-wave analysis on pion-nucleon elastic and charge-exchange differential cross sections and elastic polarisations for laboratory momenta below 500 MeV/c incorporating the constraints from fixed-$t$ dispersion relations as well as crossing and unitarity (the KH78 and KH80 analyses) [@kochpie]. Since then, however, new low-energy data have emerged in all charge channels: examples of recent high precision results for the differential cross sections are given in refs. [@pavan; @frlez; @janousch], polarisation parameter in refs. [@gaulard; @hofman; @wieser] and the spin rotation parameter in ref. [@supek]. There are also some new measurements of integrated cross sections [@kriss]. Especially the high precision measurements of the hadronic level shift and width on pionic hydrogen and deuterium [@PSI], giving information of the pion-nucleon interaction just at the threshold, have opened a completely new chapter in the study of the low-energy pion-nucleon interaction. Another direction where significant advances have taken place is the theoretical framework where we study the low-energy pion-nucleon interaction. The tool, chiral perturbation theory ($\chi$PT), has been developed in the 80’s and 90’s and the work continues. The development of $\chi$PT motivates a new partial wave analysis from the theoretical point of view — on one hand strong interaction physics is becoming a precision science also in the low-energy region, on the other hand, there is need for $\pi N$ phenomenology to fix some of the low-energy constants appearing in the meson-baryon lagrangian. See, for example, the talk of Meißner in these proceedings [@meissner].
It is our goal to make use of the new data in an analysis which, in addition to the requirements of analyticity, crossing and unitarity, includes the constraints from chiral symmetry.
The old KH78/80 analysis
========================
The aim of the KH analysis was to determine the amplitudes satisfying several conditions:
1. The amplitudes had to reproduce all the data which were available at that time, which means $\frac{d\sigma}{d\Omega}$, $\sigma_{\mathrm{tot}}$ and $P$ within their associated errors.
2. The solution had to fulfil isospin invariance.
3. All partial waves had to satisfy the unitarity condition.
4. The crossing symmetry was implicitly assumed, because Mandelstam variables were used.
5. The invariant amplitudes were required to have the correct analyticity properties in $s$ at fixed-$t$.
6. The amplitudes at fixed-$s$ were to be analytic in $\cos \theta$ in the small Lehmann ellipse.
The experimental data alone are not enough to fix a unique partial wave solution; further theoretical constraints are needed. The constraints from fixed-$t$ analyticity and from isospin invariance are strong enough to resolve the ambiguities [@hohler].
Three stages of the analysis
============================
Fixed-$t$ analysis
------------------
The old KH analyses consist of three phases: fixed-$t$ analysis, fixed-$\theta$ analysis and fixed-$s$ analysis, which were iterated until the results agreed up to about 3 %. The fixed-$t$ analysis was carried out at 40 $t$-values in the range from zero to $-1.0$ GeV$^2$. The analysis would be too complicated, if one were working with dispersion integrals, so the expansion [@pietexp1; @pietexp2] $$\label{pietexp}
C(\nu, t) = C_N(\nu,t)+H(Z,t) \sum_{i=0}^n c_i Z^i$$ was used for $t$-values smaller than $-4m\mu$, ($\nu = (s-u)/4 m_N$). In the expansion the nucleon pole term $C_N(\nu,t)$ is treated separately, and the sum is multiplied by a factor $H(Z,t)$, which describes the expected asymptotic behaviour. The essence of the expansion is that the sum is written in terms of functions $Z$, which have the correct analytic behaviour, i.e. it is [*not*]{} a polynomial approximation, but a series presentation of an analytic function, which is just truncated at some reasonable point (ca. $n=50$ or $n=100$), because infinite accuracy is impossible. The condition of smoothness and the compatibility with the data constrain the terms with large index $i$ to be negligible [@pietexp1; @pietexp2].
The expansion coefficients $c_i$ are determined by minimising $$\chi^2 = \chi^2_{\mathrm{data}} + \chi^2_{\mathrm{pw}} +
\chi^2_{\mathrm{penalty}} \;.$$ Here $\chi^2_\mathrm{data}$ comes from the experimental errors, $\chi^2_\mathrm{pw}$ belongs to the deviation from the fixed-$s$ partial wave solution and the last term is used to suppress large values of the higher coefficients of the expansion. In practice, the analyticity constraints cannot be used without smoothing the data. The aim is, of course, to smooth out the statistical fluctuations without distorting the physically relevant structures.
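As an illustration of the structure of this minimisation (a toy sketch only, not the actual Pietarinen expansion, which expands in a conformal variable $Z$ with the correct analyticity; the polynomial basis, the $i^4$ penalty weight and the value of $\lambda$ below are hypothetical), one can fit expansion coefficients to noisy data with a penalty term that suppresses the higher coefficients:

```python
import math
import random

def fit_penalized(xs, ys, n_terms, lam):
    # minimise  sum_j (y_j - sum_i c_i x_j^i)^2  +  lam * sum_i i^4 c_i^2;
    # the second term plays the role of chi^2_penalty, suppressing large
    # values of the higher expansion coefficients
    n = n_terms
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for x, y in zip(xs, ys):
        for i in range(n):
            b[i] += y * x ** i
            for k in range(n):
                A[i][k] += x ** (i + k)
    for i in range(n):
        A[i][i] += lam * i ** 4
    # solve the normal equations by Gaussian elimination with pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (b[i] - sum(A[i][k] * c[k] for k in range(i + 1, n))) / A[i][i]
    return c

rng = random.Random(0)
xs = [i / 40 for i in range(41)]
ys = [math.sin(3 * x) + rng.gauss(0, 0.01) for x in xs]  # noisy "data"
c = fit_penalized(xs, ys, 8, 1e-6)
rms = math.sqrt(sum((sum(ci * x ** i for i, ci in enumerate(c))
                     - math.sin(3 * x)) ** 2 for x in xs) / len(xs))
print(rms)  # small: the smoothed fit tracks the underlying curve
```

The penalty improves the conditioning of the normal equations and damps statistical fluctuations in the data, which is the smoothing role $\chi^2_{\mathrm{penalty}}$ plays in the analysis.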
Fixed-$\theta$ analysis
-----------------------
The fixed-$t$ constraint is often used only for $t$ values from zero to about $-0.5$ GeV$^2$, because the partial wave expansions for the imaginary parts of the invariant amplitudes do not converge for large $|t|$. In the range $t \in (-0.5, -1.5]$ GeV$^2$ the truncated partial wave expansions can still be reasonable approximations, but for $t$ values below ca. $-1$ or $-1.5$ GeV$^2$ the fixed-$t$ analyticity cannot be applied anymore. So another analyticity constraint is used to cover the rest of the angular range at intermediate and at high energies. The calculation was made at 18 angles between $\cos \theta = -0.9 \ldots 0.8$. The analysis methods are the same as in the fixed-$t$ analysis: the expansion method is used and the coefficients are fixed by minimising $\chi^2$, i.e. by fitting to the data and to the fixed-$t$ solution.
Fixed-$s$ analysis
------------------
The third stage, the fixed-$s$ analysis, is a standard phase shift analysis in the sense that the partial waves are fitted to the data. On the other hand, it is not the usual one, because the partial waves are fitted also to the fixed-$t$ and to the fixed-$\theta$ amplitudes. Now 92 momentum values were selected from the momentum range $0\ldots200$ GeV/c, 6 of the momenta were above 6 GeV/c. Again, the coefficients were fixed by minimising $\chi^2$ which now included also a term suitable to enforce unitarity.
Treatment of the data
=====================
The electromagnetic corrections proposed by Tromborg et al. [@tromborg] were applied to the data at momentum values below $0.65$ GeV/c. At higher momenta, only the one-photon exchange correction was applied, taking the Coulomb phase into account. In all three analyses, the data were shifted to the selected bins (i.e. the selected values of $s$, $t$ or $\theta$), using the previous solution of the iteration to calculate the correction. Data points requiring too large a momentum shift were omitted. The normalisation of some data sets had to be corrected to guarantee a smooth extrapolation to the forward direction and to the input for the forward amplitude.
Life after KH80
===============
The latest KH phase shift analysis was finished in 1980. Since then, the pionic hydrogen level shift and width have been measured very accurately, $\frac{d \sigma}{d \Omega}$ has been measured with good accuracy, and many spin rotation parameter measurements have been made, as well as polarisation parameter measurements. Also, some integrated cross section measurements have been performed. The newer data have never been analysed by the methods of Koch and Pietarinen, and, for example, the results of Pavan et al. [@pavan] are not compatible with the results of the old analyses. So an updated version of the analysis is certainly needed.
The code of Pietarinen
======================
The original code was written for the NDP Fortran compiler, which runs under MS-DOS, and it needs to be ported to UNIX. We have tested the code on an old MS-DOS machine, and most of the main tasks seem to work correctly. What is still needed is a modification of the $s$-plane conformal mapping; this affects all routines related to the fixed-$t$ analysis. Some modifications are also needed to enable the isospin analysis.
The code base is divided into several parts:
- There is a program for comparing the partial wave solution to existing data. It simply plots the data and the solution in the same picture, and allows the comparison of different data sets to the solution.
- The second part is for shifting the experimental data points into the fixed-$t$ bins. The earlier solution is used for interpolation, and those data points which are to be shifted too much are rejected.
- The next part is for making a starting value for the fixed-$t$ expansion.
- One program is for the iteration to adjust the fixed-$t$ amplitude to the experimental data and to the current solution.
- The main part of the program makes the actual partial wave analysis and adjusts the solution simultaneously to the data and to the fixed-$t$ amplitudes.
The current status
==================
Porting the code
----------------
During the porting process, the most extensive work is needed for writing the graphical user interface, the data base engine and the plotting routines. The graphics routines of the original code were impossible to get working under UNIX, so we decided to use the Python/Tkinter combination for the GUI and the Python/Gnuplot combination for the plotting routines. The old code was reused as much as possible, but many parts still needed almost complete rewriting. We decided to write all new code in Fortran 95, so, at the moment, most of the calculation engine is written in Fortran 77 and some parts in Fortran 95. All routines of the old code were modified to take almost all input from stdin and to write output to stdout, so it should now be possible to replace the whole GUI with a reasonable amount of work, whenever that becomes necessary.
The current status
------------------
At present, the plotting program, the interpolation program and the program calculating the starting value of the fixed-$t$ analysis have been ported to UNIX, with all the functionality of the original versions implemented (fig. \[xplot\]). The part which iterates to adjust the fixed-$t$ amplitudes, the partial wave solution and the experimental data compiles, but its GUI is still under construction. The heart of the whole program, the part performing the actual analysis, is still completely untested.
Comparison of the numerics
--------------------------
We have compiled the code with different compilers running on different operating systems in order to check the stability of the mathematical subroutines[^1]. Comparing the results of the routines compiled by different compilers showed that the routines are [*not*]{} currently stable enough for production use.
For illustration, the interpolator part of the program calculates 173670 interpolated data points. Comparing the results of routines compiled by GNU Fortran with those calculated by NDP Fortran, one finds that 96% of the new data points agree to within 0.01%, but in some cases there are significant discrepancies: 73 of the 173670 data points differ by more than 1%, and in the worst case the difference is 28%. The cause of these discrepancies is unknown at the time of writing.
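The comparison amounts to simple relative-difference statistics over the two sets of interpolated values; schematically (illustrative code, not the actual test harness):

```python
import numpy as np

def discrepancy_report(a, b):
    """Summarize relative differences between two arrays of interpolated points."""
    rel = np.abs(a - b) / np.maximum(np.abs(b), 1e-300)  # guard against division by zero
    return {
        "agree_0.01pct": float(np.mean(rel <= 1e-4)),  # fraction agreeing to 0.01%
        "worse_1pct": int(np.sum(rel > 1e-2)),         # points differing by more than 1%
        "max_rel": float(rel.max()),
    }
```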
The next phase
==============
After chasing the bugs and finding the reasons for the numerical instabilities, we hope to find a better way to handle the propagation of the experimental errors than that used in the old KH analysis.
Because of the size of the data base, and because of the discrepancies between the different data sets, visualisation of the data and the partial wave solutions is essential. The program is also used heavily during the analysis, so the graphical user interface has to be easy and efficient to use. To make the choice of data sets as easy as possible, we plan to convert our text file data bases to MySQL format.
We intend to get the first preliminary results by the end of the year.
R. Koch and E. Pietarinen, Nucl. Phys. **A336**, 331 (1980).
G. van Rossum et al., `http://www.python.org/` (1990-2001).
M. Widenius et al., `http://www.mysql.com/`.
M.M. Pavan et al., nucl-ex/0103006 (2001).
E. Frlež et al., Phys. Rev. **C57**, 3144 (1998), hep-ex/9712024.
M. Janousch et al., Phys. Lett. **B414**, 237 (1997).
C.V. Gaulard et al., Phys. Rev. **C60**, 024604 (1999).
G.J. Hofman et al., Phys. Rev. **C58**, 3484 (1998).
R. Wieser et al., Phys. Rev. **C54**, 1930 (1996).
I. Supek et al., Phys. Rev. **D47**, 1762 (1993).
B.J. Kriss et al., Phys. Rev. **C59**, 1480 (1999).
H.C. Schröder et al., Phys. Lett. **B469**, 25 (1999).
Ulf-G. Meißner, these proceedings, hep-ph/0108133 (2001).
G. Höhler, in: [*Landolt-Börnstein*]{} Vol. 9b2 (1983).
E. Pietarinen, Nucl. Phys. **B107**, 21 (1976).
E. Pietarinen, Physica Scripta **14**, 11 (1976).
B. Tromborg, S. Waldenstrøm, and I. Øverbø, Phys. Rev. **D15**, 725 (1977).
J.C. Alder et al., Phys. Rev. **D27**, 1040 (1983).
J.C. Alder et al., Lett. Nuovo Cim. **23**, 381 (1978).
V.S. Bekrenev et al., Sov. J. Nucl. Phys. **24**, 45 (1976).
[^1]: We have tried the old NDP Fortran compiler for MS-DOS, the Compaq Fortran in Digital UNIX, the Lahey/Fujitsu Fortran for Linux, and different versions of the GNU/Linux Fortran.
---
abstract: 'We demonstrate the equivalence of two definitions of a Gibbs measure on a subshift over a countable group, namely a conformal measure and a Gibbs measure in the sense of the Dobrushin-Lanford-Ruelle (DLR) equations. We formulate a more general version of the classical DLR equations with respect to a measurable cocycle, which reduce to the classical equations when the cocycle is induced by an interaction or a potential, and show that a measure satisfying these equations must be conformal. To ensure the consistency of these results with earlier work, we review methods of constructing an interaction from a potential and vice versa, such that the interaction and the potential constructed from it, or vice versa, induce the same cocycle.'
address:
- 'Mathematics Department, University of British Columbia, Vancouver, British Columbia, Canada, V6T 1Z2 and Departamento de Matemática, Instituto de Matemática e Estatística, Universidade de São Paulo, R. do Matão 1010, São Paulo, SP 05508-900, Brazil'
- 'Mathematics Department, University of British Columbia, Vancouver, British Columbia, Canada, V6T 1Z2'
author:
- 'Lu[í]{}sa Borsato'
- Sophie MacDonald
title: 'Conformal measures and the Dobrushin-Lanford-Ruelle equations'
---
Introduction
============
This paper is concerned with two notions of a Gibbs measure on a subshift over a countable group. The first of these is defined by the Dobrushin-Lanford-Ruelle (DLR) equations, or equivalently a Gibbsian specification. This notion of a Gibbs measure appears for instance in the classical theorems of Dobrushin [@dobrushin1970conditional] and Lanford-Ruelle [@lanfordruelle1969observables]. The second is the notion of a conformal measure, introduced in [@petersen1997symmetric] and [@denker-urbanski-1991-conformal] and used for instance by Meyerovitch in [@meyerovitch-2013-gibbs-eqm] as the setting for a stronger Lanford-Ruelle theorem. There are other definitions in the literature, such as a Gibbs measure in the sense of Bowen, but we do not consider these here.
The purpose of the present article is to show that the two notions of Gibbs measure recalled above coincide in some generality. Our results build on those of Kimura [@kimura-2015-thesis], who proves two results relevant here. The first is that every conformal measure, with respect to an appropriately regular potential, satisfies the DLR equations for that potential. The second is a partial converse, namely that every measure satisfying the DLR equations for such a potential is topologically Gibbs. This is a weaker property than being conformal, although equivalent on certain subshifts, such as shifts of finite type [@meyerovitch-2013-gibbs-eqm]. Sarig ([@sarig1999survey], [@sarig2009notes]) obtains the full converse in the case of a topologically mixing one-sided shift of finite type, using martingale and Ruelle transfer operator methods.
Our main result, Theorem \[main-thm-abs\], strengthens these partial results to a full converse in a more general setting. Specifically, we show that any measure satisfying certain equations with respect to a measurable cocycle on the Gibbs relation must also be conformal with respect to that cocycle. When the cocycle is induced either by an interaction or by a potential in the standard way, these equations reduce to the classical DLR equations. We prove this result for arbitrary subshifts with finite alphabet on an arbitrary countable group. The results of Kimura and Sarig in the forward direction (conformal implies DLR) can also be generalized to our setting; in §\[equiv\_sec\], we mention the idea for the proof but refer readers to [@kimura-2015-thesis] for the details in Kimura’s setting, as the proof strategy changes very little.
The plan is as follows. In §\[def\_sec\], we review the definitions and basic facts required to prove our main result in §\[equiv\_sec\]. In §\[intrxn\_sec\] and §\[potl\_sec\], we recall well-known material on interactions and potentials, respectively, in order to show that the equations involved in our main theorem do in fact reduce to the classical DLR equations. In §\[potl\_from\_intrxn\_sec\], we recall results of Muir and Kimura, elaborating on Ruelle, by which a potential can be constructed from a sufficiently regular interaction, and vice versa, with “physical” data (Gibbs and equilibrium measures) preserved.
In §\[potl\_sec\] and §\[potl\_from\_intrxn\_sec\], we require that the underlying group admit a finite generating set that yields a certain spherical growth condition, defined in §\[potl\_sec\]. This condition is satisfied, for any generating set, by any group of polynomial growth, such as ${\mathbb{Z}}^d$, the case of greatest physical interest. It is also satisfied by any group $G$ isomorphic to the free group $F_n$, with generating set of cardinality $n$.
Cocycles and the Gibbs relation: definitions and properties {#def_sec}
===========================================================
Throughout, let $G$ be a countable group with identity $e$. Let ${\mathcal{A}}$ be a finite alphabet equipped with the discrete topology, and $X \subseteq {\mathcal{A}}^G$ a subshift, i.e. a closed set in the product topology, invariant under the natural right action of $G$ via $(x \cdot g)_h = x_{g h}$. The topology on $X$ is generated by cylinders, i.e. sets of the form $[\omega] = \{ x \, | \, x_{\Lambda} = \omega \}$ for finite sets $\Lambda \Subset G$. This topology can be induced by a metric such that the resulting metric space is complete and separable; that is, ${\mathcal{A}}^G$ is a Polish space. We equip $X$ with the Borel $\sigma$-algebra ${\mathcal{S}}$.
The *Gibbs relation*, also called the asymptotic relation, is the equivalence relation ${\mathfrak{T}}_X \subset X \times X$ such that $(x,y) \in {\mathfrak{T}}_X$ if and only if $x_{\Lambda^c} = y_{\Lambda^c}$ for some finite set $\Lambda \Subset G$. Let $(\Lambda_N)_{N=1}^{\infty}$ be a sequence of finite sets exhausting $G$, i.e. $(\Lambda_N)_{N=1}^{\infty}$ is an increasing sequence and $G = \displaystyle\cup_{N=1}^{+\infty} \Lambda_N$. Define the subrelation ${\mathfrak{T}}_{X,N} = \{ (x,y): x_{\Lambda_N^c} = y_{\Lambda_N^c} \} \subseteq {\mathfrak{T}}_X$. Observe that, for each subrelation ${\mathfrak{T}}_{X,N}$, each equivalence class is a finite set, and that ${\mathfrak{T}}_X = \cup_{N=1}^{\infty} {\mathfrak{T}}_{X,N}$. (In the language of Borel equivalence relations, this means that ${\mathfrak{T}}_X$ is *hyperfinite* [@kechris2019borel], which we mention for context, although we do not use any theorems about hyperfiniteness in this paper.) This shows in particular that every equivalence class in ${\mathfrak{T}}_X$ is at most countable. Note that we can write each subrelation as ${\mathfrak{T}}_{X,N} = \cap_{n=N}^{\infty} \cup_{\omega \in {\mathcal{A}}^{\Lambda_n \setminus \Lambda_N}} [\omega] \times [\omega]$, which shows that ${\mathfrak{T}}_{X,N}$ is a measurable subset of $X \times X$ in the product $\sigma$-algebra ${\mathcal{S}}\otimes {\mathcal{S}}$, as is ${\mathfrak{T}}_X$.
For Borel sets $A, B \subseteq X$, a *holonomy* of ${\mathfrak{T}}_X$ (${\mathfrak{T}}_{X, N}$) is a Borel isomorphism $\psi: A \to B$ such that $(x, \psi(x)) \in {\mathfrak{T}}_X$ (${\mathfrak{T}}_{X, N}$) for all $x \in A$. We say that a holonomy $\psi$ is *global* if $A = B = X$. The definition for ${\mathfrak{T}}_{X,N}$ is analogous, with a holonomy of ${\mathfrak{T}}_{X,N}$ also a holonomy of ${\mathfrak{T}}_X$, for every $N$.
For a Borel set $A \subseteq X$, we denote ${\mathfrak{T}}_X(A) = \cup_{x \in A} \{ y \in X | (x,y) \in {\mathfrak{T}}_X \}$, and the same for the subrelations. The saturations ${\mathfrak{T}}_X(A)$ and ${\mathfrak{T}}_{X, N}(A)$ are easily shown to be Borel using the fact that the diagonal in $X \times X$ is measurable in the product $\sigma$-algebra, which follows as an easy exercise from the fact that $X$ is Polish.
We say that a measure $\mu$ on $X$ (by which we always mean a Borel probability measure) is ${\mathfrak{T}}_X$-nonsingular if for every Borel $A \subset X$ with $\mu(A) = 0$, we have $\mu({\mathfrak{T}}_X(A)) = 0$. Note that if $\mu$ is ${\mathfrak{T}}_X$-nonsingular and $\psi: A \to B$ is a holonomy of ${\mathfrak{T}}_X$, then whenever $E \subset A$ has $\mu(E) = 0$, we have $\mu(\psi(E)) \leq \mu({\mathfrak{T}}_X(E)) = 0$. In particular, the Radon-Nikodym derivative $\frac{d(\mu \circ \psi)}{d\mu}$ is well-defined. The same holds with ${\mathfrak{T}}_X$ replaced by ${\mathfrak{T}}_{X,N}$.
A (real-valued) cocycle on ${\mathfrak{T}}_X$ is a Borel measurable function $\phi: {\mathfrak{T}}_X \to {\mathbb{R}}$ such that $\phi(x,y) + \phi(y,z) = \phi(x,z)$ for all $x,y,z \in X$ with $(x,y), (y,z) \in {\mathfrak{T}}_X$ (so that $(x,z) \in {\mathfrak{T}}_X$ as well). Any cocycle on ${\mathfrak{T}}_X$ clearly restricts to a cocycle on ${\mathfrak{T}}_{X,N}$, for any given $N$. Given a ${\mathfrak{T}}_X$-nonsingular measure $\mu$ on $X$, we say that a Borel function $D: {\mathfrak{T}}_X \to {\mathbb{R}}$ is a *Radon-Nikodym cocycle* on ${\mathfrak{T}}_X$ with respect to $\mu$ if the pushforward of $\mu$ by any holonomy $\psi: A \to B$ of ${\mathfrak{T}}_X$ satisfies $\frac{d(\mu \circ \psi)}{d\mu}(x) = D(x, \psi(x))$ for $\mu$-a.e. $x \in A$. It is routine to show that any ${\mathfrak{T}}_X$-nonsingular measure $\mu$ on $X$ has a $\mu$-a.e. unique Radon-Nikodym cocycle. Indeed, if $\psi_1, \psi_2: A \to B$ are two holonomies that agree $\mu|_A$-a.e., then they yield equal derivatives $\frac{d(\mu \circ \psi_1)}{d\mu}(x) = \frac{d(\mu \circ \psi_2)}{d\mu}(x)$ for $\mu$-a.e. $x \in A$, so in particular, given a holonomy $\psi: X \to X$, the value $\frac{d(\mu \circ \psi)}{d\mu}(x)$ depends, except for at most a null set of points of $X$, on the pair $(x,\psi(x))$; we can therefore take $D(x,y) = \frac{d(\mu \circ \psi)}{d\mu}(x)$ for some holonomy $\psi$ with $\psi(x)=y$.
Let $\mu$ be a ${\mathfrak{T}}_X$-nonsingular Borel probability measure on $X$, and let $\phi: {\mathfrak{T}}_X \to {\mathbb{R}}$ be a cocycle. We say that $\mu$ is $(\phi, {\mathfrak{T}}_X)$-*conformal* if for any holonomy $\psi: A \to B$ of ${\mathfrak{T}}_X$, and $\mu$-a.e. $x \in A$, we have $D_{\mu, {\mathfrak{T}}_X}(x,\psi(x)) = \exp(\phi(x,\psi(x)))$, where $D_{\mu, {\mathfrak{T}}_X}$ denotes the Radon-Nikodym cocycle of $\mu$.
The name “conformal measure” was given in [@denker-urbanski-1991-conformal], motivated by Patterson’s study [@patterson1976fuchsian] of measures on the limit sets of particular groups of conformal mappings of the unit disc in the complex plane, or more generally of hyperbolic space. In the case of an identically zero cocycle, conformal measures were also studied in [@petersen1997symmetric] under the name ${\mathfrak{T}}_X$-invariant measures.
Let $X \subseteq {\mathcal{A}}^G$ be a subshift, $\phi$ a cocycle on ${\mathfrak{T}}_X$, and $\mu$ a measure on $X$. For a Borel set $A \subseteq X$ and a finite set $\Lambda \Subset G$, the DLR equation for $x \in X$ is as follows:
$$\label{dlr-eq-cocycle}
\mu(A \,|\, \mathcal{F}_{\Lambda^c})(x) =
\sum_{\eta \in {\mathcal{A}}^{\Lambda}} \left[ \sum_{\zeta \in {\mathcal{A}}^{\Lambda}} \exp( \phi(\eta x_{\Lambda^c}, \zeta x_{\Lambda^c})) \mathbf{1}_X(\zeta x_{\Lambda^c}) \right]^{-1} \mathbf{1}_A(\eta x_{\Lambda^c})$$
We say that $\mu$ is DLR with respect to $\phi$ if, for any Borel $A \subseteq X$ and any $\Lambda \Subset G$, equation \[dlr-eq-cocycle\] holds for $\mu$-a.e. $x \in X$.
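For orientation, in the simplest case $\phi \equiv 0$ (corresponding to the ${\mathfrak{T}}_X$-invariant measures of [@petersen1997symmetric]), the bracketed sum counts the patterns on $\Lambda$ compatible with $x_{\Lambda^c}$, and equation \[dlr-eq-cocycle\] reduces to the uniform conditional distribution $$\mu(A \,|\, \mathcal{F}_{\Lambda^c})(x) = \frac{\left| \{ \eta \in {\mathcal{A}}^{\Lambda} : \eta x_{\Lambda^c} \in A \} \right|}{\left| \{ \zeta \in {\mathcal{A}}^{\Lambda} : \zeta x_{\Lambda^c} \in X \} \right|} \;.$$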
To prove our main result, we will need the following lemma:
\[group\] There exists a countable group $\Gamma$ of global holonomies of $X$ such that $${\mathfrak{T}}_X = \{ (x, \gamma(x)): x \in X, \gamma \in \Gamma \}.$$ In other words, $\Gamma$ generates ${\mathfrak{T}}_X$.
The group $\Gamma$ can be described explicitly as a countable increasing union of finite groups $\Gamma_N$. For each $N$, the group $\Gamma_N$ generates ${\mathfrak{T}}_{X, N}$ and is isomorphic to the symmetric group on $|{\mathcal{A}}^{\Lambda_N}|$ symbols. Take $\Gamma_N$ to be generated by holonomies $\psi$ of the following form: given $\omega, \eta \in {\mathcal{A}}^{\Lambda_N}$, define $\psi_{\omega, \eta}: X \to X$ by $$\psi_{\omega, \eta}(x) = \begin{cases}
\eta x_{\Lambda_N^c} & x_{\Lambda_N} = \omega, \, \eta x_{\Lambda_N^c} \in X \\
\omega x_{\Lambda_N^c} & x_{\Lambda_N} = \eta, \, \omega x_{\Lambda_N^c} \in X \\
x & \text{otherwise}
\end{cases}$$ That is, $\psi_{\omega, \eta}$ exchanges $\omega$ and $\eta$, wherever possible, and otherwise does nothing. These involutions were considered in [@meyerovitch-2013-gibbs-eqm] and [@kimura-2015-thesis], for slightly different purposes.
Observe that $(x,y) \in {\mathfrak{T}}_{X,N}$ if and only if there exists $\psi \in \Gamma_N$ with $\psi(x) = y$, so ${\mathfrak{T}}_{X,N}$ is precisely the orbit relation of $\Gamma_N$. The result for $\Gamma$ is immediate.
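To make the construction concrete, the involutions $\psi_{\omega, \eta}$ admit a direct computational model (an illustrative sketch, not part of the proof: configurations on a finite window stand in for points of $X$, and membership in $X$ is abstracted as a predicate):

```python
def make_swap(omega, eta, window, in_X=lambda y: True):
    """The involution psi_{omega,eta}: exchange the patterns omega and eta
    on the coordinates in `window`, wherever the swapped configuration
    stays in X; otherwise leave the configuration fixed."""
    def psi(x):
        patch = tuple(x[g] for g in window)
        for src, dst in ((omega, eta), (eta, omega)):
            if patch == src:
                y = dict(x)
                y.update(zip(window, dst))
                return y if in_X(y) else x
        return x
    return psi
```

The orbit relation of the group generated by these swaps is then exactly ${\mathfrak{T}}_{X,N}$, as in the proof.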
We mention for context that Lemma \[group\] is a special case of the main theorem of [@feldman-moore-1977-equivalence], which in fact asserts the same for any Borel equivalence relation on a Polish space in which every equivalence class is countable. This result was adapted to the symbolic setting in [@meyerovitch-2013-gibbs-eqm], with the countability of the equivalence classes established via the expansivity of the shift action. The proof is presented for subshifts over ${\mathbb{Z}}^d$, but the same proof goes through for arbitrary countable groups without modification. However, since we establish Lemma \[group\] directly, we do not need to appeal to the theorem of [@feldman-moore-1977-equivalence] (nor the symbolic corollary in [@meyerovitch-2013-gibbs-eqm]).
Equivalence of the conformal and DLR properties {#equiv_sec}
===============================================
For us, the main value of Lemma \[group\] is the following lemma, which reveals in particular that to show that a given measure is conformal (such as in Theorem \[main-thm-abs\]), it is sufficient to consider only global holonomies.
\[ets\_group\_conf\] Let $\mu$ be a Borel probability measure on $X$, let $\phi$ be a cocycle on ${\mathfrak{T}}_X$, and let $\Gamma$ be a countable group generating ${\mathfrak{T}}_X$. Then $\mu$ is $(\phi, {\mathfrak{T}}_X)$-conformal if and only if, for each $\gamma \in \Gamma$, the pushforward $\mu \circ \gamma$ is absolutely continuous with respect to $\mu$, with $\frac{d(\mu \circ \gamma)}{d\mu}(x) = \exp(\phi(x, \gamma(x)))$ for $\mu$-a.e. $x \in X$.
The “only if” direction is immediate from the definition of conformal measure. To confirm the “if” direction, we first check nonsingularity. Let $A \subset X$ be Borel with $\mu(A) = 0$. Then ${\mathfrak{T}}_X(A) = \bigcup_{\gamma \in \Gamma} \gamma(A)$, which is a countable union and thus has measure zero by the explicit expression for $\frac{d (\mu\circ\gamma)}{d\mu}$.
Now let $\psi: A \to B$ be a holonomy of ${\mathfrak{T}}_X$ and let $E \subseteq A$ be Borel. Let $(\gamma_n)_{n \in {\mathbb{N}}}$ be an enumeration of $\Gamma$. For each $n \in {\mathbb{N}}$, let $E_n = \{ x \in E: \psi(x) = \gamma_n(x) \}$. To see that each $E_n$ is Borel, define the map $\tau_n: X \to X \times X$ by $\tau_n(x) = (\psi(x), \gamma_n(x))$, which is clearly measurable in the product $\sigma$-algebra. Then $E_n = \tau_n^{-1}(D)$ where $D \subset X \times X$ is the diagonal, which, as discussed above, is also Borel in the product $\sigma$-algebra, because $X$ is Polish.
Now let $E_0' = E_0$, and for $n \geq 1$, let $E_n' = E_n \setminus \cup_{k=0}^{n-1} E_k$. The Borel sets $E_n'$ partition $E$, so $$\mu(\psi(E)) = \sum_{n=0}^{\infty} \mu(\gamma_n(E_n')) = \int_{E} \exp(\phi(x, \psi(x))) \, d\mu(x)$$ Thus $\frac{d(\mu \circ \psi)}{d\mu}(x) = \exp( \phi(x, \psi(x)))$ for $\mu$-a.e. $x \in A$, as required.
We will use Lemma \[ets\_group\_conf\] in concert with the following lemma, which reduces the question of $(\phi, {\mathfrak{T}}_X)$-conformality to that of conformality with respect to the finite-order subrelations.
\[fin\_imply\_full\] Let $\mu$ be a measure on $X$ and $\phi$ a cocycle on ${\mathfrak{T}}_X$. Suppose that $\mu$ is $(\phi, \mathfrak{T}_{X,N})$-conformal for each $N \geq 1$. Then, $\mu$ is $(\phi, \mathfrak{T}_X)$-conformal.
Let $\psi: X \to X$ be a global holonomy of the Gibbs relation $\mathfrak{T}_X$ and let $A \subseteq X$ be a Borel set. We begin by writing $A$ as the increasing union $A = \cup_{N = 0}^{\infty} A_N$, where $A_N = \{ x \in A: (x, \psi(x)) \in \mathfrak{T}_{X,N} \}$. Since $\psi|_{A_N}$ is a local holonomy of $\mathfrak{T}_{X,N}$ and $\mu$ is $(\phi, \mathfrak{T}_{X, N})$-conformal, we have $$\begin{aligned}
\mu(\psi(A)) &= \lim_{N \to \infty} \mu(\psi(A_N)) \\
&= \lim_{N \to \infty} \int_{A_N} \exp( \phi(x, \psi(x) ) ) \, d\mu(x) \\
&= \int_{A} \exp( \phi(x, \psi(x) ) ) \, d\mu(x),\end{aligned}$$ by dominated convergence. Thus, $\mu$ is indeed $(\phi, {\mathfrak{T}}_X)$-conformal.
To echo the comment above on hyperfiniteness, we remark here that both of these results apply, with the same proofs, to any hyperfinite Borel equivalence relation on any Polish space. The following lemma, by contrast, seems to rely more specifically on the structure of $X$ as a subshift.
\[dlr\_conf\_subrel\_abs\] Let $X \subseteq \mathcal{A}^{G}$ be a subshift, let $\phi$ be a cocycle on ${\mathfrak{T}}_X$, and let $\mu$ be a DLR measure on $X$ with respect to $\phi$. Let $N \geq 1$. Then $\mu$ is $(\phi, {\mathfrak{T}}_{X,N})$-conformal.
It is enough to show that $\mu(\psi([\omega])) = \int_{[\omega]} \exp( \phi(x, \psi(x))) \, d\mu(x)$ for any cylinder $[\omega]$ and (by Lemma \[ets\_group\_conf\]) any global holonomy $\psi$ of ${\mathfrak{T}}_{X,N}$. Fix a holonomy $\psi: X \to X$ of ${\mathfrak{T}}_{X, N}$. Since the equivalence classes of ${\mathfrak{T}}_{X, N}$ are finite, and in fact have bounded cardinality, there exists some $r \geq 1$ such that $\psi^r(x) = x$, for all $x \in X$. Let $m \geq N$ and fix $\omega \in {\mathcal{A}}^{\Lambda_m}$. We now partition $X$ according to the orbits of points under $\psi$, in such a way that $[\omega]$ is partitioned into sets that are easy to control. Specifically, for each $\overline{\eta} = (\eta_0, \dots, \eta_{r-1}) \in ( \mathcal{A}^{\Lambda_m})^r$, let $$T_{\overline{\eta}} = \{ x \in X: \psi^j(x)_{\Lambda_m} = \eta_j, 0 \leq j \leq r-1 \}$$ We then have $[\omega] = \sqcup_{\overline{\eta} : \eta_0 = \omega } T_{\overline{\eta}}$, and $\psi(T_{\overline{\eta}}) = T_{\overline{\sigma \eta}}$, where $\overline{\sigma \eta} = (\eta_1, \dots, \eta_{r-1}, \eta_0)$ is a cyclic permutation of $\overline{\eta}$. It is enough to show that, for all $\overline{\eta} \in (\mathcal{A}^{\Lambda_m})^r$, we have $$\mu(\psi(T_{\overline{\eta}})) = \int_{T_{ \overline{\eta}}} \exp \left( {\phi(x, \psi(x))} \right) d\mu(x).$$
By the equality $\psi(T_{\overline{\eta}}) = T_{\overline{\sigma \eta}}$, we have $$\mu(\psi(T_{\overline{\eta}})) = \int_X \mu( T_{\overline{\sigma \eta}} \, | \, {\mathcal{F}}_{\Lambda_m^c} ) \, d\mu(x)$$ For any $x \in X$, we know that $$\mathbf{1}_{T_{\overline{\sigma\eta}}}(\eta_1 x_{\Lambda_m^c} )
= \mathbf{1}_{T_{\overline{\eta}}}(\eta_0 x_{\Lambda_m^c} )$$ By this identity, as well as the DLR hypothesis and the defining property of a cocycle, we have the following manipulations: $$\begin{aligned}
\mu( T_{\overline{\sigma \eta}} \, | \, {\mathcal{F}}_{\Lambda_m^c} )(x) &= \left[ \sum_{\zeta \in {\mathcal{A}}^{\Lambda_m}} \exp( \phi(\eta_1 x_{\Lambda_m^c}, \zeta x_{\Lambda_m^c})) \mathbf{1}_X(\zeta x_{\Lambda_m^c}) \right]^{-1} \mathbf{1}_{T_{\overline{\sigma \eta}}}(\eta_1 x_{\Lambda_m^c}) \\
&= \left[ \sum_{\zeta \in {\mathcal{A}}^{\Lambda_m}} \exp( \phi(\eta_0 x_{\Lambda_m^c}, \zeta x_{\Lambda_m^c})) \mathbf{1}_X(\zeta x_{\Lambda_m^c}) \right]^{-1} \\
& \qquad \qquad \times \mathbf{1}_{T_{\overline{\eta}}}(\eta_0 x_{\Lambda_m^c}) \, \exp( \phi(\eta_0 x_{\Lambda_m^c}, \eta_1 x_{\Lambda_m^c})) \\
&= \mu( T_{\overline{\eta}} \, | \, {\mathcal{F}}_{\Lambda_m^c} )(x) \, \exp( \phi(\eta_0 x_{\Lambda_m^c}, \eta_1 x_{\Lambda_m^c}))\end{aligned}$$ Integrating this equation yields the result.
We have therefore done all the work required to prove the following:
\[main-thm-abs\] Let $X \subseteq {\mathcal{A}}^G$ be a subshift, $\phi$ a cocycle on ${\mathfrak{T}}_X$, and $\mu$ a DLR measure on $X$ with respect to $\phi$. Then $\mu$ is $(\phi, {\mathfrak{T}}_X)$-conformal.
By Lemma \[dlr\_conf\_subrel\_abs\], $\mu$ is $(\phi, {\mathfrak{T}}_{X,N})$-conformal for each $N$. The result is then immediate from Lemma \[fin\_imply\_full\].
Theorem \[main-thm-abs\] is the converse of the following result proven by Kimura ([@kimura-2015-thesis], Theorem 5.30), in the special case where $G={\mathbb{Z}}^d$ and the cocycle $\phi$ is induced by a potential, in the manner that we discuss in Proposition \[cocyclepot\] below.
\[fwd-cocycle\] Let $X \subseteq {\mathcal{A}}^G$ be a subshift, $\phi$ a cocycle on ${\mathfrak{T}}_X$, and $\mu$ a $(\phi, {\mathfrak{T}}_X)$-conformal measure on $X$. Then $\mu$ is DLR with respect to $\phi$.
The proof is a straightforward adaptation of Kimura’s methods. The rough idea is to show that two cylinder sets have conditional measures with the appropriate ratio by considering the holonomy that exchanges them, as in the proof of Lemma \[group\] above, then applying the conformal hypothesis. The main difference required to adapt the proof is that the version stated here concerns the DLR equations for an arbitrary measurable cocycle, not necessarily one induced by a potential.
Interactions {#intrxn_sec}
============
In this section, we show that, when a cocycle is induced by an interaction, the DLR equations for the cocycle reduce to those for the interaction.
\[intrxn-defn\] An interaction is a family $\Phi = (\Phi_{\Lambda})_{\Lambda \Subset G}$ of functions $\Phi_{\Lambda}: X \to {\mathbb{R}}$ such that for each $\Lambda \Subset G$, $\Phi_{\Lambda}$ is $\mathcal{F}_{\Lambda}$-measurable, and for all $\Lambda \Subset G$, $x \in X$, the *Hamiltonian series* $$H_{\Lambda}^{\Phi}(x) = \sum_{\substack{\Delta \Subset G \\ \Delta \cap \Lambda \neq \emptyset}} \Phi_{\Delta}(x)$$ converges in the sense that there exists a real number $H_{\Lambda}^{\Phi}(x)$ and, for every $\varepsilon > 0$, there exists some $F \Subset G$ such that, for all $F' \supseteq F$, $$\left| H_{\Lambda}^{\Phi}(x) - \sum_{\substack{\Delta \subseteq F' \\ \Delta \cap \Lambda \neq \emptyset}} \Phi_{\Delta}(x) \right| < \varepsilon$$
Let $\Phi$ be an interaction. For each $(x,y) \in {\mathfrak{T}}_X$, the series $$\sum_{\Lambda \Subset G} [\Phi_{\Lambda}(x) - \Phi_{\Lambda}(y)]$$ converges in the same sense as the Hamiltonian series. Moreover, the function $\phi_{\Phi}: {\mathfrak{T}}_X \to {\mathbb{R}}$ defined by $$\phi_{\Phi}(x,y) = \sum_{\Lambda \Subset G} [\Phi_{\Lambda}(x) - \Phi_{\Lambda}(y)]$$ is a cocycle on ${\mathfrak{T}}_X$.
Let $(x,y) \in {\mathfrak{T}}_X$ be such that $x_{\Delta^c} = y_{\Delta^c}$. We claim that $$\sum_{\Lambda \Subset G} [\Phi_{\Lambda}(x) - \Phi_{\Lambda}(y)] = H^{\Phi}_{\Delta}(x) - H^{\Phi}_{\Delta}(y)$$ with the equality understood in the sense of convergence discussed in the statement of the proposition. Indeed, choose $\varepsilon > 0$. By the definition of an interaction, there exists some $F \Subset G$ sufficiently large that whenever $F \subseteq F' \Subset G$, we have (noting that $\Phi_{E}(x) - \Phi_E(y) = 0$ when $E \cap \Delta = \emptyset$), $$\begin{aligned}
& \left| [H_{\Delta}^{\Phi}(x) - H_{\Delta}^{\Phi}(y)] - \sum_{E \subseteq F'} [\Phi_{E}(x) - \Phi_E(y) ] \right| \\
\leq & \left| H_{\Delta}^{\Phi}(x) - \sum_{\substack{E \subseteq F' \\ E \cap \Delta \neq \emptyset}} \Phi_{E}(x) \right| + \left| H_{\Delta}^{\Phi}(y) - \sum_{\substack{E \subseteq F' \\ E \cap \Delta \neq \emptyset}} \Phi_E(y) \right| \\
< & \, \varepsilon\end{aligned}$$ This establishes that the series converges, in the sense claimed, to a real number $\phi_{\Phi}(x,y) = H_{\Delta}^{\Phi}(x) - H_{\Delta}^{\Phi}(y)$. Moreover, this energy difference expression makes it obvious that $\phi_{\Phi}$ is a cocycle, concluding the proof.
We now observe that the DLR equations for the cocycle $\phi_{\Phi}$, in the sense of Definition \[dlr-eq-cocycle\], are equivalent to the classical DLR equations for the interaction $\Phi$. Indeed, if $\mu$ is a DLR measure with respect to $\phi_{\Phi}$, then for any $\Lambda \Subset G$, any Borel $A \subseteq X$, and $\mu$-a.e. $x \in X$, we have $$\begin{aligned}
\mu(A \,|\, \mathcal{F}_{\Lambda^c})(x) &= \sum_{\zeta \in {\mathcal{A}}^{\Lambda}} \left[ \sum_{\eta \in {\mathcal{A}}^{\Lambda}} \exp( \phi_{\Phi}(\zeta x_{\Lambda^c}, \eta x_{\Lambda^c})) \mathbf{1}_X(\eta x_{\Lambda^c}) \right]^{-1} \mathbf{1}_A(\zeta x_{\Lambda^c}) \\
&= \sum_{\zeta \in {\mathcal{A}}^{\Lambda}} \left[ \sum_{\eta \in {\mathcal{A}}^{\Lambda}} \exp \left( H_{\Lambda}^{\Phi}(\zeta x_{\Lambda^c}) - H_{\Lambda}^{\Phi}(\eta x_{\Lambda^c}) \right) \mathbf{1}_X(\eta x_{\Lambda^c}) \right]^{-1} \mathbf{1}_A(\zeta x_{\Lambda^c}) \\
&= \frac{1}{Z_{\Lambda}^{\Phi}(x)} \sum_{\zeta \in {\mathcal{A}}^{\Lambda}} \exp \left( - H_{\Lambda}^{\Phi}(\zeta x_{\Lambda^c}) \right) \mathbf{1}_A(\zeta x_{\Lambda^c})\end{aligned}$$ where $$Z_{\Lambda}^{\Phi}(x) = \sum_{\eta \in {\mathcal{A}}^{\Lambda}} \exp \left( - H_{\Lambda}^{\Phi}(\eta x_{\Lambda^c}) \right) \mathbf{1}_X(\eta x_{\Lambda^c})$$ By Theorem \[main-thm-abs\], if $\mu$ satisfies these (classical) DLR equations for $\Phi$, then $\mu$ is $(\phi_{\Phi}, {\mathfrak{T}}_X)$-conformal.
Potentials {#potl_sec}
==========
In this section and the next, we restrict to finitely generated groups $G$ satisfying a certain uniform spherical growth condition, which we will need in order to construct a cocycle from a potential in a way that is compatible with interactions, in a sense to be made precise in §\[potl\_from\_intrxn\_sec\]. The condition is as follows. For a finite generating set $S \Subset G$, consider the balls $B_k$ of radius $k$ centered at the identity in the Cayley graph of $G$ with respect to $S$. We are concerned with the *spherical growth function* $|B_k \setminus B_{k-1}|$, which is a basic quantity studied in geometric group theory, discussed for instance in ([@delaharpe2000groups], §VI.A). Specifically, we require that, for each $n \geq 1$, we have $$\sup_{m \geq 1} \frac{|B_{m+n} \setminus B_{m+n-1}|}{|B_m \setminus B_{m-1}|} < +\infty.$$ We refer to the finiteness of this supremum as the spherical growth property. Note that if the supremum is finite for $n=1$ then in fact it is finite for all $n$.
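As a concrete illustration (our own, not part of the argument): for $G = {\mathbb{Z}}^2$ with the standard generating set $S = \{\pm e_1, \pm e_2\}$, the word metric is the $\ell^1$ metric, the shells satisfy $|B_k \setminus B_{k-1}| = 4k$ for $k \geq 1$, and the supremum for $n=1$ equals $2$. A brute-force check in Python:

```python
# Brute-force check of the spherical growth property for Z^2 with the
# standard generators (word metric = l^1 metric).

def ball_size(k):
    # |B_k| = number of (x, y) in Z^2 with |x| + |y| <= k
    return sum(1 for x in range(-k, k + 1)
                 for y in range(-k, k + 1) if abs(x) + abs(y) <= k)

K = 60
shells = [ball_size(0)] + [ball_size(k) - ball_size(k - 1) for k in range(1, K + 1)]

# In Z^2 the shells have size 4k for k >= 1 ...
assert all(shells[k] == 4 * k for k in range(1, K + 1))

# ... so sup_m |B_{m+1} \ B_m| / |B_m \ B_{m-1}| = 2, attained at m = 1.
ratios = [shells[m + 1] / shells[m] for m in range(1, K)]
assert max(ratios) == 2.0
```

Since the supremum is finite for $n=1$, it is finite for every $n$, as noted above.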
Two different natural growth conditions on $G$ imply this spherical growth property, by easy calculations. The first is polynomial growth, which, by theorems of Gromov [@gromov1981groups] and Wolf [@wolf1968groups], Bass [@bass1972nilpotent], and Guivarc’h [@guivarch1970groups], implies that there exists $d \in {\mathbb{N}}$ such that, for any word metric on $G$, there exist $0 < c < C$ with $c \leq |B_n|/n^d \leq C$ for all $n \geq 1$. In fact, by a stronger result of Pansu [@pansu1983groups], we can take $C/c$ arbitrarily close to $1$ by taking the supremum only over $n$ sufficiently large. The spherical growth property then holds by an easy calculation. In particular, in ${\mathbb{Z}}^d$, the standard sequence of balls $B_n = {\mathbb{Z}}^d \cap [-n,n]^d$ is fine, as is the sequence of balls for any other word metric on ${\mathbb{Z}}^d$.
For groups of exponential growth, the spherical growth property holds if there exist $\alpha > 1$ and $0 < c < C$ with $c \leq |B_n|/\alpha^n \leq C$ for all $n \geq 1$, by a calculation very similar to the polynomial case. This property, which we might call *exact exponential growth*, is satisfied, for example, by a finitely generated free group with the word metric induced by the free generating set. Unlike the polynomial case, however, exact exponential growth can fail for some groups of exponential growth, at least for some generating sets, and indeed we believe it can hold for one generating set and fail for another.
When we work over a group $G$ in this section and the next, we are therefore restricting to a group $G$ that satisfies the spherical growth property with respect to some generating set, and considering the geometry on $G$ with respect to that fixed generating set.
We now turn our attention to potentials.
For a function $f: X \to {\mathbb{R}}$ and $k \geq 1$, define the variation of $f$ on $B_{k-1}$ as $$v_{k-1} (f) := \sup \left\{ | f(y) - f(x) | \, \Big| \, x, y \in X, \, x_{B_{k-1}} = y_{B_{k-1}} \right\}.$$ It is convenient to define $B_{-1} = \emptyset$ so that $v_{-1}(f) = \| f \|_{\infty}$. We define the *shell norm* ${\left\lVert\cdot\right\rVert_{\mathrm{ShVar}}}$ by $${\left\lVertf\right\rVert_{\mathrm{ShVar}}} := \sum_{k = 0}^{\infty} | B_k \setminus B_{k-1} | v_{k-1} (f)$$ and the space ${\mathrm{ShReg}}(X)$ as the space of *shell-regular potentials*, i.e. functions $f:X \to {\mathbb{R}}$ with ${\left\lVertf\right\rVert_{\mathrm{ShVar}}} < \infty$. It is elementary to show that shell-regularity implies continuity, and that ${\mathrm{ShReg}}(X)$ is a Banach space.
In earlier work on subshifts over ${\mathbb{Z}}^d$ [@meyerovitch-2013-gibbs-eqm], the space of potentials under consideration is known as $\mathrm{SV}_d(X)$, the space of potentials with $d$-summable variation, defined by the norm $\| f \|_{SV_d} = \sum_{k=1}^{\infty} k^{d-1} v_{k-1}(f)$. This space is also known as $\mathrm{Reg}_{d-1}(X)$ [@muir2011gibbs]. With $B_n = {\mathbb{Z}}^d \cap [-n,n]^d$, we have $|B_k\setminus B_{k-1}| = 2^d d (1 + o(1)) k^{d-1}$. Thus, on ${\mathbb{Z}}^d$, we have ${\mathrm{ShReg}}(X) = \mathrm{SV}_d(X)$, with the identity a continuous linear map.
\[cocyclepot\] For $f \in {\mathrm{ShReg}}(X)$ and any $(x,y) \in {\mathfrak{T}}_X$, the series $$\sum_{g \in G} [f(y \cdot g) - f(x \cdot g)]$$ converges absolutely and defines a cocycle $\phi_f$ on ${\mathfrak{T}}_X$.
Fix $(x,y) \in {\mathfrak{T}}_X$, and let $n \geq 1$ be such that $x_{B_n^c} = y_{B_n^c}$. If $g \in G$ and $m \geq 1$ are such that $B_{m-1} \subseteq g^{-1} B_n^c$, then $(x \cdot g)|_{B_{m-1}} = (y \cdot g)|_{B_{m-1}}$, so $|f(y \cdot g) - f(x \cdot g)| \leq v_{m-1}(f)$. For $m \geq 1$ and $g \in B_k \setminus B_{k-1}$, the triangle inequality guarantees that $B_{m-1} \subseteq g^{-1} B_n^c$ if $k-n \geq m$. Since the shells $B_k \setminus B_{k-1}$ partition $G$, we then have that $$\begin{aligned}
\sum_{g \in G} | f(y \cdot g) - f(x \cdot g)| &\leq 2 | B_n | \| f \|_{\infty} + \sum_{k=n+1}^{\infty} | B_k \setminus B_{k-1} | v_{k-n - 1}(f) \\
&\leq 2 | B_n | \| f \|_{\infty} + \left( \sup_{k \geq 1} \frac{|B_{k+n} \setminus B_{k+n-1}|}{|B_{k} \setminus B_{k-1}|} \right) {\left\lVertf\right\rVert_{\mathrm{ShVar}}}\end{aligned}$$ so indeed the cocycle is well-defined by an absolutely convergent series.
Just as in the case of an interaction, this expression for the cocycle $\phi_f$ allows us to rewrite the DLR equations in a more classical form. Let $f \in {\mathrm{ShReg}}(X)$. It follows from a simple manipulation that for any $(x, y) \in {\mathfrak{T}}_X$, we have $$\exp(\phi_f(x,y)) = \lim_{m \to +\infty} \exp \left(\sum_{g \in B_m} [f(y \cdot g) - f(x \cdot g)] \right) = \lim_{m\to +\infty} \frac{\exp f_m(y)}{\exp f_m(x)}.$$ where $f_m(z) = \sum_{g \in B_m} f(z \cdot g)$. Now, let $\mu$ be a measure on $X$, and let $A \subseteq X$ be Borel. If $\mu$ is a DLR measure with respect to $\phi_f$, then for $\mu$-a.e. $x \in X$, we have $$\begin{aligned}
\mu(A \, | \, {\mathcal{F}}_{\Lambda^c})(x) &= \sum_{\eta \in {\mathcal{A}}^{\Lambda}} \left[ \sum_{\zeta \in {\mathcal{A}}^{\Lambda}} \exp( \phi_f(\eta x_{\Lambda^c}, \zeta x_{\Lambda^c})) \mathbf{1}_X(\zeta x_{\Lambda^c}) \right]^{-1} \mathbf{1}_A(\eta x_{\Lambda^c}) \\
&= \sum_{\eta \in {\mathcal{A}}^{\Lambda}} \left[ \sum_{\zeta \in {\mathcal{A}}^{\Lambda}} \lim_{m\to +\infty} \frac{\exp f_m(\zeta x_{\Lambda^c})}{\exp f_m(\eta x_{\Lambda^c})}\mathbf{1}_X(\zeta x_{\Lambda^c}) \right]^{-1} \mathbf{1}_A(\eta x_{\Lambda^c})\\
&= \lim_{m \to \infty} \frac{ \sum_{\eta \in \mathcal{A}^{\Lambda}} e^{f_m(\eta x_{\Lambda^c})}
\mathbf{1}_A(\eta x_{\Lambda^c} )}
{\sum_{\zeta \in \mathcal{A}^{\Lambda} } e^{f_m(\zeta x_{\Lambda^c})} \mathbf{1}_X(\zeta x_{\Lambda^c}) }\end{aligned}$$ These are the DLR equations as found in Kimura [@kimura-2015-thesis]. Applying Theorem \[main-thm-abs\] therefore shows that any DLR measure with respect to a potential $f \in {\mathrm{ShReg}}(X)$ is necessarily $(\phi_f, {\mathfrak{T}}_X)$-conformal, providing the full converse for Kimura’s result described in the introduction.
Potentials induced by interactions, and vice versa {#potl_from_intrxn_sec}
==================================================
We have seen that the DLR property implies the conformal property for an arbitrary cocycle on the Gibbs relation, with Gibbs measures for interactions and for potentials as two special cases. These cases are not independent. In this section, we adapt the methods and results of Muir [@muir2011gibbs] and Ruelle [@ruelle-2004-thermo] to construct a potential from an interaction in various physically equivalent ways, and, for sufficiently regular potentials, to construct an interaction. The novelty in this section is in the greater generality of the group $G$, and in clarifying a condition on the support of an interaction necessary for the calculations to go through.
In this section, all interactions are translation-invariant, i.e. for any $\Lambda \Subset G$, any $g \in G$, and any $x \in X$, we require that $\Phi_{g^{-1} \Lambda} (x \cdot g) = \Phi_{\Lambda}(x)$. We recall a classical space of particularly well-behaved interactions:
For an interaction $\Phi$, let $$\| \Phi \|_B = \sum_{\substack{\Lambda \Subset G \\ e \in \Lambda}} \| \Phi_{\Lambda} \|_{\infty}$$ We define ${\mathcal{B}}$ as the normed space of *absolutely summable* interactions $\Phi$, i.e. those for which $\| \Phi \|_B < \infty$.
It is routine to check that $({\mathcal{B}}, \| \cdot \|_B)$ is a Banach space. Moreover, for an absolutely summable interaction $\Phi \in {\mathcal{B}}$, we in fact have absolute convergence of the series defining the cocycle, since for any $(x,y) \in {\mathfrak{T}}_X$ with $x_{\Delta^c} = y_{\Delta^c}$ for some $\Delta \Subset G$, we have $$\begin{aligned}
\sum_{\Lambda \Subset G} | \Phi_{\Lambda}(x) - \Phi_{\Lambda}(y)| &\leq 2 \sum_{\substack{\Lambda \Subset G \\ \Lambda \cap \Delta \neq \emptyset}} \| \Phi_{\Lambda} \|_{\infty} \\
&\leq 2 | \Delta | \sum_{\substack{\Lambda \Subset G \\ e \in \Lambda}} \| \Phi_{\Lambda} \|_{\infty} \\
&= 2 | \Delta | \| \Phi \|_B < \infty\end{aligned}$$
We introduce a family of linear maps that convert interactions into potentials.
Let $(a_{\Lambda})_{\Lambda \Subset G, \, e \in \Lambda}$ be a collection of nonnegative real coefficients such that, for each $\Lambda \Subset G$ with $e \in \Lambda$, we have $\sum_{g \in \Lambda} a_{g^{-1} \Lambda} = 1$. Then, for an interaction $\Phi$, define the potential $A_{\Phi}$ via $$A_{\Phi}(x) = - \sum_{\substack{\Lambda \Subset G \\ e \in \Lambda}} a_{\Lambda} \Phi_{\Lambda}(x)$$
The map $\Phi \mapsto A_{\Phi}$ is clearly linear. We refer to this map as the translate-weighting map determined by the weights $(a_{\Lambda})_{\Lambda \Subset G, e \in \Lambda}$.
Two important examples are the following.
- The *uniform map*, where $a_{\Lambda} = \frac{1}{|\Lambda|}$ for every nonempty $\Lambda \Subset G$ with $e \in \Lambda$. Muir uses the letter $A$ to denote this specific operator, i.e. $A(\Phi) = A_{\Phi}$.
- The class of *dictator maps*, where $a_{\Lambda} \in \{ 0, 1 \}$ for every $\Lambda \Subset G$. For instance, on ${\mathbb{Z}}^d$, Ruelle studies the operator for which $a_{\Lambda} = 1$ if and only if $0$ is the middle element, or more precisely the $\lfloor (|\Lambda| + 1)/2 \rfloor$-th element, of $\Lambda$ in lexicographic order. In [@muir2011gibbs], Muir refers to this operator as $\hat{A}$.
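As a sanity check on the normalization condition $\sum_{g \in \Lambda} a_{g^{-1} \Lambda} = 1$, the following sketch (our own illustration, over $G = {\mathbb{Z}}$ written additively, so that $g^{-1}\Lambda$ becomes $\Lambda - g$) verifies it for the uniform weights and for the lexicographic dictator weights:

```python
from fractions import Fraction

def a_uniform(L):
    # uniform weights: a_Lambda = 1/|Lambda| for every finite Lambda containing 0
    assert 0 in L
    return Fraction(1, len(L))

def a_dictator(L):
    # dictator weights: a_Lambda = 1 iff 0 is the floor((|Lambda|+1)/2)-th
    # element of Lambda in increasing order, else 0
    assert 0 in L
    mid = sorted(L)[(len(L) + 1) // 2 - 1]
    return Fraction(1) if mid == 0 else Fraction(0)

def normalization(a, L):
    # sum over g in Lambda of a_{Lambda - g}; each translate contains 0
    return sum(a(frozenset(x - g for x in L)) for g in L)

for L in [{0}, {0, 1}, {-3, 0, 2}, {-2, -1, 0, 5, 9}]:
    assert normalization(a_uniform, L) == 1
    assert normalization(a_dictator, L) == 1
```

For the dictator weights the normalization holds because, as $g$ ranges over $\Lambda$, the rank of $0$ in $\Lambda - g$ ranges bijectively over $1, \ldots, |\Lambda|$, so exactly one translate receives weight $1$.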
In Fact 7.8 in [@muir2011gibbs], it is claimed that $A_{\Phi} \in {\mathrm{ShReg}}(X)$ for every translate-weighting map and every $\Phi \in {\mathcal{B}}$. This claim is incorrect, as we demonstrate with an example below. However, the argument presented for this claim is correct in the case of what Muir calls “cubic-type” interactions. Here we reproduce a version of this proof for a broader class of interactions, to which we give a different name, suggested by the geometric reason for their necessity.
An interaction $\Phi$ is *full-dimensional* if there exists some $C > 0$ such that, for all $\Lambda \Subset G$ with $e \in \Lambda$ and $\Phi_{\Lambda} \not\equiv 0$, we have the bound $$\sup \{ |B_n| : \, n \in {\mathbb{N}}, \, \Lambda \cap B_{n-1}^c \neq \emptyset \} \leq C |\Lambda|$$
\[full-dim-shreg\] If $\Phi \in {\mathcal{B}}$ is full-dimensional, then $A_{\Phi} \in {\mathrm{ShReg}}(X)$, where $A_{\Phi}$ is the image of $\Phi$ under an arbitrary translate-weighting map.
We first estimate $v_{k-1}(A_{\Phi})$: $$\begin{aligned}
v_{k-1}(A_{\Phi}) &= \sup \left\{ \left| \sum_{\substack{\Lambda \Subset G \\ e \in \Lambda} } a_{\Lambda} [ \Phi_{\Lambda}(x) - \Phi_{\Lambda}(y) ] \right| \, : \, x, y \in X, \, x_{B_{k-1}} = y_{B_{k-1}} \right\} \\
&\leq 2 \sum_{\substack{\Lambda \Subset G \\ e \in \Lambda \\ \Lambda \cap B_{k-1}^c \neq \emptyset} } a_{\Lambda} \| \Phi_{\Lambda} \|_{\infty}\end{aligned}$$
We can now estimate the shell norm by an exchange of summations: $$\begin{aligned}
{\left\lVertA_{\Phi}\right\rVert_{\mathrm{ShVar}}} &\leq 2 \sum_{k=0}^{\infty} |B_k \setminus B_{k-1} | \sum_{\substack{\Lambda \Subset G \\ e \in \Lambda \\ \Lambda \cap B_{k-1}^c \neq \emptyset} } a_{\Lambda} \| \Phi_{\Lambda} \|_{\infty} \\
&= 2 \sum_{\substack{ \Lambda \Subset G \\ e \in \Lambda}} a_{\Lambda} \| \Phi_{\Lambda} \|_{\infty} \sum_{\substack{k \geq 0 \\ \Lambda \cap B_{k-1}^c \neq \emptyset}} |B_k \setminus B_{k-1}| \end{aligned}$$ Observe that $$\sum_{\substack{k \geq 0 \\ \Lambda \cap B_{k-1}^c \neq \emptyset}} |B_k \setminus B_{k-1}| = \sup \{ |B_n| : \, n \in {\mathbb{N}}, \, \Lambda \cap B_{n-1}^c \neq \emptyset \} \leq C | \Lambda|$$ so in fact $${\left\lVertA_{\Phi}\right\rVert_{\mathrm{ShVar}}} \leq 2C \sum_{\substack{ \Lambda \Subset G \\ e \in \Lambda}} a_{\Lambda} |\Lambda| \| \Phi_{\Lambda} \|_{\infty}$$
We need to rearrange this sum. For a given $\Lambda \Subset G$, consider the set of translates of $\Lambda$ containing the identity, denoted $T(\Lambda) = \{ g^{-1} \Lambda, \, g \in \Lambda \}$. For instance, in ${\mathbb{Z}}$, if $\Lambda=\{0,1\}$, then $T(\Lambda)= \{ \{ -1,0 \}, \{ 0,1 \} \}$. Let ${\mathcal{T}}$ denote the set of such sets of translates, i.e. ${\mathcal{T}}= \{ T(\Lambda) \, : \, \Lambda \Subset G, \, e \in \Lambda \}$. Note that ${\mathcal{T}}$ is a partition of the set $\{ \Lambda \Subset G, \, e \in \Lambda \}$. Observe furthermore that $|T| = |\Lambda|$ for any $\Lambda \in T$.
For any given $T \in {\mathcal{T}}$, the value $|\Lambda| \| \Phi_{\Lambda} \|_{\infty}$ is the same for any $\Lambda \in T$, i.e. any $\Lambda$ such that $T = T(\Lambda)$, so we denote it by $c_T$. We can then express the bound on ${\left\lVertA_{\Phi}\right\rVert_{\mathrm{ShVar}}}$ by summing over $T \in {\mathcal{T}}$, as follows: $$\begin{aligned}
\sum_{\substack{ \Lambda \Subset G \\ e \in \Lambda}} a_{\Lambda} |\Lambda| \| \Phi_{\Lambda} \|_{\infty} &= \sum_{T \in {\mathcal{T}}} \sum_{\Lambda \in T} a_{\Lambda} c_T \\
&= \sum_{T \in {\mathcal{T}}} c_T \sum_{\Lambda \in T} a_{\Lambda} \\
&= \sum_{T \in {\mathcal{T}}} c_T \\
&= \sum_{T \in {\mathcal{T}}} \sum_{\Lambda \in T} \| \Phi_{\Lambda} \|_{\infty} \\
&= \| \Phi \|_B\end{aligned}$$
Thus ${\left\lVertA_{\Phi}\right\rVert_{\mathrm{ShVar}}} \leq 2C \| \Phi \|_B < \infty$.
The following example, due to Nishant Chandgotia (personal communication), shows that if $\Phi \in {\mathcal{B}}$ is not full-dimensional, then $A_{\Phi}$ can fail to be shell-regular.
Let $X = \{0,1\}^{\mathbb{Z}}$, with the standard metric on ${\mathbb{Z}}$, so $B_k=[-k,k]$. Define $\Phi = (\Phi_{\Lambda})_{\Lambda \Subset {\mathbb{Z}}}$ as follows: for any $i, j \in \mathbb{Z}$, $\Phi_{\{ i,j \} }(x) = \frac{1}{(j-i)^2}$ if $x_i = x_j = 1$ and $0$ otherwise; and $\Phi_{\Lambda} \equiv 0$ for all other $\Lambda \Subset G$. Clearly $\Phi$ is translation-invariant. We claim that $\Phi \in {\mathcal{B}}$ but $A_{\Phi} \notin {\mathrm{ShReg}}(X)$, where $A_{\Phi}$ is the image of $\Phi$ under the dictator map that ignores $\Lambda \Subset {\mathbb{Z}}$ unless $0 = \inf \Lambda$. Indeed, $\| \Phi \|_{\mathcal{B}} = 2 \sum_{j=1}^{\infty} \frac{1}{j^2} < \infty$, but $$v_k(A_{\Phi}) = \sum_{l = k+1}^{\infty}\frac{1}{l^2}\geq \frac{1}{k+1}$$ which implies that $${\left\lVertA_{\Phi}\right\rVert_{\mathrm{ShVar}}} \geq 2 \sum_{k=1}^{+\infty} \frac{1}{k+1} = +\infty$$
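A numerical illustration of this example (our own): the partial sums defining $\| \Phi \|_B$ converge to $2\sum_{j \geq 1} j^{-2} = \pi^2/3$, while the lower bound $v_{k-1}(A_{\Phi}) \geq \frac{1}{k}$ makes the partial shell-norm sums grow like $2\log N$:

```python
import math

# || Phi ||_B = 2 * sum_{j>=1} 1/j^2  ->  pi^2 / 3   (convergent)
phi_norm = 2 * sum(1 / j**2 for j in range(1, 10**6 + 1))
assert abs(phi_norm - math.pi**2 / 3) < 1e-5

# Shell norm: |B_k \ B_{k-1}| = 2 in Z, and v_{k-1}(A_Phi) >= 1/k,
# so the partial sums dominate 2 * H_N, which diverges logarithmically.
def shell_norm_lower(N):
    return 2 * sum(1 / k for k in range(1, N + 1))

# growth of about 2 * log(100) ~ 9.2 between N = 100 and N = 10000
assert shell_norm_lower(10**4) - shell_norm_lower(10**2) > 8
```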
The next two propositions establish that for any full-dimensional interaction $\Phi \in {\mathcal{B}}$, the images $A_{\Phi}$ and $A'_{\Phi}$ of $\Phi$ under any two translate-weighting maps are equivalent in a sense similar to that described in ([@muir2011gibbs], p.118). That is, $A_{\Phi}$ and $A'_{\Phi}$ induce the same cocycle, so they have all of the same Gibbs measures (in either sense); and they have the same integral under any translate-invariant measure, so they have all of the same equilibrium measures, for a given notion of measure-theoretic entropy. (On non-amenable groups, such as the free group, various entropies that are equivalent in the amenable setting can fail to coincide [@bowen2017sofic].)
\[same\_cocycle\] Let $\Phi \in {\mathcal{B}}$ be full-dimensional. Then $\Phi$ and $A_{\Phi}$ induce the same cocycle, i.e. $\phi_{A_{\Phi}} = \phi_{\Phi}$, where $A_{\Phi}$ is the image of $\Phi$ under an arbitrary translate-weighting map, with weights $a_{\Lambda}$.
Suppose that $(x,y) \in {\mathfrak{T}}_X$ with $x_{\Delta^c} = y_{\Delta^c}$. Observe that $$\phi_{\Phi}(x,y) = \sum_{\substack{\Lambda \Subset G \\ \Lambda \cap \Delta \neq \emptyset}} \left[ \Phi_{\Lambda}(x) - \Phi_{\Lambda}(y) \right]$$ To compute $\phi_{A_{\Phi}}$, we first observe that, since $\Phi_{\Lambda}(x \cdot g) = \Phi_{g \Lambda}(x)$ and since the terms with $g \Lambda \cap \Delta = \emptyset$ agree for $x$ and $y$, we have the following convenient expression for the difference: $$\begin{aligned}
A_{\Phi}(y \cdot g) - A_{\Phi}(x \cdot g) &= - \sum_{\substack{\Lambda \Subset G \\ e \in \Lambda \\ \Lambda \cap g^{-1} \Delta \neq \emptyset}} a_{\Lambda} \left[ \Phi_{g \Lambda} (y) - \Phi_{g \Lambda} (x) \right] \\
&= \sum_{\substack{\Lambda' \Subset G \\ g \in \Lambda' \\ \Lambda' \cap \Delta \neq \emptyset}} a_{g^{-1} \Lambda'} \left[ \Phi_{\Lambda'} (x) - \Phi_{\Lambda'} (y) \right]\end{aligned}$$ We then compute: $$\begin{aligned}
\phi_{A_{\Phi}}(x,y) &= \sum_{g \in G} \sum_{\substack{\Lambda \Subset G \\ g \in \Lambda \\ \Lambda \cap \Delta \neq \emptyset}} a_{g^{-1} \Lambda} [ \Phi_{\Lambda}(x) - \Phi_{\Lambda} (y) ] \\
&= \sum_{\substack{\Lambda \Subset G\\ \Lambda \cap \Delta \neq \emptyset}} \left( \sum_{g \in \Lambda} a_{g^{-1} \Lambda} \right) [ \Phi_{\Lambda}(x) - \Phi_{\Lambda} (y) ] \\
&= \phi_{\Phi}(x,y)\end{aligned}$$
Crucially, the interchange of infinite summations in the second equality from last was justified by the absolute convergence of the series defining the cocycles $\phi_{A_{\Phi}}$ and $\phi_{\Phi}$, implied by the regularity of $\Phi$ and $A_{\Phi}$.
Proposition \[same\_cocycle\] is similar to Theorem 5.42 in [@kimura-2015-thesis], which is stated for Ruelle’s operator $A$, using specifications rather than cocycles.
Let $\mu$ be a $G$-invariant measure on $X$, and let $\Phi \in {\mathcal{B}}$ be full-dimensional. Let $A_{\Phi}$ be the image of $\Phi$ under an arbitrary translate-weighting map determined by weights $(a_{\Lambda})_{\Lambda \Subset G, \, e \in \Lambda}$. Then the integral $\int_X A_{\Phi} \, d\mu$ depends only on $\Phi$ and $\mu$, and not on the weights $a_{\Lambda}$.
As in the proof of Proposition \[full-dim-shreg\], for each finite $\Lambda \Subset G$ with $e \in \Lambda$, let $T(\Lambda) = \{ g^{-1} \Lambda \, | \, g \in \Lambda \}$. For any given $T$, the quantity $\int_X \Phi_{\Lambda} \, d\mu$ is constant as $\Lambda$ ranges over $T$, so we denote it by $b_T$. We now compute: $$\begin{aligned}
\int_X A_{\Phi} \, d\mu &= - \int_X \sum_{T \in {\mathcal{T}}} \sum_{\Lambda \in T} a_{\Lambda} \Phi_{\Lambda} \, d\mu \\
&= - \sum_{T \in {\mathcal{T}}} b_T \sum_{\Lambda \in T} a_{\Lambda} \\
&= - \sum_{\substack{\Lambda \Subset G \\ e \in \Lambda}} \frac{1}{|\Lambda|} \int_X \Phi_{\Lambda} \, d\mu\end{aligned}$$ which does not depend on the weights $a_{\Lambda}$, and in addition clearly expresses the integral $\int_X A_{\Phi} \, d\mu$ as minus the average energy at the identity due to the interaction $\Phi$.
To justify exchanging the integral and the infinite sum over sets of translates $T$ in the second equality, observe that the sum converges absolutely to a continuous function, which is therefore bounded since $X$ is compact and thus integrable since $\mu$ is a probability measure. Indeed, let $|\Phi|$ be the interaction given by $|\Phi|_{\Lambda} = |\Phi_{\Lambda}|$. Then $|\Phi|$ is clearly still full-dimensional, with $\| |\Phi| \|_B = \| \Phi \|_B$, so $$\sum_{T \in {\mathcal{T}}} \sum_{\Lambda \in T} a_{\Lambda} |\Phi_{\Lambda}| = A_{|\Phi|} \in {\mathrm{ShReg}}(X)$$ by Proposition \[full-dim-shreg\].
Finally, we introduce a smaller Banach space ${\mathrm{VolReg}}(X)$ of *volume-regular functions*, defined analogously to ${\mathrm{ShReg}}(X)$ by a volume norm rather than a shell norm. That is, ${\mathrm{VolReg}}(X) = \{ f: X \to {\mathbb{R}}\, : \, {\left\lVertf\right\rVert_{\mathrm{VolVar}}} < \infty \}$ where we define $${\left\lVertf\right\rVert_{\mathrm{VolVar}}} := \sum_{k = 0}^{\infty} | B_k | v_{k-1} (f)$$ Volume-regularity clearly implies shell-regularity. The following result of Muir ([@muir2011gibbs], proof of Fact 7.6) is stated for ${\mathbb{Z}}^d$, with the name $\mathrm{Reg}_d(X)$ for ${\mathrm{VolReg}}(X)$, but is valid, with the same proof, on any finitely generated group (or indeed any countable group, with volume-regularity defined with respect to some exhausting sequence of finite sets, rather than balls as in the finitely generated case).
\[preimg\] Let $f \in {\mathrm{VolReg}}(X)$ be a volume-regular potential. Then there exists an absolutely summable $\Phi \in {\mathcal{B}}$ with $A_{\Phi} = f$ where $A_{\Phi}$ is the image of $\Phi$ under some dictator map.
In particular, any Gibbs measure (in either sense) for $f \in {\mathrm{VolReg}}(X)$ is also a Gibbs measure for any interaction $\Phi \in {\mathcal{B}}$ with $A_{\Phi} = f$, and vice versa. We remark that this applies in particular to any local potential, i.e. any potential $f$ such that for some $\Lambda \Subset G$, $f(x)$ is determined by $x_{\Lambda}$. The interaction $\Phi$ guaranteed in Theorem \[preimg\] then has bounded range.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Brian Marcus and Tom Meyerovitch for many helpful and generous discussions throughout the course of this work. We also thank Rodrigo Bissacot for drawing our attention to the work of Kimura; Nishant Chandgotia for providing the example in §\[potl\_from\_intrxn\_sec\]; and Sebasti[á]{}n Barbieri for a very helpful conversation regarding the spherical growth property used in §§5–6.
Hyman Bass. The degree of polynomial growth of finitely generated nilpotent groups. Proceedings of the London Mathematical Society, (3)25:603–614, 1972.
Lewis Bowen. Examples in the entropy theory of sofic groups, 2018. Preprint, arXiv:1704.06349v3 \[math.DS\].
Pierre de la Harpe. Topics in Geometric Group Theory. Chicago Lectures in Mathematics. University of Chicago Press, Chicago, 2000.
Manfred Denker and Mariusz Urba[ń]{}ski. On the existence of conformal measures. Transactions of the American Mathematical Society, 328(2), December 1991.
R.L. Dobrushin. Gibbsian random fields for lattice systems with pairwise interactions. Functional Analysis and Its Applications, 2(4):292–301, 1969.
Jacob Feldman and Calvin C. Moore. Ergodic equivalence relations, cohomology, and von [Neumann]{} algebras. [I]{}. Transactions of the American Mathematical Society, 234(2):289–324, 1977.
Michael Gromov. Groups of polynomial growth and expanding maps. Publications Mathématiques de l’IHÉS, 53:53–78, 1981.
Yves Guivarc’h. Croissance polynomiale et p[é]{}riodes des fonctions harmoniques. , 101:333–379, 1970.
O.E. Lanford III and David Ruelle. Observables at infinity and states with short range correlations in statistical mechanics. Communications in Mathematical Physics, 13(3):194–215, 1969.
Alexander S. Kechris. The Theory of Countable Borel Equivalence Relations, 2019. Preprint. <http://www.math.caltech.edu/~kechris/>.
Bruno Kimura. Gibbs measures on subshifts. Master’s thesis, University of S[ã]{}o Paulo, 2015.
Tom Meyerovitch. Gibbs and equilibrium measures for some families of subshifts. Ergodic Theory and Dynamical Systems, 33:934–953, 2013.
Stephen R. Muir. PhD thesis, University of North Texas, 2011.
Pierre Pansu. Croissance des boules et des géodésiques fermées dans les nilvariétés. Ergodic Theory and Dynamical Systems, 3:415–445, 1983.
S.J. Patterson. The limit set of a Fuchsian group. Acta Mathematica, 136:241–273, 1976.
Karl Petersen and Klaus Schmidt. Symmetric Gibbs measures. Transactions of the American Mathematical Society, 349(7):2775–2811, 1997.
David Ruelle. Thermodynamic Formalism. Cambridge University Press, 2nd edition, 2004.
Omri M. Sarig. Thermodynamic formalism for countable Markov shifts. Ergodic Theory and Dynamical Systems, 19(6):1565–1593, 1999.
Omri M. Sarig. Lecture notes on thermodynamic formalism for topological Markov shifts. Unpublished lecture notes available from the author’s webpage, 2009.
Joseph A. Wolf. Growth of finitely generated solvable groups and curvature of Riemannian manifolds. Journal of Differential Geometry, 2:421–446, 1968.
**On Algebraic Functions**
N.D. Bagis
Stenimahou 5 Edessa
Pella 58200, Greece
[email protected]
**Abstract**
> In this article we consider functions with Moebius-periodic rational coefficients. These functions under some conditions take algebraic values and can be recovered by theta functions and the Dedekind eta function. Special cases are the elliptic singular moduli, the Rogers-Ramanujan continued fraction, Eisenstein series and functions associated with Jacobi symbol coefficients.
**Keywords**: Theta functions; Algebraic functions; Special functions; Periodicity;
Known Results on Algebraic Functions
====================================
The elliptic singular moduli $k_r$ is the solution $x$ of the equation $$\frac{{}_2F_{1}\left(\frac{1}{2},\frac{1}{2};1;1-x^2\right)}{{}_2F_{1}\left(\frac{1}{2},\frac{1}{2};1;x^2\right)}
=\sqrt{r}$$ where $${}_2F_{1}\left(\frac{1}{2},\frac{1}{2};1;x^2\right)=\sum^{\infty}_{n=0}\frac{\left(\frac{1}{2}\right)^2_n}{(n!)^2} x^{2n}=\frac{2}{\pi}K(x)=\frac{2}{\pi}\int^{\pi/2}_{0}\frac{d\phi}{\sqrt{1-x^2\sin^2(\phi)}}$$ The 5th degree modular equation which connects $k_{25r}$ and $k_r$ is (see \[13\]): $$k_rk_{25r}+k'_rk'_{25r}+2^{5/3} (k_rk_{25r}k'_rk'_{25r})^{1/3}=1$$ The problem of solving (3) to find $k_{25r}$ reduces to that of solving the depressed equation, so named by Hermite (see \[3\]): $$u^6-v^6+5u^2v^2(u^2-v^2)+4uv(1-u^4v^4)=0$$ where $u=k^{1/4}_{25r}$ and $v=k^{1/4}_{r}$.\
The function $k_r$ is also connected to theta functions through the relations $$k_r=\frac{\theta^2_2(q)}{\theta^2_3(q)}, \textrm{ where } \theta_2(q)=\theta_2=\sum^{\infty}_{n=-\infty}q^{(n+1/2)^2} \textrm { and } \theta_3(q)=\theta_3=\sum^{\infty}_{n=-\infty}q^{n^2}$$ $q=e^{-\pi\sqrt{r}}$.\
Hence a closed form solution of the depressed equation is $$k_{25r}=\frac{\theta^2_2(q^5)}{\theta^2_3(q^5)}$$ But this is not satisfactory.\
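These relations are easy to test numerically. The following sketch (our own; a double-precision check, not part of the paper) computes $k_r$ from the theta series above and verifies the modular equation (3) and the depressed equation (4) at $r=1$; note that, with the theta-function normalizations above, the depressed equation is satisfied when one takes $u=k^{1/4}_{25r}$ and $v=k^{1/4}_{r}$:

```python
import math

def theta2(q, N=50):
    return 2 * sum(q ** ((n + 0.5) ** 2) for n in range(N))

def theta3(q, N=50):
    return 1 + 2 * sum(q ** (n * n) for n in range(1, N))

def k_mod(q):
    return (theta2(q) / theta3(q)) ** 2

q = math.exp(-math.pi)                      # r = 1
k1, k25 = k_mod(q), k_mod(q ** 5)
kp1, kp25 = math.sqrt(1 - k1 ** 2), math.sqrt(1 - k25 ** 2)

# classical singular value k_1 = 1/sqrt(2)
assert abs(k1 - 2 ** -0.5) < 1e-12

# 5th degree modular equation (3)
lhs = k1 * k25 + kp1 * kp25 + 2 ** (5 / 3) * (k1 * k25 * kp1 * kp25) ** (1 / 3)
assert abs(lhs - 1) < 1e-10

# depressed equation (4), with u = k_{25r}^{1/4}, v = k_r^{1/4}
u, v = k25 ** 0.25, k1 ** 0.25
E = u**6 - v**6 + 5 * u**2 * v**2 * (u**2 - v**2) + 4 * u * v * (1 - u**4 * v**4)
assert abs(E) < 1e-10
```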
For example, in the case of the $\pi$ formulas of Ramanujan (see \[12\] and related references), one has to obtain from the exact value of $k_r$ the exact value of $k_{25r}$ in radicals. (Here we recall that when $r$ is a positive rational, the value of $k_r$ is an algebraic number.) Another example is the Rogers-Ramanujan continued fraction (RRCF), which is $$R(q)=\frac{q^{1/5}}{1+}\frac{q}{1+}\frac{q^2}{1+}\frac{q^3}{1+}\ldots$$ (see \[4\],\[5\],\[6\],\[8\],\[9\],\[10\],\[13\],\[14\],\[16\],\[21\]), the value of which depends on the depressed equation.\
If we know the value of the (RRCF), then we can find the value of the $j$-invariant from Klein’s equation (see \[19\],\[8\] and Wolfram pages ’Rogers Ramanujan Continued fraction’): $$j_r=-\frac{\left(R^{20}-228 R^{15}+494 R^{10}+228 R^5+1\right)^3}{R^5 \left(R^{10}+11 R^5-1\right)^5} \textrm{ , where } R=R(q^2)$$ One can also prove that Klein’s equation (8) is equivalent to the depressed equation (4).\
Using the 5th degree modular equation of Ramanujan $$R(q^{1/5})^5=R(q)\frac{1-2R(q)+4R(q)^2-3R(q)^3+R(q)^4}{1+3R(q)+4R(q)^2+2R(q)^3+R(q)^4}$$ and (8), we can find the value of $j_{r/25}$ and hence, from the relation $$j_r=\frac{256(k^2_r+k'^4_{r})^3}{(k_rk'_r)^4},$$ the value of $k_{r/25}$. Knowing $k_{r}$ and $k_{r/25}$, we can evaluate $k_{25r}$ (see \[7\]) and obtain relations of the form $$k_{25r}=\Phi(k_r,k_{r/25})\textrm{ and }k_{25^nr}=\Phi_n(k_r,k_{r/25}), n\in\bf N\rm$$ Hence, when we know the value of $R(q)$ in radicals, we can find $k_r$ and $k_{25r}$ in radicals, and conversely.\
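As a quick consistency check of this relation (our own illustration): at $r=1$ one has $k_1 = k'_1 = 1/\sqrt2$, and the formula gives the classical value $j_1 = 1728$:

```python
# check j_1 = 1728 from j_r = 256 (k^2 + k'^4)^3 / (k k')^4 with k_1 = 1/sqrt(2)
k = 2 ** -0.5
kp = (1 - k ** 2) ** 0.5
j1 = 256 * (k ** 2 + kp ** 4) ** 3 / (k * kp) ** 4
assert abs(j1 - 1728) < 1e-9
```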
\
Also in \[3\] and Wikipedia ’Bring Radical’ one can see how the depressed equation can be used for the extraction of the solution of the general quintic equation $$ax^5+bx^4+cx^3+dx^2+ex+f=0$$ The above equation can be solved exactly with theta functions and, in some cases, in radicals.\
The same holds for the sextic equation (see \[8\]) $$\frac{b^2}{20a}+bY+aY^2=cY^{5/3}$$ which has the solution $$Y=Y_r=\frac{b}{250a}\left(R(q^2)^{-5}-11-R(q^2)^5\right)\textrm{, }q=e^{-\pi\sqrt{r}}\textrm{, }r>0$$ where $r$ can be evaluated from the constants using the relation $j_r=250\frac{c^3}{a^2b}$, in order to generate the solution.\
The Ramanujan-Dedekind eta function is defined as $$\eta(\tau)=\prod^{\infty}_{n=1}(1-q^n)\textrm{, }q=e^{i\pi\tau}\textrm{, }\tau=\sqrt{-r}$$ The $j$-invariant can be expressed in terms of the Ramanujan-Dedekind eta function as $$j_r=\left[\left(q^{-1/24}\frac{\eta(\tau)}{\eta(2\tau)}\right)^{16}+16\left(q^{1/24}\frac{\eta(2\tau)}{\eta(\tau)}\right)^{8}\right]^3$$ The Ramanujan-Dedekind eta function satisfies (see \[11\],\[22\]) $$\eta(\tau)^8=\frac{2^{8/3}}{\pi^4}q^{-1/3}(k_r)^{2/3}(k'_r)^{8/3}K(k_r)^4$$ There are many interesting things one can say about algebraicity and special functions.\
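Relation (17) can be verified numerically; the sketch below (our own) does so at $r=1$, computing $K(k_1)$ with $k_1 = 1/\sqrt2$ via the arithmetic-geometric mean, $K(k)=\pi/(2\,\mathrm{agm}(1,k'))$:

```python
import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean, used to evaluate the complete elliptic integral
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

q = math.exp(-math.pi)                        # r = 1, tau = i
k = kp = 2 ** -0.5                            # k_1 = k_1' = 1/sqrt(2)
K = math.pi / (2 * agm(1, kp))                # K(k_1) = 1.85407467...

eta8 = 1.0
for n in range(1, 200):
    eta8 *= 1 - q ** n
eta8 **= 8                                    # eta(tau)^8 with eta = prod (1 - q^n)

rhs = 2 ** (8 / 3) / math.pi ** 4 * q ** (-1 / 3) * k ** (2 / 3) * kp ** (8 / 3) * K ** 4
assert abs(eta8 - rhs) < 1e-12
```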
In this article we examine Moebius-periodic functions. If the Taylor coefficients of a function are Moebius-periodic, then we can evaluate the function in closed form from its numerical values. This can be done using the program Mathematica and the routine ’Recognize’. By this method we can find values coming from the middle of nowhere; however, they still remain conjectures. It is also a great challenge to find the polynomials and modular equations of these Moebius-periodic functions and to unite them in a general theory. Many researchers work on the special functions mentioned above, such as the singular moduli ($j$-invariant) and the related Hilbert polynomials, the (RRCF), theta functions, the Dedekind-Ramanujan $\eta$ and other similar functions. In \[7\] a way is presented to evaluate the fifth singular moduli and the Rogers-Ramanujan continued fraction with the function $w_r=\sqrt{k_rk_{25r}}$. This function can replace the classical singular moduli in the case of the Rogers-Ramanujan continued fraction and Klein’s invariant.\
Our aim is to construct a theory of such functions and to characterize them.
The Main Theorem
================
We begin by giving a definition and a conjecture which will help us for the proof of the Main Theorem.\
\
**Definition 1.**\
Let $a$,$p$ be positive rational numbers with $a<p$ and $q=e^{-\pi\sqrt{r}}$, $r>0$. We call ”agiles” the quantities $$[a,p;q]:=\prod^{\infty}_{n=0}(1-q^{pn+a})(1-q^{pn+p-a})$$\
The ”agiles” have the following very interesting conjecture-property.\
\
**Conjecture.**\
If $q=e^{-\pi\sqrt{r}}$, $r$ is a positive rational and $a,p$ are positive rationals with $a<p$, then $$[a,p;q]^{*}:=q^{p/12-a/2+a^2/(2p)}[a,p;q]=\textrm{Algebraic Number}$$\
Assuming the above unproved property we will show the following\
\
**Main Theorem.**\
Let $f$ be a function analytic in $(-1,1)$. Using the Moebius inversion theorem below, set $X(n)$ to be $$X(n)=\frac{1}{n}\sum_{d|n}\frac{f^{(d)}(0)}{\Gamma(d)}\mu\left(\frac{n}{d}\right)$$ If $X(n)$ is a $T$-periodic, rational-valued sequence which is catoptric in every period interval, i.e. if for every $n\in\bf N\rm$ we have $a_k=X(k+nT)=X(k)$ with $a_T=0$ and $a_1=a_{T-1}, a_2=a_{T-2},\ldots,a_{(T-1)/2}=a_{(T+1)/2}$, then there exists a rational number $A$ such that $$q^Ae^{-f(q)}=\textrm{Algebraic Number} ,$$ in the points $q=e^{-\pi\sqrt{r}}$, $r$ positive rational.\
The number $A$ is given by $$A=\sum^{\left[\frac{T-1}{2}\right]}_{j=1}\left(-\frac{j}{2}+\frac{j^2}{2T}+\frac{T}{12}\right)X(j)$$\
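For instance (our own check, in exact rational arithmetic): for the period-$5$ Jacobi-symbol sequence $X=\{1,-1,-1,1,0,\ldots\}$ the formula gives $A=\frac{1}{5}$, matching the factor $q^{1/5}$ in the Rogers-Ramanujan continued fraction (7), while $X=\{1,1,0,\ldots\}$ gives $A=-\frac{1}{12}$ and $X=\{1,1,1,1,0,\ldots\}$ gives $A=-\frac{1}{6}$:

```python
from fractions import Fraction

def A_exponent(X, T):
    # X[j-1] = X(j) for j = 1, ..., floor((T-1)/2); exact rational arithmetic
    return sum((Fraction(-j, 2) + Fraction(j * j, 2 * T) + Fraction(T, 12)) * X[j - 1]
               for j in range(1, (T - 1) // 2 + 1))

assert A_exponent([1, -1], 5) == Fraction(1, 5)    # X = {1,-1,-1,1,0,...}  (RRCF)
assert A_exponent([1], 3) == Fraction(-1, 12)      # X = {1,1,0,...}
assert A_exponent([1, 1], 5) == Fraction(-1, 6)    # X = {1,1,1,1,0,...}
```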
To prove the Main Theorem we will use the following known Moebius inversion theorem (see \[2\]).\
\
**Theorem.** (Moebius inversion Theorem)\
If $f(n)$ and $g(n)$ are arbitrary arithmetic functions, then $$\sum_{d|n}f(d)=g(n)\Leftrightarrow f(n)=\sum_{d|n}g(d)\mu\left(\frac{n}{d}\right)$$\
The Moebius $\mu$ function is defined as $\mu(n)=0$ if $n$ is not square free, and $\mu(n)=(-1)^r$ if $n$ is a product of $r$ distinct primes.\
\
Hence some values are $\mu(1)=1$, $\mu(3)=-1$, $\mu(15)=1$, $\mu(12)=0$, etc.\
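A short sketch of the Moebius function (our own) together with a check of the inversion theorem, taking for instance $g(n)=\sum_{d|n}f(d)$ with $f(n)=n^2$:

```python
def mu(n):
    # Moebius function via trial factorization
    r = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # d^2 divides the original n: not square free
            r += 1
        else:
            d += 1
    if n > 1:
        r += 1
    return (-1) ** r

assert [mu(1), mu(3), mu(15), mu(12)] == [1, -1, 1, 0]

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

f = lambda n: n * n
g = lambda n: sum(f(d) for d in divisors(n))
# inversion: f(n) = sum_{d|n} g(d) mu(n/d)
assert all(f(n) == sum(g(d) * mu(n // d) for d in divisors(n)) for n in range(1, 200))
```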
We will also need the following lemma.\
\
**Lemma 1.**\
If $|x|<1$ $$\log\left(\prod^{\infty}_{n=1}\left(1-x^n\right)^{X(n)}\right)=-\sum^{\infty}_{n=1}\frac{x^n}{n}\sum_{d|n}X(d)d$$\
**Proof.**\
Since $|x|<1$, we have $$\log\left(\prod^{\infty}_{n=1}\left(1-x^n\right)^{X(n)}\right)=\sum^{\infty}_{n=1}X(n)\log\left(1-x^n\right)=$$ $$=-\sum^{\infty}_{n=1}X(n)\sum^{\infty}_{m=1}\frac{x^{mn}}{m}=-\sum^{\infty}_{n,m=1}\frac{x^{nm}}{nm}X(m)m=-\sum^{\infty}_{n=1}\frac{x^n}{n}\sum_{d|n}X(d)d .$$\
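Lemma 1 is easy to confirm numerically; for instance (our own sketch) with the $5$-periodic sequence $X(n)=\left(\frac{n}{5}\right)$ and $x=0.3$:

```python
import math

pattern = [1, -1, -1, 1, 0]                  # X(n) = (n/5), 5-periodic
X = lambda n: pattern[(n - 1) % 5]

x, N = 0.3, 300
lhs = sum(X(n) * math.log(1 - x ** n) for n in range(1, N + 1))
rhs = -sum(x ** n / n * sum(X(d) * d for d in range(1, n + 1) if n % d == 0)
           for n in range(1, N + 1))
assert abs(lhs - rhs) < 1e-12
```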
**Proof of Main Theorem.**\
From the Taylor expansion theorem we have $$e^{-f(x)}=\exp\left(-\sum^{\infty}_{n=1}\frac{f^{(n)}(0)}{n!}x^n\right)$$ From the Moebius inversion theorem there exists $X(n)$ such that $$X(n)=\frac{1}{n}\sum_{d|n}\frac{f^{(d)}(0)}{\Gamma(d)}\mu\left(\frac{n}{d}\right)$$ or equivalently $$\frac{f^{(n)}(0)}{n!}=\frac{1}{n}\sum_{d|n}X(d)d$$ Hence from Lemma 1 $$e^{-f(x)}=\exp\left(-\sum^{\infty}_{n=1}\frac{x^n}{n}\sum_{d|n}X(d)d\right)=\prod^{\infty}_{n=1}\left(1-x^n\right)^{X(n)}$$ and consequently, because of the periodicity and the catoptric property of $X(n)$, we get $$e^{-f(x)}=\prod^{\left[\frac{T-1}{2}\right]}_{j=1}[j,T;x]^{X(j)}$$ which is a finite product of ”agiles”, and from the Conjecture there exists a rational $A$ such that (21) holds, provided that $x=e^{-\pi\sqrt{r}}$ and $r$ is a positive rational.\
\
**Examples.**\
**1)** For $X(n)=\left(\frac{n}{G}\right)$, where $G=2^mg_1^{m_1}g_2^{m_2}\ldots g_s^{m_s}$, with $m,m_1,\ldots,m_s$ non-negative integers, $m\neq1$, and $g_1<g_2<\ldots<g_s$ primes of the form $1\pmod 4$, there exists a rational $A$ such that $$q^A\prod^{\infty}_{n=1}(1-q^n)^{\left(\frac{n}{G}\right)}=q^A\prod^{\left[\frac{G-1}{2}\right]}_{j=1}[j,G;q]^{X(j)}=\textrm{Algebraic}$$ when $q=e^{-\pi\sqrt{r}}$, $r$ positive rational.\
A special case is $G=5$, which gives the Rogers-Ramanujan continued fraction. More precisely, $X=\{1,-1,-1,1,0,\ldots\}$ and explicit evaluations can be given: $$q^{1/5}\prod^{\infty}_{n=1}(1-q^n)^{\left(\frac{n}{5}\right)}=R(q)=q^{1/5}\frac{1}{1+}\frac{q}{1+}\frac{q^2}{1+}\ldots$$
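The identity between the product side and the Rogers-Ramanujan continued fraction can be checked numerically; a sketch follows (product length and continued-fraction depth are our truncation choices):

```python
q, N, depth = 0.1, 300, 60

def chi5(n):
    """The Legendre symbol (n/5)."""
    return {0: 0, 1: 1, 2: -1, 3: -1, 4: 1}[n % 5]

# Product side: q^{1/5} * prod_{n>=1} (1 - q^n)^{(n/5)}
prod = q ** 0.2
for n in range(1, N + 1):
    prod *= (1 - q**n) ** chi5(n)

# Continued-fraction side, evaluated bottom-up:
# R(q) = q^{1/5} / (1 + q/(1 + q^2/(1 + q^3/(1 + ...))))
t = 1.0
for k in range(depth, 0, -1):
    t = 1 + q**k / t
cf = q ** 0.2 / t

assert abs(prod - cf) < 1e-10
```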
**2)** If $X=\{1,1,0,1,1,0,\ldots\}$, then $T=3$ and $A=-\frac{1}{12}$. Hence we get that if $q=e^{-\pi}$, then $$q^{-1/12}e^{-f(q)}=\sqrt[12]{81 \left(885+511 \sqrt{3}-3 \sqrt{174033+100478 \sqrt{3}}\right)}$$
**3)** If $X=\{1,1,1,1,0,1,1,1,1,0,\ldots\}$, then $T=5$ and $A=-\frac{1}{6}$, and we get:\
i) If $q=e^{-\pi\sqrt{2}}$, then $q^{-1/6}e^{-f(q)}$ is a root of $$3125+250 v^6-20 v^{10}+v^{12}=0$$ We can solve this equation by observing that it is of the form (13): $$3125+250Y_r^6+Y_r^{12}=j_r^{1/3}Y_r^{10},\eqno{(eq)}$$ where $j_r$ is the $j$-invariant. Hence $$q^{-1/6}e^{-f(q)}=\sqrt[6]{Y_{1/2}}$$ and (see \[21\]) $$R(e^{-2\pi\sqrt{2}})=\frac{\sqrt{5(g+1)+2g\sqrt{5}}-\sqrt{5g}-1}{2}$$ where $$(g^3-g^2)/(g+1)=(\sqrt{5}+1)/2$$ One can use the duplication formula of the RRCF (see \[16\]) to find $R(e^{-\pi\sqrt{2}})$ in radicals and hence the value of $Y_{1/2}$ in radicals.\
ii) If $q=e^{-2\pi}$, then $$q^{-1/6}e^{-f(q)}=\sqrt[6]{Y_{1}}=\sqrt{\frac{5}{2}+\frac{5 \sqrt{5}}{2}}$$ etc.\
In general, if $q=e^{-\pi\sqrt{r}}$, then $$q^{-1/6}e^{-f(q)}=\sqrt[6]{Y_{r/4}}$$
The Representation of $e^{-f(q)}$
=================================
We give in Theorem 1 below the representation of a Moebius periodic function $f$ in terms of known functions.\
For $|q|<1$ the Jacobi theta functions are $$\vartheta(a,b;q):=\sum^{\infty}_{n=-\infty}(-1)^nq^{an^2+bn}$$ One can evaluate the agiles by means of the theta function $$M(c,q):=\sum^{\infty}_{n=0}c^nq^{n(n+1)/2}=\frac{1}{1+}\frac{-cq}{1+}\frac{-c(q-q^2)}{1+}\frac{-cq^3}{1+}\frac{-c(q^4-q^2)}{1+}\ldots$$ since, from \[11\], a way to express the agiles is $$[a,p;q]=\frac{M(-q^{-a},q^p)-q^aM(-q^a,q^p)}{\eta(\tau p)}$$ However, we shall use the general theta-function evaluation (see relation (32) below), which gives more compact forms; the reader can pass from one form to the other.\
\
A first result on the agiles, also given in \[11\], was the duplication formula $$\frac{[a,p;q^2]^{*}}{[a,p;q]^{*}}=\tau^{*}(a,p;q),$$ for which, if $a,p$ are positive reals and $n$ is an integer, then $$\tau^{*}(a,p;q)=\tau^{*}(np\pm a,p;q)$$\
**Theorem 1.**\
For every $f$ analytic in $(-1,1)$ with $$X(n)=\frac{1}{n}\sum_{d|n}\frac{f^{(d)}(0)}{\Gamma(d)}\mu\left(\frac{n}{d}\right)$$ periodic-symmetric with period $T$ (Moebius periodic) and real-valued, the following holds: $$e^{-f(q)}=\eta(T\tau)^{-\sum^{\left[\frac{T-1}{2}\right]}_{j=1}X(j)}\prod^{\left[\frac{T-1}{2}\right]}_{j=1}\vartheta\left(\frac{T}{2},\frac{T-2j}{2};q\right)^{X(j)}$$ for every $|q|<1$. If the $X(j)$ are rational, then $q^Ae^{-f(q)}$ is algebraic when $q=e^{-\pi\sqrt{r}}$, $r$ positive rational.\
\
**Proof.**\
Use the expansion found in \[11\]: $$[a,p;q]=\frac{1}{\eta(p\tau)}\sum^{\infty}_{n=-\infty}(-1)^nq^{pn^2/2+(p-2a)n/2}=\frac{1}{\eta(p\tau)}\vartheta\left(\frac{p}{2},\frac{p-2a}{2};q\right)$$ along with relation (25).\
\
**Theorem 2.**\
If $X(n)$ is real $T$-periodic and catoptric, then $$\sum^{\infty}_{n=1}\frac{nX(n)q^n}{1-q^n}=-q\frac{d}{dq}\log\left(\eta(T\tau)^{-\sum^{\left[\frac{T-1}{2}\right]}_{j=1}X(j)}\prod^{\left[\frac{T-1}{2}\right]}_{j=1}\vartheta\left(\frac{T}{2},\frac{T-2j}{2};q\right)^{X(j)}\right)$$ **Proof.**\
If $$X(n)=\frac{1}{n}\sum_{d|n}\frac{f^{(d)}(0)}{\Gamma(d)}\mu(n/d)$$ then $$\int\frac{1}{q}\sum^{\infty}_{n=1}\frac{nX(n)q^n}{1-q^n}dq=f(q)$$ and from Theorem 1 we get the result.\
\
**Remark.**\
If $R(a,b,p;q)=q^C\frac{[a,p;q]}{[b,p;q]}$ denotes a Ramanujan quantity (see \[5\]), then we have the following closed-form evaluation in terms of theta functions: $$\frac{R'\left(X_p,q\right)}{R\left(X_p,q\right)}=\frac{C}{q}+\frac{d}{dq}\log\left(\prod^{\left[\frac{p-1}{2}\right]}_{j=1}\vartheta\left(\frac{p}{2},\frac{p-2j}{2},q\right)^{X_p(j)}\right)$$ by using (33).
Examples and Applications
=========================
**4)** The Jacobi symbol $\left(\frac{n}{5}\right)$ is $5$-periodic and symmetric, hence $$\sum^{\infty}_{n=1}\left(\frac{n}{5}\right)\frac{nq^n}{1-q^n}=-q\frac{d}{dq}\log\left(\frac{\vartheta(5/2,3/2;q)}{\vartheta(5/2,1/2;q)}\right)=-q\frac{d}{dq}\log(q^{-1/5}R(q)) .$$\
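The theta quotient appearing here equals the product $\prod_{n\geq1}(1-q^n)^{\left(\frac{n}{5}\right)}$ (by the Jacobi triple product); this can be checked numerically with a short sketch (truncation ranges are our choice):

```python
def theta(a, b, q, M=25):
    """Jacobi theta function: sum over n of (-1)^n q^(a n^2 + b n)."""
    return sum((-1) ** n * q ** (a * n * n + b * n) for n in range(-M, M + 1))

def chi5(n):
    """The Legendre symbol (n/5)."""
    return {0: 0, 1: 1, 2: -1, 3: -1, 4: 1}[n % 5]

q = 0.2
quotient = theta(2.5, 1.5, q) / theta(2.5, 0.5, q)   # theta(5/2,3/2;q) / theta(5/2,1/2;q)

prod = 1.0
for n in range(1, 300):
    prod *= (1 - q**n) ** chi5(n)

assert abs(quotient - prod) < 1e-10
```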
From Example 3 and the above relation (35), one can prove that $$\sum^{\infty}_{n=1}\frac{nq^n}{1-q^n}-\sum^{\infty}_{n=1}\frac{5nq^{5n}}{1-q^{5n}}=-q\frac{d}{dq}\log\left(\frac{\vartheta(5/2,1/2;q)\vartheta(5/2,3/2;q)}{\eta\left(5\tau \right)^{2}}\right)$$ and $$\frac{1}{6}+\sum^{\infty}_{n=1}\frac{nq^n}{1-q^n}-5\sum^{\infty}_{n=1}\frac{nq^{5n}}{1-q^{5n}}=-\frac{q}{6}\frac{d}{dq}\log\left(Y\left(\sqrt{q}\right)\right)$$ In view of \[5\] relation (92) and the expansion of $L_1(q)$ in the same paper, we get $$-\frac{q}{6}\frac{d}{dq}\log\left(Y\left(q^{1/2}\right)\right)=\frac{-q^{1/2}}{12}\frac{Y'\left(q^{1/2}\right)}{Y\left(q^{1/2}\right)}
=-\frac{1}{6}-\frac{K[r]^2}{6 \pi^2}+\frac{a(r) K[r]^2}{\pi^2 \sqrt{r}}+\frac{5 K[25 r]^2}{6 \pi^2}-$$ $$-\frac{a(25 r) K[25 r]^2}{\pi^2 \sqrt{r}}-\frac{K[r]^2 k_r^2}{6 \pi^2}+\frac{5 K[25 r]^2 k_{25 r}^2}{6\pi^2}$$ where $a(r)$ is the elliptic alpha function (see \[17\]) and $K[r]=K(k_r)$ is the complete elliptic integral of the first kind at singular values. Hence for certain $r$ we can find special values of $Y'\left(e^{-\pi\sqrt{r}}\right)$.\
From (36) and (37) we get the following evaluation for the theta function $$\theta=\frac{\vartheta(5,1;q)^6\vartheta(5,3;q)^6}{q^2\eta\left(10\tau \right)^{12}}=R(q^2)^{-5}-11-R(q^2)^5$$ and in view of \[9\] we get the following theorem, analogous to the inverse elliptic nome theorem.\
\
**Theorem 3.** $$\frac{-1}{5}\int^{\theta}_{+\infty}\frac{dt}{t^{1/6}\sqrt{125+22t+t^2}}=\frac{1}{5\sqrt[3]{4}}B(k_{4r},1/6,2/3)$$ and $$\theta=H\left(k_{4r}\right)$$ where $$\frac{-1}{5}\int^{G(x)}_{+\infty}\frac{dt}{t^{1/6}\sqrt{125+22t+t^2}}=x\textrm{ and }H(x)=G\left(\frac{B\left(x,\frac{1}{6},\frac{2}{3}\right)}{5\sqrt[3]{4}}\right)$$ Also $$\frac{d}{dr}B(k_r^2,1/6,2/3)=-\frac{\pi}{2}\sqrt[3]{4}\frac{q^{1/6}\eta\left(\tau\right)^4}{\sqrt{r}}$$ and $\theta^{1/6}$ is a root of $(eq)$.\
\
For example, with $r=1/5$ we have $\theta=5\sqrt{5}$ and $$k_{4/5}=\frac{2-\sqrt{2-4\sqrt{-2+\sqrt{5}}}}{2+\sqrt{2-4\sqrt{-2+\sqrt{5}}}}$$ $$5\sqrt{5}=H\left(\frac{2-\sqrt{2-4\sqrt{-2+\sqrt{5}}}}{2+\sqrt{2-4\sqrt{-2+\sqrt{5}}}}\right) .$$\
Continuing, if $X(n)=\left(\frac{n}{G_0}\right)$ has period $p$ and $G_0=2^{m_0}p_1^{m_1}p_2^{m_2}\ldots p_s^{m_s}$, with the $p_j$ primes of the form $1\pmod 4$, $m_s,s,j=0,1,2,\ldots$ and $m_0\neq 1$, then $$\sum^{\infty}_{n=1}\left(\frac{n}{G_0}\right)\frac{nq^n}{1-q^n}=-q\frac{d}{dq}\log\left(\prod^{\left[\frac{G_0-1}{2}\right]}_{j=1}\vartheta\left(\frac{G_0}{2},\frac{G_0-2j}{2};q\right)^{\left(\frac{j}{G_0}\right)}\right)$$ Also\
\
**Conjecture 2.**\
If $g$ is a perfect square and $p_1<p_2<\ldots<p_{\lambda}$ are the distinct primes in the factorization of $g$, then $$\prod^{\infty}_{n=1}(1-q^n)^{\left(\frac{n}{g}\right)}
=\eta(\tau)\prod^{\lambda}_{i=1}\eta(p_i\tau)^{-1}\prod_{i<j}\eta(p_ip_j\tau)^1\prod_{i<j<k}\eta(p_ip_jp_k\tau)^{-1}\ldots$$ and $$\prod^{\infty}_{n=1}(1-q^n)^{-\left(\frac{n}{g}\right)}\frac{d}{dq}\prod^{\infty}_{n=1}(1-q^n)^{\left(\frac{n}{g}\right)}=\sum^{\infty}_{n=1}\left(\frac{n}{g}\right)\frac{n q^n}{1-q^n}=$$ $$=-q\frac{d}{dq}\log\left[\eta(g\tau)^{-\sum^{\left[\frac{g-1}{2}\right]}_{j=1}\left(\frac{j}{g}\right)}\prod^{\left[\frac{g-1}{2}\right]}_{j=1}\vartheta\left(\frac{g}{2},\frac{g-2j}{2};q\right)^{\left(\frac{j}{g}\right)}\right]=$$ $$=-q^{-1}\left[L(q)-\sum^{\lambda}_{i=1}p_iL(q^{p_i})+\sum_{i<j}p_ip_jL(q^{p_ip_j})-\ldots\right]$$ where $\eta(p\tau)$ and $L(q^p)$ can be evaluated explicitly from (17) and $$1-24\sum^{\infty}_{n=1}\frac{nq^n}{1-q^n}
=\frac{6}{\pi\sqrt{r}}+4\frac{K^2[r]\left(-6 \alpha(r)+\sqrt{r}\left(1+k^2_r\right)\right)}{\pi^2 \sqrt{r}},$$ where $\alpha(r)$ is the elliptic alpha function and $k_r$ is the elliptic singular modulus.
\
We also have\
\
**Theorem 4.**\
If $p$ is prime then $$\frac{\pi^{2}\sqrt{r}}{4K[r]^2}\left[-1+p-24q\frac{d}{dq}\log\left(\eta(p\tau)^{-\frac{p-1}{2}}\prod^{\left[\frac{p-1}{2}\right]}_{j=1}\vartheta\left(\frac{p}{2},\frac{p-2j}{2};q\right)\right)\right]=$$ $$=6 \alpha(r)-\sqrt{r}\left(1+k^2_r\right)
+m_{p^2r}^2\left(-6 \alpha(p^2r)+p\sqrt{r}\left(1+k^2_{p^2r}\right)\right),$$ where $m_{n^2r}=\frac{K(k_{n^2r})}{K(k_r)}$ is the multiplier, an algebraic-valued function when $r,n$ are in $\bf Q^{*}_{+}\rm$ (see \[13\],\[17\]).
The correspondence of theta functions and the singular modulus
=========================================================
In \[11\] we have shown that if $m$ is an integer and $q=e^{-\pi\sqrt{r}}$, $r>0$, then $$\sum^{\infty}_{n=-\infty}q^{n^2+2mn}=q^{-m^2}\sqrt{\frac{2K[r]}{\pi}}$$ which is a classical result (see \[3\]). We have also shown that $$\sum^{\infty}_{n=-\infty}q^{n^2+(2m+1)n}=2^{5/6}q^{-(2m+1)^2/4}\frac{(k_{11}k_{12}k_{21})^{1/6}}{k_{22}^{1/3}}\sqrt{\frac{K[r]}{\pi}}$$ where $k_{11}=k_r$, $k_{12}=\sqrt{1-k_{11}^2}$, $k_{21}=\frac{2-k_{11}^2-2k_{12}}{k_{11}^2}$, $k_{22}=\sqrt{1-k_{12}^2}$.\
The above relations (47) and (48) give us evaluations of all $$\sum^{\infty}_{n=-\infty}q^{n^2+mn}$$ with $m$ integer.\
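Relation (47) can be tested at the classical point $r=1$, where the singular modulus is $k_1=1/\sqrt{2}$; the sketch below computes $K$ via the arithmetic-geometric mean, $K(k)=\pi/(2\,\mathrm{AGM}(1,\sqrt{1-k^2}))$ (truncation ranges are our choice):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def K(k):
    """Complete elliptic integral of the first kind, via the AGM."""
    return math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))

r = 1.0
q = math.exp(-math.pi * math.sqrt(r))
k1 = 1 / math.sqrt(2)                 # the classical singular modulus k_1
rhs0 = math.sqrt(2 * K(k1) / math.pi)

for m in (0, 1, 2):                   # relation (47) for a few integer shifts m
    s = sum(q ** (n * n + 2 * m * n) for n in range(-40, 41))
    assert abs(s / (q ** (-m * m) * rhs0) - 1) < 1e-12
```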
We define $k_i(x)$ to be the inverse function of the singular modulus $k_x$. Then, from relation (1), it must hold that $$k_i(x)=\left(\frac{K\left(\sqrt{1-x^2}\right)}{K(x)}\right)^2$$ In view of all the algebraic properties of the theta functions in the previous paragraphs, one may ask what happens if we replace $r$ with $k_i\left(\frac{m}{n}\right)$, where $m,n$ are positive integers. Is there a result asserting that theta functions are algebraic functions of the singular modulus? The answer is "yes": for a given theta function $$[a,p;q]^{*}=\frac{q^{\frac{p}{12}-\frac{a}{2}+\frac{a^2}{2p}}}{\eta(p\tau)}\sum^{\infty}_{n=-\infty}(-1)^nq^{pn^2/2+(p-2a)n/2}=$$ $$=\frac{q^{\frac{p}{12}-\frac{a}{2}+\frac{a^2}{2p}}}{\eta(p\tau)}\vartheta\left(\frac{p}{2},\frac{p-2a}{2};q\right),$$ there exists a unique algebraic function $Q_{\{a,p\}}(x)$ such that $$[a,p;e^{-\pi\sqrt{k_i(x)}}]=Q_{\{a,p\}}(x)$$ and $Q_{\{a,p\}}(x)$ is a root of a polynomial of fixed degree depending only on $a$ and $p$.\
For example, the theta function $[1,4;q]$ has $$Q_{\{1,4\}}(x)=\sqrt[12]{\frac{4(1-x^2)}{x}}$$ and the function $[1/2,4;q]$ has $$Q_{\{1/2,4\}}(x)=\sqrt[48]{\frac{4(1-x)^4(2+x-2\sqrt{1+x})^{12}}{x^{13}(1+x)^2}}$$ Hence one is led to the evaluations $$[1,4;q]^{*}=\sqrt[12]{\frac{4(1-k_r^2)}{k_r}}$$ and $$\left[1/2,4;q\right]^{*}=\sqrt[48]{\frac{4\left(1-k_r\right)^4\left(2+k_r-2\sqrt{1+k_r}\right)^{12}}{k_r^{13}\left(1+k_r\right)^2}}$$ for every $r>0$. Further numerical results also indicate that this is conjecturally true. The function $Q_{\{a,p\}}(x)$ is always algebraic and its degree is the same for all values $q_x=e^{-\pi\sqrt{k_i(x)}}$, $x$ a positive rational, and $$[a,p;q]^{*}=Q_{\{a,p\}}(k_r),\textrm{ }q=e^{-\pi\sqrt{r}}, \forall r>0.$$ Hence finding the expansion of a theta function only requires knowledge of $Q_{\{a,p\}}(x)$. Instances of $Q$ can be found for rational values of $x$ with the routines 'Recognize' or 'RootApproximant' of the program Mathematica. An example of such an evaluation is $$\left([1,3;e^{-\pi\sqrt{k_i(1/5)}}]^{*}\right)^6=\frac{1}{90} [-182-\sqrt{689224-148230\cdot 3^{2/3} \sqrt[3]{10}}+$$ $$+\sqrt{2 \left(92571934 \sqrt{\frac{2}{344612-74115\cdot 3^{2/3} \sqrt[3]{10}}}+74115\cdot 3^{2/3} \sqrt[3]{10}+689224\right)}]$$ which is a root of the equation $$45 x^4+364 x^3-21870 x^2-885735=0$$
**References**
\[1\]: M. Abramowitz and I.A. Stegun. ’Handbook of Mathematical Functions’. Dover Publications, New York. 1972.
\[2\]: T. Apostol. ’Introduction to Analytic Number Theory’. Springer Verlag, New York, Berlin, Heidelberg, Tokyo, 1974.
\[3\]: J.V. Armitage W.F. Eberlein. ’Elliptic Functions’. Cambridge University Press. (2006)
\[4\]: N.D. Bagis. ’Parametric Evaluations of the Rogers-Ramanujan Continued Fraction’. International Journal of Mathematics and Mathematical Sciences. Vol (2011)
\[5\]: N.D. Bagis. ’Generalizations of Ramanujan’s Continued Fractions’.\
arXiv:1107.2393v2 \[math.GM\] 7 Aug 2012.
\[6\]: N.D. Bagis. ’The $w$-modular function and the evaluation of Rogers-Ramanujan continued fraction’. International Journal of Pure and Applied Mathematics. Vol 84, No 1, 2013, 159-169.
\[7\]: N.D. Bagis. ’Evaluation of the fifth degree elliptic singular moduli’. arXiv:1202.6246v2 \[math.GM\] (2012)
\[8\]: N.D. Bagis. ’On a general sextic equation solved by the Rogers-Ramanujan continued fraction’. arXiv:1111.6023v2 \[math.GM\] (2012)
\[9\]: N.D. Bagis. ’Generalized Elliptic Integrals and Applications’.\
arXiv:1304.2315v1\[math.GM\] 4 Apr 2013
\[10\]: N.D. Bagis and M.L. Glasser. ’Integrals related with Rogers-Ramanujan continued fraction and $q$-products’. arXiv:0904.1641. 10 Apr 2009.
\[11\]: N.D. Bagis and M.L. Glasser. ’Jacobian Elliptic Functions, Continued Fractions and Ramanujan Quantities’. arXiv:1001.2660v1 \[math.GM\] 2010.
\[12\]: N.D. Bagis and M.L. Glasser. ’Conjectures on the evaluation of alternative modular bases and formulas approximating $1/\pi$’. Journal of Number Theory, Elsevier. (2012)
\[13\]: Bruce C. Berndt. ’Ramanujan‘s Notebooks Part III’. Springer Verlag, New York (1991)
\[14\]: Bruce C. Berndt. ’Ramanujan’s Notebooks Part V’. Springer Verlag, New York, Inc. (1998)
\[15\]: Bruce C. Berndt and Ae Ja Yee. ’Ramanujans Contributions to Eisenstein Series, Especially in his Lost Notebook’. (Available on the Web).
\[16\]: Bruce C. Berndt, Heng Huat Chan, Sen-Shan Huang, Soon-Yi Kang, Jaebum Sohn and Seung Hwan Son. ’The Rogers-Ramanujan Continued Fraction’. J. Comput. Appl. Math., 105 (1999), 9-24.
\[17\]: J.M. Borwein and P.B. Borwein. ’Pi and the AGM’. John Wiley and Sons, Inc. New York, Chichester, Brisbane, Toronto, Singapore. (1987)
\[18\]: D. Broadhurst. ’Solutions by radicals at Singular Values $k_N$ from New Class Invariants for $N\equiv3\;\; mod\;\; 8$’. arXiv:0807.2976 \[math-ph\], (2008).
\[19\]: W. Duke. ’Continued fractions and Modular functions’. Bull. Amer. Math. Soc. (N.S.), 42 (2005), 137-162.
\[20\]: I.S. Gradshteyn and I.M. Ryzhik. ’Table of Integrals, Series and Products’. Academic Press (1980).
\[21\]: Soon-Yi Kang. ’Ramanujan’s formulas for the explicit evaluation of the Rogers-Ramanujan continued fraction and theta functions’. Acta Arithmetica. XC.1 (1999)
\[22\]: E.T. Whittaker and G.N. Watson. ’A course on Modern Analysis’. Cambridge U.P. (1927)
\[23\]: N.D. Bagis. ’On Generalized Integrals and Ramanujan-Jacobi Special Functions’. arXiv:1309.7247
---
abstract: 'We study the anisotropic two-dimensional Hubbard model at and near half filling within a functional renormalization group method, focusing on the structure of momentum-dependent correlations which grow strongly upon approaching a critical temperature from above. We find that a finite nearest-neighbor interchain hopping is not sufficient to introduce a substantial momentum dependence of single-particle properties along the Fermi surface. However, when a sufficiently large second-nearest neighbor inter-chain hopping is introduced, the system is frustrated and we observe the appearance of so-called “hot spots”, specific points on the Fermi surface around which scattering becomes particularly strong. We compare our results with other studies on quasi-one-dimensional systems.'
author:
- Daniel Rohe
- Antoine Georges
title: |
Strong correlations and formation of “hot spots”\
in the quasi-one-dimensional Hubbard model at weak coupling
---
Introduction
============
The one-band Hubbard model serves as a minimal model for various correlated electron systems, since it is capable of capturing a number of non-trivial phenomena which are due to the interplay between kinetic and potential energy. In one dimension, numerous theoretical methods are available which have led to a thorough understanding of the low-energy physics.[@Gia_book] In higher dimensions, however, rigorous statements are scarce and many controversies remain. It is therefore natural to ask the question how the cross-over from one to two or higher dimensions takes place. Furthermore, since the discovery of quasi-one-dimensional organic conductors and superconductors we have access to materials which are realizations of this physical situation. In these compounds many interesting observations were made during the last two decades, some of which are still calling for a conclusive theoretical description.[@Bou_99]
In this work we will consider the evolution of a model system near half-filling, upon increasing the dimensionality via an increase in the perpendicular kinetic-energy coupling between one-dimensional chains. To tackle this question, we employ a functional renormalization group (fRG) technique, which provides a rigorous framework for the computation of low-energy properties starting from a microscopic model.[@Sal_book] While this method reduces to the well-known g-ology RG in one dimension, it has been applied successfully to the two-dimensional case, where the angular dependence of the coupling function along the Fermi surface needs to be taken into account.[@Hal_00] By allowing for anisotropic hopping parameters we are in principle able to access the complete region from the one-dimensional to the two-dimensional case. In this work we will fix the degree of anisotropy and study the behavior of the system as a function of other parameters.
The dimensional cross-over in quasi-one-dimensional systems (and the intimately related phenomenon of “deconfinement”, i.e the Mott insulator to metal transition induced by increasing the inter-chain kinetic energy) have been investigated in recent years within several theoretical approaches. There is consensus on some aspects, but some disagreements between these studies do remain (mainly due to the different theoretical tools which have been employed, and the different regimes of parameters which have been investigated). The issue as such was raised by experimental studies on Bechgaard salts. In particular, the optical spectroscopy experiments of Vescoli et al. revealed an insulator-to-metal transition as a function of increasing interchain hopping parameter, which changes with chemical composition [@Ves_98]. At the same time, Bourbonnais and Jérome discussed these results within a scenario where a one-dimensional Mott insulator evolves into a metallic state when the inter-chain hopping reaches the order of the Mott gap.[@Bou_98].
Subsequently, several model calculations were made to substantiate and verify this concept. Biermann et al. employed an extension of dynamical mean-field theory (DMFT), the so-called chain-DMFT, which replaces the original problem by a chain self-consistently coupled to a bath, while taking into account the full intra-chain momentum dependence (\[\], see also \[\]). They do indeed find a transition from an insulating to a metallic state when the interchain hopping is increased, as well as a crossover from a Luttinger liquid to a Fermi liquid at fixed interchain hopping, when the temperature is decreased. Essler and Tsvelik considered this problem starting from a one-dimensional Mott insulator and using a resummed expansion in the inter-chain hopping [@Ess_02; @Ess_05]. Using this approach, they suggested that the metallic phase does not develop immediately with a large Fermi surface resembling the non-interacting one. In contrast, close enough to the Mott insulating phase, Fermi surface pockets appear in specific locations, while large parts of the would-be Fermi surface remain gapped due to the influence of the one-dimensional Mott gap. Since they use a particular dispersion in the interchain direction which is particle-hole symmetric, they eventually find that this pocket Fermi liquid becomes unstable towards an ordered state, though at temperatures much lower than the Mott gap. The location of these pockets is such that the neighbourhood of the point $k_a=k_b=\pi/2$ (with $k_a$ the momentum along the chain direction and $k_b$ perpendicular to it) is gapped out and not part of the Fermi surface. This point thus corresponds to a “hot spot”, at which the scattering rate is very large (and can even lead to a complete suppression of quasiparticles at this point). While the chain-DMFT studies of Ref. 
\[\] did not observe this phenomenon (perhaps because of the range of coupling or temperature), more recent studies of an anisotropic spinless model using chain-DMFT did observe a partial destruction of the Fermi surface with hot spots at the same location [@Ber_06].
From the weak-coupling standpoint, early calculations suggested the occurrence of hot spots from a simple perturbative calculation of the scattering rate[@Zhe_95]. Within a renormalization group treatment, Duprat and Bourbonnais found that for a finite and fixed value of the interchain hopping the influence of strong spin-density-wave fluctuations can lead to anisotropic scattering rates along a quasi-one-dimensional Fermi surface, leading to the emergence of hot spots[@Dup_01]. However, the locations of cold and hot regions are exactly exchanged with respect to the findings by Essler and Tsvelik, with the hot spots found at $k_b=0$ and $k_b=\pm\pi$ in Ref. \[\]. It should be noted that these results were obtained for a system away from half filling, meaning that umklapp processes are suppressed in the RG treatment. In the present work, we focus on the half-filled case or its immediate vicinity, and use a functional RG technique which does take umklapp processes into account.
Thus, there exists a consensus that “hot spots” might form in quasi one-dimensional systems, but the various treatments do not agree on their location. It is tempting to speculate that this simply reflects the different locations of these hot regions in the weak and strong coupling limits. At any rate, there are some compelling experimental indications for strongly anisotropic scattering rates in quasi one-dimensional organic conductors, as pointed out early on in Ref. \[\] and discussed further in the conclusion.
Model
=====
We consider the one-band Hubbard model: $$H = \sum_{\bj,\bj'} \sum_{\sg}
t_{\bj\bj'} \, c^{\dag}_{\bj\sg} c^{\phantom{\dag}}_{\bj'\sg} +
U \sum_{\bj} n_{\bj\up} n_{\bj\down}$$ with a local repulsion $U>0$ and hopping amplitudes $t_{\bj\bj'} = -t_a$ between nearest neighbors in the a-direction along the chains, $t_{\bj\bj'} = -t_b$ between nearest neighbors in the b-direction perpendicular to the chains, and $t_{\bj\bj'} = -t_{b}^{'}$ between second-nearest neighbors in the b-direction. The corresponding dispersion relation reads
$$\eps_{\bk} = -2t_a\cos k_a -2t_b \cos k_b - 2t_{b}^{'} \cos 2k_b .$$
At half-filling and $t_{b}^{'}=0$ the non-interacting Fermi surface is perfectly nested. The introduction of a finite $t_{b}^{'}$ will allow us to study the effects of deviation from this perfect nesting condition. It is important to note that we do *not* linearise the dispersion in the chain direction, as is commonly done in RG treatments originating from the 1d g-ology setup.[@Dup_01] Therefore, the Fermi surface is perfectly nested *only* at half filling. Away from half filling perfect nesting is destroyed, even without a finite value of $t_{b}^{'}$. In the following all energies are given in terms of $t_a$ which we set to unity.
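The perfect-nesting property at half filling and $t_{b}^{'}=0$, namely $\eps_{\bk+\mb{Q}}=-\eps_{\bk}$ with $\mb{Q}=(\pi,\pi)$, and its destruction by a finite $t_{b}^{'}$, are easily checked numerically; an illustrative sketch (for definiteness we take $t_b=0.1\,t_a$, the value used later in the text):

```python
import math
import random

def eps(ka, kb, ta=1.0, tb=0.1, tbp=0.0):
    """Dispersion eps_k = -2 t_a cos k_a - 2 t_b cos k_b - 2 t_b' cos 2k_b."""
    return -2 * ta * math.cos(ka) - 2 * tb * math.cos(kb) - 2 * tbp * math.cos(2 * kb)

random.seed(0)
pts = [(random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
       for _ in range(100)]

# t_b' = 0: eps(k + Q) = -eps(k) with Q = (pi, pi), for every k
assert all(abs(eps(ka + math.pi, kb + math.pi) + eps(ka, kb)) < 1e-12
           for ka, kb in pts)

# t_b' != 0: the nesting relation fails for generic k (residual = 4 t_b' cos 2k_b)
broken = [abs(eps(ka + math.pi, kb + math.pi, tbp=0.01) + eps(ka, kb, tbp=0.01))
          for ka, kb in pts]
assert max(broken) > 1e-3
```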
Method
======
In direct analogy to the technique applied in reference \[\] we use the Wick-ordered version of the fRG to compute the one-loop flow of the interaction vertex and the two-loop flow of the self energy, as depicted in Fig. 1. The internal lines without slash in the Feynman diagrams correspond to the bare propagator $$D^{\Lam}(k) = \frac{\Theta(\Lam - |\xi_{\bk}|)}
{ik_0 - \xi_{\bk}} \; ,$$ where $\xi_{\bk} = \eps_{\bk} - \mu$ and $\Lam > 0$ is the cutoff; the lines with slash correspond to $\partial_{\Lam} D^{\Lam}$, which is proportional to $\delta(\Lam - |\xi_{\bk}|)$.
We parametrize the interaction vertex $\Gamma$ by its static values on a reduced number of points/patches on the non-interacting Fermi surface, as illustrated in Fig. 2. It is thus parametrised by a momentum-dependent singlet (triplet) component $\Gamma_{s (t)}(k_1,k_2,k_3)$, where the $k_i$ constitute a discrete set of momenta on the Fermi surface. We stress that this does not correspond to treating a finite system, since internal integrations are done in the thermodynamic limit.
In the present work, we do not directly compute the flow of the self-energy, but rather infer from the properties of the two-loop diagram what the influence of a strongly renormalized vertex function on the self energy will be. In the case of the two-dimensional Hubbard model this calculation has been done explicitly,[@Roh_05] giving us confidence with respect to this reasoning.
Results
=======
$t_{b}^{'}=0$ - perfect nesting
-------------------------------
At $t_{b}^{'}=0$ and $\mu=0$ the non-interacting Fermi surface is perfectly nested with nesting vector $\mb{Q}=(\pi,\pi)$ and defines the so-called Umklapp surface as illustrated in Fig. 3. For $t_{b}=0$ the problem reduces to the one-dimensional half-filled Hubbard model. In this case the divergence of Umklapp couplings at low energies signals the onset of the Mott insulating phase.[@Bou_04] When we analyze the one-loop flow of the interaction vertex for finite values of the interchain hopping $t_{b}$, we find that the divergence of Umklapp processes persists at all finite values of $t_{b}$, which we chose to be $t_b=0.1t_a$ throughout this work. Namely, the one-dimensional Umklapp couplings connect to two-dimensional Umklapp processes of the type $(k_F,k_F^\prime) \rightarrow (k_F+{\mb Q},k_F^\prime-{\mb Q})$ with momentum transfer $\mb{Q} = (\pi,\pi)$. The crucial point is that this divergence is nearly perfectly homogeneous along the Fermi surface. Due to the feedback of the interaction onto the self energy via the two-loop diagram, this implies that there are no isolated hot spots at which scattering processes dominate compared to other regions on the Fermi surface. Instead, [*the whole Fermi surface*]{} is “hot”. Along with Umklapp processes the interaction develops divergences in the Cooper channel, also owing to the importance of scattering with wave vector $\mb{Q}=(\pi,\pi)$. The behavior of the coupling function illustrates this very clearly. In the left plot of figure \[nested\] we display the singlet component of the interaction function $\Gamma_S(\mb{k},-\mb{k},-\mb{k}')$ on the Fermi surface as a function of $k_b$ and $k'_b$ at the end of the flow, in direct analogy to the analysis presented in reference \[\]. The bare interaction is chosen in such a way that the temperature is slightly above the pairing temperature, at which the flow of the vertex function diverges. 
The interaction is homogeneously peaked along the lines $k'_b = \pi - k_b$ and $k'_b = -\pi - k_b$. We name these lines ”Peierls lines” since they correspond to scattering processes in which a momentum $\mb{Q}$ is exchanged between the two incoming particles and $\mb{Q}$ is the generalization of $2k_F$ in one dimension at half filling and perfect nesting.
We thus see that *all* points on the Fermi surface are equally strongly affected by strong correlations appearing in the Cooper channel, which may eventually lead to a phase where *all* one-particle states along the Fermi surface exhibit a pseudogap. Since we look at a system at half filling it is essential to consider Umklapp processes, as mentioned above. We therefore extend the analysis offered in reference \[\] and show in figure \[nested\] in the right plot the interaction function $\Gamma_S(k_{b}^R,k_{b}^R,k_{b}^L)$ on the Fermi surface in the so-called twin-Umklapp channel, meaning that both incoming momenta are identical, and $k_{b}^R (k_{b}^L)$ are momenta corresponding to right (left) movers in the standard terminology familiar from the one-dimensional case. We see that along the lines $k_{b}^L = k_{b}^R - \pi$ and $k_{b}^L = k_{b}^R +
\pi$ the coupling function in this channel is also homogeneously peaked, confirming the conclusion drawn on the basis of the Cooper channel. We recall that in contrast to the one-dimensional case the system will eventually undergo a transition into an antiferromagnetic state at zero temperature. Here, however, we are concerned with finite-temperature precursor effects, which may legitimately be compared.
These results are distinct from those obtained in other RG calculations[@Dup_01]. There, perfect nesting is artificially introduced due to a linearization of the dispersion in the chain direction, and Umklapp processes are neglected. The divergent couplings are then found only in the Peierls section of the Cooper channel for sufficiently small $t_{b}^{'}$, leading to isolated hot spots at $k_b=0$ and $k_b=\pm\pi$.
$t_{b}^{'}\neq0$ - effects of frustration
-----------------------------------------
### $\mu=0$ - half filling
A finite second-nearest neighbor hopping $t_{b}^{'}$ will destroy the nesting condition for all wave vectors on the Fermi surface, except for a few special points. We set the chemical potential to $\mu=0$ and use $t_b=0.1$ and $t_{b}^{'}=0.1t_b$. In this case the non-interacting system essentially remains half filled and the frustrated Fermi surface intersects the Umklapp surface at $k_b=\pm\pi/4$ and $k_b=\pm3\pi/4$, as shown in Fig. \[fsnn.eps\]. Then, at arbitrarily low energies Umklapp scattering of the type $(k_F,k_F) \rightarrow
(k_F+{\mb Q},k_F-{\mb Q})$ is possible if and only if $k_F$ is located at the intersection between the non-nested Fermi surface and the Umklapp surface. The resulting RG flow of the interaction vertex shows a dominant divergence of the couplings corresponding to exactly these processes. This can best be seen in figure \[frustrated\], where the coupling function is shown in analogy to figure \[nested\]. In contrast to the case $t'=0$ the coupling function in both channels shows a strongly peaked behavior which is not homogeneous along the Peierls lines, but which is peaked at the points where the Fermi surface intersects the Umklapp surface. This is reminiscent of a scenario in which there exist so-called hot spots, that is special points at which the scattering rate is particularly large or a pseudogap may appear in the spectral function, in analogy to the case of a two-dimensional system.[@Roh_05] Note that for the frustrated system the Peierls lines defined above in the plots of the vertex function correspond to scattering processes with wave vector $\mb{Q}$ *only* at hot spots. Elsewhere the momentum transfer on the Fermi surface is incommensurate.
### $\mu \neq 0$ - slightly doped system
Upon changing the chemical potential the hot spots mentioned above move along the Fermi surface. When $\mu$ is increased, the two points in each quadrant eventually merge until the Fermi surface touches the Umklapp surface at $(\pm\pi/2,\pm\pi/2)$. In figure \[edp\] we show plots for the vertex function in analogy to figures \[nested\] and \[frustrated\]. Indeed, the vertex function along the Peierls lines in both Umklapp and Cooper channel exhibits a strong increase in the diagonal region, corroborating the identification of the hot spots as originating from the points where the Fermi surface intersects the Umklapp surface. For even larger values of the chemical potential these points do not exist anymore and thus Umklapp processes will not feed back into the self energy. Similarly, upon decreasing $\mu$ the eight hot spots move towards the axis and eventually merge to form four hot spots located at $k_b = 0$ and $k_b = \pi$. Once more the structure of the vertex function reflects this, as can be seen in figure \[hdp\], although for the parameters chosen here the variation along the Fermi surface is somewhat weaker. For small enough values of $\mu$ the hot spots will again disappear.
Discussion and Conclusion
=========================
In summary, we have studied a quasi one-dimensional model of coupled chains at and near half-filling, using an fRG technique and focusing on the appearance of “hot” regions on the Fermi surface. In the presence of perfect nesting, we have found that the whole Fermi surface is hot. In contrast, in the presence of frustration ($t'_b\neq 0$), isolated hot spots appear. The mechanism for the formation of these hot spots is that the effective couplings (vertex functions) in the various channels become large in an anisotropic manner. The location of the hot spots corresponds to the intersection of the Umklapp surface with the Fermi surface. In general, there are eight hot spots, i.e. two in each quadrant of the Brillouin zone. Their precise location depends on the doping level and on the ratio $t'_b/t_b$. These results are perfectly consistent with previous weak-coupling fRG studies of the frustrated two-dimensional case[@Roh_05]. The location of these hot spots does not agree, however, with a previous weak-coupling RG study of a quasi one-dimensional model \[\]. As we have seen, a proper treatment of Umklapp processes, which are a key ingredient to the mechanism described in the present work, is essential. Because Ref.\[\] was motivated by the strongly metallic regime, it did not explicitly include these processes in the RG treatment, besides a mere renormalisation of the forward scattering amplitude. There are naturally also some limitations to our fRG calculation. First, it is valid only in the weak-coupling regime. Second, the argument that a strongly peaked interaction can create hot spots on the Fermi surface relies on the low-dimensional properties of the model, the reason being that with increasing dimension the feedback of the interaction onto the one-electron self-energy via the two-loop diagram weakens and is eventually washed out. The method does take transverse Umklapp processes into account, which is essential to the main mechanism and observed features. 
This can already be achieved by a much simpler RPA calculation. However, the fRG not only provides an exact starting point relying on rigorous statements, it also modifies the RPA results. The critical scales are lower and in contrast to RPA the properties in Cooper and Umklapp channels are different at equal momentum transfer.
A different location of the hot spots was also found by strong-coupling techniques, such as the resummation of the expansion in the inter-chain hopping of Ref. \[\] and the recent chain-DMFT treatment of Ref. \[\]. This is less surprising, and it is tempting to speculate that the hot spot location may be determined by the regions in momentum space where the effective couplings are large in the weak coupling limit, while it is associated with regions in which the inter-chain kinetic energy is small in the strong coupling limit. Future studies at intermediate coupling are needed in order to elucidate this point and provide a consistent picture of how the location of the hot spots evolves from weak to strong coupling.
The possibility that electron-electron Umklapp scattering may account for the emergence of hot spots along a quasi-1d Fermi surface was first suggested by Chaikin in order to explain magic angles in the magnetoresistance data of Bechgaard salts[@Cha_92]. Here we have shown on the basis of a microscopic model and a functional renormalization group approach how such a situation may arise. Angular dependent magnetoresistance oscillation experiments only provide indirect evidence for the formation of hot spots, however, through a momentum dependence of the scattering rate on the Fermi surface which requires theoretical modelling. Obviously, direct angle-resolved spectroscopy experiments (e.g. photoemission), although very difficult to perform on quasi one-dimensional organic conductors, would be highly desirable in order to probe these effects experimentally.
We are grateful to T. Giamarchi and C. Bourbonnais for valuable discussions. Support for this work was provided by CNRS and Ecole Polytechnique, and by the E.U. “Psi-k $f$-electron” Network under contract HPRN-CT-2002-00295.
[99]{}
See. e.g. T. Giamarchi, *Quantum Physics in One Dimension*, Oxford University Press (2004)
For reviews on this subject see e.g. C. Bourbonnais and D. Jérome, in *Advances in Synthetic Metals, Twenty Years of Progress in Science and Technology*, edited by P. Bernier, S. Lefrant, and G. Bidan (Elsevier, New York 1999), pp. 206, arXiv: cond-mat/9903101; D. Jérome, Chem. Rev. [**104**]{}, 5565 (2004), and references therein
M. Salmhofer, [*Renormalization*]{} (Springer, Berlin, 1998).
C.J. Halboth and W. Metzner, Phys. Rev. B [**61**]{}, 7364 (2000); Phys. Rev. Lett. [**85**]{}, 5162 (2000).
V. Vescoli et al., Science [**281**]{}, 1181 (1998)
C. Bourbonnais and D. Jerome, Science [**281**]{}, 1155 (1998)
S. Biermann, A. Georges, A. Lichtenstein, and T. Giamarchi, Phys. Rev. Lett. [**87**]{}, 276405 (2001)
E. Arrigoni, Phys. Rev. Lett [**83**]{}, 128 (1999)
F.H.L. Essler and A.M. Tsvelik, Phys. Rev. B [**65**]{}, 115117 (2002)
F.H.L. Essler and A.M. Tsvelik, Phys. Rev. B [**71**]{}, 195116 (2005)
A.T. Zhelezniak and V.M. Yakovenko, Synth. Met. [**70**]{}, 1005 (1995)
R. Duprat and C. Bourbonnais, Eur. Phys. J. B [**21**]{}, 219 (2001)
C. Berthod, T. Giamarchi, S. Biermann and A. Georges, arXiv: cond-mat/0602304
P.M. Chaikin, Phys. Rev. Lett. [**69**]{}, 2831 (1992)
D. Rohe and W. Metzner, Phys. Rev. B [**71**]{}, 115116 (2005)
C. Bourbonnais and R. Duprat, J. Phys. IV France [**114**]{}, 3 (2004)
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'In this paper we consider the capacity of the cognitive radio (CR) channel in a fading environment under a “low interference regime”. This capacity depends critically on a power loss parameter, $\alpha$, which governs how much transmit power the CR dedicates to relaying the primary message. We derive a simple, accurate approximation to $\alpha$ which gives considerable insight into system capacity. We also investigate the effects of system parameters and propagation environment on $\alpha$ and the CR capacity. In all cases, the use of the approximation is shown to be extremely accurate. Finally, we derive the probability that the “low interference regime” holds and demonstrate that this is the dominant case, especially in practical CR deployment scenarios.'
author:
- '\'
bibliography:
- 'IEEEabrv.bib'
- 'jovicic.bib'
title: On the Statistics of Cognitive Radio Capacity in Shadowing and Fast Fading Environments
---
Introduction
============
The key idea behind the deployment of cognitive radios (CRs) is that greater utilization of spectrum can be achieved if CRs are allowed to co-exist with the incumbent licensed primary users (PUs), provided that they cause minimal interference. The CRs must therefore learn from the radio environment and adapt their parameters so that they can co-exist with the primary systems. The CR field has proven to be a rich source of challenging problems. A large number of papers have appeared on various aspects of CR, including spectrum sensing [@Ghasemi1], fundamental limits of spectrum sharing [@Ghasemi], information theoretic capacity limits [@devroye; @Maric; @Jovicic; @Jiang], etc.
The 2 user cognitive channel [@devroye; @Maric; @Jovicic; @Jiang] consists of a primary and a secondary user. It is very closely related to the classic 2 user interference channel, see [@Kramer] and references therein.
The formulation of the CR channel is due to Devroye *et al.* [@devroye]. In this channel, the CR has a non-causal knowledge of the intended message of the primary and by employing dirty paper coding [@Costa] at the CR transmitter it is able to circumvent the primary user’s interference to its receiver. However, the interference from the CR to the primary receiver remains and has the potential to cause a rate loss to the primary.
In recent work, Jovicic and Viswanath [@Jovicic] have studied the fundamental limits of the capacity of the CR channel. They show that if the CR is able to devote a part of its power to relaying the primary message, it is possible to compensate for the rate loss to the primary via this additional relay. They have provided exact expressions for the PU and CR capacity of a 2 user CR channel when the CR transmitter sustains a power loss by devoting a fraction, $\alpha$, of its transmit power to relay the PU message. Furthermore, they have provided an exact expression for $\alpha$ such that the PU rate remains the same as if there was no CR interference. It should be stressed here that their system model is such that at the expense of CR transmit power, the PU device is always able to maintain a constant data rate. Hence, we focus on the CR rate, $\alpha$, and their statistics. They also assume that the PU receiver uses a single user decoder. Their result holds for the so-called low interference regime when the received SNR of the CR transmission is lower at the primary receiver (i.e., interference from CR to PU) than at the CR receiver. The authors in [@Wu] also arrived at the same results in their parallel but independent work.
The Jovicic and Viswanath study is for a static channel, i.e., the direct and cross link gains are constants. In a system study, these gains will be random and subject to distance dependent path loss and shadow fading. Furthermore, the channel gains also experience fast fading. As the channel gains are random variables, the power loss parameter, $\alpha$, is also random.
In this paper we focus on the power loss, $\alpha$, the capacity of the CR channel and the probability that the “low interference regime” holds. The motivation for this work arises from the fact that maximum rate schemes for the CR in the low interference regime [@Jovicic] and the achievable rate schemes for the high interference regime [@Maric; @Jiang] are very different. Hence, it is of interest to identify which scenario is the most important. To attack this question we propose a simple, physically based geometric model for the CR, PU layout and compute the probability of the low interference regime. Results are obviously limited to this particular model but provide some insight into reasonable deployment scenarios. Since the results show the low interference regime can be dominant, it is also of interest to characterize CR performance via the $\alpha$ parameter. In this area we make the following contributions:
- Assuming lognormal shadowing, Rayleigh fading and path loss effects we derive the probability that the “low interference regime" holds.
- In the same fading environment we derive an approximation for $\alpha$ and its statistics. This extremely accurate approximation leads to simple interpretations of the effect of system parameters on the capacity.
- Using the statistics of $\alpha$ we investigate the mean rate loss of the CR and the cumulative distribution function (CDF) of the CR rates. For both the above we show their dependence on the propagation parameters.
- We also show how the mean value of $\alpha$ varies with the CR transmit power and therefore the CR coverage area.
This paper is organized as follows: Section II describes the system model. Section III derives the probability that the “low interference regime" holds and in Section IV an approximation for $\alpha$ is developed. Section V presents analytical and simulation results and some conclusions are given in Section VI.
System Model
============
Consider a PU receiver in the center of a circular region of radius $R_p$. The PU transmitter is located uniformly in an annulus of outer radius $R_p$ and inner radius $R_{0}$ centered on the PU receiver. It is to be noted that we place the PU receiver at the center only for the sake of mathematical convenience (see Fig. \[fig\_1\]). The use of the annulus restricts devices from being too close to the receiver. This matches physical reality and also avoids problems with the classical inverse power law relationship between signal strength and distance [@Mai1]. In particular, having a minimum distance, $R_0$, prevents the signal strength from becoming infinite as the transmitter approaches the receiver. Similarly, we assume that a CR receiver is uniformly located in the same annulus. Finally, a CR transmitter is uniformly located in an annulus centered on the CR receiver. The dimensions of this annulus are defined by an inner radius, $R_0$, and an outer radius, $R_c$. Following the work of Jovicic and Viswanath [@Jovicic], the four channel gains which define the system are denoted $p, g, f, c$. In this paper, these complex channel gains include shadow fading, path-loss and Rayleigh fast fading effects. To introduce the required notation we consider the link from the CR transmitter to the PU receiver, the CP link. For this link we have: $$\label{linkdef}
|f|^2=\Gamma_{cp}|\tilde{f}|^2,$$ where $|\tilde{f}|^2$ is an exponential random variable with unit mean and $\Gamma_{cp}$ is the link gain. The link gain comprises shadow fading and distance dependent path loss effects so that, $$\label{signal}
\Gamma_{cp}=A_cL_{cp}r_{cp}^{-\gamma},$$ where $A_c$ is a constant, $L_{cp}=10^{\tilde{X}_{cp}/10}$ is lognormal, $\tilde{X}_{cp}$ is zero mean Gaussian and $r_{cp}$ is the link distance. The standard deviation which defines the lognormal is $\sigma$ (dB) and $\gamma$ is the path loss exponent. For convenience, we also write $L_{cp}=e^{X_{cp}}$ so that $X_{cp}=\beta \tilde{X}_{cp}$, $\beta=\ln(10)/10$ and $\sigma_{sf}^2$ is the variance of $X_{cp}$. Hence, for the CP link we have: $$\label{linkgain}
|f|^2=A_ce^{X_{cp}}r_{cp}^{-\gamma}|\tilde{f}|^2.$$ The other three links are defined similarly where $\tilde{p},
\tilde{g}, \tilde{c}$ are standard exponentials, $X_{pp}, X_{pc},
X_{cc},$ are Gaussians with the same parameters as $X_{cp}$ and $r_{pp}, r_{pc}, r_{cc}$ are link distances. However, for the links involving the PU transmitter we assume a constant $A_p$ in the model of link gains. The parameters $A_p$ and $A_c$ are constant and all links are assumed independent. The remaining parameters required are the transmit powers of the PU/CR devices, given by $P_p$/$P_c$, and the noise powers at the PU/CR receivers, given by $N_p$/$N_c$.
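The link model in (\[linkdef\])-(\[linkgain\]) is straightforward to simulate. The sketch below (Python/NumPy) draws samples of $|f|^2$ for the CP link; the specific values $A_c=1$ and $r_{cp}=50$ m are illustrative assumptions, while $\sigma=8$ dB and $\gamma=3.5$ are the defaults used later in the Results section:

```python
import numpy as np

rng = np.random.default_rng(0)

def link_gain_sq(A, r, sigma_db, gamma, n, rng):
    """Sample |h|^2 = A * exp(X) * r^(-gamma) * |h~|^2 for one link.

    X = beta * X~ with X~ ~ N(0, sigma_db^2) models lognormal shadowing
    (beta = ln(10)/10), and |h~|^2 is a unit-mean exponential, i.e.
    Rayleigh fast fading in power."""
    beta = np.log(10.0) / 10.0
    X = rng.normal(0.0, beta * sigma_db, size=n)
    fast = rng.exponential(1.0, size=n)
    return A * np.exp(X) * r ** (-gamma) * fast

# e.g. the CP link with illustrative constants A_c = 1 and r_cp = 50 m
f_sq = link_gain_sq(A=1.0, r=50.0, sigma_db=8.0, gamma=3.5, n=100_000, rng=rng)
```

Since $E[e^{X_{cp}}]=e^{\sigma_{sf}^2/2}$ and $E[|\tilde{f}|^2]=1$, the sample mean of `f_sq` should approach $A_c e^{\sigma_{sf}^2/2} r_{cp}^{-\gamma}$.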
For fixed channel coefficients, $p, g, f$ and $c$, Jovicic and Viswanath [@Jovicic] compute the highest rate that the CR can achieve subject to certain constraints. A key constraint is that the PU must not suffer any rate degradation due to the CR and this is achieved by the CR dedicating a portion, $\alpha$, of its transmit power to relaying the PU message. The parameter, $\alpha$, is therefore central to determining the CR rate. Furthermore, the results in [@Jovicic] are valid in the “low interference regime" defined by $a<1$ where: $$\label{defa}
a=\frac{\sqrt{N_c}\sqrt{\Gamma_{cp}}|\tilde{f}|}{\sqrt{N_p}\sqrt{\Gamma_{cc}}|\tilde{c}|}
=\frac{\sqrt{N_c}e^{X_{cp}/2}r_{cp}^{-\gamma/2}|\tilde{f}|}{\sqrt{N_p}e^{X_{cc}/2}r_{cc}^{-\gamma/2}|\tilde{c}|}.$$ In this regime, the highest CR rate is given by $$\label{CRRate}
R_{CR}=\log_2\Bigg(1+\frac{|c|^2(1-\alpha)P_c}{N_c}\Bigg),$$ with the power loss parameter, $\alpha$, defined by $$\label{alpha}
\alpha=\frac{|s|^2}{|t|^2}\Bigg[\frac{\sqrt{1+|t|^2(1+|s|^2)}-1}{1+|s|^2}\Bigg]^2,$$ where $|s|=\sqrt{P_p}\sqrt{\Gamma_{pp}}|\tilde{p}|N_p^{-1/2}$ and $|t|=\sqrt{P_c}\sqrt{\Gamma_{cp}}|\tilde{f}|N_p^{-1/2}$. Note that the definitions of $\alpha$ and $R_c$ are conditional on $a<1$. Since $a$ is a function of $\tilde{f}$ and $\tilde{c}$ we see that both $\tilde{f}$ and $\tilde{c}$ are conditional exponentials.
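For given channel magnitudes, (\[CRRate\]) and (\[alpha\]) are easy to evaluate numerically. A minimal sketch follows; the values of $|s|$, $|t|$, $|c|^2$, $P_c$ and $N_c$ below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def power_loss_alpha(s_abs, t_abs):
    """Power loss parameter alpha (eq. labelled 'alpha' in the text)."""
    s2, t2 = s_abs ** 2, t_abs ** 2
    return (s2 / t2) * ((np.sqrt(1.0 + t2 * (1.0 + s2)) - 1.0) / (1.0 + s2)) ** 2

def cr_rate(c_sq, alpha, P_c, N_c):
    """Highest CR rate in the low interference regime (eq. 'CRRate')."""
    return np.log2(1.0 + c_sq * (1.0 - alpha) * P_c / N_c)

# a strong PP link (|s| = 2) and a weak CP link (|t| = 0.1)
alpha = power_loss_alpha(s_abs=2.0, t_abs=0.1)
rate = cr_rate(c_sq=1.0, alpha=alpha, P_c=1.0, N_c=1.0)
```

For these values $\alpha\approx 0.0098$, already close to the approximation $|s|^2|t|^2/4=0.01$ developed in Section IV.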
![System model.[]{data-label="fig_1"}](drawing.eps){width="0.75\columnwidth"}
The low interference regime
===========================
The low interference regime is defined by $a<1$, where $a$ is defined in (\[defa\]). The probability, $P(a<1)$, depends on the distribution of $r_{cc}/r_{cp}$. Using standard transformation theory [@pap], some simple but lengthy calculations show that the CDF of $r_{cc}/r_{cp}$ is given by (\[ratioofdis\]).
$$\label{ratioofdis}
P\bigg(\frac{r_{cc}}{r_{cp}}<x\bigg) = \left\{ \begin{array}{ll} 0 &
\textrm{$x\leq\frac{R_0}{R_p}$}\\\\
\frac{0.5x^2(R_p^4-R_0^4x^{-4})-R_0^2(R_p^2-R_0^2x^{-2})}{(R_c^2-R_0^2)(R_p^2-R_0^2)}
&
\textrm{$\frac{R_0}{R_p}<x\leq\frac{R_c}{R_p}$}\\\\
\frac{0.5(R_c^4-R_0^4)-R_0^2(R_c^2-R_0^2)+(x^2R_p^2-R_c^2)(R_c^2-R_0^2)}{x^2(R_c^2-R_0^2)(R_p^2-R_0^2)} &
\textrm{$\frac{R_c}{R_p}<x\leq1$}\\\\
1-\frac{0.5R_c^4x^{-2}+0.5R_0^4x^2-R_0^2R_c^2}{(R_c^2-R_0^2)(R_p^2-R_0^2)}
& \textrm{$1<x\leq\frac{R_c}{R_0}$}\\\\
1 & \textrm{$x>\frac{R_c}{R_0}$}
\end{array} \right.$$
The CDF in (\[ratioofdis\]) can be written as:$$\label{simpratioofdis}
P\bigg(\frac{r_{cc}}{r_{cp}}<x\bigg) = c_{i0}x^{-2}+c_{i1}+c_{i2}x^2
\quad \textrm{$i=1,2,3,4,5$}$$ where $\Delta=(R_c^2-R_0^2)(R_p^2-R_0^2)$, $c_{10}=0$, $c_{11}=0$, $c_{12}=0$, $c_{20}=0.5R_0^4/\Delta$, $c_{21}=-R_0^2R_p^2/\Delta$, $c_{22}=0.5R_p^4/\Delta$, $c_{30}=0.5(R_0^4-R_c^4)/\Delta$, $c_{31}=R_p^2(R_c^2-R_0^2)/\Delta$, $c_{32}=0$, $c_{40}=-0.5R_c^4/\Delta$, $c_{41}=1+R_0^2R_c^2/\Delta$, $c_{42}=-0.5R_0^4/\Delta$, $c_{50}=0$, $c_{51}=1$ and $c_{52}=0$.
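The piecewise form (\[simpratioofdis\]) can be checked directly by Monte Carlo, treating $r_{cc}$ and $r_{cp}$ as independent annulus radii (consistent with the support $[R_0/R_p,\,R_c/R_0]$ of (\[ratioofdis\])); uniform placement over an annulus corresponds to a radial density proportional to $r$. The sketch below uses the default geometry $R_0=1$, $R_c=100$, $R_p=1000$ from the Results section:

```python
import numpy as np

rng = np.random.default_rng(1)
R0, Rc, Rp = 1.0, 100.0, 1000.0
D = (Rc**2 - R0**2) * (Rp**2 - R0**2)

def annulus_radius(r_in, r_out, n, rng):
    """Radius of a point uniform over an annulus (density proportional to r)."""
    return np.sqrt(r_in**2 + rng.uniform(size=n) * (r_out**2 - r_in**2))

def ratio_cdf(x):
    """CDF of r_cc / r_cp via the c_ij table of eq. 'simpratioofdis'."""
    c = [(0.0, 0.0, 0.0),
         (0.5 * R0**4 / D, -R0**2 * Rp**2 / D, 0.5 * Rp**4 / D),
         (0.5 * (R0**4 - Rc**4) / D, Rp**2 * (Rc**2 - R0**2) / D, 0.0),
         (-0.5 * Rc**4 / D, 1.0 + R0**2 * Rc**2 / D, -0.5 * R0**4 / D),
         (0.0, 1.0, 0.0)]
    edges = [R0 / Rp, Rc / Rp, 1.0, Rc / R0]
    c0, c1, c2 = c[int(np.searchsorted(edges, x))]  # pick the piece x falls in
    return c0 * x**-2 + c1 + c2 * x**2

n = 200_000
z = annulus_radius(R0, Rc, n, rng) / annulus_radius(R0, Rp, n, rng)
xs = [0.05, 0.5, 2.0]
analytic = [ratio_cdf(x) for x in xs]
empirical = [float(np.mean(z < x)) for x in xs]
```

With $2\times 10^5$ draws the empirical and analytical CDF values agree to well within Monte Carlo error at each test point.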
Now $P(a<1)=P(a^2<1)$ can be written as $P(Y<Ke^XZ^{-\gamma})$ where $Y={|\tilde{f}|^2}/{|\tilde{c}|^2}$, $K=N_p/N_c$, $X=X_{cc}-X_{cp}$ and $Z=r_{cc}/r_{cp}$. Thus the required probability is: $$\begin{aligned}
\label{lowintprob}
P(Y<Ke^XZ^{-\gamma})&{}={}&P(Z<K^{1/\gamma}e^{X/\gamma}Y^{-1/\gamma})\nonumber\\
&{}={}&E[P(Z<K^{1/\gamma}e^{X/\gamma}Y^{-1/\gamma}|X,Y)]\nonumber\\
&{}={}&E[P(Z<W|W)]\nonumber\\
&{}={}&\int_0^\infty P(Z<w)f_W(w)dw,\end{aligned}$$ where $W=K^{1/\gamma}e^{X/\gamma}Y^{-1/\gamma}$ and $f_W(.)$ is the PDF of $W$. Note that $P(Z<w)$, given in (\[simpratioofdis\]), only contains constants and terms involving $w^{\pm2}$. Hence, we need the following: $$\label{genint}
\int_\theta^\kappa\!\!w^{2m}f_W(w)dw=\int\!\!\int\!
(Ke^xy^{-1})^{2m/\gamma}f_{X,Y}(x,y)dxdy,$$ where $m=-1,0,1$ and $f_{X,Y}(.)$ is the joint PDF of $X,Y$. Now, since $W=K^{1/\gamma}e^{X/\gamma}Y^{-1/\gamma}$, the limits $\theta\leq w\leq \kappa$ in (\[genint\]) imply the following limits for $x$: $$\ln(\theta^{\gamma} K^{-1}y)\leq x\leq \ln(\kappa{^\gamma} K^{-1}y).$$ Let $\ln(\theta^{\gamma} K^{-1}y)=A$ and $\ln(\kappa{^\gamma}
K^{-1}y)=B$, then noting that $f_{X,Y}(x,y)=f_X(x)f_Y(y)$, the integral in (\[genint\]) becomes: $$\begin{aligned}
\label{genint2}
\int_\theta^\kappa w^{2m}f_W(w)dw&{}={}&\int_0^\infty
K^{2m/\gamma}y^{-2m/\gamma}f_Y(y)\nonumber\\&&{\times}\:\int_{A}^{B}
e^{2mx/\gamma}f_X(x)dxdy.\end{aligned}$$ Since $X\sim\mathcal{N}(0,2\sigma_{sf}^2)$, the inner integral in (\[genint2\]) becomes: $$\begin{aligned}
\label{genint3}
\int_{A}^{B}
\!\!\!e^{2mx/\gamma}&{}f_X(x){}&dx=\exp\Bigg(\frac{4m^2\sigma_{sf}^2}{\gamma^2}\Bigg)\nonumber\\&&{\times}\:
\Bigg[\Phi\Bigg(\frac{B-\frac{4m\sigma_{sf}^2}{\gamma}}{\sqrt{2}\sigma_{sf}}\Bigg)-
\Phi\Bigg(\frac{A-\frac{4m\sigma_{sf}^2}{\gamma}}{\sqrt{2}\sigma_{sf}}\Bigg)\Bigg],\nonumber\\\end{aligned}$$ where $\Phi$ is the CDF of a standard Gaussian. Since $f_Y(y)$ is the density function of the ratio of two standard exponentials, it is given by [@Ghasemi]: $$\label{ratioexp}
f_Y(y)=\frac{1}{(1+y)^2}, \qquad y\geq 0$$ Using (\[genint3\]) and (\[ratioexp\]), the total general integral in (\[genint\]) becomes: $$\begin{aligned}
\label{genintfinal}
\int_\theta^\kappa\!\!w^{2m}f_W(w)dw&{}={}&\int_0^\infty
\!\!K^{2m/\gamma}y^{-2m/\gamma}(1+y)^{-2}\exp\Bigg(\frac{4m^2\sigma_{sf}^2}{\gamma^2}\Bigg)\nonumber\\&&{\times}\:
\Bigg[\Phi\Bigg(\frac{B-\frac{4m\sigma_{sf}^2}{\gamma}}{\sqrt{2}\sigma_{sf}}\Bigg)-
\Phi\Bigg(\frac{A-\frac{4m\sigma_{sf}^2}{\gamma}}{\sqrt{2}\sigma_{sf}}\Bigg)\Bigg]dy\nonumber\\
&{}\triangleq{}&I(m,\theta,\kappa).\end{aligned}$$ Substituting (\[simpratioofdis\]) and (\[genintfinal\]) in (\[lowintprob\]) gives $P(a<1)$ as: $$\begin{aligned}
\label{lowintfinal}
P(a<1)&{}={}&P(Y<Ke^XZ^{-\gamma})\nonumber\\
&{}={}&\sum_{i=2}^5c_{i0}I(-1,\theta_i,\kappa_i)+c_{i1}I(0,\theta_i,\kappa_i)+c_{i2}I(1,\theta_i,\kappa_i)\nonumber\\
&{}={}&\sum_{i=2}^5\sum_{j=0}^2c_{ij}I(j-1,\theta_i,\kappa_i).\end{aligned}$$ Finally, it can be seen from the limits given in (\[ratioofdis\]) that $\kappa_i=\theta_{i+1}$. Hence, the final expression for the probability of occurrence of the low interference regime is: $$\begin{aligned}
\label{lowintfinal1}
P(a<1)&{}={}&\sum_{i=2}^5\sum_{j=0}^2c_{ij}I(j-1,\theta_i,\theta_{i+1}),\end{aligned}$$ where the $c_{ij}$ were defined after (\[simpratioofdis\]), $I(j-1,\theta_i,\theta_{i+1})$ is given in (\[genintfinal\]), $\theta_2=R_0/R_p$, $\theta_3=R_c/R_p$, $\theta_4=1$, $\theta_5=R_c/R_0$ and $\theta_6=\infty$. Hence, $P(a<1)$ can be derived in terms of a single numerical integral. For numerical convenience, (\[genintfinal\]) is rewritten using the substitution $v=y(y+1)^{-1}$ so that a finite range integral over $0<v<1$ is used for numerical results: $$\begin{aligned}
\label{genintfinalsim}
\int_\theta^\kappa\!\!\!w^{2m}&{}f_W(w)dw{}&\:=\int_0^1
\!\!\!K^{2m/\gamma}\Big(\frac{v}{1-v}\Big)^{-2m/\gamma}\exp\Bigg(\frac{4m^2\sigma_{sf}^2}{\gamma^2}\Bigg)\nonumber\\&&{\times}\:
\Bigg[\Phi\Bigg(\frac{B-\frac{4m\sigma_{sf}^2}{\gamma}}{\sqrt{2}\sigma_{sf}}\Bigg)-
\Phi\Bigg(\frac{A-\frac{4m\sigma_{sf}^2}{\gamma}}{\sqrt{2}\sigma_{sf}}\Bigg)\Bigg]dv\nonumber\\
&{}\triangleq{}&I(m,\theta,\kappa),\end{aligned}$$ where $\ln(\theta^{\gamma} K^{-1}\frac{v}{1-v})=A$ and $\ln(\kappa{^\gamma} K^{-1}\frac{v}{1-v})=B$. Further simplification of $(\ref{genintfinal})$ appears difficult but the result in (\[genintfinalsim\]) is stable and rapid to compute. A comparison of simulated and analytical results is shown in Fig. \[fig2\]. It can the seen that the analytical formula given in (\[genintfinalsim\]) perfectly matches the simulation results.
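As a cross-check of (\[lowintfinal1\]) and Fig. \[fig2\], $P(a<1)$ can also be estimated by direct Monte Carlo from (\[defa\]), since $a^2 = (N_c/N_p)\,e^{X_{cp}-X_{cc}}\,(r_{cc}/r_{cp})^{\gamma}\,|\tilde{f}|^2/|\tilde{c}|^2$. A sketch with the default parameters ($\sigma=8$ dB, $\gamma=3.5$, $R_p/R_c=10$), again treating the two annulus radii as independent:

```python
import numpy as np

rng = np.random.default_rng(2)
R0, Rc, Rp = 1.0, 100.0, 1000.0
gamma, sigma_db = 3.5, 8.0
Nc_over_Np = 1.0
beta = np.log(10.0) / 10.0
n = 500_000

def annulus_radius(r_in, r_out, n, rng):
    return np.sqrt(r_in**2 + rng.uniform(size=n) * (r_out**2 - r_in**2))

r_cc = annulus_radius(R0, Rc, n, rng)
r_cp = annulus_radius(R0, Rp, n, rng)
# X_cp - X_cc: difference of two independent N(0, (beta*sigma)^2) shadowing terms
X = rng.normal(0.0, beta * sigma_db, n) - rng.normal(0.0, beta * sigma_db, n)
Y = rng.exponential(1.0, n) / rng.exponential(1.0, n)   # |f~|^2 / |c~|^2
a_sq = Nc_over_Np * np.exp(X) * (r_cc / r_cp) ** gamma * Y
p_low = float(np.mean(a_sq < 1.0))
```

With these values `p_low` comes out well above 90%, in line with Fig. \[fig2\]; it is also immediate from the code that $a$ does not depend on the transmit power $P_c$.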
![Probability of occurrence of the low interference regime as a function of shadow fading variance, $\sigma$ (dB). The ratio $R_p/R_c$ is taken as 10.[]{data-label="fig2"}](fig4.eps){width="0.95\columnwidth"}
An Approximation For The Power Loss Parameter
=============================================
In this section we focus on the power loss parameter, $\alpha$, which governs how much of the transmit power the CR dedicates to relaying the primary message. The exact distribution of $\alpha$ appears to be rather complicated, even for fixed link gains (fixed values of $\Gamma_{cp},\Gamma_{pc},\Gamma_{pp}$ and $\Gamma_{cc}$). Hence, we consider an extremely simple approximation based on the idea that $|s|\times|t|$ is usually small and $|s|\times|t|>>|t|$. This approximation is motivated by the fact that the CP link is usually very weak compared to the PP link. This is because the CRs will employ much lower transmit powers than the PU. With this assumption it follows that $|t|^2(1+|s|^2)$ is small and we have: $$\begin{aligned}
\label{alphasimplify}
\sqrt{\alpha}&{}={}&\frac{|s|}{|t|}\Bigg[\frac{\big(1+|t|^2(1+|s|^2)\big)^{1/2}-1}{1+|s|^2}\Bigg]\nonumber\\
&{}\approx{}&\frac{|s|}{|t|}\Bigg[\frac{1/2|t|^2(1+|s|^2)}{1+|s|^2}\Bigg]\nonumber\\
&{}={}&\frac{|s||t|}{2}\nonumber\\
&{}={}&\sqrt{\alpha_{approx}}.\end{aligned}$$ Expanding $\alpha_{approx}$ we have: $$\label{alphaapprox}
\alpha_{approx}= \frac{A_pA_cP_p P_c}{4
N_p^2}e^{(X_{pp}+X_{cp})}r_{pp}^{-\gamma}r_{cp}^{-\gamma}|{\tilde{p}}|^2|{\tilde{f}}|^2.$$
This approximation is very effective for low values of $\alpha_{approx}$, but is poor for larger values since $\alpha_{approx}$ is unbounded whereas $0<\alpha<1$. To improve the approximation, we use the conditional distribution of $\alpha_{approx}$ given that $\alpha_{approx}<1$. This conditional variable is denoted ${\hat{\alpha}}$. The exact distribution of ${\hat{\alpha}}$ is difficult for variable link gains. However, the approximation has a simple representation which leads to considerable insight into the power loss and how it relates to system parameters. For example, $\alpha_{approx}$ is proportional to $|s|^2|t|^2$ so that high power loss may be caused by high values of $|s|$ or $|t|$ or moderate values of both. Now $|s|$ and $|t|$ relate to the PP and CP links respectively. Hence the CR is forced to use high power relaying the PU message when the CP link is strong. This is obvious as the relay action needs to make up for the strong interference caused by the CR. The second scenario is that the CR has high $\alpha$ when the PP link is strong. This is less obvious, but here the PU rate is high and a substantial relaying effort is required to counteract the effects of interference on a high rate link. This is discussed further in Section V. It is worth noting that the condition $|s||t|>>|t|$ holds only for some specific values of channel parameters. Hence, although it is motivated by a sensible physical scenario, it certainly needs checking. Results in Figs. \[fig3\], \[fig6\] and \[fig4\] show that it works very well.
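The quality of the approximation is easy to probe numerically. The sketch below draws $|s|^2$ and $|t|^2$ as exponentials whose means (chosen here purely for illustration) mimic a strong PP link and a weak CP link, and compares (\[alpha\]) with (\[alphaapprox\]); note that $\sqrt{1+\epsilon}-1\leq\epsilon/2$ guarantees $\alpha\leq\alpha_{approx}$ always:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
s2 = 4.0 * rng.exponential(1.0, n)     # |s|^2, mean mu_s = 4    (strong PP link)
t2 = 0.01 * rng.exponential(1.0, n)    # |t|^2, mean mu_t = 0.01 (weak CP link)

# exact alpha (eq. 'alpha') and the small-|t| approximation |s|^2 |t|^2 / 4
exact = (s2 / t2) * ((np.sqrt(1.0 + t2 * (1.0 + s2)) - 1.0) / (1.0 + s2)) ** 2
approx = s2 * t2 / 4.0

mask = approx < 1.0                    # the conditioning that defines alpha-hat
rel_err = 1.0 - exact[mask] / approx[mask]
```

For these (assumed) means the median relative error is of the order of a percent, consistent with the close agreement seen in Fig. \[fig3\].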
For fixed link gains, the distribution of ${\hat{\alpha}}$ is: $$\begin{aligned}
\label{begin}
P(\alpha_{approx}<x|\alpha_{approx}<1)&{}={}&P(\hat{\alpha}<x)\nonumber\\
&{}={}&\frac{P(\alpha_{approx}<x)}{P(\alpha_{approx}<1)}.\end{aligned}$$ Thus, to compute the distribution function of $\hat{\alpha}$ we need to determine $P(\alpha_{approx}<x)$ which can be written as $$\label{firststep}
P(\alpha_{approx}<x)=P(|s|^2|t|^2<4x).$$ In the analytical approximation below we assume that $|s|^2$ and $|t|^2$ are exponential, i.e., we ignore the conditioning on $a<1$. The conditioning can be handled exactly but results suggest that a simple exponential approximation is satisfactory. Let $E(|s|^2)=\mu_s$, $E(|t|^2)=\mu_t$ with $\mu_s=P_p\Gamma_{pp}/N_p$ and $\mu_t=P_c\Gamma_{cp}/N_p$. Further, suppose that $U$ and $V$ represent i.i.d. standard exponentials, then we have $$\begin{aligned}
\label{approxcdf}
P(\alpha_{approx}<x)&{}={}&P\bigg(UV<\frac{4x}{\mu_s\mu_t}\bigg)\nonumber\\
&{}={}&E_V\bigg(P\bigg(U<\frac{4x}{V\mu_s\mu_t}\bigg)\bigg)\nonumber\\
&{}={}&E_V\bigg(1-\exp\bigg(\frac{-4x}{V\mu_s\mu_t}\bigg)\bigg)\nonumber\\
&{}={}&1-\int_0^\infty\exp\bigg(\frac{-4x}{v\mu_s\mu_t}-v\bigg)dv\nonumber\\
&{}={}&1-\sqrt{\frac{16x}{\mu_s\mu_t}}K_1\bigg(\sqrt{\frac{16x}{\mu_s\mu_t}}\bigg),\end{aligned}$$ where $K_1(.)$ represents the modified Bessel function of the second kind and the integral in (\[approxcdf\]) can be found in [@int]. Using the expression given in (\[approxcdf\]), the CDF of $\hat{\alpha}$ follows from (\[begin\]). Note that the CDF of $R_c$ can easily be obtained in the form of a single numerical integral for fixed powers.
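The closed form (\[approxcdf\]) is straightforward to verify against simulation. A sketch using `scipy.special.k1` for the modified Bessel function follows; the means $\mu_s$, $\mu_t$ below are illustrative assumptions:

```python
import numpy as np
from scipy.special import k1

rng = np.random.default_rng(4)
mu_s, mu_t = 4.0, 0.01   # illustrative means of |s|^2 and |t|^2

def alpha_approx_cdf(x, mu_s, mu_t):
    """P(alpha_approx < x) = 1 - z*K_1(z) with z = sqrt(16x / (mu_s mu_t))."""
    z = np.sqrt(16.0 * x / (mu_s * mu_t))
    return 1.0 - z * k1(z)

# Monte Carlo: alpha_approx = |s|^2 |t|^2 / 4 with exponential |s|^2, |t|^2
n = 500_000
samples = mu_s * rng.exponential(1.0, n) * mu_t * rng.exponential(1.0, n) / 4.0
xs = [0.001, 0.01, 0.05]
analytic = [float(alpha_approx_cdf(x, mu_s, mu_t)) for x in xs]
empirical = [float(np.mean(samples < x)) for x in xs]
```

The agreement at each test point is within Monte Carlo error; the analytic curve is simply the CDF of a product of two independent exponentials, scaled by $\mu_s\mu_t/4$.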
Results
=======
![PDFs of $\log_{10}(\alpha)$ and its approximation $\log_{10}(\hat{\alpha})$.[]{data-label="fig3"}](fig2.eps){width="0.95\columnwidth"}
![Mean value of the power loss parameter, $\alpha$, as a function of the ratio $\frac{R_c}{R_p}$.[]{data-label="fig5"}](fig5.eps){width="0.95\columnwidth"}
![Comparison of the exact and analytical CDFs of the power loss factor on a logarithmic scale for fixed link gains. Results are shown for 5 drops.[]{data-label="fig6"}](fig3.eps){width="0.95\columnwidth"}
![CDF of the CR rates with the exact $\alpha$ and the approximate $\hat{\alpha}$.[]{data-label="fig4"}](fig1.eps){width="0.97\columnwidth" height="63.5mm"}
![Mean value of the CR rate loss as a function of $\gamma$.[]{data-label="fig7"}](fig7.eps){width="0.95\columnwidth"}
![Variation of the mean CR rate with the power inflation factor, $\beta$.[]{data-label="fig8"}](fig6.eps){width="0.99\columnwidth"}
In the results section, the default parameters are $\sigma=8$ dB, $\gamma=3.5$, $R_0=1$, $R_c=100$ m, $R_p=1000$ m and $N_p=N_c=P_p=P_c=1$. The parameter $A_p$ is determined by ensuring that the PP link has an SNR $\geq5$ dB 95% of the time in the absence of any interference. Similarly, assuming that both PU and CR devices have the same threshold power at their cell edges, the constant $A_c=A_p(R_p/R_c)^{-\gamma}$. Unless otherwise stated these parameters are used in the following.
Low interference regime
-----------------------
In Fig. \[fig2\] we show that the low interference regime, $a<1$, is the dominant scenario. For typical values of $\gamma\in[3,4]$ and $\sigma\in[6,12]$ dB we find that $P(a<1)$ is usually well over 90%. Figure \[fig2\] also verifies the analytical result in (\[lowintfinal\]).
The relationship between $P(a<1)$ and the system parameters is easily seen from (\[defa\]) which contains the term $\big(r_{cc}/r_{cp}\big)^{\gamma/2}\exp\big((X_{cc}-X_{cp})/2\big)$. When $R_c<<R_p$, this term decreases dramatically as $\gamma$ increases, while it increases as $\sigma$ increases. Also, as $R_c$ increases $r_{cc}/r_{cp}$ tends to increase, which in turn decreases $P(a<1)$. When $R_c\approx R_p$ the low and high interference scenarios occur with similar frequency. This may be a relevant system consideration if CRs were to be introduced in cellular bands where the cellular hot spots, indoor micro-cells and CRs would have roughly the same coverage radius. Note that $a$ is independent of the transmit power, $P_c$. These conclusions are all verified by simulations which are omitted for reasons of space.
Statistics of the power loss parameter, $\alpha$
------------------------------------------------
Figures 3-5 all focus on the properties of $\alpha$. Figure \[fig3\] shows that the probability density function (PDF) of $\alpha$ is extremely well approximated by the PDF of $\hat{\alpha}$. In Fig. \[fig5\] we see that $E(\alpha)$ increases with increasing values of $R_c/R_p$ and decreasing values of $\gamma$. This can be seen from (\[alphaapprox\]) where $\alpha_{approx}$ contains a $(r_{pp}r_{cp})^{-\gamma}$ term which increases as $\gamma$ decreases. The increase of $E(\alpha)$ with $R_c$ follows from the corresponding increase in $P_c$ to cater for larger $R_c$ values. In Fig. \[fig5\] we have limited $R_c/R_p$ to a maximum of $30\%$ as beyond this value the high interference regime is also present with a non-negligible probability. In Fig. \[fig6\] we see the analytical CDF in (\[approxcdf\]) verified by simulations for five different scenarios of fixed link gains (simply the first five simulated values of $\Gamma_{pp}$ and $\Gamma_{cp}$). Note that the different curves each correspond to a random drop of the PU and CR transmitters. This fixes the distance and shadow fading terms in the link gains in (\[signal\]), so that the remaining variation in (\[linkdef\]) is only the Rayleigh fading. By computing a large number of such CDFs and averaging them over the link gains a single CDF can be constructed. This approach can be used to find the PDF of $\hat{\alpha}$ as shown in Fig. \[fig3\]. Note that the curves in Fig. \[fig6\] do not match exactly since the analysis is for $\hat{\alpha}$ and the simulation is for $\alpha$.
CR rates
--------
Figures 6-8 focus on the CR rate $R_{CR}$. Figure \[fig4\] demonstrates that the use of $\hat{\alpha}$ is not only accurate for $\alpha$ but also leads to excellent agreement for the CR rate, $R_{CR}$. This agreement holds over the whole range and for all typical parameter values. Figure \[fig7\] shows the % loss given by $[R_{CR}(\alpha=0)-R_{CR}(\alpha)]/[R_{CR}(\alpha=0)]\%$. The loss decreases as $\gamma$ increases, as discussed above, and increases with $\sigma$. From (\[alphaapprox\]) it is clear that increasing $\sigma$ leads to larger values of $\exp(X_{pp}+X_{cp})$ which in turn increases $\alpha$ and the rate loss. Note that the rate loss is minor for $\sigma\in[8-10]$ dB with $R_c=R_p/10$. In a companion paper [@ICC], we show that the interference to the PU increases with $\sigma$ and decreases with $\gamma$. These results reinforce this observation, i.e., when the PU suffers more interference ($\sigma$ is larger) the CR has to devote a larger fraction of its power to the PU. Consequently the percentage rate loss is higher.
Finally, in Fig. \[fig8\] we investigate the gains available to the CR through increasing transmit power. The original transmit power, $P_c$, is scaled by $\beta$ and the mean CR rate is simulated over a range of $\beta$ values. Due to the relaying performed by the CR, the PU rate is unaffected by the CR for any values of $\beta$ and so the CR is able to boost its own rate with higher transmit power. Clearly the increased value of $\alpha$ for higher values of $\beta$ is outweighed by the larger $P_c$ value and so the CR does achieve an overall rate gain. In a very coarse way these results suggest that multiple CRs may be able to co-exist with the PU since the increased interference power might be due to several CRs and the rate gain might be spread over several CRs. Of course, this conclusion is speculative as the analysis is only valid for a single CR.
Conclusion
==========
In this paper we derive the probability that the “low interference regime” holds and demonstrate the conditions under which this is the dominant scenario. We show that the probability of the low interference regime is significantly influenced by the system geometry. When the CR coverage radius is small relative to the PU radius, the low interference regime is dominant. When the CR coverage radius approaches a value similar to the PU coverage radius, the low and high interference regimes both occur with roughly equal probability. In addition we have derived a simple, accurate approximation to $\alpha$ which gives considerable insight into the system capacity. The $\alpha$ approximation shows that CR rates are reduced by large CR coverage zones, small values of $\gamma$ and large values of $\sigma$. Finally, we have shown that the CR can increase its own rate with higher transmit powers, although, as expected, the rate gain grows only slowly with power.
---
abstract: 'We consider the problem of estimating the parameters in a pairwise graphical model in which the distribution of each node, conditioned on the others, may have a different parametric form. In particular, we assume that each node’s conditional distribution is in the exponential family. We identify restrictions on the parameter space required for the existence of a well-defined joint density, and establish the consistency of the neighbourhood selection approach for graph reconstruction in high dimensions when the true underlying graph is sparse. Motivated by our theoretical results, we investigate the selection of edges between nodes whose conditional distributions take different parametric forms, and show that efficiency can be gained if edge estimates obtained from the regressions of particular nodes are used to reconstruct the graph. These results are illustrated with examples of Gaussian, Bernoulli, Poisson and exponential distributions. Our theoretical findings are corroborated by evidence from simulation studies.'
author:
- |
Shizhe Chen, Daniela Witten, and Ali Shojaie\
Department of Biostatistics, University of Washington, Box 357232,\
Seattle, WA 98195-7232\
bibliography:
- 'paper-ref.bib'
date: 'August 1st, 2014'
title: Selection and Estimation for Mixed Graphical Models
---
**Keywords:** compatibility; conditional likelihood; exponential family; high-dimensionality; model selection consistency; neighbourhood selection; pairwise Markov random field.
Introduction {#intro}
============
In this paper, we consider the task of learning the structure of an undirected graphical model encoding pairwise conditional dependence relationships among random variables. Specifically, suppose that we have $p$ random variables represented as nodes of the graph $G=(V,E)$, with the vertex set $V= \{1, \ldots, p \}$ and the edge set $E \subseteq V \times V $. An edge in the graph indicates a pair of random variables that are conditionally dependent given all other variables. The problem of reconstructing the graph from a set of $n$ observations has attracted a lot of interest in recent years, especially when $p>n$ and $p(p-1)/2$ edges must be estimated from $n$ observations.
Many authors have studied the estimation of high-dimensional undirected graphical models in the setting where the distribution of each node, conditioned on all other nodes, has the same parametric form. In particular, Gaussian graphical models have been studied extensively (see e.g. @meinshausen2006, @yuan2007, @friedman2008, @rothman2008, @wainwright2008, @peng2009, @ravikumar2011), and have been generalized to account for non-normality and outliers (see e.g. @miyamura2006, @finegold2011, @vogel2011, @sun2012). Others have considered the setting in which all node-conditional distributions are Bernoulli (@lee2006, @hofling2009, and @ravikumar2010), multinomial (@jalali2011), Poisson [@allen2012], or any univariate distribution in the exponential family [@yang2012; @yang2013].
In this paper, we seek to estimate a graphical model in which the variables are of different types. Here, the type of a node refers to the parametric form of its distribution, conditioned on all other nodes. For instance, the variables might include DNA nucleotides, taking binary values, and gene expression measured using RNA-sequencing, taking non-negative integer values. We could model the first set of nodes as Bernoulli, which means that each of their distributions, conditional on the other nodes, is Bernoulli; similarly, we could model the second set as Poisson. We assume that the type of each node is known a priori, and refer to this setup as a mixed graphical model.
In the low-dimensional setting, @lauritzen1996 studied a special case of the mixed graphical model, known as the conditional Gaussian model, in which each node is either Gaussian or Bernoulli. More recent work has focused on the high-dimensional setting. @lee2012 proposed two algorithms for reconstructing conditional Gaussian models using a group lasso penalty. @cheng2013 modified this approach by using a weighted $\ell_1$-penalty.
A related line of research considers semi-parametric or non-parametric approaches for estimating conditional dependence relationships (@liu2009, @xue2012, @fellinghauer2013, @voorman2014), among which @fellinghauer2013 is specifically proposed for mixed graphical models. However, despite their flexibility, these non-parametric methods are often less efficient than their parametric counterparts, if the type of each node is known.
In this paper, we propose an estimator and develop theory for the parametric mixed graphical model, under a much more general setting than existing approaches (e.g. @lee2012). We allow the conditional distribution of each node to belong to the exponential family. Unlike @yang2012, nodes may be of different types. For instance, within a single graph, some nodes may be Bernoulli, some may be Poisson, and some may be exponential.
In parallel efforts, @yang2014 recently presented general results on strong compatibility for mixed graphical models for which the node-conditional distributions belong to the exponential family, and for which the graph contains only two types of nodes. We instead consider the setting where the graph can contain more than two types of nodes, and provide specific requirements for strong compatibility for some common distributions.
A Model for Mixed Data {#model}
======================
Conditionally-Specified Models for Mixed Data {#condmodel}
---------------------------------------------
We consider the pairwise graphical model [@wainwright2006], which takes the form $$p(x) \propto \exp \left\{ \sum\limits_{s=1}^p f_s(x_s) + \sum\limits_{s = 2 }^p \sum\limits_{t < s } f_{ts} (x_s,x_t) \right\},
\label{PGM}$$ where ${x}={(x_1, ..., x_p)}^\T$ and $f_{ts}= 0$ for $\{t,s\} \notin E$. Here, $f_s(x_s)$ is the node potential function, and $f_{st} (x_s,x_t)$ the edge potential function. We further simplify the pairwise interactions by assuming that $f_{st} (x_s,x_t) = \theta_{st} x_s x_t = \theta_{ts} x_s x_t$, so that we can write the parameters associated with edges in a symmetric square matrix $\Theta = (\theta_{st})_{p \times p}$ where the diagonal elements equal zero. The joint density can then be written as $$p({x}) = \exp \left\{ \sum\limits_{s=1}^p f_s(x_s) + \frac{1}{2} \sum\limits_{s=1}^{p}\sum\limits_{t \neq s} \theta_{ts} x_s x_t - A(\Theta, \alpha) \right\},
\label{joint}$$ where $A(\Theta,\alpha)$ is the log-partition function, a function of $\Theta$ and $\alpha$. Here $\alpha$ is a $ K \times p$ matrix of parameters involved in the node potential functions: that is, $f_s(x_s)$ involves $\alpha_s$, the $s$th column of $\alpha$. $K$ is some known integer. For $\{s,t\} \notin E$, the edge potentials satisfy $\theta_{st}=\theta_{ts}=0$. We define the neighbours of the $s$th node as $N(x_s)=\{t: \theta_{st}=\theta_{ts}\neq 0 \}$.
In principle, given a parametric form for the joint density (\[joint\]), we can estimate the conditional dependence relationships among the $p$ variables, and hence the edges in the graph. But this approach requires the calculation of the log-partition function $A(\Theta, \alpha)$, which is often intractable. To overcome this, we instead use the framework of conditionally-specified models [@besag1974]: we specify the distribution of each node conditional on the others, and then combine the $p$ conditional distributions to form a single graphical model. This approach has been widely used in estimating high-dimensional graphical models where all nodes are of the same type [@meinshausen2006; @ravikumar2010; @allen2012; @yang2012]. However, as we will discuss in Section \[compatibility\], a conditionally-specified model may not correspond to a valid joint distribution.
Define ${x}_{-s}={(x_1,...,x_{s-1}, x_{s+1},...,x_p)}^\T$. We consider conditional densities of the form $$p(x_s \mid {x}_{-s}) = \exp \left\{ f_s(x_s) + \sum\limits_{ t\neq s} \theta_{ts} x_t x_s - D_s(\eta_s) \right\},
\label{cond}$$ where $\eta_s=\eta_s(\Theta_s, x_{-s}, \alpha_s)$ is a function of $\alpha_s$, $x_{-s}$, and $\Theta_{s}$, and $\Theta_s$ is the $s$th column of $\Theta$ without the diagonal element. Suppose $f_s(x_s)= \alpha_{1s} x_s + \alpha_{2s} x_s^2/2+ \sum_{k = 3}^{K} \alpha_{ks} B_{ks}(x_s)$, where $\alpha_{ks}$ is a parameter, which could be 0, and $B_{ks}(x_s)$ is a known function for $k=3,\ldots,K$. Under this assumption, (\[cond\]) belongs to the exponential family.
The assumed form of $f_s(x_s)$ is quite general. We now consider some special cases of (\[cond\]) corresponding to commonly-used distributions in the exponential family, for which $f_s(x_s)$ takes a very simple form. In the following examples, we assume that $\eta_s(\Theta_s, x_{-s}, \alpha_s)=\alpha_{1s} + \sum_{t: \; t \neq s} \theta_{ts} x_t$.
The conditional density is Gaussian and $\alpha_{2s}=-1$: $$p(x_s \mid {x}_{-s})=\exp \left\{ -\frac{1}{2} x_s^2 + \eta_s x_s - \frac{1}{2} \eta_s^2 -\frac{1}{2}\log(2\pi) \right\}, \quad x_s \in \mathcal{R},
\label{cond:Gaussian}$$ where $f_s(x_s) = \alpha_{1s} x_{s} - x_s^2/2 $ and $D_s(\eta_s)= \eta_s^2/2+\log(2\pi)/2$.
The conditional density is Bernoulli. Instead of coding $x_s$ as $\{0,1\}$, we code $x_s$ as $\{-1,1\}$. This yields the conditional density $$p(x_s\mid {x}_{-s})=\exp \left\{ \eta_s x_s - D_s(\eta_s) \right\}, \quad x_s \in \{ -1, 1\},
\label{cond:Bernoulli}$$ where $f_s(x_s)=\alpha_{1s} x_s$ and $D_s(\eta_s)=\log\{ \exp(\eta_s)+\exp(-\eta_s )\}$.
The conditional density is Poisson: $$p(x_s \mid {x}_{-s})=\exp \left\{ \eta_s x_s -\log(x_s !)- D_s(\eta_s)\right\}, \quad x_s \in \{0, 1, \ldots \} ,
\label{cond:Poisson}$$ where $f_s(x_s)=\alpha_{1s} x_s-\log(x_s!)$ and $D_s(\eta_s)=\exp(\eta_s)$.
The conditional density is exponential: $$p(x_s \mid {x}_{-s})=\exp \left\{ \eta_s x_s - D_s(\eta_s)\right\}, \quad x_s \in \mathcal{R}^{+},
\label{cond:exp}$$ where $f_s(x_s)=\alpha_{1s} x_s$ and $D_s(\eta_s)=-\log(-\eta_s)$.
These four examples have been studied in the context of conditionally-specified graphical models in which all nodes are of the same type (@besag1974, @meinshausen2006, @ravikumar2010, @allen2012, @yang2012).
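As a concrete illustration of the shared structure of (\[cond:Gaussian\])–(\[cond:exp\]), the sketch below (our own, not part of the paper; all function names are ours) evaluates the natural parameter $\eta_s$ and the four log-partition functions $D_s$, and uses them to compute a Bernoulli conditional log-density:

```python
import math

# Natural parameter eta_s = alpha_1s + sum_t theta_ts * x_t, shared by the
# four conditional families; they differ only through D_s(eta).
def eta(alpha_1s, theta_s, x_rest):
    return alpha_1s + sum(th * x for th, x in zip(theta_s, x_rest))

# Log-partition functions for the four node types (illustrative).
D = {
    "gaussian":    lambda e: 0.5 * e**2 + 0.5 * math.log(2 * math.pi),
    "bernoulli":   lambda e: math.log(math.exp(e) + math.exp(-e)),  # x in {-1,+1}
    "poisson":     lambda e: math.exp(e),
    "exponential": lambda e: -math.log(-e),  # only valid for eta < 0
}

# Conditional log-density for a Bernoulli node: log p(x_s | x_{-s}) =
# eta * x_s - D(eta), since f_s(x_s) = alpha_1s * x_s is absorbed into eta.
def bernoulli_cond_logpdf(x_s, alpha_1s, theta_s, x_rest):
    e = eta(alpha_1s, theta_s, x_rest)
    return e * x_s - D["bernoulli"](e)
```

A quick sanity check is that the two Bernoulli conditional probabilities, $\exp(\eta-D(\eta))$ and $\exp(-\eta-D(\eta))$, sum to one for any $\eta$.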
In what follows, we will consider the conditionally-specified mixed graphical model, with conditional distributions given by (\[cond\]), in which each node can be of a different type. This class of mixed graphical models is not closed under marginalization: for instance, given a graph composed of Gaussian and Bernoulli nodes, integrating out the Bernoulli nodes leads to a conditional density that is a mixture of Gaussians, which does not belong to the exponential family.
Compatibility of Conditionally-Specified Models {#compatibility}
-----------------------------------------------
Under what circumstances does the conditionally-specified model with node-conditional distributions given in (\[cond\]) correspond to a well-defined joint distribution? We first adapt and restate a definition from @wang2008, which applies to any conditional density.
A non-negative function $g$ is capable of generating a conditional density function $p(y \mid {x})$ if $$p(y \mid {x})=\frac{g(y, {x})}{\int g(y, {x}) dy}.$$ Two conditional densities are said to be compatible if there exists a function $g$ that is capable of generating both conditional densities. When $g$ is a density, the conditional densities are called strongly compatible. \[compat:defn\]
The following proposition relates Definition \[compat:defn\] to the conditional density in (\[cond\]). Its proof, and those of other statements in this paper, are available in the Supplementary Material.
Let $ {x}={(x_1,...,x_p)}^\T$ be a random vector. Suppose that for each $x_s$, the conditional density takes the form of (\[cond\]). If $\theta_{st}=\theta_{ts}$, then the conditional densities are compatible. Furthermore, any function $g$ that is capable of generating the conditional densities is of the form $$g( {x})\propto \exp \left\{ \sum\limits_{s=1}^p f_s(x_s) + \frac{1}{2}\sum\limits_{s=1}^{p}\sum\limits_{t\neq s} \theta_{ts} x_s x_t \right\}.
\label{g}$$ \[compat:prop\]
Under the conditions of Proposition \[compat:prop\], if we further assume that $g$ in (\[g\]) is integrable, then by Definition \[compat:defn\], the conditional densities of the form (\[cond\]) are strongly compatible. Proposition \[compat:prop\] indicates that, provided that (\[joint\]) is a valid joint distribution, we can arrive at it via the conditional densities in (\[cond\]). This justifies the conditionally-specified modeling approach taken in this paper. Proposition \[compat:prop\] is closely related to Section 4$\cdot$3 in @besag1974 and Proposition 1 in @yang2012, with small modifications. More general theory is developed in @wang2008.
We now return to the four examples (\[cond:Gaussian\])–(\[cond:exp\]). Lemma \[lmm:compat\] summarizes the conditions under which a conditionally-specified model with non-degenerate conditional distributions of the form (\[cond:Gaussian\])–(\[cond:exp\]) leads to a valid joint distribution.
If $\theta_{st}=\theta_{ts}$, the subset of conditions with a dagger ($\dagger$) in Table \[tab1\] is necessary and sufficient for the conditional densities in (\[cond:Gaussian\])–(\[cond:exp\]) to be compatible. Moreover, the complete set of conditions in Table \[tab1\] is necessary and sufficient for the conditional densities in (\[cond:Gaussian\])–(\[cond:exp\]) to be strongly compatible. \[lmm:compat\]
To simplify the presentation of the conditions for the Gaussian nodes, in Table \[tab1\] it is assumed that $J$ is the index set of the Gaussian nodes. Without loss of generality, we further assume that the nodes are ordered such that $J=\{1,\ldots,m\}$, and define $$\Theta_{JJ}= {\begin{pmatrix}}\alpha_{2 1} & \theta_{1 2} & \cdots & \theta_{1 m} \\
\theta_{2 1} & \alpha_{2 2} &\cdots &\theta_{2 m} \\
\vdots & \vdots & \ddots & \vdots \\
\theta_{m 1} & \theta_{m 2} & \cdots & \alpha_{2 m}
{\end{pmatrix}}.
\label{Thetajj}$$
  -------------   -----------------------   ---------------------   -----------------------------   -------------------------------------------------------
                  Gaussian                  Poisson                 exponential                     Bernoulli
  Gaussian        $\Theta_{JJ} \prec 0$     $\theta_{ts}=0$         $\theta_{ts}=0^\dagger$         $\theta_{ts} \in \mathcal{R}^\dagger$
  Poisson                                   $\theta_{ts}\leq 0$     $\theta_{ts}\leq 0^\dagger$     $\theta_{ts} \in \mathcal{R}^\dagger$
  exponential                                                       $\theta_{ts}\leq 0^\dagger$     $\sum_{s \in I}|\theta_{st}| < - \alpha_{1t}^\dagger$
  Bernoulli                                                                                         $\theta_{ts} \in \mathcal{R}^\dagger$
  -------------   -----------------------   ---------------------   -----------------------------   -------------------------------------------------------
: Restrictions on the parameter space required for compatibility or strong compatibility of the conditional densities in (\[cond:Gaussian\])–(\[cond:exp\])
\[tab1\]
The column specifies the type of the $s$th node, and the row specifies the type of the $t$th node. Conditions marked with a dagger ($\dagger$) are necessary and sufficient for the conditional densities in (\[cond:Gaussian\])–(\[cond:exp\]) to be compatible, and the complete set of conditions is necessary and sufficient for the conditional densities to be strongly compatible. For compatibility to hold for a Gaussian node $x_s$, $\alpha_{2s}<0$ is also required. Here $\Theta_{JJ}$ is as defined in (\[Thetajj\]), and $I$ denotes the set of Bernoulli nodes.
Table \[tab1\] reveals the set of restrictions on the parameter space that must hold in order for the conditional densities in (\[cond:Gaussian\])–(\[cond:exp\]) to be compatible or strongly compatible. The diagonal entries of this table were previously studied in [@besag1974]. In general, strong compatibility imposes more restrictions on the parameter space than compatibility. For instance, compatibility does not place any restrictions on edges between two Poisson nodes, but for strong compatibility to hold, the edge potentials must be negative. Compatibility and strong compatibility even restrict the relationships that can be modeled using the conditional densities (\[cond:Gaussian\])–(\[cond:exp\]): for instance, no edges are possible between Gaussian and exponential nodes, or between Gaussian and Poisson nodes.
To summarize, given conditional densities of the form –, existence of a joint density imposes substantial constraints on the parameter space, and thus limits the flexibility of the corresponding graph. However, we will see in Section \[nojd\] that it is possible to consistently estimate the structure of a graph even when the requirements for compatibility or strong compatibility are violated, i.e., even in the absence of a joint density.
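These restrictions are mechanical enough to verify in code. The following sketch (entirely our own; the function name, the string encoding of node types, and the `alpha2` argument for the Gaussian diagonal are all illustrative conventions) checks the strong-compatibility conditions of Table \[tab1\] for a given symmetric edge matrix $\Theta$:

```python
import numpy as np

# Check the Table 1 strong-compatibility restrictions. Theta is the symmetric
# p x p edge matrix, alpha1 the linear node parameters, types a list of node
# types, and alpha2 the quadratic Gaussian parameters (used on the diagonal
# of Theta_JJ). Returns True if all restrictions hold.
def strongly_compatible(Theta, alpha1, types, alpha2=None):
    p = len(types)
    ok = True
    # Gaussian block Theta_JJ (with alpha_2s on the diagonal) must be
    # negative definite.
    J = [s for s in range(p) if types[s] == "gaussian"]
    if J:
        TJJ = Theta[np.ix_(J, J)].astype(float).copy()
        np.fill_diagonal(TJJ, [alpha2[s] for s in J])
        ok &= bool(np.all(np.linalg.eigvalsh(TJJ) < 0))
    for s in range(p):
        for t in range(s + 1, p):
            pair = {types[s], types[t]}
            th = Theta[s, t]
            if pair == {"gaussian", "poisson"} or pair == {"gaussian", "exponential"}:
                ok &= (th == 0)                       # no such edges allowed
            elif pair <= {"poisson", "exponential"}:  # P-P, P-E, E-E edges
                ok &= (th <= 0)
    # Each exponential node t must satisfy sum_{s in I} |theta_st| < -alpha_1t,
    # with I the set of Bernoulli nodes.
    I = [s for s in range(p) if types[s] == "bernoulli"]
    for t in range(p):
        if types[t] == "exponential":
            ok &= (sum(abs(Theta[s, t]) for s in I) < -alpha1[t])
    return bool(ok)
```

For example, two Poisson nodes joined by a negative edge potential pass the check, while a positive potential fails it.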
While Table \[tab1\] only examines conditionally-specified models composed of the conditional densities in (\[cond:Gaussian\])–(\[cond:exp\]), the estimator proposed in Section \[pf\] and the theory developed in Sections \[theory\] and \[nojd\] apply to other types of conditional densities of the form (\[cond\]).
Estimation Via Neighbourhood Selection {#pf}
======================================
Estimation {#estimation}
----------
We now present a neighbourhood selection approach for recovering the structure of a mixed graphical model, by maximizing penalized conditional likelihoods node-by-node. A similar approach has been studied in the setting where all nodes in the graph are of the same type [@meinshausen2006; @ravikumar2010; @allen2012; @yang2012].
Recall from Section \[condmodel\] that $f_s(x_s)= \alpha_{1s} x_s + \alpha_{2s} x_s^2/2+ \sum_{k = 3}^{K} \alpha_{ks} B_{ks}(x_s)$. We now simplify the problem by assuming that $\alpha_{ks}$ is known, and possibly zero, for $k\geq 2$. Let $X$ denote an $n \times p$ data matrix, with the $i$th row given by $x^{(i)}$. From now on, we use an asterisk to denote the true parameter values. We estimate $\Theta^*_s$ and $\alpha^*_{1s}$, the parameters for the $s$th node, as $$\underset{{\Theta_s \in \mathcal{R}^{p-1},\ \alpha_{1s} \in \mathcal{R}}}{\text{{arg min}}} \quad -\ell_s (\Theta_s, \alpha_{1s}; {X} ) +\lambda_n \| \Theta_s \|_1,
\label{pl}$$ where $\ell_s (\Theta_s, \alpha_{1s} ; {X} )= \sum\limits_{i=1}^{n} \log p(x_s^{(i)}\mid {x}^{(i)}_{-s})/n$; recall that the conditional density $p(x_s^{(i)}\mid {x}^{(i)}_{-s})$ is defined in (\[cond\]). Finally, we define the estimated neighbourhood of $x_s$ to be $\hat{N}(x_s)= \{t: \hat{\theta}_{ts}\neq 0\}$, where $\hat{\Theta}_s$ solves (\[pl\]), and $\hat\theta_{ts}$ is the element corresponding to an edge with the $t$th node.
In practice, to avoid a situation in which variables of different types are on different scales, we may wish to modify in order to allow a different weight for the $\ell_1$-penalty on each coefficient. We define a weight vector $w$ equal to the empirical standard errors of the corresponding variables: $w={(\hat{\sigma}_1, ... ,\hat{\sigma}_{s-1}, \hat{\sigma}_{s+1}, ..., \hat{\sigma}_{p})}^\T$. Then can be replaced with $$\underset{{\Theta_s \in \mathcal{R}^{p-1},\ \alpha_{1s} \in \mathcal{R}}}{\text{arg min}} \quad -\ell_s (\Theta_s, \alpha_{1s}; {X} ) +\lambda_n \| {\text{diag}(w)} \Theta_s \|_1.
\label{pl.weighted}$$ The analysis in Sections \[theory\] and \[nojd\] uses (\[pl\]) for simplicity, but could be generalized to (\[pl.weighted\]) with additional bookkeeping. Both (\[pl\]) and (\[pl.weighted\]) can be easily solved (see e.g. @friedman2010).
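For a Gaussian node, the penalized conditional likelihood reduces to a lasso regression of $x_s$ on the remaining variables, with an unpenalized intercept playing the role of $\alpha_{1s}$. A minimal proximal-gradient (ISTA) sketch of this special case (all names ours; any off-the-shelf lasso solver would do equally well):

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding operator, the proximal map of the l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# One node-wise regression for a Gaussian node:
# minimize (1/2n) ||y - X th - a||^2 + lam ||th||_1 over (th, a).
def neighbourhood_lasso(X, y, lam, n_iter=500):
    n, q = X.shape
    th, a = np.zeros(q), 0.0
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + 1.0)  # a safe step size
    for _ in range(n_iter):
        r = X @ th + a - y                 # residual of current fit
        th = soft(th - step * (X.T @ r) / n, step * lam)
        a -= step * r.mean()               # intercept is not penalized
    return th, a

# Estimated neighbourhood of the node: indices with nonzero coefficients.
def neighbourhood(th, tol=1e-8):
    return {t for t in range(len(th)) if abs(th[t]) > tol}
```

Running this on data generated with a single active neighbour recovers that neighbour in the estimated support.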
In the joint density , the parameter matrix $\Theta$ is symmetric, i.e., $\theta_{st}=\theta_{ts}$, but the neighbourhood selection method does not guarantee symmetric estimates: for instance, it could happen that $\hat{\theta}_{st}=0$ but $\hat{\theta}_{ts}\neq 0$. Our analysis in Section \[selection\] shows that we can exploit the asymmetry in $\hat{\theta}_{st}$ and $\hat{\theta}_{ts}$ when $x_s$ and $x_t$ are of different types, in order to obtain more efficient edge estimates.
Tuning
------
In order to select the value of the tuning parameter $\lambda_n$ in (\[pl\]), we use the Bayesian information criterion (@zou2007, @peng2009, @voorman2014), which takes the form $$\textsc{bic}_s({\lambda_n}) = - 2n \ell_s (\hat{\Theta}_s, \hat{\alpha}_{1s}; X) + \log(n) \| \hat{\Theta}_s \|_0,$$ where $\| \hat{\Theta}_s \|_0$ is the number of non-zero elements in $\hat{\Theta}_{s}$ for a given value of $\lambda_n$. We allow a different value of $\lambda_n$ for each node type. For instance, to select $\lambda_n$ for the Poisson nodes, we choose the value of $\lambda_n$ such that $\textsc{bic}_s({\lambda_n})$, summed over the Poisson nodes, is minimized. We evaluate the performance of this approach for tuning parameter selection in Section \[TP\].
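A sketch of this tuning rule for a single Gaussian node follows (hypothetical code; `fit` stands for any solver of the node-wise regression, and the Gaussian conditional log-likelihood is written out with its constants for completeness):

```python
import numpy as np

# BIC over a grid of tuning parameters for one Gaussian node:
# BIC_s(lam) = -2 * loglik + log(n) * ||th||_0, where loglik is the
# Gaussian conditional log-likelihood of the fitted (th, a).
def bic_path(X, y, lams, fit):
    n = len(y)
    out = []
    for lam in lams:
        th, a = fit(X, y, lam)
        resid = y - X @ th - a
        loglik = -0.5 * np.sum(resid ** 2) - 0.5 * n * np.log(2 * np.pi)
        df = int(np.sum(np.abs(th) > 1e-8))   # number of nonzero coefficients
        out.append((lam, -2.0 * loglik + np.log(n) * df))
    return min(out, key=lambda t: t[1])[0]    # lam minimizing BIC
```

The criterion trades off fit against support size, so an over-penalized fit with a poor likelihood is rejected in favour of a sparser fit that still explains the node well.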
Neighbourhood Recovery and Selection With Strongly Compatible Conditional Distributions {#theory}
=======================================================================================
Neighbourhood Recovery {#subsec:jd}
-----------------------
In this subsection we show that if the conditional distributions in (\[cond\]) are strongly compatible, as they will be under the conditions discussed in Section \[compatibility\], then under some additional assumptions, the true neighbourhood of each node is consistently selected using the neighbourhood selection approach proposed in Section \[estimation\]. Here we rely heavily on results from @yang2012, who consider a related problem in which all nodes are of the same type.
In the following discussion, we assume that $p>n$ for simplicity. For any $s$, let $\Delta_s$ denote the set of indices for elements of $(\Theta_s^\T, \alpha_{1s})^\T$ that correspond to non-neighbours of the $s$th node, and let $Q_s^*=-\nabla^2 \ell_s(\Theta_s^*, \alpha_{1s}^*; {X})$ be the negative Hessian of $\ell_s(\Theta_s,\alpha_{1s}; {X})$ with respect to $(\Theta_s^\T, \alpha_{1s})^\T$, evaluated at the true values of the parameters. Below we suppress the subscript $s$ for simplicity, and we remind the reader that all quantities are related to the conditional density of the $s$th node. We express $Q^*$ in blocks: $$Q^*={\begin{pmatrix}}Q^*_{\Delta^c \Delta^c} & Q^*_{\Delta^c \Delta} \\ Q^*_{\Delta \Delta^c} & Q^*_{\Delta \Delta} {\end{pmatrix}}.$$
There exists a positive number $a$ such that $$\underset{l \in \Delta}{\max} \| Q^*_{l\Delta^c} (Q^*_{\Delta^c \Delta^c})^{-1}\|_1 \leq 1-a .$$ \[irrep\]
Assumption \[irrep\] limits the association between the neighbours and non-neighbours of the $s$th node: if the association is too high, then it is not possible to select the correct neighbourhood. This type of assumption is standard for variable selection consistency of $\ell_1$-penalized estimators (see e.g. @meinshausen2006, @zhao2006, @wainwright2009, @ravikumar2010, @ravikumar2011, @yang2012, @lee2013).
There exists $ \Lambda_{1} > 0$ such that the smallest eigenvalue of $Q^*_{\Delta^c \Delta^c}$, $\Lambda_{\min} (Q^*_{\Delta^c \Delta^c})$, is greater than or equal to $\Lambda_{1}.$ Also, there exists $ \Lambda_{2} < \infty$ such that the largest eigenvalue of $\sum_{i=1}^{n} {x}^{(i)}_{0} ({x}^{(i)}_{0})^\T/n$, $\Lambda_{\max}\left\{ \sum_{i=1}^{n} {x}^{(i)}_{0} ({x}^{(i)}_{0})^\T/n \right\}$, is less than or equal to $\Lambda_{2}$, where ${x}_{0}=(x_{-s}^\T,1)^\T$. \[dep\]
The lower bound in Assumption \[dep\] is needed to prevent singularity among the true neighbours, which would prevent neighbourhood recovery. The bound on the largest eigenvalue of the sample covariance matrix is needed to prevent a situation where most of the variance in the data is due to a single feature. Similar assumptions are made in @zhao2006, @meinshausen2006, @wainwright2009, @ravikumar2010, @yang2012.
The log-partition function $D(\cdot)$ of the conditional density $p(x_s \mid x_{-s})$ is third-order differentiable, and there exist $\kappa_2$ and $\kappa_3$ such that $| D^{''}(y)|\leq \kappa_2 $ and $|D^{'''}(y)|\leq \kappa_3$ for $y \in \{y: y \in \mathcal{D}, \ |y| \leq M \delta_1 \log p\}$, where $\mathcal{D}$ is the support of $D(\cdot)$. \[D\]
The two quantities $\kappa_2$ and $\kappa_3$ are functions of $p$. The quantity $\delta_1$ is a constant to be chosen in Proposition \[prop.e1\]. The constant $M$ is a sufficiently large constant that plays a role in Assumption \[tuning.range\].
Assumption \[D\] controls the smoothness of the log-partition function $D(\cdot)$ for conditional densities of the form (\[cond\]). Recall from Section \[condmodel\] that the log-partition function of the node $x_s$ is $D(\eta_s)$, where $\eta_s$ equals $\alpha_{1s}+\sum_{t \neq s} \theta_{ts} x_t$. To apply Assumption \[D\] to $D(\eta_s)$, we will need to bound $\sum_{t \neq s} \theta_{ts} x_t$, so that $|\eta_s| \leq M \delta_1 \log(p)$.
In order to obtain such a bound, we need another assumption.
Assume that, for $ \ t = 1,...,p$, (i) $|E(x_t)|\leq \kappa_m$, (ii) $E(x_t^2)\leq \kappa_v$, and (iii) $$\underset{u:|u|\leq 1}{\max} \left. \frac{\partial^2 A}{\partial \alpha^2_{1t}} \right|_{\alpha^*_{1t}+u} \leq \kappa_h, \quad \underset{u:|u|\leq 1}{\max} \left. \frac{\partial^2 A}{\partial \alpha^2_{2t}} \right|_{\alpha^*_{2t}+u} \leq \kappa_h.$$ \[A\]
Assumption \[A\] controls the moments of each node, as well as the local smoothness of the log-partition function $A$ in (\[joint\]). Given Assumption \[A\], the following propositions on the marginal behaviour of random variables hold; see Propositions 3 and 4 in @yang2012.
Define the event $$\xi_1 = \left( \displaystyle \max_{\substack{i\in \{1,...,n\}; t \in\{1,...,p\} }}|x^{(i)}_t| < \delta_1 \log p\right).$$ Assuming $p>n$, $\text{pr}(\xi_1)\geq 1- c_1 p^{-\delta_1+2}, $ where $c_1=\exp(\kappa_m + \kappa_h/2) $. \[prop.e1\]
Define the event $$\xi_2 = \left[ \displaystyle \max_{\substack{ t \in\{1,...,p\} }}\left\{\frac{1}{n}\sum\limits_{i=1}^{n}(x^{(i)}_t)^2 \right\}< \delta_2 \right],$$ where $\delta_2 \geq 1$. If $\delta_2 \leq \min (2\kappa_v/3, \kappa_h+\kappa_v )$, and $n\geq 8 \kappa_h^2 \log p /\delta_2^2 $, then $\text{pr}(\xi_2)\geq 1-\exp(-c_2 \delta_2^2 n), $ where $c_2=1/(4\kappa_h^2)$. \[prop.e2\]
We now present three additional assumptions that relate to the node-wise regression in (\[pl\]).
The minimum of edge potentials related to node $x_s$, $ {\min}_{t\in N(x_s)} |\theta_{ts}|$, is larger than $10 (d+1)^{1/2} \lambda_n/\Lambda_{1} $, where $d$ is the number of neighbours of $x_s$. \[thetamin\]
\[tuning.range\] The tuning parameter $\lambda_n$ is in the range $$\left[\frac{8(2-a)}{a} \left\{\delta_2 \kappa_2 \frac{\log (2p)}{n}\right\}^{1/2},
\min\left\{ \frac{2(2-a)}{a} \kappa_2 \delta_2 M , \frac{a \Lambda_{1}^2 (d+1)^{-1}}{288(2-a) \kappa_2 \Lambda_{2} }, \frac{\Lambda_{1}^2 (d+1)^{-1}}{12 \Lambda_{2}\kappa_3 \delta_1 \log p} \right\}\right].
\label{tuning.range:eq}$$
Of the three quantities in the upper bound of $\lambda_n$, $\Lambda_{1}^2/ \{12\Lambda_{2} (d+1)\kappa_3 \delta_1 \log p\}$ is usually the smallest because of the $\log p$ in the denominator.
The sample size $n$ is no smaller than $8 \kappa_h^2\log p/\delta_2^2$, and also the range of feasible $\lambda_n$ in Assumption \[tuning.range\] is non-empty, i.e., $$\label{sample:eq}
n\geq \frac{ 96^2 (2-a)^2 \Lambda^2_{2} }{a^2 \Lambda_{1}^4} (d+1)^2 \kappa_2 \kappa^2_3 \delta_1^2 \delta_2 \log (2p) (\log p)^2.$$ \[n\]
Assumptions \[thetamin\], \[tuning.range\], and \[n\] specify the minimum edge potential, the range of the tuning parameter, and the minimum sample size, required for Theorem \[thm\] to hold, that is, for our neighbourhood selection approach to achieve model selection consistency. Similar assumptions are made in related work [@yang2012].
\[remark3\] Suppose that $n=\Omega \{ (d+1)^2 \log^{3+\epsilon}(p) \}$ for $\epsilon>0$, $\lambda_n = c \{ \log(p)/n \}^{1/2}$ for some constant $c$, and $\kappa_2$ and $\kappa_3$ are $O(1)$. Then Assumptions \[tuning.range\] and \[n\] are satisfied asymptotically as $n$ and $p$ tend to infinity. Similar rates appear in @meinshausen2006 [@ravikumar2010; @yang2012].
Suppose that the joint density (\[joint\]) exists and Assumptions \[irrep\] – \[n\] hold for the $s$th node. Then with probability at least $1-c_1 p^{-\delta_1+2}-\exp(-c_2 \delta_2^2 n)-\exp(-c_3 \delta_3 n)$, for some constants $c_1, c_2, c_3$, $\delta_2 \leq \min (2\kappa_v/3, \kappa_h+\kappa_v )$, and $\delta_3=1/(\kappa_2 \delta_2)$, the estimator from (\[pl\]) recovers the true neighbourhood of $x_s$ exactly, so that $\hat{N}(x_s)=N(x_s)$. \[thm\]
Theorem \[thm\] shows that the probability of successful recovery converges to 1 exponentially fast with the sample size $n$. We note that the number of neighbours $d$ appears in Assumptions \[thetamin\]–\[n\]. As $d$ increases, the minimum edge potential for each neighbour increases, the upper range for $\lambda_n$ decreases, and the required sample size increases. Therefore, we need the true graph $G$ to be sparse, $d=o(n)$, in order for Theorem \[thm\] to be meaningful.
The quantities $\delta_2 \kappa_2$ and $\delta_1\kappa_3$ appear in the upper bound of $\lambda_n$ and the minimum sample size . The fact that $\kappa_2$ and $\delta_2$ appear together in a product implies that we can relax the restriction on $\delta_2$ if $\kappa_2$ is small. The same applies to $\delta_1$ and $\kappa_3$.
For certain types of nodes, Theorem \[thm\] holds with a less stringent set of assumptions. For a Gaussian node, the second-and-higher order derivatives of $D(\cdot)$ are always bounded, i.e., $\kappa_2=1$ and $\kappa_3=0$. This has profound effects on the theory, as illustrated in Corollary \[col1\].
Suppose that the joint density (\[joint\]) exists and Assumptions \[irrep\]–\[thetamin\] hold for a Gaussian node, $x_s$. If $$\lambda_n \in \left[\frac{8(2-a)}{a} \left\{\delta_2 \frac{\log (2p)}{n}\right\}^{1/2}, \frac{2(2-a)}{a} \delta_2 M \right], \quad n \geq \frac{8 \kappa_h^2\log p}{\delta_2^2},$$ then with probability at least $1-\exp(-c_2 \delta_2^2 n)-\exp(-c_3 \delta_3 n)$, for some constants $c_2, c_3$, $\delta_2 \leq \min (2\kappa_v/3, \kappa_h+\kappa_v )$, and $\delta_3=1/ \delta_2$, the estimator from (\[pl\]) recovers the true neighbourhood of $x_s$ exactly, so that $\hat{N}(x_s)=N(x_s)$. \[col1\]
Combining Neighbourhoods to Estimate the Edge Set {#selection}
-------------------------------------------------
The neighbourhood selection approach may give asymmetric estimates, in the sense that $t \in \hat{N}(x_s)$ but $ s \notin \hat{N}(x_t)$. To deal with this discrepancy, two strategies for estimating a single edge set were proposed in @meinshausen2006, and adapted in other work: $$\hat{E}_{\text{and}}= \left\{ (s,t): s \in \hat{N}(x_t) \ \text{and} \ t \in \hat{N}(x_s) \right\}, \quad
\hat{E}_{\text{or}}= \left\{ (s,t): s \in \hat{N}(x_t) \ \text{or} \ t \in \hat{N}(x_s) \right\}.
$$ When the $s$th and $t$th nodes are of the same type, there is no clear reason to prefer the edge estimate from $\hat{N}(x_s)$ over the one from $\hat{N}(x_t)$, and so the choice of the intersection rule, $\hat{E}_{\text{and}}$, versus the union rule, $\hat{E}_{\text{or}}$, is not crucial [@meinshausen2006].
When the $s$th and $t$th nodes are of different types, however, the choice of neighbourhood matters. We now take a closer look at this with examples of Gaussian, Bernoulli, exponential and Poisson nodes as in (\[cond:Gaussian\])–(\[cond:exp\]). Quantities $c_1$, $c_2$, and $c_3$ in Theorem \[thm\] are the same regardless of the node type, while the values of $\kappa_2$ and $\kappa_3$ depend on the type of node being regressed on the others in (\[pl\]). We fix $B_1= \kappa_3 \delta_1$ for Bernoulli, Poisson and exponential nodes. For a Gaussian node, this quantity will always equal zero, since $D(\eta_s)=\eta_s^2/2+\log(2\pi)/2$ and hence $D^{'''}(\eta_s)=0=\kappa_3$. Furthermore, we fix $B_2 = 1/ \delta_3= \delta_2 \kappa_2$ for all four types of nodes. With $B_1$ and $B_2$ fixed, the minimum sample size and the feasible range of the tuning parameter for Bernoulli, Poisson and exponential nodes are exactly the same, as these quantities involve only $B_1$ and $B_2$. In particular, from Assumption \[tuning.range\], the range of feasible $\lambda_n$ is $[ 8(2-a)\{\log (2p) B_2 /n\}^{1/2} / a, \Lambda_{1}^2 /\{12\Lambda_{2}(d+1) B_1 \log p\} ],$ and from Assumption \[n\], the minimum sample size is $96^2 (2-a)^2 \Lambda^2_{2} (d+1)^2 B_2 B_1^2 \log (2p) (\log p)^2 / (a^2 \Lambda_{1}^4)$. These bounds are more restrictive than the corresponding bounds for Gaussian nodes in Corollary \[col1\]. We now derive a lower bound for the probability of successful neighbourhood recovery for each node type.
\[ex:gau\] If $x_s$ is a Gaussian node, then the log-partition function is $D(\eta_s)=\eta_s^2/2+\log(2\pi)/2$. It follows that $D^{''}(\eta_s)=1=\kappa_2$. Thus, $\delta_2=B_2$. By Corollary \[col1\], a lower bound for the probability of successful neighbourhood recovery is $$\text{pr}\{\hat{N}(x_s)=N(x_s)\} \geq 1-\exp(-c_2 B_2^2 n)-\exp(-c_3 n /B_2).
\label{prob:Gaussian}$$
\[ex:bin\] If $x_s$ is a Bernoulli node, then the log-partition function is $D(\eta_s)=\log\{ \exp(-\eta_s) +\exp(\eta_s)\}$, so that $|D^{''}(\eta_s)| \leq 1$ and $|D^{'''}(\eta_s)| \leq 2$. Consequently, $\delta_2=B_2$, and $\delta_1= B_1/\kappa_3=B_1/2$. By Theorem \[thm\], a lower bound for the probability of successful neighbourhood recovery is $$\text{pr}\{\hat{N}(x_s)=N(x_s)\} \geq 1-c_1 p^{-B_1/2+2}-\exp(-c_2 B_2^2 n)-\exp(-c_3 n/B_2).
\label{prob:Bernoulli}$$
\[ex:poi\] If $x_s$ is a Poisson node, then the log-partition function is $D(\eta_s)=\exp (\eta_s)$, so $D^{''}(\eta_s)=D^{'''}(\eta_s)= \exp (\eta_s)$. To bound $D^{''}(\eta_s)$ and $D^{'''}(\eta_s)$, we need to bound $\exp(\eta_s)$. Recall from Table \[tab1\] that strong compatibility requires that $\theta_{ts} x_t \leq 0$ when $x_t$ is Gaussian, Poisson or exponential. Therefore, an upper bound for $\exp(\eta_s)$ is $$\exp ( \eta_s ) \leq \exp\left( \alpha_{1s} + \sum\limits_{t \in I} |\theta_{ts}|\right) \equiv b_P,
\label{b:Poisson}$$ with $I$ the set of Bernoulli nodes. Therefore, $\kappa_2=\kappa_3= b_P$, and so $\delta_2=B_2/b_P$ and $\delta_1=B_1/b_P$. By Theorem \[thm\], a lower bound on the probability of successful neighbourhood recovery is $$\text{pr}\{\hat{N}(x_s)=N(x_s)\} \geq 1-c_1 p^{- B_1/b_P+2}-\exp(-c_2 B_2^2 n/b_P^2)-\exp(-c_3 n/B_2).
\label{prob:Poisson}$$
\[ex:exp\] If $x_s$ is an exponential node, then the log-partition function is $D(\eta_s)=-\log (-\eta_s)$, so $D^{''}(\eta_s)=\eta_s^{-2}$ and $D^{'''}(\eta_s)=-2\eta_s^{-3}$. Furthermore, $$\label{newnum}
\eta_s = \alpha_{1s} + \sum_{t \neq s} \theta_{ts} x_t \leq \alpha_{1s} + \sum_{t \in I} \theta_{ts} x_t { \leq \alpha_{1s} + \sum_{t \in I} |\theta_{ts}| < 0},$$ with $I$ the set of Bernoulli nodes. In , the first inequality follows from the requirement for compatibility from Table \[tab1\] that $\theta_{ts} x_t \leq 0$ when $x_t$ is Gaussian, Poisson or exponential; the second inequality follows from the fact that Bernoulli nodes are coded as $+1$ and $-1$; and the third inequality follows from the Bernoulli-exponential entry in Table \[tab1\]. Therefore, it follows that $$|\eta_s| { \geq \left|\alpha_{1s} + \sum_{t \in I} |\theta_{ts}| \right| } \geq |\alpha_{1s}| - \sum_{t \in I} |\theta_{ts}| \equiv b_E. \label{b:exp}$$ As a result, $|D^{''}(\eta_s)|$ and $|D^{'''}(\eta_s)|$ are bounded by $\kappa_2=b_E^{-2}$ and $\kappa_3=2b_E^{-3}$, respectively. For fixed $B_1$ and $B_2$, we have $\delta_2=b_E^2 B_2$ and $\delta_1=B_1 b_E^3/2$. By Theorem \[thm\], a lower bound for the probability of successful neighbourhood recovery is $$\text{pr}\{\hat{N}(x_s)=N(x_s)\} \geq 1-c_1 p^{- b_E^3 B_1/2+2}-\exp(-c_2 b_E^4 B_2^2 n)-\exp(-c_3 n/B_2) .
\label{prob:exp}$$
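The four lower bounds above can be compared numerically. The sketch below is illustrative only: the constants $c_1$, $c_2$, $c_3$ are unknown in practice, so the values assigned here are arbitrary assumptions, and the function name is ours.

```python
import math

# Illustrative comparison of the lower bounds on pr{N_hat(x_s) = N(x_s)}
# derived above; b plays the role of b_P for a Poisson node and b_E for
# an exponential node.

def recovery_lower_bound(node, n, p, B1, B2, c1, c2, c3, b=None):
    tail3 = math.exp(-c3 * n / B2)  # third term, identical for all node types
    if node == "gaussian":
        return 1 - math.exp(-c2 * B2**2 * n) - tail3
    if node == "bernoulli":
        return 1 - c1 * p**(-B1 / 2 + 2) - math.exp(-c2 * B2**2 * n) - tail3
    if node == "poisson":
        return (1 - c1 * p**(-B1 / b + 2)
                - math.exp(-c2 * B2**2 * n / b**2) - tail3)
    if node == "exponential":
        return (1 - c1 * p**(-b**3 * B1 / 2 + 2)
                - math.exp(-c2 * b**4 * B2**2 * n) - tail3)
    raise ValueError(node)

# Assumed constants, for illustration only: c1, c2, c3 are unknown in practice.
args = dict(n=500, p=100, B1=10.0, B2=1.0, c1=1.0, c2=0.1, c3=0.1)
# With everything else fixed, the Gaussian bound dominates the Bernoulli one.
assert recovery_lower_bound("gaussian", **args) >= recovery_lower_bound("bernoulli", **args)
```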
Examples \[ex:gau\]-\[ex:exp\] reveal that the neighbourhood of a Gaussian node is easier to recover than the neighbourhood of the other three types of nodes: the first requires a smaller minimum sample size when $p$ is large, allows for a wider range of feasible tuning parameters, and has in general a higher probability of success. As a result, the neighbourhood of the Gaussian node should be used when estimating an edge between a Gaussian node and a non-Gaussian node.
Which neighbourhood should we use to estimate an edge between two non-Gaussian nodes? There is no clear winner: while the bound for a Bernoulli node can be evaluated given knowledge of $c_1$, $c_2$, and $c_3$, the bounds for Poisson and exponential nodes also require knowledge of the unknown quantities $b_E$ and $b_P$, which are functions of the unknown parameters $\Theta_s$ and $\alpha_{1s}$ in and . One possibility is to insert a consistent estimator for these parameters (see e.g. @vandeGeer2008, @bunea2008) in order to obtain a consistent estimator for $b_P$ or $b_E$. This leads to the following lemma.
Suppose $\tilde{\Theta}_s$ and $\tilde{\alpha}_{1s}$ are consistent estimators of the true parameters in the conditional densities and . Let $I$ be the index set of the Bernoulli nodes.
1\. If $x_s$ is a Poisson node and $\tilde{b}_P= \exp( \tilde{\alpha}_{1s} + \sum_{t \in I} |\tilde{\theta}_{ts}|)$, then $$1-c_1 p^{- B_1/\tilde{b}_P+2}-\exp(-c_2 B_2^2 n/\tilde{b}^2_P)-\exp(-c_3 n/B_2)
\label{estprob:Poisson}$$ is a consistent estimator of a lower bound for $\text{pr}\{ \hat{N}(x_s) = N(x_s)\}$.
2\. If $x_s$ is an exponential node and $\tilde{b}_E= |\tilde{\alpha}_{1s}| - \sum_{t \in I} |\tilde{\theta}_{ts}|$, then $$1-c_1p^{- \tilde{b}_E^3 B_1/2+2}-\exp(-c_2 \tilde{b}_E^4 B_2^2 n)-\exp(-c_3 n/B_2)
\label{estprob:exp}$$ is a consistent estimator of a lower bound for $\text{pr}\{ \hat{N}(x_s) = N(x_s)\}$. \[lmm:selection\]
Therefore, by inserting consistent estimators of $\Theta_s$ and $\alpha_{1s}$ into or , we can reconstruct an edge by choosing the estimate with the highest probability of correct recovery according to , , and . The rules are summarized in Table \[tab2\]. The results in this section illustrate a worst case scenario for recovery of each neighbourhood, in that Theorem \[thm\] provides a lower bound for the probability of successful neighbourhood recovery.
------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Selection rules
Poisson & Exponential Choose Poisson if $\tilde{b}_E^2 \tilde{b}_P < 1$ and $\tilde{b}_E^3 \tilde{b}_P < 2$. Choose exponential if $\tilde{b}_E^2 \tilde{b}_P > 1$ and $\tilde{b}_E^3 \tilde{b}_P > 2$.
Poisson & Bernoulli Choose Poisson if $\tilde{b}_P < 1$. Choose Bernoulli if $\tilde{b}_P > 2$.
Exponential & Bernoulli Choose exponential if $\tilde{b}_E \geq 1$. Choose Bernoulli if $\tilde{b}_E <1 $.
------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
: Neighbourhood to use in estimating an edge between two non-Gaussian nodes of different types
\[tab2\] When the conditions in this table are not met, there is no clear preference in terms of which neighbourhood to use.
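Given (estimates of) $b_P$ and $b_E$, the rules of Table \[tab2\] can be coded directly. The sketch below is illustrative (the function name is ours); it returns `None` whenever the table expresses no clear preference.

```python
# Illustrative implementation of the selection rules in Table 2 for an
# edge between two non-Gaussian nodes of different types. Returns which
# node type's regression should supply the edge estimate.

def choose_neighbourhood(type1, type2, b_P=None, b_E=None):
    pair = {type1, type2}
    if pair == {"poisson", "exponential"}:
        if b_E**2 * b_P < 1 and b_E**3 * b_P < 2:
            return "poisson"
        if b_E**2 * b_P > 1 and b_E**3 * b_P > 2:
            return "exponential"
    elif pair == {"poisson", "bernoulli"}:
        if b_P < 1:
            return "poisson"
        if b_P > 2:
            return "bernoulli"
    elif pair == {"exponential", "bernoulli"}:
        return "exponential" if b_E >= 1 else "bernoulli"
    return None  # no clear preference

print(choose_neighbourhood("poisson", "bernoulli", b_P=0.5))  # poisson
print(choose_neighbourhood("poisson", "bernoulli", b_P=1.5))  # None
```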
Neighbourhood Recovery and Selection with Partially-Specified Models {#nojd}
====================================================================
In Section \[theory\], we showed that the neighbourhood selection approach of Section \[estimation\] can recover the true graph when each node’s conditional distribution is of the form , provided that the conditions for strong compatibility are satisfied. In this section, we consider a partially-specified model in which some of the nodes are assumed to have conditional distributions of the form , and we make no assumption on the conditional distributions of the remaining nodes. We will show that in this setting, neighbourhoods of the nodes with conditional distributions of the form (3) can still be recovered.
Here the neighbourhood of $x_s$ is defined based upon its conditional density, , as $N^{0}(x_s)=\{t: \theta_{ts}\neq 0\}$. Assumption \[A\] in Section \[subsec:jd\] is inappropriate since we no longer assume that all $p$ nodes have conditional densities of the form , and consequently we are not assuming a particular form for the joint density. Therefore, we make the following assumption to replace Propositions \[prop.e1\] and \[prop.e2\].
Assume that (i) $\text{pr}(\xi_1)\geq 1- c_1 p^{-\delta_1+2},$ (ii) $\text{pr}(\xi_2)\geq 1-\exp(-c_2 \delta_2^2 n ).$ \[A2\]
Suppose that the $s$th node has conditional density , and that Assumptions \[irrep\] – \[D\] and \[thetamin\] – \[A2\] hold. Then with probability at least $1-c_1 p^{-\delta_1+2}-\exp(-c_2 \delta_2^2 n)-\exp(-c_3 \delta_3 n) $, for some constants $c_1, c_2, c_3$, and $\delta_3=1/(\kappa_2 \delta_2)$, the estimator from recovers the true neighbourhood of $x_s$ exactly, so that $\hat{N}(x_s)=N^{0}(x_s)$. \[thm2\]
The proof of Theorem \[thm2\] is similar to that of Theorem \[thm\], and is thus omitted. Theorem \[thm2\] indicates that our neighbourhood selection approach can recover the neighbourhood of any node for which the conditional density is of the form , provided that Assumption \[A2\] holds. This means that in order to recover an edge between two nodes using our neighbourhood selection approach, it suffices for one of the two nodes’ conditional densities to be of the form . Consequently, we can model relationships that are far more flexible than those outlined in Table \[tab1\], e.g. an edge between a Poisson node and a node that takes values on the whole real line.
Although Theorem \[thm2\] allows us to go beyond some of the restrictions in Table \[tab1\], it is still restricted in that it only guarantees recovery of an edge between two nodes for which at least one of the node-conditional densities is exactly of the form . In future work, we could generalize Theorem \[thm2\] to the case where is simply an approximation to the true node-conditional distribution.
Numerical Studies {#simulation}
=================
Data Generation {#generate}
---------------
We consider mixed graphical models with two types of nodes, and $m=p/2$ nodes per type, for Gaussian-Bernoulli and Poisson-Bernoulli models. We order the nodes so that the Gaussian or Poisson nodes precede the Bernoulli nodes.
For both models, we construct a graph in which the $j$th node for $j=1,\ldots,m$ is connected with the adjacent nodes of the same type, as well as the $(m+j)$th node of the other type, as shown in Fig. \[NEWFIG\]. This encodes the edge set $E$. For $(i,j) \in E$ and $i<j$, we generate the edge potentials $\theta_{ij}$ and $\theta_{ji}$ as $$\theta_{ij} = \theta_{ji}= y_{ij} r_{ij}, \; \text{pr}(y_{ij}= 1) =\text{pr}(y_{ij}= -1) =0.5, \; \ r_{ij} \sim \text{Unif}(a,b).
\label{generate.theta}$$ We set $\theta_{ij} = \theta_{ji}=0$ if $(i,j) \notin E$. Section \[generation\_detail\] in the Supplementary Material lists additional steps to ensure strong compatibility of the conditional distributions. Values of $a$ and $b$ in , as well as the parameters of $f_s(x_s)$ in the conditional density , are specified in Sections \[probrecovery\]–\[PB\].
Fig. \[NEWFIG\]. The graph used in the simulations: nodes $1, \ldots, m$ of the first type are connected in a chain, nodes $m+1, \ldots, 2m$ of the second type are connected in a chain, and each node $j$ $(j=1,\ldots,m)$ is connected to node $m+j$.
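The edge potentials in  can be generated as below. This is an illustrative sketch (names are ours) that omits the additional adjustments for strong compatibility described in Section \[generation\_detail\] of the Supplementary Material.

```python
import random

# Illustrative generation of symmetric edge potentials:
# theta_ij = theta_ji = y_ij * r_ij, with y_ij a random sign and
# r_ij ~ Unif(a, b); theta_ij = 0 for pairs not in the edge set.

def generate_potentials(edges, a, b, seed=0):
    rng = random.Random(seed)
    theta = {}
    for i, j in edges:
        value = rng.choice([-1, 1]) * rng.uniform(a, b)
        theta[(i, j)] = theta[(j, i)] = value  # enforce symmetry
    return theta

# Chain-plus-rungs edge set of Fig. [NEWFIG] with m nodes per type.
m = 4
E = [(j, j + 1) for j in range(1, m)]           # chain of the first type
E += [(m + j, m + j + 1) for j in range(1, m)]  # chain of the second type
E += [(j, m + j) for j in range(1, m + 1)]      # edges between the two types
theta = generate_potentials(E, a=0.3, b=0.6)
```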
To sample from the joint density $p({x})$ in without calculating the log-partition function $A$, we employ a Gibbs sampler, as in @lee2012. Briefly, we iterate through the nodes, and sample from each node’s conditional distribution. To obtain approximately independent observations, after a burn-in period of 3000 iterations we retain samples from the chain that are 500 iterations apart.
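A minimal two-node version of this Gibbs sampler, for one Gaussian and one Bernoulli node with node-conditionals of the form , is sketched below. The $\pm 1$ coding of the Bernoulli node follows the text, while the function name and toy parameter values are our own assumptions.

```python
import math, random

# Toy two-node Gibbs sampler (a minimal sketch, not the full p-node
# sampler used in the simulations): one Gaussian node x1 and one
# Bernoulli node x2 coded +/-1. With eta = alpha1g + theta*x2, the
# node-conditionals give x1 | x2 ~ N(-eta/(2*alpha2), -1/(2*alpha2)) and
# pr(x2 = 1 | x1) = 1 / {1 + exp(-2*(alpha1b + theta*x1))}.

def gibbs_gaussian_bernoulli(alpha1g, alpha2, alpha1b, theta,
                             n_iter, burn, thin, seed=0):
    rng = random.Random(seed)
    x1, x2 = 0.0, 1
    draws = []
    for it in range(n_iter):
        mean = -(alpha1g + theta * x2) / (2 * alpha2)
        x1 = rng.gauss(mean, math.sqrt(-1 / (2 * alpha2)))
        p1 = 1 / (1 + math.exp(-2 * (alpha1b + theta * x1)))
        x2 = 1 if rng.random() < p1 else -1
        if it >= burn and (it - burn) % thin == 0:
            draws.append((x1, x2))  # keep thinned post-burn-in samples
    return draws

draws = gibbs_gaussian_bernoulli(alpha1g=0.0, alpha2=-1.0, alpha1b=0.0,
                                 theta=0.5, n_iter=5000, burn=1000, thin=10)
```

With $\theta>0$, the retained draws of $x_1$ and $x_2$ are positively associated, as expected.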
Probability of Successful Neighbourhood Recovery {#probrecovery}
------------------------------------------------
In Section \[subsec:jd\] we saw that the probability of successful neighbourhood recovery for neighbourhood selection converges to 1 exponentially fast with the sample size. In Section \[selection\], we saw that the estimates from the Gaussian nodes are superior to those from the Bernoulli nodes, in the sense that a smaller sample size is needed in order to achieve a given probability of successful recovery. We now verify those findings empirically. Here, successful neighbourhood recovery means that the estimated and true edge sets of a graph or a sub-graph are identical.
We set $a=b=0\cdot$3 in so that Assumption \[thetamin\] is satisfied, and generate one Gaussian-Bernoulli graph for each of $p=60$, $p=120$, and $p=240$. We set $\alpha_{1s}=0$ and $\alpha_{2s}= -1$ in for Gaussian nodes, and $\alpha_{1s}=0$ for Bernoulli nodes. For each graph, $100$ independent data sets are drawn from the Gibbs sampler. We perform neighbourhood selection using the estimator from , with the tuning parameter $\lambda_n$ set to be a constant $c$ times $\{\log (p)/n\}^{1/2}$, so that it is on the scale required by Assumption \[tuning.range\], as illustrated in Remark \[remark3\].
In order to achieve successful neighbourhood recovery as the sample size increases, the value of $c$ must be in a range matching the requirement of Assumption \[tuning.range\]. We explored a range of values of $c$, and in Fig. \[fig1\] we show the probability of successful neighbourhood recovery for $c=2.6$. For ease of viewing, we display separate empirical probability curves for the Gaussian-Gaussian, Bernoulli-Bernoulli, and Bernoulli-Gaussian subgraphs. Panels (a) and (b) are estimates obtained by regressing the Gaussian nodes onto the others, and panels (c) and (d) are the estimates from regressing the Bernoulli nodes onto the others. We see that the probability of successful recovery increases to 1 once the scaled sample size exceeds the threshold required in Assumption \[n\] and Corollary \[col1\]. Furthermore, panels (b) and (c) agree with the conclusions of Section \[selection\]: neighbourhood recovery using the regression of a Gaussian node onto the others requires fewer samples than recovery using the regression of a Bernoulli node onto the others.
Comparison to Competing Approaches {#TP}
----------------------------------
In this section, we compare the proposed method to alternative approaches on a Gaussian-Bernoulli graph. We limit the number of nodes to $p=40$ in order to facilitate comparison with the computationally intensive approach of @lee2012. We generate $100$ random graphs with $a=$0$\cdot$3 and $b=$0$\cdot$6 in (\[generate.theta\]), and we set $\alpha_{1s}=0$ and $\alpha_{2s}= -1$ in for Gaussian nodes and $\alpha_{1s}=0$ for Bernoulli nodes. Twenty independent samples of $n=200$ observations are generated from each graph. We evaluate the performance of each approach by computing the number of correctly estimated edges as a function of the number of estimated edges in the graph. Results are averaged over 20 data sets from each of 100 random graphs, for a total of 2000 simulated data sets.
Seven approaches are compared in this study: 1) our proposal for neighbourhood selection in the mixed graphical model; 2) penalized maximum likelihood estimation in the mixed graphical model [@lee2013]; 3) weighted $\ell_1$-penalized regression in the mixed graphical model, as proposed by @cheng2013; 4) graphical random forests [@fellinghauer2013]; 5) neighbourhood selection in the Gaussian graphical model [@meinshausen2006], where we use an $\ell_1$-penalized linear regression to estimate the neighbourhood of all nodes; 6) the graphical lasso [@friedman2008], which treats all features as Gaussian; and 7) neighbourhood selection in the Ising model [@ravikumar2010], where we use $\ell_1$-penalized logistic regression on all nodes after dichotomizing the Gaussian nodes by their means. The first four methods are designed for mixed graphical models, with @lee2012 and @cheng2013 specifically proposed for Gaussian-Bernoulli networks. In contrast, the last three methods ignore the presence of mixed node types. For methods based on neighbourhood selection, we use the union rule of @meinshausen2006 to reconstruct the edge set from the estimated neighbourhoods, with one exception: to estimate the Gaussian-Bernoulli edges for our proposed method, we use the estimates from the Gaussian nodes, as suggested by the theory developed in Section \[selection\].
Due to its high computational cost, the method of @lee2012 is run on 250 data sets from 50 graphs rather than 2000 data sets from 100 graphs.
The left-hand panel of Fig. \[fig2\] displays results for Bernoulli-Bernoulli and Gaussian-Gaussian edges, and the right-hand panel displays results for edges between Gaussian and Bernoulli nodes.
The curves in Fig. \[fig2\] correspond to the estimated graphs as the tuning parameter for each method is varied. Recall from Section \[tuning\] that our proposal involves a tuning parameter $\lambda_n^{G}$ for the $\ell_1$-penalized linear regressions of the Gaussian nodes onto the others, and a tuning parameter $\lambda_n^{B}$ for the $\ell_1$-penalized logistic regressions of the Bernoulli nodes onto the others. The triangles in Fig. \[fig2\] show the average performance of our proposed method with the tuning parameters $\hat{\lambda}_n^{B}$ and $\hat{\lambda}_n^{G}$ selected using the Bayesian information criterion summed over the Bernoulli and Gaussian nodes, respectively, as described in Section \[tuning\]. This choice yields good precision ($52\%$) and recall ($95\%$) for edge recovery in the graph. To obtain the curves in Fig. \[fig2\], we set $\lambda_n^{B} = (\hat{\lambda}_n^{B}/\hat{\lambda}_n^{G}) \lambda_n^{G}$, and varied the value of $\lambda_n^G$.
In general, our proposal outperforms the competitors, which is expected since it assumes the correct model. Though the proposals of @lee2012 and @cheng2013 are intended for a Gaussian-Bernoulli graph, they attempt to capture more complicated relationships than in , and so they perform worse than our proposal. On the other hand, the graphical random forest of @fellinghauer2013 performs reasonably well, despite the fact that it is a nonparametric approach. Neighbourhood selection in the Gaussian graphical model performs closest to the proposed method in terms of edge selection. The Ising model suffers substantially due to dichotomization of the Gaussian variables. The graphical lasso algorithm experiences serious violations to its multivariate Gaussian assumption, leading to poor performance.
Application of Selection Rules for Mixed Graphical Models {#PB}
---------------------------------------------------------
In Section \[TP\], in keeping with the results of Section \[selection\], we always used the estimates from the Gaussian nodes in estimating an edge between a Bernoulli node and a Gaussian node. Here we consider a mixed graphical model of Poisson and Bernoulli nodes. In this case, the selection rules in Section \[selection\] are more complex, and whether it is better to use a Poisson node or a Bernoulli node in order to estimate a Bernoulli-Poisson edge depends on the true parameter values, as summarized in Table \[tab2\].
We generate a graph with $p=80$ nodes as follows: $a=0.8$ and $b=1$ in (\[generate.theta\]), $\alpha_{1s}=-3$ for $s=1, \ldots, 20$ and $\alpha_{1s}=0$ for $s=21, \ldots, 40$ for the Poisson nodes, and $\alpha_{1s}=0$ for the Bernoulli nodes. This guarantees that $b_P$ in is smaller than 1 for the first half of the Poisson nodes, and larger than 2 for the second half, due to the structure of the graph from Fig. \[NEWFIG\]. In order to estimate a Bernoulli-Poisson edge, we will use the estimates from the Poisson nodes if $b_P<1$ and the estimates from the Bernoulli nodes if $b_P>2$, according to the selection rules in Table \[tab2\].
We compare the performance of our proposed approach using the selection rules in Table \[tab2\], with the true and estimated parameters, to our proposed approach using the union and intersection rules (Section \[selection\]), as well as the graphical random forest of @fellinghauer2013. To prevent over-shrinkage of the parameters for estimation of $b_P$ in , we set $\lambda_n$ in to equal 0$\cdot$5 times the value from the Bayesian information criterion for each node type. We present only the results for Poisson-Bernoulli edges, as the selection rules in Section \[selection\] apply to edges between nodes of different types.
Results are shown in Fig. \[fig3\], averaged over 20 samples from each of 25 random graphs. The selection rules proposed in Section \[selection\] clearly outperform the commonly-used union and intersection rules. The curve for the selection rule from Section \[selection\] using the estimated parameter values is almost identical to the curve using the true parameter values, which indicates that in this case the quantity $b_P$ is accurately estimated for each node. The graphical random forest slightly outperforms our proposal when few edges are estimated, but performs worse when the estimated graph includes more edges. This may indicate that as the graph becomes less sparse, the nonparametric graphical random forest approach suffers from insufficient sample size.
Discussion
==========
In Section \[compatibility\] we saw that a stringent set of restrictions is required for compatibility or strong compatibility of the node-conditional distributions given in –. These restrictions limit the theoretical flexibility of the conditionally-specified mixed graphical model, especially when modeling unbounded variables. It is possible that by truncating unbounded variables, we may be able to circumvent some of these restrictions. Furthermore, the model assumes pairwise interactions in the form of $x_s x_t$, which can be seen as a second-order approximation of the true edge potentials in . We can relax this assumption by fitting non-linear edge potentials using semi-parametric penalized regressions, as in @voorman2014.
`R` code for replicating the numerical results in this paper is available at <https://github.com/ChenShizhe/MixedGraphicalModels>.
Acknowledgement {#acknowledgement .unnumbered}
===============
We thank Jie Cheng, Bernd Fellinghauer, and Jason Lee for providing code and responding to our inquiries. This work was partially supported by National Science Foundation grants to D.W. and A.S., National Institutes of Health grants to D.W. and A.S., and a Sloan Fellowship to D.W.
Appendix
========
A Proof for Proposition \[compat:prop\] {#proof:compat}
---------------------------------------
First of all, it is easy to see that if $\theta_{st}=\theta_{ts}$, then any function $g$ such that $$g({x}) \propto \exp \left\{ \sum\limits_{s=1}^p f_s(x_s) + \frac{1}{2} \sum\limits_{s=1}^{p}\sum\limits_{t \neq s} \theta_{ts} x_s x_t\right\}
\label{compat:proof}$$ is capable of generating the conditional densities in as long as the function $g$ is integrable with respect to $x_s$ for $s = 1,\ldots, p$. The function $g$ can be decomposed as $$g({x}) \propto \exp \left\{ f_s(x_s) + \frac{1}{2} \sum\limits_{t: t \neq s}{(\theta_{ts}+\theta_{st})} x_s x_t\right\} \exp\left\{\sum\limits_{t\neq s}f_t(x_t)+
\frac{1}{2} \sum\limits_{t: t\neq s, \ j: j\neq s,\ j\neq t} \theta_{tj} x_j x_t \right\},$$ so the integrability of the conditional density $p(x_s\mid x_{-s})$ guarantees the integrability of $g$ with respect to $x_s$. Therefore, the conditional densities of the form in are compatible if $\theta_{ts}=\theta_{st}$.
We now prove that any function $h$ that is capable of generating the conditional density in is of the form . The following proof is essentially the same as that in @besag1974. Suppose $h$ is a function that is capable of generating the conditional densities. Define $P({x})=\log \{ h({x})/h({0})\}$, where ${0}$ can be replaced by any interior point in the sample space.
By definition, $P({0})=\log\{h(0)/h(0) \}=0$. Therefore, $P$ can be written in the general form $$P({x})= \sum\limits_{s=1}^p x_s G_s (x_s) + \sum\limits_{t\neq s} \frac{G_{ts} (x_t,x_s)}{2} x_t x_s + \sum\limits_{t \neq s,t\neq j,j\neq s} \frac{G_{tsj} (x_t,x_s,x_j)}{6} x_t x_s x_j + \cdots \ ,$$ where we write the function $P$ as the sum of interactions of different orders. Note that the factor of $1/2$ is due to $G_{st}(x_s,x_t)=G_{ts}(x_t,x_s)$; similar factors apply for higher-order interactions. Recalling that we assume $h$ is capable of generating the conditional density $p(x_s\mid x_{-s})$, from Definition \[compat:defn\] we know that $$P({x})-P({x}_{s}^0)= {\log \left\{ \frac{h(x)/\int h(x) dx_s }{h(x_s^0)/\int h(x) dx_s } \right\}} = \log \left\{ \frac{p(x_s\mid {x}_{-s})}{p(0 \mid {x}_{-s}) } \right\},$$ where ${x}_{s}^0={(x_1,\ldots, x_{s-1}, 0, x_{s+1},\ldots, x_{p})}^\T$ and $p(x_s\mid x_{-s})$ is the conditional density in . It follows that $$\log \left\{ \frac{p(x_s\mid {x}_{-s})}{p(0 \mid {x}_{-s}) } \right\} = P({x})-P({x}_{s}^0)= x_s \left(G_s (x_s) + \sum\limits_{t: t\neq s} x_t G_{ts} (x_t,x_s)+\cdots \right).
\label{Q:xs}$$
Letting $x_t=0$ for $t \neq s$ in and using the form of the conditional densities in , we have $$x_s G_s (x_s) = f_s(x_s)-f_s(0).
\label{first.order}$$ Here we set $f_s(0)=0$ since $f_s(0)$ is a constant. For the second-order interaction $G_{ts}$, we let $x_j=0$ for $j\neq t, j\neq s$ in (\[Q:xs\]): $$x_s G_s(x_s) + x_s x_t G_{ts} (x_t, x_s) = \theta_{st} x_t x_s + f_s(x_s).$$ Similarly, applying the previous argument on $P({x})-P({x}_{t}^0)$, we have $$x_t G_t(x_t) + x_s x_t G_{st} (x_s, x_t) = \theta_{ts} x_t x_s + f_t(x_t).$$ Therefore, if $\theta_{st}=\theta_{ts}$, then by , $$G_{st} (x_s, x_t) =G_{ts} (x_t, x_s)= \theta_{st}.$$ It is easy to show that, by setting $x_k=0$ $ (k\neq s, k\neq t, k\neq j)$ in , the third-order interactions in $P({x})$ are zero. Similarly, we can show that fourth-and-higher-order interactions are zero. Hence, we arrive at the following formula for $P$: $$P({x})= \sum\limits_{s=1}^p f_s(x_s) + \frac{1}{2} \sum\limits_{s=1}^{p}\sum\limits_{t \neq s}\theta_{ts} x_s x_t.$$ Furthermore, $P({x})=\log \{ h({x}) /h({0})\}$, so the function $h$ takes the form $$h({x}) \propto \exp \{P({x})\} = \exp \left\{ \sum\limits_{s=1}^p f_s(x_s)+\frac{1}{2} \sum\limits_{s=1}^{p}\sum\limits_{t \neq s} \theta_{ts} x_s x_t \right\},$$ which is the same as .
A Proof for Lemma \[lmm:compat\]
--------------------------------
We first prove the claim about compatibility.
It is easy to verify that the conditional densities are integrable given the restrictions with asterisks in Table \[tab1\]. Therefore, these restrictions are sufficient for compatibility.
We now show that the restrictions with a dagger in Table \[tab1\] are necessary, by investigating each of the distributions in Equations \[cond:Gaussian\] to \[cond:exp\]. Note that we have limited our discussion to the case where all conditional densities are non-degenerate. Recall that we refer to the type of distribution of $x_s$ given the others as the node type of $x_s$.
Suppose that $x_s$ is exponential, as in . By definition of the exponential distribution, it must be that $\eta_s=\alpha_{1s}+\sum_{t\neq s}\theta_{ts} x_t <0$. This leads to the following restrictions on $\theta_{ts}$: 1) When $x_t$ is Poisson or exponential, it must be that $\theta_{ts}\leq 0$ since $x_t$ is unbounded in $\mathcal{R}^{+}$. 2) When $x_t$ is Gaussian, then it must be that $\theta_{ts}=0$ since $x_t$ is unbounded on the real line. 3) Let $I$ denote the indices of the Bernoulli variables. Then it must be that $\sum_{t \in I} |\theta_{ts}|< -\alpha_{1s}$ so that $\eta_s<0$ for any combination of $\{x_t\}_{t \in I}$.
Suppose that $x_s$ is Gaussian, as in . Then $\alpha_{2s}$ has to be negative for the conditional density to be well-defined.
Suppose that $x_s$ is Bernoulli or Poisson, as in Equations \[cond:Bernoulli\] or \[cond:Poisson\]. We can see that there are no restrictions on $\eta_s$, and thus no restrictions on $\theta_{ts}$ or $\alpha_{1s}$.
Hence, the conditions with a dagger in Table \[tab1\] are necessary for the conditional densities in Equations \[cond:Gaussian\] to \[cond:exp\] to be compatible.
We now show the statement about strong compatibility.
We first prove the necessity of the conditions in Table \[tab1\]. Recall from Definition \[compat:defn\] that in order for strong compatibility to hold, compatibility must hold, and any function $g$ that satisfies must be integrable. Therefore, we derive the necessary conditions for $g$ to be integrable.
For Gaussian nodes that are indexed by $J$, recall that $\Theta_{JJ}$ is defined as in . Then, from properties of the multivariate Gaussian distribution, $\Theta_{JJ}$ must be negative definite if the joint density exists and is non-degenerate.
Let $x_1$ be a Poisson node, and $x_2$ an exponential node. Consider the ratio $$G(x_1, x_2) = \frac{g(x_1,x_2,0,...,0)}{g(0,0,0,...,0)} = \exp\{ -\log(x_1!) + \alpha_{11} x_1 +\theta_{12} x_1 x_2 + \alpha_{12} x_2\},
\label{ratio}$$ where $g$ is the function in . It is not hard to see that integrability of $G(x_1,x_2)$ is a necessary condition for integrability of the joint density. Summing over $x_1$ yields $$\sum\limits_{i=0}^{\infty}G(i,x_2)=\exp\{ \alpha_{12} x_2 + \exp(\alpha_{11}+\theta_{12}x_2)\}.$$ Therefore, if $\sum_{i=0}^{\infty}G(i,x_2)$ is integrable with respect to the exponential node $x_2$, it must be the case that $\theta_{12}=\theta_{21} \leq 0$. Following a similar argument, the edge potential $\theta_{12}=\theta_{21}$ has to be non-positive when $x_2$ is Poisson, and zero when $x_2$ is Gaussian.
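The identity obtained by summing $G(i, x_2)$ over the Poisson node can be checked numerically. The parameter values below are arbitrary assumptions, chosen so that $\theta_{12} \leq 0$ as the argument requires.

```python
import math

# Numerical check of the identity: summing G(i, x2) over the Poisson
# node i gives exp{alpha12*x2 + exp(alpha11 + theta12*x2)}.
# Parameter values are arbitrary, with theta12 <= 0.
alpha11, alpha12, theta12, x2 = 0.5, -1.0, -0.4, 2.0

lhs = sum(math.exp(-math.lgamma(i + 1) + alpha11 * i
                   + theta12 * i * x2 + alpha12 * x2)
          for i in range(200))  # truncated series; the tail is negligible
rhs = math.exp(alpha12 * x2 + math.exp(alpha11 + theta12 * x2))
assert abs(lhs - rhs) < 1e-9
```

Here `math.lgamma(i + 1)` supplies $\log(i!)$, so the sum is the Taylor series of the exponential in disguise.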
A similar argument to the one just described can be applied to the exponential nodes. Such an argument reveals that conditions on the edge potentials of the exponential nodes that are necessary for $g$ to be a density are those stated in Table \[tab1\].
For Bernoulli nodes, no restrictions on the edge potentials are necessary in order for $g$ to be a density.
Therefore, the conditions listed in Table \[tab1\] are necessary for the conditional densities in Equations \[cond:Gaussian\] to \[cond:exp\] to be strongly compatible.
We now show that the conditions listed in Table \[tab1\] are sufficient for the conditional densities to be strongly compatible. We can restrict the discussion by conditioning on the Bernoulli nodes, since integrating over Bernoulli variables yields a mixture of finite components. Table \[tab1\] guarantees that the Gaussian nodes are isolated from the Poisson and exponential nodes, as the corresponding edge potentials are zero. From Table \[tab1\], the distribution of Gaussian nodes is integrable, as $\Theta_{JJ}$ in is negative definite. Now we consider the Poisson and exponential nodes. For these, $$\exp \left\{ \sum\limits_{s=1}^p f_s(x_s) + \frac{1}{2}\sum\limits_{s=1}^{p}\sum\limits_{t \neq s} \theta_{st} x_s x_t \right\} \leq \exp \left\{ \sum\limits_{s=1}^p f_s(x_s) \right\}
$$ since $\theta_{st} x_s x_t \leq 0$. So the joint density is dominated by the density of a model with no interactions, which is integrable since $\alpha_{1t}$ for an exponential node $x_t$ is non-positive; this follows from the fact that $ 0 \leq \sum_{s \in I}|\theta_{st}| < - \alpha_{1t}$, as stated in Table \[tab1\]. Therefore, the conditions listed in Table \[tab1\] are also sufficient for the conditional densities in Equations \[cond:Gaussian\] to \[cond:exp\] to be strongly compatible.
A Proof for Theorem \[thm\] {#proof.thm}
---------------------------
Our proof is similar to that of Theorem 1 in @yang2012, and is based on the primal-dual witness method [@wainwright2009]. The primal-dual witness method studies the property of $\ell_1$-penalized estimators by investigating the sub-gradient condition of an oracle estimator. We assume that readers are familiar with the primal-dual witness method; for reference, see @ravikumar2011 and @yang2012. Without loss of generality, we assume $s=p$ to avoid cumbersome notation. For other values of $s$, a similar proof holds with more complicated notation. Below we denote $\Theta_p$ as $\theta$, $\eta_p$ as $\eta$, and $\ell_p$ as $\ell$ for simplicity. We also denote the neighbours of $x_p$, $N(x_p)$, as $N$.
The sub-gradient condition for with respect to $(\theta^\T,\alpha_{1p})^\T$ is $$-\nabla\ell({\theta}, {\alpha}_{1p}; {X})+\lambda_n {Z}=0; \quad {Z}_{t}= \text{sgn} ({\theta}_{t}) \quad \text{for} \ t < p; \quad {Z}_p=0,
\label{subgrad}$$ where $$\text{sgn}(x)=\begin{cases}
x/|x|, & x\neq 0,\\
\gamma \in [-1,1], & x=0.
\end{cases}$$
We construct the oracle estimator $(\hat{\theta}_N^\T, \hat{\theta}_{\Delta}^\T, \hat{\alpha}_{1p})^\T$ as follows: first, let $\hat{\theta}_{\Delta}=0$ where $\Delta$ indicates the set of non-neighbours; second, obtain $\hat{\theta}_{N},\hat{\alpha}_{1p}$ by solving with an additional restriction that $\hat{\theta}_{\Delta}=0$; third, set $\hat{Z}_{t}=\text{sgn}(\hat{\theta}_{t})$ for $t \in N$ and $\hat{Z}_p=0$; last, estimate $\hat{Z}_{\Delta}$ from by plugging in $\hat{\theta}, \hat{\alpha}_{1p}$ and $\hat{Z}_{\Delta^c}$. To complete the proof, we verify that $(\hat{\theta}_N^\T, \hat{\theta}_{\Delta}^\T, \hat{\alpha}_{1p})^\T$ and $\hat{Z}=(\hat{Z}_{N}^\T, \hat{Z}_{\Delta}^\T, 0)^\T$ is a primal-dual pair of and recovers the true neighbourhood exactly.
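As an illustration only, the construction above can be sketched numerically for the simplest special case: a Gaussian node-conditional model, for which the loss is least squares. Everything below (the data, the true neighbourhood, the tuning parameter, and all variable names) is made up for this sketch; the restricted problem of the second step is solved through its stationarity condition under the assumption that the fitted signs agree with the true signs, which the final assertion then checks.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 10
N = [0, 1]                                   # true neighbourhood of x_p
theta_star = np.zeros(p)
theta_star[N] = [0.8, -0.6]
X = rng.normal(size=(n, p))
y = X @ theta_star + rng.normal(size=n)      # plays the role of x_p
lam = 0.3

# Step 2: oracle estimator restricted to the true support, from the
# stationarity condition -X_N'(y - X_N theta)/n + lam * sgn(theta) = 0,
# assuming sgn(theta_hat) = sgn(theta_star) on N.
XN = X[:, N]
theta_N = np.linalg.solve(XN.T @ XN / n,
                          XN.T @ y / n - lam * np.sign(theta_star[N]))

# Step 4: recover Z on the complement from the same stationarity condition.
resid = y - XN @ theta_N
Delta = [j for j in range(p) if j not in N]
Z_Delta = X[:, Delta].T @ resid / (n * lam)

# Strict dual feasibility and sign consistency imply exact support recovery.
assert np.max(np.abs(Z_Delta)) < 1
assert np.all(np.sign(theta_N) == np.sign(theta_star[N]))
```

When the strict dual feasibility check passes, the oracle pair is also a solution of the unrestricted problem, which is the essence of the primal-dual witness argument.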
Applying the mean value theorem on each element of $\nabla \ell(\hat{\theta}, \hat{\alpha}_{1p}; {X} )$ in the subgradient condition gives $$Q^*{\begin{pmatrix}}\hat{\theta}- {\theta}^* \\ \hat{\alpha}_{1p}-{\alpha}_{1p}^* {\end{pmatrix}}= -\lambda_n \hat{Z} +W^n+R^n,
\label{Taylorsubgrad}$$ where $W^n = \nabla \ell (\theta^*, \alpha_{1p}^* ; {X})$ is the sample score function evaluated at the true parameter $( {\theta^*}^\T, \alpha_{1p}^*)^{\T}$. Recall that $Q^*=-\nabla^2 \ell(\theta^*, \alpha_{1p}^*; {X})$ is the negative Hessian of $\ell(\theta,\alpha_{1p}; {X})$ with respect to $(\theta^\T, \alpha_{1p})^\T$, evaluated at the true values of the parameters. In , $R^n$ is the residual term from the mean value theorem, whose $k$th term is $$R^n_k = {[\nabla^2\ell(\bar{\theta}^k, \bar{\alpha}_{1p}^k ; {X} )-\nabla^2 \ell(\theta^*, \alpha_{1p}^* ; {X})]}^\T_k {\begin{pmatrix}}\hat{\theta}-\theta^* \\ \hat{\alpha}_{1p}-\alpha_{1p}^* {\end{pmatrix}},
\label{eqn:R}$$ where $\bar{\theta}^{k}$ denotes an intermediate point between ${\theta^*}$ and $ \hat{\theta}$, $\bar{\alpha}^{k}_{1p}$ denotes an intermediate point between $\alpha_{1p}^*$ and $\hat{\alpha}_{1p}$, and ${[\cdot]}^\T_k$ denotes the $k$th row of a matrix.
By construction, $\hat{\theta}_{\Delta}=0$. Thus, can be rearranged as $$\lambda_n \hat{Z}_{\Delta}=(W^n_{\Delta}+R^n_{\Delta})-Q^*_{\Delta \Delta^c}(Q^*_{\Delta^c \Delta^c})^{-1}(W^n_{\Delta^c}+R^n_{\Delta^c}-\lambda_n \hat{Z}_{\Delta^c}).
\label{strictd.eq}$$ We obtain an estimator $\hat{Z}_{\Delta}$ by plugging $\hat{\theta}, \hat{\alpha}_{1p}$ and $\hat{Z}_{\Delta^c}$ into . To complete the proof, we need to verify strict dual feasibility, $$\| \hat{Z}_{\Delta}\|_{\infty} < 1,
\label{strictdf}$$ and sign consistency, $$\text{sgn}(\hat{\theta}_{t})=\text{sgn}(\theta_{t}^*) \quad \text{for any} \ t \in N.
\label{signc}$$
In , $ \max_{l\in \Delta} \| Q^*_{l\Delta^c} (Q^*_{\Delta^c \Delta^c})^{-1}\|_1 \leq 1-a$ by Assumption \[irrep\]. The following lemmas characterize useful concentration inequalities regarding $W^n$, $R^n$, and $\hat{\theta}_{N} - \theta^*_{N}$. Proofs of Lemmas \[lemma.W\] and \[lemma.R\] are given in Sections \[lemmaW.proof\] and \[lemmaR.proof\], respectively.\
Suppose that $$\frac{8(2-a)}{a}\{\delta_2 \kappa_2 \log (2p)/n\}^{1/2} \leq \lambda_n \leq \frac{2(2-a)}{a} \delta_2 \kappa_2 M,$$ where $\delta_2$ is defined in Proposition \[prop.e2\], and $a$ and $\kappa_2$ are defined in Assumptions \[irrep\] and \[D\], respectively. Then, $$\text{pr}\left( \left.\|W^n\|_\infty>\frac{a \lambda_n }{8-4a} \right| \xi_2,\xi_1 \right) \leq \exp (-c_3 \delta_3 n),$$ where $\delta_3=1/(\kappa_2 \delta_2)$ and $c_3$ is some positive constant. \[lemma.W\]
Suppose that $\xi_1$ and $\|W^n\|_\infty \leq a\lambda_n /(8-4a) $ hold and $$\lambda_n \leq \min\left\{ \frac{a \Lambda_{1}^2 (d+1)^{-1}}{288 (2-a) \kappa_2 \Lambda_{2} }, \frac{\Lambda_{1}^2 (d+1)^{-1}}{12 \Lambda_{2} \kappa_3 \delta_1 \log p} \right\},$$ where $\delta_1$ is defined in Proposition \[prop.e1\], and $a$ and $\kappa_3$ are defined in Assumptions \[irrep\] and \[D\], respectively. Then with probability 1, $$\| \hat{\theta}_{N}-\theta^*_{N}\|_2 < \frac{10}{\Lambda_{1}}(d+1)^{1/2}\lambda_n, \quad \|R^n\|_\infty \leq \frac{a \lambda_n }{8-4a}.$$ \[lemma.R\]
We now continue with the proof of Theorem \[thm\]. Given Assumption \[tuning.range\], the conditions regarding $\lambda_n$ are met for Lemmas \[lemma.W\] and \[lemma.R\].
We now assume that $\xi_1, \xi_2$ and the event $\|W^n\|_\infty \leq a\lambda_n /(8-4a)$ are true so that the conditions for the two lemmas are satisfied. We derive the lower bound for the probability of these events at the end of the proof.
First, applying Lemma \[lemma.R\] and Assumption \[irrep\] to yields $$\begin{aligned}
\|\hat{Z}_{\Delta}\|_{\infty} \leq & \max_{l \in \Delta} \| Q^*_{l \Delta^c} (Q^*_{\Delta^c \Delta^c})^{-1}\|_1 \left( \|W^n_{\Delta^c}\|_{\infty} +\|R^n_{\Delta^c}\|_{\infty}+\lambda_n \|\hat{Z}_{\Delta^c}\|_{\infty} \right)/\lambda_n+\\
& \left( \|W^n_{\Delta}\|_{\infty}+\|R^n_{\Delta}\|_{\infty} \right)/\lambda_n\\
\leq & (1-a)+(2-a) \left\{\frac{a}{4(2-a)}+\frac{a}{4(2-a)} \right\} < 1.
\label{strictdf.result}
\end{aligned}$$
Next, applying Lemma \[lemma.R\] and a norm inequality to $\| \hat{\theta}_{N}-\theta^*_{N}\|_{\infty}$ gives $$\| \hat{\theta}_{N}-\theta^*_{N}\|_{\infty} \leq \| \hat{\theta}_{N}-\theta^*_{N}\|_2 < \frac{10}{\Lambda_{1}} (d+1)^{1/2} \lambda_n \leq \min_{t} |\theta_{t}|,
\label{signcrt.result}$$ since $\min_t |\theta_{t}| \geq 10 (d+1)^{1/2} \lambda_n/\Lambda_{1} $ by Assumption \[thetamin\]. The strict inequality in ensures that the sign of the estimator is consistent with the sign of the true value for all edges.
Equations \[strictdf.result\] and \[signcrt.result\] are sufficient to establish the result, i.e., $\hat{N}=N$. Let $A$ be the event $\|W^n\|_\infty \leq a\lambda_n /(8-4a) $. Recall that we have assumed events $A$, $\xi_1$, and $\xi_2$ to be true in order to prove and . We now derive the lower bound for the probability of $A\cap \xi_1 \cap \xi_2$.
Using the fact that $$\text{pr}\{ (A\cap \xi_1 \cap \xi_2)^c \}\leq \text{pr}(A^c \mid \xi_1\cap \xi_2) + \text{pr}\{(\xi_1 \cap \xi_2)^c\} \leq \text{pr}(A^c \mid \xi_1, \xi_2)+\text{pr}(\xi_1^c)+\text{pr}(\xi_2^c),$$ we know the probability of $A\cap \xi_1 \cap \xi_2$ satisfies $$\text{pr}\left\{ \left(\|W^n\|_\infty \leq \frac{a}{2-a}\frac{\lambda_n}{4}\right)\cap \xi_2\cap \xi_1 \right\}\geq 1-c_1 p^{-\delta_1+2}-\exp(-c_2 \delta_2^2 n)-\exp(-c_3 \delta_3 n),
$$ where $c_1$, $c_2$, and $c_3$ are constants from Proposition \[prop.e1\], Proposition \[prop.e2\], and Lemma \[lemma.W\]. Thus, the event $A\cap \xi_1 \cap \xi_2$ happens with high probability when the sample size $n$ is large. This completes the proof.
A Proof for Lemma \[lemma.W\] {#lemmaW.proof}
-----------------------------
Recall that $\eta^{(i)} = \alpha_{1p} + \sum_{t<p} \theta_{t} x_t^{(i)}$ and that we have assumed that $\alpha_{kp}$ is known for $k\geq 2$. We can rewrite the conditional density in as $$p(x_p \mid x_{-p}) \propto \exp \{ \eta x_p - D(\eta) \}.$$ For any $t<p$, $$W^n_t=\frac{\partial \ell}{\partial \theta_{t}}= \sum\limits_{i=1}^{n} \frac{\partial \ell}{\partial \eta^{(i)}} \frac{\partial \eta^{(i)}}{\partial \theta_{t}} = \frac{1}{n} \sum\limits_{i=1}^{n} \{x_p^{(i)}-D^{'}(\eta^{(i)})\}x_t^{(i)}.
\label{eqn:W}$$
Recall that $M$ is a large constant introduced in Assumption \[D\]. Suppose that $M$ is sufficiently large that $|\alpha_{1p}^*|+\sum_{ k <p} | \theta_{k}^*| < M/2$. For every $v$ such that $0<v<M/2$, $$\begin{aligned}
E \left( \left. \exp \left[ v x_t^{(i)}\left\{ x_p^{(i)}-D^{'}(\eta^{(i)})\right\}\right] \right| {X}_{- p}\right) = &
E \left\{ \left. \exp \left( v x_t^{(i)} x_p^{(i)}\right) \right| {X}_{- p}\right\} \exp \left\{- v x_t^{(i)}D^{'}(\eta^{(i)})\right\} \\
= & \exp \left\{ D(\eta^{(i)}+v x_t^{(i)})- D(\eta^{(i)}) \right\} \exp \left\{- v x_t^{(i)}D^{'}(\eta^{(i)})\right\} \\
= & \exp \left\{ v x_t^{(i)} D^{'}(\eta^{(i)})+ (v x^{(i)}_t )^2 \frac{D^{''}(\tilde{\eta})}{2} \right\} \exp \left\{- v x_t^{(i)}D^{'}(\eta^{(i)})\right\} \\
= &\exp \left\{ (v x^{(i)}_t )^2 \frac{D^{''}(\tilde{\eta})}{2} \right\}, \ \ \tilde{\eta} \in [\eta^{(i)}, \eta^{(i)}+ v x_t^{(i)} ], \\
\end{aligned}
\label{eqn:W.t}$$ where the second equality was derived using the properties of the moment generating function of the exponential family, and the third equality follows from a second-order Taylor expansion. Since $\tilde{\eta} \in [\eta^{(i)}, \eta^{(i)}+ v x_t^{(i)} ]$, the event $\xi_1$ implies that $$|\tilde{\eta}|\leq |\alpha_{1p}^*|+\sum_{ k <p} |x^{(i)}_{k} \theta_{k}^*| + |v x_t^{(i)}| \leq |\alpha_{1p}^*|+ (\sum_{ k <p} |\theta_{k}^*| + |v| )\underset{t,i}{\max} |x_t^{(i)}| \leq M \delta_1 \log p.
\label{eqn:eta.max}$$ Therefore, the condition of Assumption \[D\] is satisfied, and thus $|D^{''}(\tilde{\eta})|\leq \kappa_2$. Recalling that $\{x^{(i)}\}^n_{i=1}$ are independent samples, it follows from and that $$\begin{aligned}
E\left\{ \exp ( v n W^n_t) \mid \xi_2, \xi_1 \right\} = &E \left[ E \left\{ \exp ( v n W^n_t) \mid X_{-p}, \xi_2, \xi_1 \right\} \mid \xi_2, \xi_1 \right] \\
\leq & E \left[ \left. \exp \left\{v^2 \frac{\kappa_2}{2}\sum\limits_{i=1}^{n}(x^{(i)}_t)^2 \right\} \right| \xi_2, \xi_1 \right] \\
\leq & \exp ( n v^2 \kappa_2 \delta_2/2 ),
\end{aligned}
\label{eqn:W.pos}$$ where we use the event $\xi_2$ in the last inequality. Similarly, $$E \left\{\exp ( -n vW^n_t)\mid \xi_2,\xi_1\right\} \leq \exp ( n v^2 \kappa_2 \delta_2/2 ).
\label{eqn:W.neg}$$
Furthermore, one can see from a similar argument as in and that $$\begin{aligned}
E \left\{\exp ( n vW^n_p) \mid \xi_1 \right\} = & E \left\{ \left. \exp \left(v n \frac{\partial \ell}{\partial \alpha_{1p}} \right)\right| \xi_1 \right\} \\
= & \prod_{i=1}^n E \left( \left. \exp \left[ v \{x_p^{(i)}-D^{'}(\eta^{(i)}) \}\right] \right| \xi_1 \right) \leq \exp ( n \kappa_2 v^2/2 ) .
\end{aligned}$$
We focus on the discussion of and since $\delta_2 \geq 1$. For some $\delta$ to be specified, we let $v=\delta/(\kappa_2 \delta_2)$ and apply the Chernoff bound [@chernoff1952; @ravikumar2004] with and to get
$$\text{pr}(|W^n_t|>\delta \mid \xi_2, \xi_1) \leq \frac{ E \{\exp( v n W^n_t ) \mid \xi_2, \xi_1\} }{\exp(v n \delta)} + \frac{ E \{\exp( -v n W^n_t) \mid \xi_2, \xi_1 \} }{\exp(v n \delta)} \leq 2\exp \left( -n\frac{\delta^2}{2\kappa_2 \delta_2} \right).$$
Letting $\delta=a\lambda_n /(8-4a) $ and using the Bonferroni inequality, we get $$\begin{aligned}
\text{pr}\left( \left.\|W^n\|_\infty > \frac{a}{2-a} \frac{\lambda_n}{4} \right| \xi_2,\xi_1 \right) \leq & 2\exp \left\{ -n\frac{a^2 \lambda_n^2}{32 (2-a)^2\kappa_2 \delta_2} +\log (p) \right\}\\
\leq & \exp \left\{ - \frac{ a^2 \lambda_n^2}{64 (2-a)^2\kappa_2 \delta_2} n\right\}=\exp(-c_3 \delta_3 n),
\end{aligned}
\label{W}$$ where $\delta_3=1/(\kappa_2 \delta_2)$ and $c_3= a^2 \lambda_n^2/\{64 (2-a)^2\}$. In , we made use of the assumption that $\lambda_n \geq 8(2-a)\{\kappa_2 \delta_2 \log (2p)/n \}^{1/2} /a$, and we also require that $\lambda_n \leq 2(2-a) \kappa_2 \delta_2 M/a $ since $v=a\lambda_n/ \{ (8-4a) \kappa_2 \delta_2 \}\leq M/2$.
A Proof for Lemma \[lemma.R\] {#lemmaR.proof}
-----------------------------
We first prove that $ \| \hat{\theta}_{N}-\theta^*_N\|_2 < 10(d+1)^{1/2}\lambda_n/\Lambda_{1}.$
Following the method in @fan2004 and @ravikumar2010, we construct a function $F(u)$ as $$F(u) =- \ell (\theta^*+u_{-p}, \alpha_{1p}^*+u_p; {X})+\ell(\theta^*,\alpha_{1p}^*; {X})+\lambda_n\|\theta^*+u_{-p}\|_1-\lambda_n\|\theta^*\|_1,
\label{defineF}$$ where $u$ is a $p$-dimensional vector and $u_{\Delta}=0 $. $F(u)$ has some nice properties: (i) $F(0)=0$ by definition; (ii) $F(u)$ is convex in $u$ given the form of ; and (iii) by the construction of the oracle estimator $\hat{\theta}$, $F(u)$ is minimized by $\hat{u}$ with $\hat{u}_{-p} = \hat{\theta}-\theta^*$ and $\hat{u}_p = \hat{\alpha}_{1p}-\alpha_{1p}^*$.
We claim that if there exists a constant $B$ such that $F(u)>0$ for any $u$ such that $\|u\|_2=B$ and $u_{\Delta}=0$, then $\| \hat{u} \|_2 \leq B$. To show this, suppose that $\| \hat{u} \|_2 > B$ for such a constant. Let $t=B/\|\hat{u}\|_2$. Then, $t<1$, and the convexity of $F(u)$ gives $$F(t \hat{u}) \leq (1-t) F(0)+tF(\hat{u})\leq 0.$$ Thus, $\|t\hat{u}\|_2=B$ and $(t\hat{u})_{\Delta}=t\hat{u}_{\Delta}=0$, but $F(t \hat{u})\leq 0$, which is a contradiction.
Applying a Taylor expansion to the first term of $F(u)$ gives $$\begin{aligned}
F(u)=& -{\nabla \ell (\theta^*, \alpha_{1p}^*; {X})}^\T u- {u}^\T \nabla^2\ell(\theta^*+v u_{-p}, \alpha_{1p}^*+v u_p; X) u/2 +\lambda_n (\|\theta^*+u_{-p}\|_1-\|\theta^*\|_1 ) \\
= & \ \text{I}+\text{II}/2+\text{III} ,
\end{aligned}$$ for some $v \in [0,1]$. Recall that $u_{\Delta}=0$ as defined in . The gradient and Hessian are with respect to the vector $(\theta^T, \alpha_{1p})^\T$.
We now proceed to find a $B$ such that for $\|u\|_2=B$ and $u_{\Delta}=0$, the function $F(u)$ is always greater than 0. First, given that $\|W^n\|_\infty \leq a \lambda_n / (8-4a) $ and $a < 1$ assumed in Assumption \[irrep\], $$|\text{I}|=|{(W^n)}^\T u|\leq \| W^n\|_\infty \|u\|_1 \leq \frac{a}{2-a} \frac{\lambda_n}{4} (d+1)^{1/2} B \leq \frac{\lambda_n}{4} (d+1)^{1/2} B.$$ Next, by the triangle inequality and the Cauchy-Schwarz inequality, $$\text{III}\geq -\lambda_n\|u_{-p}\|_1 \geq -\lambda_n d^{1/2} \|u_{-p}\|_2 \geq -\lambda_n (d+1)^{1/2} B.$$
To bound II, we note that $$- \nabla^2\ell(\theta^*+vu_{-p}, \alpha_{1p}^*+vu_p;X)=\frac{1}{n} \sum\limits_{i=1}^{n} { }{x}^{(i)}_{0} {({ }{x}^{(i)}_{0})}^\T D^{''}(\eta^{(i)}_r),$$ where $x_0=(x_{-p}^{\T}, 1)^{\T}$ as in Assumption \[dep\], and $\eta^{(i)}_r=\alpha_{1p}^*+v u_p + \sum_{t<p} (\theta_t^*+v u_t)x_t^{(i)}$. Applying a Taylor expansion on each $D^{''}(\eta_r^{(i)})$ at $\eta^{(i)}=\alpha_{1p}^*+\sum_{t<p} \theta_t^* x_t^{(i)}$, we get $$\begin{aligned}
- \nabla^2\ell(\theta^*+vu_{-p}, \alpha_{1p}^*+vu_p; X)=& \frac{1}{n} \sum\limits_{i=1}^{n} { }{x}^{(i)}_{0} {({ }{x}^{(i)}_{0})}^\T D^{''}(\eta^{(i)}) +\frac{1}{n} \sum\limits_{i=1}^{n} { }{x}^{(i)}_{0} {({ }{x}^{(i)}_{0})}^\T D^{'''}(\tilde{\eta}^{(i)}) \left( v{u}^\T{ }{x}^{(i)}_{0} \right)\\
= & Q^* + \frac{1}{n} \sum\limits_{i=1}^{n} { }{x}^{(i)}_{0} {({ }{x}^{(i)}_{0})}^\T D^{'''}(\tilde{\eta}^{(i)}) \left( v{u}^\T{ }{x}^{(i)}_{0} \right),
\end{aligned}$$ where $\tilde{\eta}^{(i)} \in [\eta^{(i)}, \eta^{(i)}_r]$. Using the argument on $\tilde{\eta}$ in and the fact that $v\leq 1$ and $ \|u\|_2=B$, we can see that $\tilde{\eta}^{(i)}$ is in the range required for Assumption \[D\] to hold given $\xi_1$. Therefore, applying Assumption \[D\] we can write $$\begin{aligned}
\text{II} \geq & \min_{u: \|u\|_2=B, u_{\Delta}=0} \{ - u^T \nabla^2 \ell(\theta^*+vu_{-p}, \alpha_{1p}^*+vu_p; X) u \} \\
\geq & B^2 \Lambda_{\min} (Q^*_{\Delta^c \Delta^c} )-
\max_{v \in [0,1]}\max_{u: \|u\|_2=B , u_{\Delta}=0} {u}^\T \left\{ \frac{1}{n}\sum\limits_{i=1}^{n} D^{'''}(\tilde{\eta}^{(i)}) (v{u}^\T{ }{x}^{(i)}_{0} ) { }{x}^{(i)}_{0} ({{ }{x}^{(i)}_{0})}^\T \right\} u \\
\geq & \Lambda_{1} B^2 -
\max_{u: \|u\|_2=B , u_{\Delta}=0} \left\{\max_{i, v\in [0,1]}( v {u}^\T{ }{x}^{(i)}_{0}) \max_{\tilde{\eta}^{(i)}} D^{'''}(\tilde{\eta}^{(i)}) \frac{1}{n}\sum\limits_{i=1}^{n} ({u}^\T{ }{x}^{(i)}_{0})^2 \right\} \\
\geq & \Lambda_{1} B^2 - \kappa_3 \max_{i, u: \|u\|_2=B , u_{\Delta}=0, v \in [0,1]} ( v {u}^\T{ }{x}^{(i)}_{0} ) \max_{u: \|u\|_2=B , u_{\Delta}=0} \left\{ \frac{1}{n} \sum\limits_{i=1}^{n} ({u}^\T{ }{x}^{(i)}_{0})^2 \right\}. \quad \\
\end{aligned}$$ By inspection, the maximum of $u^\T x^{(i)}_0$ is non-negative. Thus, the maximum of $ vu^T { }{x}^{(i)}_{0}$ is achieved at $v=1$. Then, using $\xi_1$ and Assumption \[dep\], $$\begin{aligned}
\text{II} \geq & \Lambda_{1} B^2 - \kappa_3 B (d+1)^{1/2} \delta_1 \log (p) B^2 \Lambda_{\max} \left\{ \frac{1}{n}\sum\limits_{i=1}^{n} { }{x}^{(i)}_{0} {({ }{x}^{(i)}_{0})}^\T \right\} \\
\geq & \Lambda_{1} B^2 - \kappa_3 B^3 (d+1)^{1/2}\delta_1 \log (p) \Lambda_{2}.
\end{aligned}$$ Thus, if our choice of $B$ satisfies $$\Lambda_{1} - \delta_1 \log (p) B \kappa_3 (d+1)^{1/2} \Lambda_{2} \geq \frac{\Lambda_{1}}{2},
\label{lambdan.cond1}$$ then the lower bound of $F(u)$ is $$F(u) \geq -\frac{\lambda_n}{4} (d+1)^{1/2} B +\frac{\Lambda_{1}}{4} B^2-\lambda_n (d+1)^{1/2} B.$$ So, $F(u)>0$ for any $B>5 (d+1)^{1/2}\lambda_n /\Lambda_{1}$. We can hence let $$B= 6(d+1)^{1/2}\lambda_n/\Lambda_{1}
\label{eqn:B}$$ to get $$\| \hat{\theta}_N-\theta^*_N\|_2 \leq \| \hat{u}\|_2 \leq B = \frac{6}{\Lambda_{1}}(d+1)^{1/2}\lambda_n.
\label{eqn:uB}$$ And thus, $\| \hat{\theta}_N-\theta^*_N\|_2 < 10\lambda_n (d+1)^{1/2} / \Lambda_{1}$. It is easy to show that satisfies provided that $$\lambda_n \leq \frac{\Lambda_{1}^2(d+1)^{-1}}{12 \Lambda_{2} \kappa_3 \delta_1 \log p}.$$
To find the bound for $R^n$ defined in , we first recall that $(\bar{\theta}^{\T},\bar{\alpha}_{1p})^{\T}$ is an intermediate point between $( {\theta^*}^\T, \alpha_{1p}^*)^{\T}$ and $( \hat{\theta}^{\T}, \hat{\alpha}_{1p})^{\T}$. We denote $\bar{\eta}^{(i)}=\bar{\alpha}_{1p}+\sum_{t < p} \bar{\theta}_{t} x_t^{(i)}$, and observe that $|\bar{\eta}^{(i)}| \leq M \delta_1 \log p $ for $i=1, \ldots, n$ using the argument of , which implies that Assumption \[D\] is applicable. Thus, $$\begin{aligned}
\Lambda_{\max}\{\nabla^2\ell(\bar{\theta}, \bar{\alpha}_{1p};X )-\nabla^2 \ell(\theta^*, {\alpha}_{1p}^*;X)\}=&\underset{\|u\|_2=1}{\max} {u}^\T \{ \nabla^2\ell(\bar{\theta}, \bar{\alpha}_{1p};X )-\nabla^2 \ell(\theta^*, {\alpha}_{1p}^*;X)\}u \\
= & \underset{\|u\|_2=1}{\max} {u}^\T \left[ \frac{1}{n} \sum\limits_{i=1}^{n} \left\{ D^{''}(\bar{\eta}^{(i)} ) - D^{''} (\eta^{*}) \right\}{ }{x}^{(i)}_{0} {({ }{x}^{(i)}_{0})}^\T \right] u.
\end{aligned}$$ By Assumption \[D\], $|D^{''}(\bar{\eta}^{(i)} ) - D^{''} (\eta^{*})| \leq 2 \kappa_2 $, and so $$\begin{aligned}
\Lambda_{\max}\{ \nabla^2\ell(\bar{\theta}, \bar{\alpha}_{1p};X )-\nabla^2 \ell(\theta^*, {\alpha}_{1p}^*;X) \} = & \underset{\|u\|_2=1}{\max} {u}^\T \left[ \frac{1}{n} \sum\limits_{i=1}^{n} \left\{ D^{''}(\bar{\eta}^{(i)} ) - D^{''} (\eta^{*})\right\} { }{x}^{(i)}_{0} {({ }{x}^{(i)}_{0})}^\T \right] u \\
\leq & 2\kappa_2 \underset{\|u\|_2=1}{\max} {u}^\T \left\{ \frac{1}{n} \sum\limits_{i=1}^{n} { }{x}^{(i)}_{0} {({ }{x}^{(i)}_{0})}^\T \right\} u \leq 2\kappa_2 \Lambda_{2},
\end{aligned}$$ using Assumption \[dep\] at the last inequality. Hence, we arrive at $$\begin{aligned}
\|R^n\|_{\infty} \leq& \|R^n\|_2^2=\left\|\{\nabla^2\ell(\bar{\theta}, \bar{\alpha}_{1p};X )-\nabla^2 \ell(\theta^*, {\alpha}_{1p}^*;X)\}^\T {\begin{pmatrix}}\hat{\theta}-\theta^* \\ \hat{\alpha}_{1p}-{\alpha}_{1p}^* {\end{pmatrix}}\right\|_2^2 \\
\leq &\Lambda_{\max}\{\nabla^2\ell(\bar{\theta}, \bar{\alpha}_{1p};X )-\nabla^2 \ell(\theta^*, {\alpha}_{1p}^*;X) \} \left\| {\begin{pmatrix}}\hat{\theta}-\theta^* \\ \hat{\alpha}_{1p}-{\alpha}_{1p}^* {\end{pmatrix}}\right\|_2^2 \\
= &\Lambda_{\max}\{\nabla^2\ell(\bar{\theta}, \bar{\alpha}_{1p};X )-\nabla^2 \ell(\theta^*, {\alpha}_{1p}^*;X) \} \left\| \hat{u} \right\|_2^2 \\
\leq & \frac{72\kappa_2 \Lambda_{2} }{\Lambda^2_{1}} (d+1) \lambda_n^2,
\end{aligned}$$ where the last inequality follows from . So $\|R^n\|_\infty \leq a\lambda_n /(8-4a)$ if $$\lambda_n \leq \frac{a}{2-a} \frac{\Lambda_{1}^2}{288 (d+1)\kappa_2\Lambda_{2} },
\label{lambdan.cond2}$$ which holds by assumption.
A Proof for Corollary \[col1\] {#col1.proof}
------------------------------
The proof is essentially the same as the proof in Section \[proof.thm\]. We first show that a modified version of Lemma \[lemma.R\] holds with fewer conditions.
Suppose that [$p(x_p|x_{-p})$]{} follows a Gaussian distribution as in , and $\|W^n\|_\infty \leq a\lambda_n /(8-4a) $. Then $$\| \hat{\theta}_{N}-\theta^*_{N}\|_2 < \frac{10}{\Lambda_{1}}(d+1)^{1/2}\lambda_n, \quad \|R^n\|_\infty =0.$$ \[lemma.R2\]
To prove this lemma, we go through the argument in Section \[lemmaR.proof\]. But for $\text{II}$ we note that $$\begin{aligned}
\text{II} \geq & \min_{u: \|u\|_2=B, u_{\Delta}=0} \{ - u^T \nabla^2 \ell(\theta^*+vu_{-p}, \alpha_{1p}^*+vu_p; X) u \} \\
\geq & B^2 \Lambda_{\min} (-Q^*_{\Delta^c \Delta^c} )-
\max_{v \in [0,1]}\max_{u: \|u\|_2=B , u_{\Delta}=0} {u}^\T \left\{ \frac{1}{n}\sum\limits_{i=1}^{n} D^{'''}(\tilde{\eta}^{(i)}) (v{u}^\T{ }{x}^{(i)}_{0} ) { }{x}^{(i)}_{0} ({{ }{x}^{(i)}_{0})}^\T \right\} u \\
\geq & \Lambda_{1} B^2 -0,
\end{aligned}$$ since $D^{'''}(\tilde{\eta}^{(i)})=0$ for a Gaussian distribution. Therefore, $$F(u) \geq -\frac{\lambda_n}{4} (d+1)^{1/2} B +\frac{1}{2}\Lambda_{1} B^2-\lambda_n (d+1)^{1/2} B.$$ So, $F(u)> 0$ for $B > 5\lambda_n (d+1)^{1/2} /(2\Lambda_{1})$. We can hence let $B= 5(d+1)^{1/2}\lambda_n/\Lambda_{1}$ to get $$\| \hat{\theta}_N-\theta^*_N\|_2 \leq \| \hat{u}\|_2 \leq B= \frac{5}{\Lambda_{1}}(d+1)^{1/2}\lambda_n.$$ Thus, $\|\hat{\theta}_N - \theta^{*}_N \|_2<10 \lambda_n (d+1)^{1/2}/\Lambda_1$. And $\|R^n\|_\infty =0$ trivially as $D^{''}(\bar{\eta}^{(i)} ) - D^{''} (\eta^{*})=0$ for a Gaussian distribution.
With Lemma \[lemma.R2\], we can then verify and as in Section \[proof.thm\]. Finally, we drop the requirement of $\xi_1$ in the condition of Lemma \[lemma.R2\], so the probability of $\hat{N}=N$ is $$\text{pr}\left\{ \left(\|W^n\|_\infty \leq \frac{a}{2-a}\frac{\lambda_n}{4}\right)\cap \xi_2\right\}\geq 1-\exp(-c_2 \delta_2^2 n)-\exp(-c_3 \delta_3 n),$$ where $c_2$ and $c_3$ are constants from Proposition \[prop.e2\] and Lemma \[lemma.W\].
Additional Details of Data-Generation Procedure {#generation_detail}
-----------------------------------------------
Here we provide additional details of the data-generation procedure described in Section \[generate\]. In particular, we describe the approach used to guarantee that the conditions listed in Table \[tab1\] for strong compatibility of the conditional distributions are satisfied.
Recall from Table \[tab1\] that in order for strong compatibility to hold, the matrix $\Theta_{JJ}$ in that contains the edge potentials between the Gaussian nodes must be negative definite. If $\Theta_{JJ}$ generated as described in Section \[generate\] is not negative definite, then we define a matrix $T_{JJ}$ as $$T_{JJ}= -{ \Theta}_{JJ}+ \left\{ \Lambda_{\min}({\Theta}_{JJ}) - 0.1 \right\} { I} ,$$ where $\Lambda_{\min}({\Theta}_{JJ})$ denotes the minimum eigenvalue of $\Theta_{JJ}$. Thus, $T_{JJ}$ is guaranteed to be negative definite, as all its eigenvalues are no larger than $-0.1$. We then standardize $T_{JJ}$ so that its diagonal elements equal $-1$, $$\tilde{T}_{JJ}=\text{diag}(|T_{11}|^{-1/2}, \ldots,|T_{mm}|^{-1/2} )\ T_{JJ}\ \text{diag}(|T_{11}|^{-1/2}, \ldots,|T_{mm}|^{-1/2} ).$$ Finally, we replace $\Theta_{JJ}$ with $\tilde{T}_{JJ}$.
Table \[tab1\] also indicates that for strong compatibility to hold, the edge potential between two Poisson nodes must be negative. Therefore, after generating edge potentials as described in Section \[generate\], we replace $\theta_{st}$ with $-|\theta_{st}|$ where $x_s$ and $x_t$ are Poisson nodes.
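The two adjustments described above can be sketched in a few lines. The function name below is hypothetical, `theta_jj` plays the role of $\Theta_{JJ}$, and `theta_pp` collects the edge potentials between pairs of Poisson nodes:

```python
import numpy as np

def enforce_compatibility(theta_jj, theta_pp):
    """Post-process randomly generated edge potentials so that the strong
    compatibility conditions hold (illustrative sketch, not the paper's code)."""
    # If the Gaussian block is not negative definite, replace it:
    # T_JJ = -Theta_JJ + {Lambda_min(Theta_JJ) - 0.1} I has eigenvalues <= -0.1.
    if np.linalg.eigvalsh(theta_jj).max() >= 0:
        lam_min = np.linalg.eigvalsh(theta_jj).min()
        t_jj = -theta_jj + (lam_min - 0.1) * np.eye(theta_jj.shape[0])
        # Standardize so that the diagonal elements equal -1.
        d = np.diag(1.0 / np.sqrt(np.abs(np.diag(t_jj))))
        theta_jj = d @ t_jj @ d
    # Edge potentials between two Poisson nodes must be negative.
    theta_pp = -np.abs(theta_pp)
    return theta_jj, theta_pp

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4))
theta = (a + a.T) / 2
theta[0, 0] = 2.0        # guarantees a positive eigenvalue, so the branch runs
theta_jj, theta_pp = enforce_compatibility(theta, rng.normal(size=(3, 3)))
assert np.linalg.eigvalsh(theta_jj).max() < 0    # negative definite
assert np.allclose(np.diag(theta_jj), -1.0)      # standardized diagonal
assert (theta_pp <= 0).all()                     # Poisson-Poisson edges
```

The congruence transform $D T_{JJ} D$ with a positive diagonal $D$ preserves negative definiteness, so the standardization step cannot undo the first adjustment.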
---
abstract: 'Numerical inversion is a general detector calibration technique that is independent of the underlying spectrum. This procedure is formalized and important statistical properties are presented, using high energy jets at the Large Hadron Collider as an example setting. In particular, numerical inversion is inherently biased and common approximations to the calibrated jet energy tend to over-estimate the resolution. Analytic approximations to the closure and calibrated resolutions are demonstrated to effectively predict the full forms under realistic conditions. Finally, extensions of numerical inversion are presented which can reduce the inherent biases. These methods will be increasingly important to consider with degraded resolution at low jet energies due to a much higher instantaneous luminosity in the near future.'
address:
- 'Physics Department, Stanford University, Stanford, CA, 94305, USA'
- 'SLAC National Accelerator Laboratory, Stanford University, Menlo Park, CA 94025, USA'
- 'Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94704, USA'
author:
- Aviv Cukierman
- Benjamin Nachman
bibliography:
- 'myrefs.bib'
title: |
Mathematical Properties of\
Numerical Inversion for Jet Calibrations
---
Introduction {#sec:intro}
============
At a proton-proton collider like the Large Hadron Collider (LHC), quarks and gluons are produced copiously. These partons fragment to produce collimated streams of colorless particles that leave their energy in the calorimeters of the ATLAS and CMS detectors[^1]. The energy depositions are organized using jet clustering algorithms to stand as experimental proxies for the initiating quarks and gluons. The most widely used clustering scheme in ATLAS and CMS is the anti-$k_t$ algorithm [@Cacciari:2008gp] with radius parameter $R=0.4$. Even though the inputs to jet clustering (topological clusters for ATLAS [@topo1; @topo2] and particle flow objects for CMS [@pflow1; @pflow2]) are themselves calibrated, the average reconstructed jet energy is not the same as the true jet energy, because of various detector effects. To account for this, calibrations are applied to each reconstructed jet.
Numerical Inversion {#sec:numinversion}
===================
The jet calibration procedures of ATLAS [@Aad:2011he] and CMS [@Chatrchyan:2011ds; @Khachatryan:2016kdb] involve several steps to correct for multiple nearly simultaneous $pp$ collisions (pileup), the non-linear detector response, the $\eta$-dependence of the jet response, flavor-dependence of the jet response, and residual data/simulation differences in the jet response. The simulation-based corrections for the calorimeter non-linearities in transverse energy $E_\text{T}$ and pseudorapidity $\eta$ are derived using [*numerical inversion*]{}.
The purpose of this note is to formally document numerical inversion and describe (with proof) some of its properties. In what follows, $X$ will be a random variable representing the particle-jet $E_\text{T}$ and $Y$ will be a random variable representing the reconstructed jet $E_\text{T}$. Define[^2]
$$\begin{aligned}
\label{eq:fedef}
f_\text{me}(x)&=\mathbb{E}[Y|X=x]\\\label{eq:fedef2}
R_\text{me}(x) &= \mathbb{E}\left[\frac{Y}{x}\middle| X=x\right] = \frac{f_\text{me}(x)}{x}. \end{aligned}$$
Here the subscript indicates that we are taking the mean of the stated distribution, and ‘$\mathbb{E}$’ denotes the [*expected value*]{} ($=$ average). In practice, the core of the distribution of $Y|X=x$ is sometimes fit with a Gaussian, so the effective measure of central tendency is the mode of the distribution. Therefore, in analogy to Equations \[eq:fedef\] and \[eq:fedef2\], we define $$\begin{aligned}
f_\text{mo}(x)&=\text{mode}[Y|X=x]\\
R_\text{mo}(x) &= \text{mode}\left[\frac{Y}{x}\middle| X=x\right] = \frac{f_\text{mo}(x)}{x}. \end{aligned}$$ We will often drop the subscript of $f$ and $R$ for brevity in the text, when it is clear which definition we are referring to. If not specified, $f$ and $R$ will refer to a definition using a generic definition of central tendency. For all sensible notions of central tendency, we still have that $R(x) = \frac{f(x)}{x}$.
We will often think of $Y|X=x\sim \mathcal{N}(f(x),\sigma(x))$, where this notation means ‘$Y$ given $X=x$ is normally distributed with mean $f(x)$ and standard deviation $\sigma(x)$’; however, in this note, we will remain general unless stated otherwise. The function $R(x)$ is called the [*response function*]{}. Formally, numerical inversion is the following procedure:
1. Compute $f(x)$, $R(x)$.
2. Let $\tilde{R}(y) = R(f^{-1}(y))$.
3. Apply a jet-by-jet correction: $Y\mapsto Y/\tilde{R}(Y)$.
The intuition for step 2 is that for a given value $y$ drawn from the distribution $Y|X=x$, $f^{-1}(y)$ is an estimate for $x$ and then $R(f^{-1}(y))$ is an estimate for the response at the value of $x$ that gives rise to $Y$. Let $p(x)$ be the prior probability density function of $E_\text{T}$. Then we note that we do not want to use $\mathbb{E}[X|Y]$ instead of $f^{-1}(Y)$ because the former depends on $p(x)$, whereas $f$ (and thus $f^{-1}$) does not depend on $p(x)$, by construction.
We can see now our first result, which will be useful for the rest of this note:
[*The correction derived from numerical inversion is $Y \mapsto Z = f^{-1}(Y)$.*]{}
[**Proof.**]{} $$\begin{aligned}
\tilde{R}(Y) &= R(f^{-1}(Y))\nonumber\\
&= \frac{f(f^{-1}(Y))}{f^{-1}(Y)}\nonumber\\
&= \frac{Y}{f^{-1}(Y)}\nonumber\\
\rightarrow Z &= \frac{Y}{\tilde{R}(Y)}\nonumber\\
&= f^{-1}(Y) \hspace{1 cm} \Box\end{aligned}$$
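As a sanity check, the three-step procedure and the fact that it reduces to $Z=f^{-1}(Y)$ can be verified with a short script. The response function below is an arbitrary monotone toy, not a real calorimeter parametrization, and the hand-rolled bisection stands in for whatever root finder one prefers:

```python
import math

# Toy response f(x) = E[Y | X = x]; monotonically increasing on the range
# used below (an illustrative choice, not a real detector response).
def f(x):
    return 0.9 * x - 5.0 * math.log(x)

def f_inv(y, lo=10.0, hi=500.0, tol=1e-10):
    # Step 2: invert f numerically by bisection, which is where
    # "numerical inversion" gets its name.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def calibrate(y):
    # Step 3: jet-by-jet correction Y -> Y / R~(Y), with R~(y) = R(f^{-1}(y)).
    x_est = f_inv(y)
    r_tilde = f(x_est) / x_est       # R(x) = f(x)/x evaluated at x = f^{-1}(y)
    return y / r_tilde

x_true = 50.0
y = f(x_true)               # a jet reconstructed exactly at the mean response
z = calibrate(y)
assert abs(z - f_inv(y)) < 1e-6     # the correction reduces to Z = f^{-1}(Y)
assert abs(z - x_true) < 1e-4       # a mean-response jet maps back to x_true
```

In a real calibration, $f$ is not known in closed form but is measured from simulation, and the inversion is carried out numerically in exactly this sense.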
Closure {#sec:introclosure}
-------
One important property of numerical inversion is the concept of [*closure*]{}, which quantifies whether the new distribution $f^{-1}(Y|X=x)$ obtained after numerical inversion is centered at $x$, using the same notion of central tendency as in the definition of $f$. In particular, define the closure as
$$\begin{aligned}
C_\text{me}(x) \equiv \mathbb{E}\left[\frac{Z}{x}\middle| X=x\right] = \mathbb{E}\left[\frac{f^{-1}(Y)}{x}\middle| X=x\right]
\label{eqn:closure},\end{aligned}$$
and $C_\text{mo}$ is defined in an analogous way. The symbol $C$ will denote the closure for a generic notion of central tendency. We say that numerical inversion has [*achieved closure*]{} or simply [*closes*]{} if, for all $x$, $$\begin{aligned}
C = 1.\end{aligned}$$
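The closure can also be estimated directly by Monte Carlo. As a preview of the results below, the sketch uses a cube-root response $f(x)=(x/c)^{1/3}$, so that $f^{-1}(y)=cy^3$; the constant $c$, the particle-level $E_\text{T}$, and the 10% resolution are made-up numbers:

```python
import math
import random

c = 1e-3
x = 40.0
mu = (x / c) ** (1.0 / 3.0)          # mu = f(x)
sigma = 0.1 * mu                     # Y | X = x ~ N(f(x), sigma)

random.seed(0)
n = 200000
zs = [c * random.gauss(mu, sigma) ** 3 for _ in range(n)]
closure = sum(zs) / n / x            # Monte Carlo estimate of C_me(x)

# For a Gaussian, E[Y^3] = mu^3 + 3*mu*sigma^2, so C = 1 + 3*(sigma/mu)^2
# = 1.03 here: numerical inversion does not close for this response.
assert abs(closure - 1.03) < 0.01
```

Since the bias scales with the fractional resolution squared, such non-closure is most pronounced where the resolution is worst, i.e., at low jet energies.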
Assumptions and Definitions {#sec:assumptions}
---------------------------
The general results presented in the following sections are based on three assumptions listed below. These requirements should be satisfied by real detectors using calorimeters and trackers to reconstruct jets, given that the detector-level reconstruction is of sufficiently high quality.
1. $f^{-1}(y)$ exists for all $y$ in the support of $Y$, and $f^{-1}$ is single-valued. These may seem like obvious statements, but are not vacuous, even for a real detector. For example, pileup corrections can result in non-zero probability that $Y<0$, so the function $f$ must be computed for all possible values of $Y$, even if the transverse energy is negative. At the high-luminosity LHC (HL-LHC), the level of pileup will be so high that the jet energy resolution may be effectively infinite at low transverse energies (no correlation between particle-level and detector-level jet energy). In that case, $f^{-1}$ may not be single valued and numerical inversion cannot be strictly applied as described in Sec. \[sec:numinversion\].
2. $f(x)$ is monotonically increasing: $f'(x)>0$ for all $x$. This condition should trivially hold for any reasonable detector: detector-level jets resulting from particle-level jets with a higher $E_\text{T}$ should on average have a higher $E_\text{T}$ than those originating from a lower $E_\text{T}$ particle-level jet. Note that this is only true for a fixed $\eta$. Detector technologies depend significantly on $\eta$ and therefore the $\eta$-dependence of $f$ (for a fixed $x$) need not be monotonic. We note also that Assumption 1 implies that $f'(x)\ge 0$ or $f'(x) \le 0$ for all $x$; so Assumption 2 is equivalent to the additional assumptions that $f'(x)\ne 0$ for any $x$, and that $f'(x)>0$ (as opposed to $f'(x)<0$).
3. $f$ is twice-differentiable. The first derivative of $f$ has already been assumed to exist in Assumption 2, and the second derivative will also be required to exist for some of the later results. In practice we expect $f$ to be differentiable out to any desired order.
We note that as long as the above three assumptions hold, the theorems stated in the remainder of this paper are valid. In particular, this implies that $x$ could be any calibrated quantity that satisfies the above constraints, e.g. the jet transverse momentum $p_\text{T}$ or the jet mass $m$. We focus on the case of calibrating the $E_\text{T}$ for sake of concreteness.
We have separated the results in this paper into “Proofs” and “Derivations”. The “Proofs” require only the three assumptions stated above, and in particular do not assume anything about the shape of the underlying distributions, e.g. that the distributions $Y|X=x$ are Gaussian or approximately Gaussian. The “Derivations” are useful approximations that apply in the toy model described in \[sec:toy\_model\]; we expect them to apply in a wide variety of cases relevant to LHC jet physics. In particular, we expect these approximations to hold in cases with properties similar to the toy model presented here - e.g., good approximation of $f$ by its truncated Taylor series about each point and approximately Gaussian underlying distributions of $Y|X=x$.[^3]
Finally, in the rest of this paper, we write $\rho_{Y|X}(y|x)$ to represent the probability distribution of $Y$ given $X=x$, and $\rho_{Z|X}(z|x)$ to be the probability distribution of $Z$ given $X=x$. A standard fact about the probability distribution from changing variables is that
$$\begin{aligned}
\rho_{Z|X}(z|x) = f'(z)\rho_{Y|X}(f(z)|x).
\label{eqn:newdist}\end{aligned}$$
To ease the notation, we will often use $\rho_Y(y)$ and $\rho_Z(z)$ interchangeably with $\rho_{Y|X}(y|x)$ and $\rho_{Z|X}(z|x)$, respectively, when it is clear (as is usually the case) that we are conditioning on some true value $x$.[^4]
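Equation \[eqn:newdist\] can be checked numerically. The following Python sketch (illustrative only; the response $f(x)=x^{1/3}$ and width $\sigma=0.05$ are assumptions chosen for this example, not fitted detector values) samples $Y|X=x$ from a Gaussian, transforms to $Z=f^{-1}(Y)=Y^3$, and compares a binned Monte Carlo estimate of $\rho_{Z|X}$ at one point with $f'(z)\rho_{Y|X}(f(z)|x)$:

```python
import math
import random

random.seed(0)

# Toy model (assumption, not from the paper): f(x) = x**(1/3), Y|X=x ~ N(f(x), sigma)
x, sigma = 2.0, 0.05
mu = x ** (1.0 / 3.0)                  # f(x)

def rho_Y(y):
    # Gaussian density of Y given X = x
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def fprime(z):
    # f'(z) for f(z) = z**(1/3)
    return (1.0 / 3.0) * z ** (-2.0 / 3.0)

# Monte Carlo estimate of rho_Z at z0 from the fraction of samples in a narrow bin
N, z0, half_width = 200_000, 2.0, 0.01
count = sum(1 for _ in range(N) if abs(random.gauss(mu, sigma) ** 3 - z0) < half_width)
rho_Z_mc = count / (N * 2.0 * half_width)

# Eq. (newdist): rho_Z(z) = f'(z) * rho_Y(f(z))
rho_Z_formula = fprime(z0) * rho_Y(z0 ** (1.0 / 3.0))
```

The two estimates agree at the percent level for this sample size.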
Results
=======
In the subsequent sections, we will derive properties about the closure $C$ for three different definitions of the central tendency: mean (Sec. \[sec:mean\]), mode (Sec. \[sec:mode\]), and median (Sec. \[sec:median\]).
Mean {#sec:mean}
----
In the following section only, for brevity, we will let $f$ be $f_\text{me}$ and $C$ be $C_\text{me}$.
### Closure {#sec:meanclosuresection}
We can write the closure (Eq. \[eqn:closure\]) as
$$\begin{aligned}
C = \mathbb{E}\left[\frac{Z}{x}\middle| X=x\right] &=\frac{1}{x} \int dy \rho_{Y|X}(y|x) f^{-1}(y).
\label{eqn:closuredef}\end{aligned}$$
We find that for many functions $f$, numerical inversion does not close. This is summarized in the following result:
[*Let the notion of central tendency be the mean. If $f$ is linear, then numerical inversion closes. If $f$ is not linear, then numerical inversion does not necessarily close.*]{}
[**Proof.**]{} Let $f$ be linear, $f(x) = a(x+b)$. Then[^5] $f^{-1}(y) = \frac{y}{a}-b$. We can see that we necessarily have closure as Eq. \[eqn:closuredef\] can be written $$\begin{aligned}
C &=\frac{1}{x} \int dy \rho_{Y|X}(y|x) \left(\frac{y}{a}-b\right)\nonumber\\
&=\frac{1}{x} \left(\frac{1}{a}\mathbb{E}\left[Y\middle| X=x\right]-b\right)\nonumber\\
&=\frac{1}{x} \left(\frac{1}{a}f(x)-b\right)\nonumber\\
&=1.
\label{eqn:closure_linear_proof}\end{aligned}$$
Now let $f$ be nonlinear, and therefore $f^{-1}$ is also nonlinear. We note that the statement being proved is that $f$ does not necessarily close in this case, not that $f$ necessarily does not close. Thus, it is sufficient to find a counterexample that does not close in order to demonstrate this statement. Let $f(x) = \left(\frac{x}{c}\right)^{\frac{1}{3}}$ with $c> 0$ (so that $f$ is monotonically increasing for $x>0$), so that $f^{-1}(y) = cy^3$, which is a simple non-linear monotonic function. We will also need to specify some higher moments of the distribution $\rho_{Y|X}$. With the standard definitions of the variance and skew, respectively: $$\begin{aligned}
\sigma(x)^2&\equiv
\mathbb{E}\left[\left(Y-\mathbb{E}\left[Y\right]\right)^2\middle| X=x\right]\\
\sigma(x)^3\gamma_1(x) &\equiv \mathbb{E}\left[\left(Y-\mathbb{E}\left[Y\right]\right)^3\middle| X=x\right].\end{aligned}$$ We specify the weak conditions that $\sigma(x) >0$ (which is always true as long as $\rho_{Y|X}$ is not a delta function), and that $\gamma_1(x)=0$ (which is true if $\rho_{Y|X}$ is symmetric). Then, the closure (Eq. \[eqn:closuredef\]) can be written $$\begin{aligned}
C &=\frac{1}{x} \int dy \rho_{Y|X}(y|x) \left(cy^3\right)\nonumber\\
&=\frac{c}{x} \left(\mathbb{E}\left[Y^3\middle| X=x\right]\right).\end{aligned}$$ With $\gamma_1(x)=0$, we have that $$\begin{aligned}
\mathbb{E}\left[Y^3\middle| X=x\right] &= 3\sigma(x)^2\mathbb{E}\left[Y\middle| X=x\right] + \mathbb{E}\left[Y\middle| X=x\right]^3\nonumber\\
&=3\sigma(x)^2f(x)+f(x)^3\nonumber\\
&=3\sigma(x)^2\left(\frac{x}{c}\right)^{\frac{1}{3}}+\frac{x}{c}.\end{aligned}$$ Then we see we do not have closure, as $$\begin{aligned}
C &=\frac{c}{x} \left(\mathbb{E}\left[Y^3\middle| X=x\right]\right)\nonumber\\
&=\frac{c}{x} \left(3\sigma(x)^2\left(\frac{x}{c}\right)^{\frac{1}{3}}+\frac{x}{c}\right)\nonumber\\
&= 1 + 3\sigma(x)^2\left(\frac{x}{c}\right)^{-\frac{2}{3}}\nonumber\\
&>1. \hspace{1 cm}\Box\end{aligned}$$ Although the counterexample provided here only applies to a specific choice of $f(x)$ and $\rho_{Y|X}(y|x)$, we have reason to believe that closure is not achieved for non-linear $f$ in the vast majority of cases, as can be seen in more detail in \[sec:mean\_nonclosure\]. In addition, we can Taylor expand the closure $C$ to derive an equation for the first non-closure term: $$\begin{aligned}
C \approx 1-\frac{1}{2}\frac{f''(x)}{f'(x)^3}\frac{\sigma(x)^2}{x},
\label{eqn:closureseries_text}\end{aligned}$$ the derivation of which can be found in \[sec:mean\_nonclosure\].
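The non-closure of the mean and the first-order estimate in Eq. \[eqn:closureseries\_text\] can be verified numerically. The sketch below uses the counterexample from the proof with assumed illustrative values $c=1$, $x=2$, and $\sigma=0.05$; for this cubic inverse and a symmetric Gaussian $\rho_{Y|X}$, the first-order term reproduces the exact closure $1+3\sigma^2(x/c)^{-2/3}$:

```python
import random

random.seed(1)

# Counterexample from the text with c = 1 (assumed illustrative values):
# f(x) = x**(1/3), f_inv(y) = y**3, Y|X=x ~ N(f(x), sigma)
x, sigma, N = 2.0, 0.05, 200_000
mu = x ** (1.0 / 3.0)                     # f(x) = E[Y | X=x]

# Closure C = E[f_inv(Y) | X=x] / x, estimated by Monte Carlo
closure_mc = sum(random.gauss(mu, sigma) ** 3 for _ in range(N)) / (N * x)

# Exact result derived in the text: C = 1 + 3 sigma^2 (x/c)**(-2/3)
closure_exact = 1.0 + 3.0 * sigma ** 2 * x ** (-2.0 / 3.0)

# First-order estimate, Eq. (closureseries_text): C ~ 1 - f''/(2 f'^3) * sigma^2 / x
fp = (1.0 / 3.0) * x ** (-2.0 / 3.0)
fpp = -(2.0 / 9.0) * x ** (-5.0 / 3.0)
closure_first_order = 1.0 - 0.5 * fpp / fp ** 3 * sigma ** 2 / x
```

The Monte Carlo closure sits measurably above 1, and the first-order estimate coincides with the exact value for this particular toy (the inverse is cubic and the Gaussian has zero skew, so all higher terms vanish).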
Figure \[fig:mean\_closure\] shows the inherent non-closure in numerical inversion for a toy calculation using a response function $R(x)$ that is typical for ATLAS or CMS, and the first term of the higher-order correction (Eq. \[eqn:closureseries\_text\]).
![The closure of numerical inversion when using the mean to calibrate, using a toy model similar to conditions in ATLAS or CMS. In blue, the exact calculated closure. In red, the estimate of the closure using the first term of the higher-order correction given in Eq. \[eqn:closureseries\_text\]. For details of the model, see \[sec:toy\_model\].[]{data-label="fig:mean_closure"}](mean_closure.pdf){width="90.00000%"}
### Calibrated Resolution
We often care about how well we have resolved the transverse energy of the jets, which we quantify by examining the width of the calibrated distribution $Z$.
The final calibrated resolution of the reconstructed jets is defined to be the standard deviation of the $Z$ distribution, with $X=x$, which is given by $$\begin{aligned}
\hat{\sigma}(x)^2\equiv\sigma\left(Z|X=x\right)^2 \equiv \mathbb{E}\left[Z^2\middle| X=x\right]-\mathbb{E}\left[Z\middle| X=x\right]^2,\end{aligned}$$ and the fractional resolution is just given by $\sigma\left(\frac{Z}{x}|X=x\right)$. The fractional resolution, to first order in the Taylor series, is given by $$\begin{aligned}
\sigma\left(\frac{Z}{x}|X=x\right)=\frac{1}{x}\hat{\sigma}(x) \approx \frac{1}{x}\frac{\sigma(x)}{f'(x)},
\label{eqn:resolutionseries_text}\end{aligned}$$ the derivation of which can be found in \[sec:calibrated\_resolution\_calculation\]. Note that $f'(x)$ is *not* the response $R(x)=\frac{f(x)}{x}$. In particular, $f'(x)=R(x)+R'(x)x$, so $f'(x)\ne R(x)$ unless $R'(x)=0$, or equivalently $f(x)= kx$ for some constant $k$ (which is not the case at either ATLAS or CMS). Figure \[fig:mean\_resolution\] verifies Eq. \[eqn:resolutionseries\_text\] and compares it to the method of dividing the width of the distribution by $R$, which is a standard diagnostic technique when a full calibration is not applied.
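A corresponding numerical check of Eq. \[eqn:resolutionseries\_text\], again in the assumed toy model $f(x)=x^{1/3}$, $Y|X=x\sim\mathcal{N}(f(x),\sigma)$, also illustrates how poorly dividing by the response $R(x)$ estimates the calibrated resolution:

```python
import random
import statistics

random.seed(2)

# Toy model (assumption): f(x) = x**(1/3), Y|X=x ~ N(f(x), sigma), so Z = Y**3
x, sigma, N = 2.0, 0.05, 200_000
mu = x ** (1.0 / 3.0)

z = [random.gauss(mu, sigma) ** 3 for _ in range(N)]
sigma_hat_mc = statistics.pstdev(z)      # Monte Carlo sigma(Z | X=x)

fp = (1.0 / 3.0) * x ** (-2.0 / 3.0)     # f'(x)
R = mu / x                               # response R(x) = f(x)/x
sigma_hat_est = sigma / fp               # Eq. (resolutionseries_text): sigma / f'
sigma_over_R = sigma / R                 # naive estimate: divide by the response
```

Here $\sigma/f'(x)$ tracks the Monte Carlo width to well within a percent, while $\sigma/R(x)$ is off by a factor of a few.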
![The resolution of the $E_\text{T}$ distribution following numerical inversion when using the mean to calibrate, using a toy model similar to conditions in ATLAS or CMS. In blue, the exact calculated resolution. In red, the estimate of the resolution using the first-order approximation in Eq. \[eqn:resolutionseries\_text\]. In green, the uncalibrated resolution. In orange, the resolution when dividing by the response $R(x)$. For details of the model, see \[sec:toy\_model\].[]{data-label="fig:mean_resolution"}](mean_resolution.pdf){width="90.00000%"}
Mode {#sec:mode}
----
In the following section only, for brevity, we will let $f$ be $f_\text{mo}$ and $C$ be $C_\text{mo}$. The distribution $\rho_{Y|X}(y|x)$ is usually unimodal and Gaussian fits to the “core” of this distribution are essentially picking out the mode of the distribution. Therefore, the results of this section are a good approximation to what is often used in practice. We note that in the case that the underlying distribution is multimodal, it is not clear how to unambiguously define the mode of the distribution, and so the results of this section cannot be applied naively.
### Closure {#sec:modeclosuresection}
Assuming that the probability distribution function is unimodal, the mode is the point at which the first derivative of the function is 0: $$\begin{aligned}
f(x) = y^* \text{ s.t. } \rho'_Y(y^*) = 0.
\label{eqn:modedef}\end{aligned}$$ Then we can write the closure condition (Eq. \[eqn:closure\]) as $$\begin{aligned}
\text{mode}\left[\frac{Z}{x}\middle| X=x\right] = 1\nonumber\\
\rightarrow \text{mode}\left[Z\middle| X=x\right] = x\nonumber\\
\rightarrow \rho'_Z(x) = 0.
\label{eqn:modeclosuredef}\end{aligned}$$
Using this definition, we can prove a result similar to (but stronger than) the closure result for the mean in the previous section:
[*Let the notion of central tendency be the mode. Numerical inversion closes if and only if $f$ is linear.*]{}
[**Proof.**]{} We have from Eq. \[eqn:newdist\] that $$\begin{aligned}
\rho_Z(z) = f'(z)\rho_Y(f(z)).\end{aligned}$$ Therefore, $$\begin{aligned}
\rho'_Z(z) = f''(z)\rho_Y(f(z))+f'(z)^2\rho'_Y(f(z)),\end{aligned}$$ and $$\begin{aligned}
\rho'_Z(x) &= f''(x)\rho_Y(f(x))+f'(x)^2\rho'_Y(f(x))\nonumber\\
&=f''(x)\rho_Y(y^*)+f'(x)^2\rho'_Y(y^*)\nonumber\\
&=f''(x)\rho_Y(y^*),\end{aligned}$$ where $\rho_Y(y^*)>0$ since $y^*$ is the mode of the distribution $\rho_Y$. Then we see that if $f''(x)=0$, then $\rho'_Z(x)=0$ and closure is achieved. In contrast, if $f''(x)\ne 0$, then $\rho'_Z(x)\ne 0$ and closure is not achieved. $\hspace{0.5 cm}\Box$
The closure when using the mode to calibrate, to first order in the Taylor series, is given by $$\begin{aligned}
C \approx 1+\frac{f''(x)}{f'(x)^3}\frac{\tilde{\sigma}(x)^2}{x},
\label{eqn:closure_mode_text}\end{aligned}$$ where $\tilde{\sigma}(x)$ is the width of a Gaussian fitted to just the area near the peak of the function $\rho_{Y|X}(y|x)$ (defined precisely in the next section). The derivation of Eq. \[eqn:closure\_mode\_text\] can be found in \[sec:calibrated\_mode\_calculation\].
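The mode non-closure and the estimate in Eq. \[eqn:closure\_mode\_text\] can be checked without sampling, since $\rho_{Z|X}$ is known analytically from Eq. \[eqn:newdist\]. The sketch below (toy model assumptions as above: $f(x)=x^{1/3}$, Gaussian $Y|X=x$ with an illustrative $\sigma=0.05$) locates the mode of $Z$ by a grid search:

```python
import math

# Toy model (assumption): f(x) = x**(1/3), Y|X=x ~ N(f(x), sigma)
x, sigma = 2.0, 0.05
mu = x ** (1.0 / 3.0)

def log_rho_Z(z):
    # log of rho_Z(z) = f'(z) * rho_Y(f(z)), up to an additive constant
    return math.log((1.0 / 3.0) * z ** (-2.0 / 3.0)) \
        - 0.5 * ((z ** (1.0 / 3.0) - mu) / sigma) ** 2

# Mode of Z|X=x by a fine grid search around x
grid = [1.8 + i * 1e-5 for i in range(40_001)]   # covers 1.8 .. 2.2
z_star = max(grid, key=log_rho_Z)
closure_mode = z_star / x

# First-order estimate, Eq. (closure_mode_text): C ~ 1 + f''/f'^3 * sigma^2 / x
fp = (1.0 / 3.0) * x ** (-2.0 / 3.0)
fpp = -(2.0 / 9.0) * x ** (-5.0 / 3.0)
closure_mode_est = 1.0 + fpp / fp ** 3 * sigma ** 2 / x
```

For these parameters the mode of $Z$ sits about 1% below $x$, in close agreement with the first-order estimate.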
Figure \[fig:mode\_closure\] shows the inherent non-closure in numerical inversion when using the mode for calibration, for a toy calculation using a response function $R(x)$ that is typical for ATLAS or CMS, together with the first term of the higher-order correction given in Eq. \[eqn:closure\_mode\_text\].
![The closure of numerical inversion when using the mode to calibrate, using a toy model similar to conditions in ATLAS or CMS. In blue, the exact calculated closure. In red, the estimate of the closure using the first term of the higher-order correction given in Eq. \[eqn:closure\_mode\_text\]. For details of the model, see \[sec:toy\_model\].[]{data-label="fig:mode_closure"}](mode_closure.pdf){width="90.00000%"}
### Resolution
Let $z^*(x)$ be the mode of the distribution $Z|X=x$, which is not necessarily equal to $x$ given the above result. It is often the case at ATLAS and CMS that a Gaussian is fit to the distributions $\rho_{Y|X}(y|x)$ and $\rho_{Z|X}(z|x)$ only in the vicinity of the modes $f(x)$ and $z^*(x)$, respectively, since it is assumed that the distributions have a Gaussian core but non-Gaussian tails. The width of the Gaussian core found in this fit is then used as a measure of the resolution of the distribution. We thus define a “trimmed resolution” for a distribution $P$ with probability distribution function $\rho_P(p)$ about its mode $m$, which is valid if $P\sim\mathcal{N}(m,\tilde{\sigma})$ for $p$ near $m$: $$\begin{aligned}
\label{eq:tildesigma}
\tilde{\sigma}(P)^2 \equiv -\frac{\rho_P(m)}{\rho_P''(m)}.\end{aligned}$$
The definition in Eq. \[eq:tildesigma\] is chosen because it reduces to the usual variance for a Gaussian distribution. For the distributions $\rho_{Y|X}(y|x)$ and $\rho_{Z|X}(z|x)$, we thus have the trimmed resolutions $$\begin{aligned}
\tilde{\sigma}(x)^2\equiv\tilde{\sigma}\left(Y|X=x\right)^2 = -\frac{\rho_Y(f(x))}{\rho_Y''(f(x))} \\
\hat{\tilde{\sigma}}(x)^2\equiv\tilde{\sigma}\left(Z|X=x\right)^2 = -\frac{\rho_Z(z^*(x))}{\rho_Z''(z^*(x))}.
\label{mode_resolution_def}\end{aligned}$$
The calibrated fractional trimmed resolution $\tilde{\sigma}\left(\frac{Z}{x}|X=x\right)$, to first order in the Taylor series, is given by $$\begin{aligned}
\tilde{\sigma}\left(\frac{Z}{x}|X=x\right) = \frac{1}{x}\hat{\tilde{\sigma}}(x) \approx \frac{1}{x}\frac{\tilde{\sigma}(x)}{f'(x)},
\label{eqn:resolutionmode_text}\end{aligned}$$ the derivation of which can be found in \[sec:mode\_resolution\_calculation\].
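The trimmed width in Eq. \[mode\_resolution\_def\] and the first-order estimate in Eq. \[eqn:resolutionmode\_text\] can be compared numerically using finite differences of the analytic $\rho_{Z|X}$ (toy model assumptions as before: $f(x)=x^{1/3}$, Gaussian $Y|X=x$, illustrative $\sigma=0.05$):

```python
import math

# Toy model (assumption): f(x) = x**(1/3), Y|X=x ~ N(f(x), sigma)
x, sigma = 2.0, 0.05
mu = x ** (1.0 / 3.0)

def rho_Z(z):
    # rho_Z(z) = f'(z) * rho_Y(f(z)); the normalization cancels in Eq. (tildesigma)
    return (1.0 / 3.0) * z ** (-2.0 / 3.0) \
        * math.exp(-0.5 * ((z ** (1.0 / 3.0) - mu) / sigma) ** 2)

# Mode of Z via grid search, then trimmed width via Eq. (tildesigma)
grid = [1.8 + i * 1e-5 for i in range(40_001)]
z_star = max(grid, key=rho_Z)

h = 1e-3                                 # central finite difference for rho_Z''
rho_pp = (rho_Z(z_star + h) - 2.0 * rho_Z(z_star) + rho_Z(z_star - h)) / h ** 2
sigma_trim = math.sqrt(-rho_Z(z_star) / rho_pp)

# First-order estimate, Eq. (resolutionmode_text), with sigma_tilde(x) = sigma here
fp = (1.0 / 3.0) * x ** (-2.0 / 3.0)
sigma_trim_est = sigma / fp
```

The two agree at the half-percent level for these parameters.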
Median {#sec:median}
------
In the previous sections we have examined using the mean or the mode to define $f$ and $C$, and found that both results do not lead to closure in general. We propose a new definition, using the median of the reconstructed jet $E_\text{T}$ distributions: $$\begin{aligned}
f_\text{med}(x)&=\text{median}[Y|X=x]\\
R_\text{med}(x) &= \text{median}\left[\frac{Y}{x}\middle| X=x\right] = \frac{f_\text{med}(x)}{x}. \end{aligned}$$ We define $C_\text{med}$ analogously. In the following section only, for brevity, we will let $f$ be $f_\text{med}$ and $C$ be $C_\text{med}$.
### Closure {#sec:medianclosuresection}
The median of the distribution is the point at which 50% of the distribution is above and 50% is below: $$\begin{aligned}
f(x) = y^* \text{ s.t. } \int_{-\infty}^{y^*} \rho_Y(y) dy = 0.5.\end{aligned}$$ Then the closure condition (Eq. \[eqn:closure\]) can be written $$\begin{aligned}
\text{median}\left[\frac{Z}{x}\middle| X=x\right] = 1\nonumber\\
\rightarrow \text{median}\left[Z\middle| X=x\right] = x\nonumber\\
\rightarrow \int_{-\infty}^{x} \rho_Z(z) dz = 0.5.
\label{eqn:medianclosuredef}\end{aligned}$$ We then have the following result under this definition of central tendency:
[*Let the notion of central tendency be the median. Then numerical inversion always closes.*]{}
[**Proof.**]{} We have from Eq. \[eqn:newdist\] that $$\begin{aligned}
\rho_Z(z) = f'(z)\rho_Y(f(z)).\end{aligned}$$ So the closure condition in Eq. \[eqn:medianclosuredef\] becomes $$\begin{aligned}
0.5 &= \int_{-\infty}^{x} \rho_Z(z) dz\nonumber\\
&=\int_{-\infty}^{x} f'(z)\rho_Y(f(z)) dz.\end{aligned}$$ Then with $u = f(z), du = f'(z) dz$ we have $$\begin{aligned}
0.5 &= \int_{-\infty}^{f(x)} \rho_Y(u) du\nonumber\\
&=\int_{-\infty}^{y^*} \rho_Y(u) du\nonumber\\
&=0.5. \hspace{1 cm} \Box\end{aligned}$$
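The median closure can also be demonstrated directly by Monte Carlo: since $f$ is monotonic, the sample median commutes with $f^{-1}$, which is exactly the mechanism of the proof. The sketch below uses the same assumed toy model as before ($f(x)=x^{1/3}$ with a symmetric Gaussian $Y|X=x$, so $f_\text{med}=f$) and calibrates jet-by-jet:

```python
import random
import statistics

random.seed(3)

# Toy model (assumption, not from the paper): f(x) = x**(1/3), so f_inv(y) = y**3.
# Y|X=x is a symmetric Gaussian, so f_med(x) = f(x).
x, sigma, N = 2.0, 0.05, 200_001      # odd N: the sample median is a data point
mu = x ** (1.0 / 3.0)                 # f_med(x)

y = [random.gauss(mu, sigma) for _ in range(N)]
z = [yi ** 3 for yi in y]             # jet-by-jet calibration: Z = f_med^{-1}(Y)

# Monotonicity => median(Z) = f_med^{-1}(median(Y)), so the closure is 1
closure_median = statistics.median(z) / x
```

Up to the statistical fluctuation of the sample median itself, the closure is exactly 1, independent of how non-linear $f$ is.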
### Resolution
A natural definition of resolution when using the median to calibrate jets is the 68% interquantile range, defined as follows for a distribution $P$ with probability density function $\rho_P(p)$:
With $I_P^-$ and $I_P^+$ defined by $$\begin{aligned}
\int_{-\infty}^{I_P^-}\rho_P(p)dp \equiv \Phi(-1),\\
\int_{-\infty}^{I_P^+}\rho_P(p)dp \equiv \Phi(+1);\end{aligned}$$ the 68% interquantile range is defined as $$\begin{aligned}
\sigma_\text{IQR}(P) \equiv \frac{1}{2}\left(I_P^+-I_P^-\right).\end{aligned}$$ Here $\Phi(x)=\frac{1}{2}\text{erfc}\left(\frac{-x}{\sqrt{2}}\right)$ is the cumulative distribution function of the normal distribution. The definition is designed so that if $P\sim\mathcal{N}(\mu,\sigma)$ then $\sigma_\text{IQR}(P)=\sigma$. The quantity $\sigma_\text{IQR}$ is called the “68% interquantile range” because $\Phi(+1)-\Phi(-1) \approx 0.68$. For the distributions $Y|X=x$ and $Z|X=x$, define:
$$\begin{aligned}
\sigma_\text{IQR}(x) = \sigma_\text{IQR}(Y|X=x)\\
\hat{\sigma}_\text{IQR}(x) = \sigma_\text{IQR}(Z|X=x).\end{aligned}$$
Then we can see the following result for the calibrated resolution $\sigma_\text{IQR}(\frac{Z}{x}|X=x)$:
[*The 68% IQR of the calibrated response distribution is given by $\sigma_\text{IQR}(\frac{Z}{x}|X=x) = \frac{1}{2x}\left(f^{-1}(I_Y^+)-f^{-1}(I_Y^-)\right)$.*]{}
[**Proof.**]{} We have $$\begin{aligned}
\int_{-\infty}^{I_Z^-}\rho_Z(z)dz = \Phi(-1)\\
\int_{-\infty}^{I_Z^+}\rho_Z(z)dz = \Phi(+1).\end{aligned}$$ From Eq. \[eqn:newdist\], $$\begin{aligned}
\rho_Z(z) = f'(z)\rho_Y(f(z)),\end{aligned}$$ so that $$\begin{aligned}
\Phi(-1) &= \int_{-\infty}^{I_Z^-}f'(z)\rho_Y(f(z))dz\nonumber\\
&=\int_{-\infty}^{f(I_Z^-)}\rho_Y(u)du\nonumber\\
\rightarrow f(I_Z^-) &= I_Y^-\\
\Phi(+1) &= \int_{-\infty}^{I_Z^+}f'(z)\rho_Y(f(z))dz\nonumber\\
&=\int_{-\infty}^{f(I_Z^+)}\rho_Y(u)du\nonumber\\
\rightarrow f(I_Z^+) &= I_Y^+.\end{aligned}$$ Therefore, $$\begin{aligned}
\sigma_\text{IQR}(Z|X=x) &= \frac{1}{2}\left(I_Z^+-I_Z^-\right)\nonumber\\
&=\frac{1}{2}\left(f^{-1}(I_Y^+)-f^{-1}(I_Y^-)\right),\end{aligned}$$ and $$\begin{aligned}
\sigma_\text{IQR}\left(\frac{Z}{x}|X=x\right)=\frac{1}{2x}\left(f^{-1}(I_Y^+)-f^{-1}(I_Y^-)\right).\hspace{1 cm}\Box
\label{eqn:resolutionmedian_text}\end{aligned}$$
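For a Gaussian $Y|X=x$ one has $I_Y^{\pm}=f(x)\pm\sigma$, so the result above gives a closed form that can be checked against empirical quantiles (toy model assumptions as before: assumed $f(x)=x^{1/3}$ and illustrative $\sigma=0.05$):

```python
import math
import random

random.seed(4)

# Toy model (assumption): f(x) = x**(1/3), Y|X=x ~ N(f(x), sigma), Z = Y**3
x, sigma, N = 2.0, 0.05, 200_000
mu = x ** (1.0 / 3.0)

z = sorted(random.gauss(mu, sigma) ** 3 for _ in range(N))

# Empirical 68% IQR of Z from the quantiles at Phi(-1) and Phi(+1)
phi_lo = 0.5 * math.erfc(1.0 / math.sqrt(2.0))   # Phi(-1), about 0.1587
phi_hi = 1.0 - phi_lo                            # Phi(+1), about 0.8413
iqr_mc = 0.5 * (z[int(phi_hi * N)] - z[int(phi_lo * N)])

# Result above: I_Y^+/- = mu +/- sigma for a Gaussian, so
# sigma_IQR(Z) = (f_inv(mu + sigma) - f_inv(mu - sigma)) / 2
iqr_exact = 0.5 * ((mu + sigma) ** 3 - (mu - sigma) ** 3)
```

The empirical and closed-form values agree to within the statistical precision of the quantile estimates.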
Discussion {#sec:discussion}
==========
After a quick summary in Section \[sec:summary\] of the results presented so far, Section \[sec:recommendations\] discusses the benefits and drawbacks of various methods of calibration, and Sections \[sec:iterated\_text\] and \[sec:corrected\_numerical\_inversion\_text\] describe extensions of numerical inversion that may help to improve closure.
Summary of Results {#sec:summary}
------------------
In Section \[sec:introclosure\] we defined the concept of closure in the process of calibrating the $E_\text{T}$ of jets. We found in Sections \[sec:meanclosuresection\] and \[sec:modeclosuresection\] that when using the mean or mode, respectively, of the distribution $Y|X=x$ to calibrate, closure is not necessarily achieved; with the response functions found at ATLAS or CMS, it is expected that numerical inversion will not close. We also provided estimates for the non-closure for the mean (Eq. \[eqn:closureseries\_text\]) and for the mode (Eq. \[eqn:closure\_mode\_text\]). In those estimates we find that as the underlying resolution $\sigma(x)$ or $\tilde{\sigma}(x)$ of the uncalibrated jet distribution $Y|X=x$ increases, the non-closure gets worse. This indicates that the non-closure issues raised in this note will become more important as the LHC moves to conditions with higher pileup in the future.
A new calibration scheme based on the median of $Y|X=x$ is proposed in Section \[sec:medianclosuresection\]. With this method of calibration, closure is always achieved.
Each section also explored various definitions of the resolution of the fractional calibrated jet distribution $\frac{Z}{x}|X=x$, where the most natural definition depends on the manner in which calibration has been performed (i.e., whether using the mean, mode, or median to calibrate). We provided useful estimates for the standard deviation (Eq. \[eqn:resolutionseries\_text\]), the trimmed Gaussian width (Eq. \[eqn:resolutionmode\_text\]), and an exact formula for the 68% IQR (Eq. \[eqn:resolutionmedian\_text\]). These expressions can be used to quickly estimate the final resolution of a jet algorithm without having to actually apply the calibration jet-by-jet.
Recommendation for Method of Calibration {#sec:recommendations}
----------------------------------------
As mentioned in the summary above, for a non-linear response function closure is not necessarily achieved when using the mode or mean to calibrate, whereas closure is necessarily achieved when using the median. While this indicates that the median is a useful metric to use if closure is the main objective, we accept that there might be reasons to use the mode instead (for example, if the tails of $\rho_{Y|X}(y|X=x)$ are cut off, then the mode should stay constant while the median and mean will change). Thus we leave it to the reader to decide which method of calibration is most appropriate to use for their specific purposes. To that end, we also discuss below methods to improve the closure when the mode is used to calibrate.
Iterated Numerical Inversion {#sec:iterated_text}
----------------------------
A natural question is whether it is useful for the purposes of achieving closure to implement numerical inversion again on the calibrated jet collection, if closure has not been achieved the first time. We define the *iterated numerical inversion* process as follows:
With $C(x)$ defined as in Eq. \[eqn:closure\], let $$\begin{aligned}
R_{\text{new}}(x) &\equiv C(x)\\
f_{\text{new}}(x) &\equiv C(x)x.\end{aligned}$$ Then, apply numerical inversion on the calibrated distribution $Z$: $$\begin{aligned}
Z\mapsto Z_\text{new} = f_{\text{new}}^{-1}(Z).\end{aligned}$$ We then ask if the closure of this new distribution, $C_\text{new}(x)$ (defined analogously as in Eq. \[eqn:closure\]), is closer to 1 than $C(x)$. In general, this is a difficult question to answer, but we have derived analytic approximations when the mode is used to derive the calibration (see \[sec:iterated\]). Iterating numerical inversion does *not* always help:
$$\begin{aligned}
\frac{|C_\text{new}(x)-1|}{|C(x)-1|} &\approx \frac{12f''(x)^2\tilde{\sigma}(x)^2}{f'(x)^4}\label{eqn:iterated_closure_ratio}.\end{aligned}$$
If the ratio in Eq. \[eqn:iterated\_closure\_ratio\] is greater than 1, then the closure gets worse after a second iteration of numerical inversion. In particular, as $\tilde{\sigma}(x)$ gets larger, the iterated closure gets worse relative to the original closure. So we expect at higher levels of pileup that iterating numerical inversion will not be useful. In Figure \[fig:mode\_closure\_bigs\] we can see that iterating numerical inversion does make the closure worse than the original closure, in a model simulating higher pileup conditions. The next section provides another scheme to correct for the residual non-closure that does not require iterating the process of numerical inversion.
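Iterated numerical inversion can be explored in the same assumed toy model as before ($f(x)=x^{1/3}$, Gaussian $Y|X=x$ with illustrative $\sigma=0.05$; the mode of $Z|X=x_t$ has a closed form here). In this mildly non-linear example one expects the second iteration to improve the closure; the sketch checks that directly rather than through Eq. \[eqn:iterated\_closure\_ratio\]:

```python
import math

# Toy model (assumption, not from the paper): f(x) = x**(1/3),
# Y|X=x ~ N(f(x), sigma); closure is measured with the mode.
sigma = 0.05
x = 2.0

def rho_Z(z):
    # rho_Z(z) = f'(z) * rho_Y(f(z)) for the fixed truth value x, up to normalization
    return (1.0 / 3.0) * z ** (-2.0 / 3.0) * math.exp(
        -0.5 * ((z ** (1.0 / 3.0) - x ** (1.0 / 3.0)) / sigma) ** 2)

def f_new(xt):
    # f_new(xt) = C(xt) * xt = mode[Z | X=xt]; closed form for this toy model
    m = xt ** (1.0 / 3.0)
    u = 0.5 * (m + math.sqrt(m * m - 8.0 * sigma ** 2))
    return u ** 3

def f_new_prime(w, h=1e-6):
    return (f_new(w + h) - f_new(w - h)) / (2.0 * h)

def rho_Z_new(w):
    # density of Z_new = f_new^{-1}(Z), again via Eq. (newdist)
    return f_new_prime(w) * rho_Z(f_new(w))

grid = [1.8 + i * 1e-5 for i in range(40_001)]
C = f_new(x) / x                          # original mode closure
C_new = max(grid, key=rho_Z_new) / x      # closure after one extra iteration
```

For these parameters the residual non-closure after the second iteration is substantially smaller than the original one.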
Corrected Numerical Inversion {#sec:corrected_numerical_inversion_text}
-----------------------------
As noted above, when using the mean or mode of the distribution $Y|X=x$ to calibrate, closure is not achieved in general. With the closed-form estimates of the non-closure provided in the text, one might think to simply “subtract off” the non-closure. However the non-closure estimates provided are in terms of the truth $E_\text{T}$ value $x$. Since $x$ is not available in data, a sensible proxy is to use numerical inversion as an estimate for $x$. This is actually equivalent to iterated numerical inversion, which as shown in the previous section does not always help.
Another possibility is to use a different original response function to perform the calibration. Suppose that instead of using $f(x)=R(x)x$, there was a new function $g(x)\ne f(x)$ such that if the calibration is performed with this new function, $Y\mapsto Z_\text{corr}=g^{-1}(Y)$, the new calibrated distribution $Z_\text{corr}|X=x$ does achieve closure or gets closer to achieving closure than when calibrating using $f$.
We define the *corrected numerical inversion* process as follows:
1. Calculate $f(x)=f_\text{mo}(x)=\text{mode}[Y|X=x]$.
2. Let $g(x) = g(x;f(x))$ be a calibration function depending on the fitted function $f(x)$.
3. Apply the calibration $Y\mapsto Z_\text{corr}=g^{-1}(Y)$ jet-by-jet.
We can then examine the closure $$\begin{aligned}
C_\text{corr}(x) = \text{mode}\left[\frac{Z_\text{corr}}{x}\Big|X=x\right],\end{aligned}$$ and say that closure is achieved if $$\begin{aligned}
C_\text{corr}(x) \equiv 1.\end{aligned}$$ We examine the case of using the mode to measure closure, again because in practice that is what is often used when there are significant non-Gaussian tails.
One way to specify $g$ is by explicitly requiring closure. In \[sec:corrected\_numerical\_inversion\_calculation\] it is shown that in the case that closure is achieved exactly, $g$ necessarily satisfies the differential equation[^6] $$\begin{aligned}
0=g''(x)-g'(x)^2\frac{g(x)-f(x)}{\tilde{\sigma}(x)^2}.
\label{eqn:diffeq}\end{aligned}$$ In principle Eq. \[eqn:diffeq\] can be solved numerically given numerical fitted values $f(x)$ and $\tilde{\sigma}(x)$, though in practice such a method may prove intractable.
Another way to specify $g$ is to use external parameters $$\begin{aligned}
g(x) = g(x;f(x);a_1,...,a_n).\end{aligned}$$ Then the parameters $a_1,...,a_n$ can be chosen such that the closure is as close to 1 as possible. This is the method used to find the corrected calibration curve in Figure \[fig:mode\_closure\_bigs\], and explained in more detail in \[sec:corrected\_numerical\_inversion\_parameterization\]. The absolute non-closure $|C-1|$ is significantly smaller than the original non-closure, even in a model simulating very high pileup conditions.
![The top plot shows the closure of numerical inversion when using the mode to calibrate, using a toy model similar to conditions in ATLAS or CMS but increasing $\sigma(x)$ by a factor of 1.4 in order to simulate higher pileup conditions. In blue, the original closure as defined in Eq. \[eqn:closure\]. In green, the closure after iterating numerical inversion once as in Section \[sec:iterated\_text\]. In orange, the closure after using the parameterized corrected numerical inversion technique as in Section \[sec:corrected\_numerical\_inversion\_text\]. For details of the model, see \[sec:toy\_model\]. The bottom plot shows the absolute non-closure $|C-1|$. In particular, at low $E_\text{T}$, iterating numerical inversion does worse, while corrected numerical inversion does better than the original calibration.[]{data-label="fig:mode_closure_bigs"}](mode_closure_bigs.pdf){width="90.00000%"}
Conclusions {#sec:conclusion}
===========
Jets are ubiquitous at the LHC and their calibration is one of the most important preprocessing steps for data analysis. The standard technique for jet calibration is numerical inversion. This paper has formally defined numerical inversion and derived many of its properties. The three most important results are the following:
- [**Numerical inversion is inherently biased**]{}: calibrated reconstructed jets are not guaranteed to be centered around the corresponding particle-level jet $E_\text{T}$. However, when the median is used for the notion of ‘centered’, closure is guaranteed. In practice, where the detector response is non-linear, closure is never achieved when ‘centered’ means the mode of the response distribution.
- [**Numerical inversion can be approximated**]{}: the resolution of the calibrated jets is not well-approximated by the uncalibrated jet resolution divided by the response. Calibrated resolutions can still be simply estimated, but they depend on the derivative of the calibration function and not on the response.
- [**Numerical inversion can be improved**]{}: Modified calibration functions can be constructed to achieve a better closure than using the same measure of central tendency for deriving the calibration function and assessing the closure.
These considerations may become even more important in the future when fluctuations in the detector response increase due to the presence of larger contributions from pileup. Numerical inversion is a general technique that can be applied to any detector calibration where a reliable simulation exists for matching objects before and after the detector response. The results presented here may therefore have a broader applicability than to jets, the LHC, or even high energy physics.
Acknowledgments
===============
We would like to thank Alexx Perloff for his detailed comments on the manuscript and for useful discussions about the CMS jet calibration procedure. Additionally, we acknowledge Francesco Rubbo for his helpful suggestions on the manuscript. For a portion of this work, BN was supported by the NSF Graduate Research Fellowship under Grant No. DGE-4747 and by the Stanford Graduate Fellowship.
Gaussian Invariance Lemma {#sec:lemma}
=========================
[*Let $X\sim \mathcal{N}(\mu,\sigma)$ and $f$ be some function such that $f'(x)>0$. Then, $f(X)\sim\mathcal{N}(\mu',\sigma')$ if and only if $f(x)$ is linear in $x$.*]{}
[**Proof.**]{} The converse is a well-known result, and can be obtained directly from application of Eq. \[eqn:newdist\].
Now suppose that $f(X)\sim\mathcal{N}(\mu',\sigma')$. Let $Y=(X-\mu)/\sigma$ and define $$\begin{aligned}
g(y)=\frac{f(\sigma y+\mu)-\mu'}{\sigma'},\end{aligned}$$ so that $Y$ and $Z=g(Y)$ both have a standard normal distribution. Furthermore, $$\begin{aligned}
g'(y) = \frac{\sigma}{\sigma'}f'(\sigma y+\mu) > 0,\end{aligned}$$ so $g$ is monotonic.
We then can write for any $c$: $$\begin{aligned}
\nonumber
\Phi(c)=\Pr(Y<c)&=\Pr(g(Y)<g(c))\\\nonumber
&=\Pr(Z<g(c))\\
&=\Phi(g(c)),\end{aligned}$$ where $\Phi(x)$ is the normal distribution cumulative distribution function. Since $\Phi$ is invertible, we then have that $g(c)=c$. Inserting the definition of $g$ then gives us the final result: $$\begin{aligned}
f(x)=\frac{\sigma'}{\sigma} (x-\mu)+\mu'.\hspace{1 cm}\Box\end{aligned}$$
Closure of the Mean {#sec:mean_nonclosure}
===================
[*The closure of jets reconstructed from truth jets with $E_\text{T} = x$ and $f(x)=f_\text{me}(x)$ is given to first order by $C\approx 1-\frac{1}{2}\frac{f''(x)}{f'(x)^3}\frac{\sigma(x)^2}{x}$.*]{}
[**Derivation.**]{} We begin by Taylor expanding $f^{-1}(y)$ about $y=f(x)$: $$\begin{aligned}
f^{-1}(y) &= \sum_{n=0}^\infty \frac{1}{n!}\left(f^{-1}\right)^{(n)}\left(f(x)\right)\cdot\left(y-f(x)\right)^n\nonumber\\
&=\sum_{n=0}^\infty \frac{1}{n!}g_n(x)\cdot\left(y-f(x)\right)^n,\end{aligned}$$ where $g_n(x) \equiv (f^{-1})^{(n)}(f(x))$ means the $n$th derivative of $f^{-1}(y)$, evaluated at $y=f(x)$. Plugging this into Eq. \[eqn:closuredef\], we have $$\begin{aligned}
C &= \frac{1}{x}\int dy \rho_{Y|X}(y|x) f^{-1}(y)\nonumber\\
&=\sum_{n=0}^\infty \frac{1}{n!}\frac{g_n(x)}{x}\int dy \rho_{Y|X}(y|x) \left(y-f(x)\right)^n\nonumber\\
&=\sum_{n=0}^\infty \frac{1}{n!}\frac{g_n(x)}{x} \mu_n(x),\end{aligned}$$ where $\mu_n(x)$ are the standard central moments $\mu_n(x) = \mathbb{E}\left[\left(Y-\mathbb{E}\left[Y\right]\right)^n\middle| X=x\right]$, since by definition $f(x)=\mathbb{E}[Y|X=x]$.
The first few central moments are independent of the distribution $\rho_{Y|X}$. In particular, $\mu_0 = 1$ is the normalization, and $\mu_1 = 0$. Writing these terms out, we have $$\begin{aligned}
C =\frac{g_0(x)}{x}+\sum_{n=2}^\infty \frac{1}{n!}\frac{g_n(x)}{x} \mu_n(x).\end{aligned}$$ Noting that $g_0(x) = f^{-1}(f(x)) = x$, $$\begin{aligned}
C &=1+\sum_{n=2}^\infty \frac{1}{n!}\frac{g_n(x)}{x} \mu_n(x).\label{eqn:closureseries}\end{aligned}$$
We see that, if $f$ is linear, then so is $f^{-1}$, and so $g_n = 0$ for all $n\ge 2$. Then Eq. \[eqn:closureseries\] reduces to $C=1$, and numerical inversion closes, as was found in Eq. \[eqn:closure\_linear\_proof\].
It will be instructive to expand out the first few terms of Eq. \[eqn:closureseries\]. We note that, by definition, $\mu_2(x) = \sigma(x)^2$ is the variance, and $\mu_3(x) = \sigma(x)^3\gamma_1$ defines the skew $\gamma_1$. Then we have $$\begin{aligned}
C &=1+\frac{1}{2}\frac{g_2(x)}{x}\sigma(x)^2+\frac{1}{6}\frac{g_3(x)}{x}\sigma(x)^3\gamma_1+\sum_{n=4}^\infty \frac{1}{n!}\frac{g_n(x)}{x} \mu_n(x).\label{eqn:closureseriesexpand}\end{aligned}$$
Suppose we are given an arbitrary distribution specified by its moments $\mu_n(x)$. Then the requirement that closure is satisfied, i.e. that the right-hand side of Eq. \[eqn:closureseries\] converges to exactly $1$, imposes strict constraints on the derivatives $g_n(x)$ of $f^{-1}$, so that closure is achieved only for a highly specific choice of $f^{-1}$, and therefore of $f$. Thus in general we do not expect closure to be satisfied for an arbitrary initial distribution $\rho_{Y|X}$.
We note that, since we expect the derivatives $g_n(x)$ and the moments $\mu_n(x)$ to grow considerably slower than $n!$ for functions $f$ and distributions $\rho_{Y|X}$ encountered at the LHC, we expect Eq. \[eqn:closureseries\] to converge, and Eq. \[eqn:closureseriesexpand\] gives the dominant contributions to the non-closure, i.e. $$\begin{aligned}
C \approx 1+\frac{1}{2}\frac{g_2(x)}{x}\sigma(x)^2+\frac{1}{6}\frac{g_3(x)}{x}\sigma(x)^3\gamma_1.\end{aligned}$$
If $\rho_{Y|X}$ is symmetric or near-symmetric, or if the third derivative of $g$ is small, such that $g_3(x)\sigma(x)\gamma_1 \ll g_2(x)$, then the dominant contribution to the non-closure is just $$\begin{aligned}
C \approx 1+\frac{1}{2}\frac{g_2(x)}{x}\sigma(x)^2.\end{aligned}$$
We further note that $$\begin{aligned}
g_2(x) &= (f^{-1})^{(2)}(f(x)) = -\frac{f''(x)}{f'(x)^3}\nonumber\\
\rightarrow C &\approx 1-\frac{1}{2}\frac{f''(x)}{f'(x)^3}\frac{\sigma(x)^2}{x}.\hspace{1 cm}\Box
\label{eqn:closureseriesgaussian}\end{aligned}$$
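As a numerical sanity check of the estimate above (not part of the derivation), the sketch below uses the illustrative response $f(x)=\sqrt{x}$, chosen only because its inverse $f^{-1}(y)=y^2$ is exactly quadratic, so all $g_n$ with $n\ge3$ vanish and the first-order estimate should agree with the directly integrated closure. The grid quadrature and the values $x=30$, $\sigma=1.5$ are assumptions for illustration.

```python
import math

def closure_mean(f_inv, fx, sigma, x, n=20001, w=8.0):
    """C = E[f^{-1}(Y)] / x for Y ~ N(f(x), sigma), by direct grid quadrature."""
    lo = fx - w * sigma
    dy = 2 * w * sigma / (n - 1)
    mean_z = 0.0
    for i in range(n):
        y = lo + i * dy
        rho = math.exp(-0.5 * ((y - fx) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        mean_z += rho * f_inv(y) * dy
    return mean_z / x

# Illustrative response f(x) = sqrt(x): f^{-1}(y) = y^2 is exactly quadratic,
# so the truncated series should reproduce the exact closure.
x, sigma = 30.0, 1.5
C_num = closure_mean(lambda y: y * y, math.sqrt(x), sigma, x)

fp = 0.5 * x ** -0.5                                  # f'(x)
fpp = -0.25 * x ** -1.5                               # f''(x)
C_est = 1.0 - 0.5 * fpp / fp ** 3 * sigma ** 2 / x    # first-order estimate above

print(C_num, C_est)   # both ≈ 1.075
```

For this $f$, both expressions reduce to $1+\sigma^2/x$ exactly, which the quadrature confirms.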
Calibrated Resolution of the Mean {#sec:calibrated_resolution_calculation}
=================================
[*The calibrated resolution of jets reconstructed from truth jets with $E_\text{T} = x$ and $f(x)=f_{me}(x)$ is given to first order by $\frac{\sigma(x)}{f'(x)}$.*]{}
[**Derivation.**]{} We note that, expanding $f^{-1}(y)$ about $y=f(x)$ out to one derivative, and using the definitions of $g_n(x)$ and $\mu_n(x)$ from the previous section, $$\begin{aligned}
(f^{-1}(y))^2 \approx g_0(x)^2+2g_0(x)g_1(x)(y-f(x))+g_1(x)^2(y-f(x))^2,\end{aligned}$$ so that $$\begin{aligned}
\mathbb{E}\left[Z^2\middle| X=x\right]&=\int dy \rho_{Y|X}(y|x) (f^{-1}(y))^2\nonumber\\
&\approx\int dy \rho_{Y|X}(y|x) \left(g_0(x)^2+2g_0(x)g_1(x)(y-f(x))+g_1(x)^2(y-f(x))^2\right)\nonumber\\
&=g_0(x)^2\mu_0(x)+2g_0(x)g_1(x)\mu_1(x)+g_1(x)^2\mu_2(x)\nonumber\\
&=g_0(x)^2+g_1(x)^2\sigma(x)^2.\hspace{5mm}\text{($\mu_1=0$ by construction)}\end{aligned}$$ Out to one derivative we also have that (as derived in the previous section) $$\begin{aligned}
\mathbb{E}\left[Z\middle| X=x\right]^2 &\approx g_0(x)^2\nonumber\\
\rightarrow \sigma\left(Z|X=x\right)^2 &= \mathbb{E}\left[Z^2\middle| X=x\right]-\mathbb{E}\left[Z\middle| X=x\right]^2\nonumber\\
&\approx g_1(x)^2\sigma(x)^2.\end{aligned}$$ Then, $$\begin{aligned}
g_1(x) = (f^{-1})'(f(x)) &= \frac{1}{f'(x)}\nonumber\\
\rightarrow \sigma\left(Z|X=x\right)^2 &\approx \frac{\sigma(x)^2}{f'(x)^2}\nonumber\\
\rightarrow \hat{\sigma}(x)=\sigma\left(Z|X=x\right) &\approx \frac{\sigma(x)}{f'(x)}. \hspace{1 cm} \Box \label{eqn:resolution}\end{aligned}$$
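A quick numerical check of Eq. \[eqn:resolution\] (an illustration, not part of the derivation): for the toy response $f(x)=\sqrt{x}$ with assumed values $x=30$, $\sigma=1.5$, the directly integrated resolution of $Z=f^{-1}(Y)$ should agree with $\sigma(x)/f'(x)$ up to higher-order (few-percent) corrections.

```python
import math

def moments_z(f_inv, fx, sigma, n=20001, w=8.0):
    """E[Z] and E[Z^2] for Z = f^{-1}(Y), Y ~ N(f(x), sigma), by grid quadrature."""
    lo = fx - w * sigma
    dy = 2 * w * sigma / (n - 1)
    m1 = m2 = 0.0
    for i in range(n):
        y = lo + i * dy
        rho = math.exp(-0.5 * ((y - fx) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        z = f_inv(y)
        m1 += rho * z * dy
        m2 += rho * z * z * dy
    return m1, m2

x, sigma = 30.0, 1.5
m1, m2 = moments_z(lambda y: y * y, math.sqrt(x), sigma)
res_num = math.sqrt(m2 - m1 * m1)       # calibrated resolution sigma(Z | X = x)
res_est = sigma / (0.5 * x ** -0.5)     # sigma(x) / f'(x)

print(res_num, res_est)
```

The two values agree at the couple-percent level; the residual difference comes from the second-derivative terms dropped in the expansion.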
Closure of the Mode {#sec:calibrated_mode_calculation}
===================
[*The closure of jets reconstructed from truth jets with $E_\text{T} = x$ and $f(x)=f_{mo}(x)$ is given to first order by $C\approx 1+\frac{f''(x)}{f'(x)^3}\frac{\tilde{\sigma}(x)^2}{x}$.*]{}
[**Derivation.**]{} As a reminder for the reader, for brevity, we will let $\rho_Y(y)=\rho_Y(y|x)$ and $\rho_Z(z)=\rho_Z(z|x)$, and let the parameter $x$ be understood.
We begin by supposing that the closure is not much different from 1, so that we can examine $\rho_Z(z)$ in the vicinity of $z=x$ to find the mode $z^*$. Expanding Eq. \[eqn:newdist\] about $z=x$ to second order in $(z-x)$: $$\begin{aligned}
\rho_Z(z) &= f'(z)\rho_Y(f(z))\nonumber\\
&\approx \left[f'(x)+(z-x)f''(x)+\frac{(z-x)^2}{2}f'''(x)\right]\nonumber\\
&\times\left[\rho_Y(f(x))+(z-x)\rho_Y'(f(x))f'(x)+\frac{(z-x)^2}{2}\rho_Y''(f(x))f'(x)^2\right].\end{aligned}$$ We note from the condition Eq. \[eqn:modedef\] that $\rho_Y'(f(x))=0$, so $$\begin{aligned}
\rho_Z(z)&\approx \left[f'(x)+(z-x)f''(x)+\frac{(z-x)^2}{2}f'''(x)\right]\nonumber\\
&\times\left[\rho_Y(f(x))+\frac{(z-x)^2}{2}\rho_Y''(f(x))f'(x)^2\right]\nonumber\\
&\approx f'(x)\rho_Y(f(x))+(z-x)f''(x)\rho_Y(f(x))\nonumber\\
&+\frac{(z-x)^2}{2}\left[f'''(x)\rho_Y(f(x))+f'(x)^3\rho_Y''(f(x))\right],\end{aligned}$$ so that $$\begin{aligned}
\rho'_Z(z)&\approx f''(x)\rho_Y(f(x))+(z-x)\left[f'''(x)\rho_Y(f(x))+f'(x)^3\rho_Y''(f(x))\right].
\label{eqn:drhoz}\end{aligned}$$ Then the closure condition Eq. \[eqn:modeclosuredef\] gives $$\begin{aligned}
\rho'_Z(z^*)&=0\nonumber\\
\rightarrow z^* &\approx x-\frac{f''(x)\rho_Y(f(x))}{f'''(x)\rho_Y(f(x))+f'(x)^3\rho_Y''(f(x))},\end{aligned}$$ i.e. the mode of $\rho_Z(z)$ occurs at $z=z^*$. Then the closure is $$\begin{aligned}
C &= \frac{z^*}{x}\nonumber\\
&\approx 1-\frac{1}{x}\frac{f''(x)\rho_Y(f(x))}{f'''(x)\rho_Y(f(x))+f'(x)^3\rho_Y''(f(x))}\nonumber\\
&=1-\frac{1}{x}\frac{f''(x)\frac{\rho_Y(f(x))}{\rho_Y''(f(x))}}{f'''(x)\frac{\rho_Y(f(x))}{\rho_Y''(f(x))}+f'(x)^3}\nonumber\\
&=1+\frac{f''(x)}{f'(x)^3-\tilde{\sigma}(x)^2f'''(x)}\frac{\tilde{\sigma}(x)^2}{x}.
\label{eqn:mode_closure_df3}\end{aligned}$$
In practice we find that for typical response functions, higher derivatives of $f$ tend to vanish. A comparison between the two terms in the denominator of Eq. \[eqn:mode\_closure\_df3\] can be found in Figure \[fig:d\_comp\] for the toy model considered in \[sec:toy\_model\]; we find that $f'(x)^3 \gg \tilde{\sigma}(x)^2f'''(x)$. Thus, in practice we recommend the approximation $$\begin{aligned}
C\approx 1+\frac{f''(x)}{f'(x)^3}\frac{\tilde{\sigma}(x)^2}{x}.\hspace{1 cm} \Box
\label{eqn:mode_closure_simple}\end{aligned}$$ The agreement between the actual and estimated closure in Figure \[fig:mode\_closure\] also confirms this approximation. Thus, in the body of this text Eq. \[eqn:mode\_closure\_simple\] is presented as the result, even though Eq. \[eqn:mode\_closure\_df3\] is technically more precise.
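As an illustrative numerical check of Eq. \[eqn:mode\_closure\_simple\] (not part of the derivation), the sketch below grid-searches the mode of $\rho_Z(z)=f'(z)\rho_Y(f(z))$ for the toy response $f(x)=\sqrt{x}$ and Gaussian $\rho_Y$, with assumed values $x=30$, $\tilde{\sigma}=1.5$. For this particular $f$ the density diverges as $z\to0$, so the grid starts away from zero.

```python
import math

def mode_of_z(f, fprime, fx, sigma, zs):
    """argmax of rho_Z(z) = f'(z) * rho_Y(f(z)) on a grid (normalization irrelevant)."""
    best_z, best_v = None, -1.0
    for z in zs:
        v = fprime(z) * math.exp(-0.5 * ((f(z) - fx) / sigma) ** 2)
        if v > best_v:
            best_z, best_v = z, v
    return best_z

x, sigma = 30.0, 1.5
f = math.sqrt
fprime = lambda z: 0.5 * z ** -0.5
zs = [0.5 + 0.001 * i for i in range(60000)]   # grid over z in [0.5, 60.5)
z_star = mode_of_z(f, fprime, f(x), sigma, zs)

C_num = z_star / x
fp, fpp = 0.5 * x ** -0.5, -0.25 * x ** -1.5
C_est = 1.0 + fpp / fp ** 3 * sigma ** 2 / x   # Eq. (mode closure, simple form)

print(C_num, C_est)
```

The grid-search closure (≈0.843 here) sits within about a percent of the first-order estimate (0.85), consistent with the dropped higher-order terms.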
![A comparison of derivative values using a toy model similar to conditions in ATLAS or CMS. In blue, $f'(x)^3$. In red, $\tilde{\sigma}(x)^2f'''(x)$. For details of the model, see \[sec:toy\_model\].[]{data-label="fig:d_comp"}](d_comp.pdf){width="90.00000%"}
Resolution of the Mode {#sec:mode_resolution_calculation}
======================
[*The resolution of jets reconstructed from truth jets with $E_\text{T} = x$ and $f(x)=f_{mo}(x)$ is given to first order by $\hat{\tilde{\sigma}}(x)\approx \frac{\tilde{\sigma}(x)}{f'(x)}.$*]{}
[**Derivation.**]{} From Eq. \[eqn:drhoz\] we have $$\begin{aligned}
\rho''_Z(z)&\approx f'''(x)\rho_Y(f(x))+f'(x)^3\rho_Y''(f(x)).\end{aligned}$$ Then the resolution is given as $$\begin{aligned}
\hat{\tilde{\sigma}}(x)^2 &= -\frac{\rho_Z(z^*)}{\rho_Z''(z^*)}\nonumber\\
&\approx-\frac{f'(x)\rho_Y(f(x))}{f'''(x)\rho_Y(f(x))+f'(x)^3\rho_Y''(f(x))}\nonumber\\
&=\frac{f'(x)\tilde{\sigma}(x)^2}{f'(x)^3-f'''(x)\tilde{\sigma}(x)^2}.\end{aligned}$$ Following the discussion in \[sec:calibrated\_mode\_calculation\], we simplify the denominator to get the approximation $$\begin{aligned}
\hat{\tilde{\sigma}}(x)^2 &\approx \frac{\tilde{\sigma}(x)^2}{f'(x)^2}\nonumber\\
\rightarrow \hat{\tilde{\sigma}}(x) &\approx \frac{\tilde{\sigma}(x)}{f'(x)}.\hspace{1 cm} \Box\end{aligned}$$
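The curvature definition of the mode resolution can be checked numerically (an illustration with assumed values, not part of the derivation): for $f(x)=\sqrt{x}$, $x=30$, $\tilde{\sigma}=1.5$, the stationary-point condition for $\rho_Z$ reduces to $(\sqrt{z})^2-\sqrt{x}\sqrt{z}+\tilde{\sigma}^2=0$, which gives the mode in closed form for this specific toy response only.

```python
import math

x, sigma = 30.0, 1.5
f = math.sqrt
rho_z = lambda z: (0.5 * z ** -0.5) * math.exp(-0.5 * ((f(z) - f(x)) / sigma) ** 2)

# Mode z* (exact for f = sqrt only): sqrt(z) = (sqrt(x) + sqrt(x - 4*sigma^2)) / 2.
sz = (math.sqrt(x) + math.sqrt(x - 4 * sigma ** 2)) / 2
z_star = sz * sz

# Resolution from the curvature at the mode: sigma_hat^2 = -rho(z*) / rho''(z*),
# with rho'' estimated by a central finite difference.
h = 1e-3
d2 = (rho_z(z_star + h) - 2 * rho_z(z_star) + rho_z(z_star - h)) / h ** 2
res_num = math.sqrt(-rho_z(z_star) / d2)
res_est = sigma / (0.5 * x ** -0.5)        # sigma_tilde(x) / f'(x)

print(res_num, res_est)
```

The curvature-based value (≈15.8) agrees with $\tilde{\sigma}/f'$ (≈16.4) to a few percent, the residual again coming from the neglected $f'''$ term.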
Iterated Numerical Inversion Calculation {#sec:iterated}
========================================
[*The closure $C_\text{new}(x)$ after iterating numerical inversion is not necessarily closer to 1 than the closure $C(x)$ after performing numerical inversion once.*]{}
[**Derivation.**]{} We limit ourselves to the case that we are using the modes of the distributions $Y|X=x$ and $Z|X=x$ to calibrate, as in practice that is what is used at ATLAS and CMS for numerical inversion.
We use the estimation of the closure of the mode Eq. \[eqn:closure\_mode\_text\]: $$\begin{aligned}
C(x) &\approx 1+\frac{f''(x)}{f'(x)^3}\frac{\tilde{\sigma}(x)^2}{x}\nonumber\\
\rightarrow |C(x)-1|&\approx \left|\frac{f''(x)}{f'(x)^3}\frac{\tilde{\sigma}(x)^2}{x}\right|.\end{aligned}$$ We use the iterated numerical inversion response $$\begin{aligned}
f_{\text{new}}(x) &= C(x)x\nonumber\\
&\approx x+\frac{f''(x)}{f'(x)^3}\tilde{\sigma}(x)^2\\
\rightarrow f'_{\text{new}}(x) &\approx 1-3\frac{f''(x)^2}{f'(x)^4}\tilde{\sigma}(x)^2\\
\rightarrow f''_{\text{new}}(x) &\approx 12\frac{f''(x)^3}{f'(x)^5}\tilde{\sigma}(x)^2,\end{aligned}$$ where we have ignored higher derivatives of $f(x)$[^7] and derivatives of $\sigma(x)$[^8]. We also have the estimation of the resolution of the calibrated distribution Eq. \[eqn:resolutionmode\_text\] $$\begin{aligned}
\hat{\tilde{\sigma}}(x) \approx \frac{\tilde{\sigma}(x)}{f'(x)},\end{aligned}$$
so that we can estimate the closure after iterating numerical inversion as $$\begin{aligned}
C_\text{new}(x) &\approx 1+\frac{f''_\text{new}(x)}{f'_\text{new}(x)^3}\frac{\hat{\tilde{\sigma}}(x)^2}{x}\nonumber\\
&\approx 1+12\frac{f''(x)^3}{f'(x)^5}\tilde{\sigma}(x)^2\frac{\tilde{\sigma}(x)^2}{f'(x)^2}\frac{1}{x}\nonumber\\
&=1+\frac{12}{x}\frac{f''(x)^3}{f'(x)^7}\tilde{\sigma}(x)^4\\
\rightarrow |C_\text{new}(x)-1| &\approx \left|\frac{12}{x}\frac{f''(x)^3}{f'(x)^7}\tilde{\sigma}(x)^4\right|\\
\rightarrow \frac{|C_\text{new}(x)-1|}{|C(x)-1|} &\approx \frac{12f''(x)^2\tilde{\sigma}(x)^2}{f'(x)^4}.\label{eqn:iterated_closure_ratio_app}\end{aligned}$$ If the ratio in Eq. \[eqn:iterated\_closure\_ratio\_app\] is greater than 1, then the closure gets worse after a second iteration of numerical inversion. $\hspace{1 cm} \Box$
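To illustrate when the ratio in Eq. \[eqn:iterated\_closure\_ratio\_app\] exceeds 1 (a worked evaluation with assumed inputs, not a general claim), take again the toy response $f(x)=\sqrt{x}$, for which $f''(x)^2/f'(x)^4 = 1/x$ and the ratio collapses to $12\tilde{\sigma}^2/x$:

```python
# For f(x) = sqrt(x): f''(x)^2 / f'(x)^4 = 1/x, so the ratio is 12 * sigma^2 / x.
def iterated_closure_ratio(x, sigma):
    fp = 0.5 * x ** -0.5
    fpp = -0.25 * x ** -1.5
    return 12 * fpp ** 2 * sigma ** 2 / fp ** 4

r_narrow = iterated_closure_ratio(30.0, 1.5)   # narrow resolution: 12*2.25/30 = 0.9
r_wide = iterated_closure_ratio(30.0, 7.0)     # toy-model resolution: 12*49/30 = 19.6

print(r_narrow, r_wide)
```

With a narrow resolution the second iteration still helps slightly (ratio 0.9), but at the toy-model resolution of 7 GeV the ratio is about 20, i.e. iterating makes the non-closure substantially worse.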
Corrected Numerical Inversion Calculation {#sec:corrected_numerical_inversion_calculation}
=========================================
With $Y\mapsto Z_\text{corr} = g^{-1}(Y)$, we will get a corrected calibrated distribution $\rho_{Z_\text{corr}|X}(z|x)$. For brevity, let $\rho_{Z_\text{corr}}(z)=\rho_{Z_\text{corr}|X}(z|x)$, where it is understood we are examining the distributions around a particular value of $x$. We will again require that $g'(x)>0$, so that $$\begin{aligned}
\rho_{Z_\text{corr}}(z) = g'(z)\rho_Y(g(z)).\end{aligned}$$ The closure condition is then equivalent to the condition $$\begin{aligned}
\rho'_{Z_\text{corr}}(x) = 0,\end{aligned}$$ i.e., the mode of the distribution $Z_\text{corr}|X=x$ occurs at $x$. We have that $$\begin{aligned}
\rho_{Z_\text{corr}}'(z) = g''(z)\rho_Y(g(z))+g'(z)^2\rho'_Y(g(z)),\end{aligned}$$ so that the closure condition requires $$\begin{aligned}
0 &=\rho_{Z_\text{corr}}'(x)\nonumber\\
&=g''(x)\rho_Y(g(x))+g'(x)^2\rho'_Y(g(x))\nonumber\\
\rightarrow 0 &=g''(x)+g'(x)^2\frac{\rho'_Y(g(x))}{\rho_Y(g(x))}.\end{aligned}$$ We suppose that $g(x)$ is close to $f(x)$, $g(x)=f(x)+\alpha(x)$, with $|\alpha(x)|\ll \tilde{\sigma}(x)$. Then we have directly from the supposition that the distribution $Y|X=x$ is approximately Gaussian about its mode $f(x)$ with width $\tilde{\sigma}(x)$ that $$\begin{aligned}
\frac{\rho'_Y(g(x))}{\rho_Y(g(x))} &= -\frac{\left(g(x)-f(x)\right)}{\tilde{\sigma}(x)^2}.\end{aligned}$$ Then, the closure condition gives $$\begin{aligned}
0 &=g''(x)+g'(x)^2\frac{\rho'_Y(g(x))}{\rho_Y(g(x))}\nonumber\\
&=g''(x)-g'(x)^2\frac{g(x)-f(x)}{\tilde{\sigma}(x)^2}.\end{aligned}$$
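The closure condition above can be exercised numerically (an illustration under stated assumptions, not part of the text's procedure): treating the correction as locally constant, $g=f+\alpha$ with $g'\approx f'$, $g''\approx f''$, the condition gives $\alpha = f''(x)\tilde{\sigma}^2/f'(x)^2$. For the toy response $f(x)=\sqrt{x}$ with assumed $x=30$, $\tilde{\sigma}=1.5$, this shift moves the mode of the corrected calibrated distribution back to $x$:

```python
import math

x, sigma = 30.0, 1.5
f = math.sqrt
fp, fpp = 0.5 * x ** -0.5, -0.25 * x ** -1.5

# Locally constant correction solving the closure condition at x.
alpha = fpp * sigma ** 2 / fp ** 2

def mode(shift):
    """Grid-search mode of rho(z) = g'(z) * rho_Y(g(z)) with g(z) = f(z) + shift."""
    best_z, best_v = None, -1.0
    for i in range(60000):
        z = 0.5 + 0.001 * i
        v = (0.5 * z ** -0.5) * math.exp(-0.5 * ((f(z) + shift - f(x)) / sigma) ** 2)
        if v > best_v:
            best_z, best_v = z, v
    return best_z

C_plain = mode(0.0) / x       # visible non-closure without correction
C_corr = mode(alpha) / x      # closure restored at x after the correction

print(C_plain, C_corr)
```

For this particular $f$ the restoration is essentially exact at $x$, since $\alpha\sqrt{x}=-\tilde{\sigma}^2$ cancels the offending term in the stationary-point condition.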
Corrected Numerical Inversion Parameterization {#sec:corrected_numerical_inversion_parameterization}
==============================================
We parameterize the corrected calibration function $g(x) = g(x;f(x);a_1,...,a_n)$. For the toy model used in this note, we use the parameterization $$\begin{aligned}
g(x) = f(x)+\frac{a_1}{1+\exp(\frac{x-a_2}{a_3})}.
\label{eqn:app_parameterization}\end{aligned}$$
In the model considered here, and for the response functions at the LHC, the closure goes to $1$ for large $x$ and moves away from $1$ for small $x$, a natural result of Eq. \[eqn:closure\_mode\_text\]. Thus, the parameterization in Eq. \[eqn:app\_parameterization\] includes a “turn-off” to recover $g(x)=f(x)$ at large $x$ (with $a_3>0$).
In practice, there is some smallest value $x=x'$ which is being studied, and which per the discussion in the above paragraph tends to have the largest non-closure. The value $x'=20$ GeV is used in this note, which is the lowest calibrated $E_\text{T}$ at current conditions at the LHC. For the corrected calibration curve shown in Figure \[fig:mode\_closure\_bigs\], the parameters $a_1,a_2,a_3$ are scanned over to minimize the non-closure at this value $x'$; the resulting values $a_2=a_3=x'=20$ GeV and $a_1 = 5$ GeV were used.
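The "turn-off" behavior of the parameterization is easy to see by evaluating the correction term $g(x)-f(x)$ at the two ends of the range (a small worked evaluation using the parameter values quoted in the text):

```python
import math

def correction(x, a1=5.0, a2=20.0, a3=20.0):
    """g(x) - f(x) for the sigmoid parameterization, with the values from the text."""
    return a1 / (1.0 + math.exp((x - a2) / a3))

low, high = correction(20.0), correction(400.0)
print(low, high)   # a1/2 = 2.5 at x = x', essentially 0 at high E_T
```

At $x=x'=20$ GeV the correction is $a_1/2=2.5$ GeV, while by $x=400$ GeV it is negligible, so $g(x)$ smoothly recovers $f(x)$ at high $E_\text{T}$.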
Toy Model of the ATLAS/CMS Response Function {#sec:toy_model}
============================================
All the “Proofs” quoted in this note are valid in general, regardless of the response function $R(x)$ and the underlying distributions $Y|X=x$ (within the assumptions outlined in Section \[sec:assumptions\]). We also expect the “Derivations”, which are all approximate formulas, to apply in a wide variety of cases. In order to visualize some of the results, and verify the approximations, a particular model was needed in order to get numerical values. All figures made in this note were derived from a simple model of the ATLAS or CMS jet $E_\text{T}$ response function[^9]. After specifying $f(x)$ and the distributions $Y|X=x$, the calibrated distributions were constructed using the analytic form of the calibrated distributions Eq. \[eqn:newdist\]. Then the various moments were found numerically for the calibrated distribution at each value $x$.
The response function was guided both by physical intuition and by the intention to reasonably simulate response functions published by ATLAS [@Aad:2011he] and CMS [@Chatrchyan:2011ds; @Khachatryan:2016kdb]. When there is only a small amount of energy already in a detector cell, the detector only reconstructs a small fraction of the energy put into it, because of noise thresholds and the non-compensating nature of the ATLAS and CMS detectors, whereas if there is already a lot of energy in a detector cell, the detector reconstructs almost all of the energy put into it. Thus $f'(x)$ was designed to be low at low values of $x$ and then to rise steadily to 1 at high values of $x$. This intuition does not directly apply to jets that directly use tracking information (e.g. particle-flow jets in CMS), but for the sake of simplicity only one (calorimeter) jet definition is used for illustration.
$f'(x)$ was then integrated to get $f(x)$ and divided by $x$ to get $R(x)$. The resulting $R(x)$ distribution approximately corresponds to the $R=0.4$ anti-$k_t$ [@Cacciari:2008gp] central jet response at the EM scale available in Ref. [@Aad:2011he] (e.g. Fig. 4a). The shapes of $f'(x)$ and $R(x)$ in this model can be seen in Figure \[fig:model\].
![The toy model used in this note to simulate conditions in ATLAS or CMS. The left plot shows $f'(x)$ and the right plot shows $R(x)$.[]{data-label="fig:model"}](model.pdf){width="90.00000%"}
In this simplified model, the distributions $Y|X=x\sim\mathcal{N}(f(x),\sigma(x))$ were used. In ATLAS and CMS, $Y|X=x$ is approximately Gaussian. The constant value of $\sigma(x)=7$ GeV was used, corresponding to a calibrated resolution (Fig. \[fig:mean\_resolution\]) of about 50% at $E_\text{T}=20$ GeV. This is consistent with e.g. Ref. [@Aad:2015ina] and has the property that $\sigma'(x) = 0$, which should be the case if pileup is the dominant contributor to the resolution of low $E_\text{T}$ jets.
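A minimal sketch of such a toy response is given below. The functional form $f'(x)=1-a\,e^{-x/b}$ and the values $a=0.6$, $b=80$ GeV are assumptions chosen here only to reproduce the qualitative behavior described above (low response at low $E_\text{T}$, rising toward 1); they are not the exact parameterization used for the figures.

```python
import math

# Hypothetical toy parameterization (assumption for illustration only):
# f'(x) = 1 - a*exp(-x/b), rising from ~0.4 at low E_T toward 1 at high E_T.
a, b = 0.6, 80.0

def fprime(x):
    return 1.0 - a * math.exp(-x / b)

def f(x):
    # Closed-form integral of f'(t) dt from 0 to x.
    return x + a * b * (math.exp(-x / b) - 1.0)

def R(x):
    return f(x) / x

print(R(20.0), R(400.0))   # low response at 20 GeV, much closer to 1 at 400 GeV
```

With these (assumed) values, $R(20)\approx0.47$ and $R(400)\approx0.88$, qualitatively matching the EM-scale response curves referenced above.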
[^1]: Jets have been calibrated in previous experiments, such as the Tevatron CDF [@Bhatti:2005ai] and D0 [@Abazov:2013hda] experiments, but the methods were significantly different and so this note focuses on the general purpose LHC experiments.
[^2]: Capital letters represent random variables and lower case letters represent realizations of those random variables, i.e. $X=x$ means the random variable $X$ takes on the (non-random) value $x$.
[^3]: Note that we do not require that $Y|X=x$ is exactly Gaussian, only that it is approximately Gaussian, which is true for a wide range of energies and jet reconstruction algorithms at ATLAS and CMS. In particular, there are non-negligible (but still often small) asymmetries at low and high $E_\text{T}$ at ATLAS and CMS [@Aad:2011he; @Chatrchyan:2011ds; @Khachatryan:2016kdb]. In any case, even if $Y|X=x$ [*is*]{} Gaussian, $Z|X=x$ is in general [*not*]{} Gaussian, for non-linear response functions; see \[sec:lemma\].
[^4]: In practice it is necessary to condition on a small range of $X$, e.g. $X\in[x,(1+\epsilon)x]$. If $\epsilon$ is large then there can be complications from the changing of $f(x)$ over the specified range and from the shape of the prior distribution of $X$ over the specified range. These challenges can be solved by generating large enough Monte Carlo datasets. We therefore assume that $\epsilon \ll 1$ and consider complications from finite $\epsilon$ beyond the scope of this paper.
[^5]: We have $a>0$ from the assumption that $f'(x)>0$.
[^6]: As noted in the derivation, this equation also assumes the following: that the underlying distribution $Y|X=x$ is approximately Gaussian in the vicinity of its mode $f(x)$; and that the correction is small, with $|g(x)-f(x)|\ll \tilde{\sigma}(x)$.
[^7]: See, e.g., Figure \[fig:d\_comp\].
[^8]: For this specific counterexample, we are examining the case that $\sigma'(x)=0$, which is realistic for high pileup conditions.
[^9]: Energies are measured with calorimeters and momenta are measured with tracking detectors. In-situ corrections using momentum balance techniques constrain the momentum. For small-radius QCD jets, the $E_\text{T}$ and $p_\text{T}$ are nearly identical. Since the simulation-based correction of calorimeter jets is used here as a model, the $E_\text{T}$ is used throughout.
---
abstract: 'Given a graph $G$, the Hadwiger number of $G$, denoted by $h(G)$, is the largest integer $k$ such that $G$ contains the complete graph $K_k$ as a minor. A hole in $G$ is an induced cycle of length at least four. Hadwiger’s Conjecture from 1943 states that for every graph $G$, $h(G)\ge \chi(G)$, where $\chi(G)$ denotes the chromatic number of $G$. In this paper we establish more evidence for Hadwiger’s conjecture by showing that if a graph $G$ with independence number $\alpha(G)\ge3$ has no hole of length between $4$ and $2\alpha(G)-1$, then $h(G)\ge\chi(G)$. We also prove that if a graph $G$ with independence number $\alpha(G)\ge2$ has no hole of length between $4$ and $2\alpha(G)$, then $G$ contains an odd clique minor of size $\chi(G)$, that is, such a graph $G$ satisfies the odd Hadwiger’s conjecture.'
author:
- |
Zi-Xia Song[^1] and Brian Thomas$^\dagger$ [^2][^3]\
Department of Mathematics\
University of Central Florida\
Orlando, FL 32816, USA\
title: 'Hadwiger’s conjecture for graphs with forbidden holes'
---
Introduction
============
All graphs considered in this paper are finite, and have no loops or parallel edges. We begin with some definitions. Let $G$ be a graph. The complement $\overline{G}$ of $G$ is the graph with vertex set $V(G)$, such that two vertices are adjacent in $\overline{G}$ if and only if they are non-adjacent in $G$. A clique in $G$ is a set of vertices all pairwise adjacent. An independent set in $G$ is a set of vertices all pairwise non-adjacent. We use $\chi(G)$, $\omega(G)$, and $\alpha(G)$ to denote the chromatic number, the clique number, and the independence number of $G$, respectively. Given a graph $H$, we say that $G$ is $H$-free if $G$ has no induced subgraph isomorphic to $H$. For a family $\mathcal{F}$ of graphs, we say that $G$ is $\mathcal{F}$-free if $G$ is $F$-free for every $F\in \mathcal{F}$. A graph $H$ is a minor of a graph $G$ if $H$ can be obtained from a subgraph of $G$ by contracting edges. In those circumstances we also say that $G$ has an $H$ minor.
In 1943, Hadwiger [@had] conjectured that for every integer $t\ge0$, every graph either can be $t$-colored or has a $K_{t+1}$ minor. Hadwiger’s conjecture is perhaps the most famous conjecture in graph theory, as pointed out by Paul Seymour in his recent survey [@Seymour]. It suggests a far reaching generalization of the Four Color Theorem [@4ct1; @4ct2; @4ct3]. Hadwiger’s conjecture is trivially true for $t\le2$, and reasonably easy for $t=3$, as shown by Dirac [@Dirac1952]. However, for $t\ge4$, Hadwiger’s conjecture implies the Four Color Theorem. Wagner [@wagner] proved that the case $t=4$ of Hadwiger’s conjecture is, in fact, equivalent to the Four Color Theorem, and the same was shown for $t=5$ by Robertson, Seymour, and Thomas [@RST] in 1993. Hadwiger’s conjecture remains open for $t\ge6$. As pointed out by Paul Seymour [@Seymour] in his recent survey on Hadwiger’s conjecture, proving that graphs with no $K_7$ minor are $6$-colourable is thus the first case of Hadwiger’s conjecture that is still open. It is not even known yet whether every graph with no $K_7$ minor is $7$-colorable. Kawarabayashi and Toft [@kt] proved that every graph with no $K_7$ or $K_{4,\, 4}$ minor is $6$-colorable. Jakobsen [@Jakobsen1972; @Jakobsen1983] proved that every graph with no $K_7^{=}$ minor is $6$-colorable and every graph with no $K_7^{-}$ minor is $7$-colorable, where for any integer $p>0$, $K_p^{-}$ (resp. $K_p^{=}$) denotes the graph obtained from $K_p$ by removing one edge (resp. two edges). Recently Albar and Gonçalves [@AG2015] proved that every graph with no $K_7$ minor is $8$-colorable and every graph with no $K_8$ minor is $10$-colorable. Their proof is computer-assisted. In [@roleksong2], Rolek and the first author obtained a short and computer-free proof of Albar and Gonçalves’ results and extended it to the next step by showing that every graph with no $K_9$ minor is $12$-colorable. 
Rolek and Song [@roleksong2] also proved that every graph with no $K_8^=$ minor is $8$-colorable and every graph with no $K_8^-$ minor is $9$-colorable. For more information on Hadwiger’s conjecture, the readers are referred to an earlier survey by Toft [@Toft] and a very recent informative survey due to Seymour [@Seymour].
Hadwiger’s conjecture has also been verified to be true for some special classes of graphs. In 2004, Reed and Seymour [@rs2004] proved that Hadwiger’s conjecture holds for line graphs (possibly with parallel edges). A graph $G$ is a quasi-line graph if for every vertex $v$, the set of neighbors of $v$ can be covered by two cliques, namely the vertex set of the neighborhood of $v$ can be partitioned into two cliques. A graph $G$ is claw-free if $G$ does not contain $K_{1,3}$ as an induced subgraph. It is easy to verify that the class of line graphs is a proper subset of the class of quasi-line graphs, and the class of quasi-line graphs is a proper subset of the class of claw-free graphs. Recently quasi-line graphs attracted more attention, see [@CP; @CF2007; @CF2008; @ELTquasi]. In particular, Chudnovsky and Seymour [@CP] gave a constructive characterization of quasi-line graphs; Chudnovsky and Fradkin [@CF2008; @CF2010] proved in 2008 that Hadwiger’s conjecture holds for quasi-line graphs and proved in 2010 that if $G$ is claw-free, then $G$ contains a clique minor of size at least $\lceil\frac{2}3\chi(G)\rceil$.
One particular interesting case of Hadwiger’s conjecture is when graphs have independence number two. It has attracted more attention recently (see Section 4 in Seymour’s survery [@Seymour] for more information). As stated in his survey, Seymour believes that if Hadwiger’s conjecture is true for graphs $G$ with $\alpha(G)=2$, then it is probably true in general. Plummer, Stiebitz, and Toft [@pst] proved that Hadwiger’s conjecture holds for every $H$-free graph $G$ with $\alpha(G)=2$, where $H$ is any graph on four vertices and $\alpha(H)=2$. Later, Kriesell [@kriesell] extended their result and proved that Hadwiger’s conjecture holds for every $H$-free graph $G$ with $\alpha(G)=2$, where $H$ is any graph on five vertices and $\alpha(H)=2$.
One strengthening of Hadwiger’s conjecture is to consider the odd-minor variant of Hadwiger’s conjecture. We say that a graph $G$ has an odd clique minor of size at least $k$ if there are $k$ vertex-disjoint trees in $G$ such that every two of them are joined by an edge, and in addition, all the vertices of the trees are two-colored in such a way that the edges within the trees are bichromatic, but the edges between trees are monochromatic (and hence the vertices of all trivial trees must receive the same color, where a tree is trivial if it has one vertex only). We say that $G$ has an odd $K_k$ minor if $G$ has an odd clique minor of size at least $k$. It is easy to see that any graph that has an odd $K_k$ minor certainly contains $K_k$ as a minor.
Gerards and Seymour (see [@jt], page 115.) proposed a well-known strengthening of Hadwiger’s conjecture: for every integer $t\ge0$, every graph either can be $t$-colored or has an odd $K_{t+1}$ minor. This conjecture is referred to as “the odd Hadwiger’s conjecture". The odd Hadwiger’s conjecture is substantially stronger than Hadwiger’s Conjecture. It is trivially true for $t\le 2$. The case $t=3$ was proved by Catlin [@catlin] in 1978. Guenin [@guenin] announced at a meeting in Oberwolfach in 2005 a solution of the case $t=4$. It remains open for $t\ge5$. Kawarabayashi and the first author [@ks2] proved that every graph $G$ on $n$ vertices with $\alpha(G)\ge2$ has an odd clique minor of size at least $\lceil n/(2\alpha(G)-1)\rceil$. For more information on the odd Hadwiger’s conjecture, the readers are referred to the very recent survey of Seymour [@Seymour].
In this paper, we establish more evidence for Hadwiger’s conjecture and the odd Hadwiger’s conjecture. We first prove that if a graph $G$ with independence number $\alpha(G)$ has no induced cycles of length between $4$ and $2\alpha(G)$, then $G$ satisfies the odd Hadwiger’s conjecture. We then prove that if a graph $G$ with independence number $\alpha(G)\ge3$ has no induced cycles of length between $4$ and $2\alpha(G)-1$, then $G$ satisfies Hadwiger’s conjecture. We prove these two results in Section 2.
We need to introduce more notation. Given a graph $G$, the Hadwiger number (resp. odd Hadwiger number) of $G$, denoted by $h(G)$ (resp. $oh(G)$), is the largest integer $k$ such that $G$ contains the complete graph $K_k$ as a minor (resp. an odd minor). We use $|G|$, $\delta(G)$, and $\Delta(G)$ to denote the number of vertices, the minimum degree, and the maximum degree of $G$, respectively. Given vertex sets $A, B \subseteq V(G)$, we say that $A$ is complete (resp. anti-complete) to $B$ if for every $a \in A$ and every $b \in B$, $ab \in E(G)$ (resp. $ab \notin E(G)$). The subgraph of $G$ induced by $A$, denoted $G[A]$, is the graph with vertex set $A$ and edge set $\{xy \in E(G) : x, y \in A\}$. We denote by $G \less A$ the subgraph of $G$ induced on $V(G) \less A$. If $A = \{a\}$, we simply write $G \less a$. An $(A, B)$-path of $G$ is a path with one end in $A$, the other end in $B$, and no internal vertices in $A\cup B$. A graph $G$ is $t$-critical if $\chi(G)=t$ and $\chi(G\less v)<t$ for any $v\in V(G)$. One can easily see that for any $t$-critical graph $G$, $\delta(G)\ge t-1$. A hole in a graph $G$ is an induced cycle of length at least four. An antihole in $G$ is an induced subgraph isomorphic to the complement of a hole. A graph $G$ is perfect if $\chi(H)=\omega(H)$ for every induced subgraph $H$ of $G$. We use $C_k$ to denote a cycle on $k$ vertices. We shall need the following two results. Theorem \[spgt\] is the well-known Strong Perfect Graph Theorem [@spgt] and Theorem \[quasi\] is a result of Chudnovsky and Fradkin [@CF2008].
\[spgt\] A graph $G$ is perfect if and only if it has no odd hole and no odd antihole.
\[quasi\] If $G$ is a quasi-line graph, then $h(G)\ge\chi(G)$.
We shall need the following corollary.
\[spgt1\] If $G$ is $\{C_4, C_5, C_7, \dots, C_{2\alpha(G)+1}\}$-free, then $G$ is perfect.
One can easily see that $G$ is $C_k$-free for all $k \ge 2\alpha(G) +2$. Since $G$ is $C_4$-free, we see that $G$ is $\overline{C}_k$-free for all $k \ge 7$. Note that $C_5=\overline{C}_5$. Hence $G$ is $C_k$-free and $\overline{C}_k$-free for all $k \ge 5$ odd and so $G$ is perfect by Theorem \[spgt\], as desired. $\Box$
Main Results
============
A graph $G$ is an inflation of a graph $H$ if $G$ can be obtained from $H$ by replacing each vertex of $H$ by a clique of order at least one and two such cliques are complete to each other if their corresponding vertices in $H$ are adjacent. Under these circumstances, we define $s(G)$ to be the size of the smallest clique used to replace a vertex of $H$. We first prove a lemma.
\[inflation\] Let $G$ be an inflation of an odd cycle $C$. Then $\chi(G)\le \omega(G)+s(G)$ and $oh(G)\ge\chi(G)$.
Let $G$ and $C$ be given as in the statement and let $x_0, x_1, x_2, \dots, x_{2t}$ be the vertices of $C$ in order, where $t\ge1$ is an integer. The statement is trivially true when $t=1$. So we may assume that $t\ge2$. Let $A_0, A_1, A_2, \dots, A_{2t}$ be vertex-disjoint cliques such that $A_i$ is used to replace $x_i$ for all $i\in\{0,1, \dots, 2t\}$. We may assume that $s(G)=|A_0|$. Then $|A_0|=\min\{|A_i|: \, 0\le i\le 2t\}$. Since $t\ge2$, we have $\omega(G)=\omega(G\less A_0)$. One can easily see that $G\less A_0$ is an inflation of a path on $2t$ vertices and so $\chi(G\less A_0)=\omega(G\less A_0)$. Therefore $\chi(G)\le\chi(G\less A_0)+|A_0|\le\omega(G\less A_0)+|A_0|=\omega(G)+s(G)$.
We next show that $G$ contains an odd clique minor of size $\omega(G)+|A_0|$. Without loss of generality, we may assume that $\omega(G\less A_0)=|A_1|+|A_2|$. By the choice of $|A_0|$, there are $|A_0|$ pairwise vertex-disjoint $(A_0, A_3)$-paths (each on $2t -1$ vertices) in $G\less (A_1\cup A_{2})$. Let $T_1, \dots ,T_{|A_0|}$ be $|A_0|$ such paths and let $T_{|A_0|+1}, \dots,
T_{|A_0|+\omega(G)}$ be $\omega(G)$ pairwise disjoint trivial trees (i.e., trees with one vertex only) in $A_1\cup A_2$. Now coloring all the vertices in $A_{4} \cup A_{6} \cup \cdots \cup A_{2t}$ by color 1 and all the other vertices in $G$ by color 2, we see that $T_1, \dots , T_{|A_0|}, T_{|A_0|+1}, \dots , T_{|A_0|+\omega(G)}$ yield an odd clique minor of size $|A_0|+\omega(G)\ge \chi(G)$, as desired. $\Box$
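As an illustrative brute-force check of Lemma \[inflation\] (not part of the proof), the sketch below builds one small inflation of $C_5$, computes $\omega(G)$ and $\chi(G)$ by exhaustion, and verifies $\chi(G)\le\omega(G)+s(G)$. The clique sizes $[1,2,2,1,2]$ are an arbitrary choice for illustration.

```python
from itertools import product

def inflate_odd_cycle(sizes):
    """Build an inflation of C_{len(sizes)}; returns (vertex count, edge set)."""
    groups, v = [], 0
    for s in sizes:
        groups.append(list(range(v, v + s)))
        v += s
    edges, m = set(), len(sizes)
    for i, g in enumerate(groups):
        for a in g:                      # edges inside each clique A_i
            for b in g:
                if a < b:
                    edges.add((a, b))
        for a in g:                      # A_i complete to A_{i+1}
            for b in groups[(i + 1) % m]:
                edges.add((min(a, b), max(a, b)))
    return v, edges

def chromatic_number(n, edges):
    """Smallest k admitting a proper k-coloring, by exhaustive search."""
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[a] != col[b] for a, b in edges):
                return k
    return n

sizes = [1, 2, 2, 1, 2]                  # clique sizes around C_5; here s(G) = 1
n, edges = inflate_odd_cycle(sizes)
omega = max(sizes[i] + sizes[(i + 1) % 5] for i in range(5))  # ω for a C_5 inflation
chi = chromatic_number(n, edges)
print(chi, omega)
```

Here $\chi(G)=\omega(G)=4$ and $s(G)=1$, comfortably within the bound $\chi(G)\le\omega(G)+s(G)=5$ of the lemma.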
We next use Lemma \[inflation\] to prove that every graph $G$ with $\alpha(G)\ge2$ and no hole of length between $4$ and $2\alpha(G)$ satisfies the odd Hadwiger’s conjecture, as stated in the following theorem.
\[t1\] Let $G$ be a graph with $\alpha(G) \ge 2$. If $G$ is $\{C_4, \,C_5, \, C_6, \, \dots , \, C_{2\alpha(G)}\}$-free, then $oh(G)\ge\chi(G)$.
Let $\alpha = \alpha(G)$. We first assume that $G$ contains no odd hole of length $2\alpha +1$. By Corollary \[spgt1\], $G$ is perfect and so $G$ contains a clique (and thus an odd clique minor) of size $\chi(G)$. So we may assume that $G$ contains an odd hole of length $2\alpha +1$, say $C$, with vertices $v_0, v_1, \dots, v_{2\alpha}$ in order. We next prove that for every $w\in V(G\less C)$, $w$ is either complete to $C$ or adjacent to exactly three consecutive vertices on $C$. Since $\alpha(G)=\alpha$, we see that $w$ is adjacent to at least one vertex on $C$. If $w$ is complete to $C$, then we are done. So we may assume that $wv_0\notin E(G)$ but $wv_1\in E(G)$. Then $w$ is not adjacent to $v_{2\alpha}, v_{2\alpha -1}, \dots , v_4$ as $G$ is $\{C_4, \,C_5, \, \dots , \, C_{2\alpha}\}$-free. Hence $w$ must be adjacent to $v_2, v_3$ because $\alpha(G)=\alpha$. This proves that for any $w\in V(G\less C)$ not complete to $C$, $w$ is adjacent to precisely three consecutive vertices on $C$.
Let $J$ (possibly empty) denote the set of vertices in $G$ that are complete to $C$, and $A_i$ (possibly empty) denote the set of vertices in $G$ adjacent to $v_i, v_{i+1}, v_{i+2}$, where $i=0,1,\dots, 2\alpha$ and all arithmetic on indices here and henceforth is done modulo $2\alpha +1$. Since $G$ is $C_4$-free, we see that $G[J]$ is a clique and $G[A_i]$ is a clique for all $i\in\{0,1,\dots, 2\alpha\}$. Note that $\{J, V(C), A_0, A_1, \dots , A_{2\alpha}\}$ partitions $V(G)$. Since $\alpha(G) = \alpha$ and $G$ is $\{C_4, \,C_5, \, \dots , \, C_{2\alpha}\}$-free, one can easily check that $A_i$ is complete to $A_{i+1}\cup A_{i-1}$, and anti-complete to all $A_j$, where $j\ne i+1,i, i-1$. We now show that $J$ is complete to all $A_i$. Suppose there exist $a\in J$ and $b\in A_i$ for some $i\in\{0, 1,\dots ,2\alpha\}$, say $i=0$, such that $ab\notin E(G)$. Then $G[\{a, v_0, b, v_2\}]$ is an induced $C_4$, a contradiction. Thus $J$ is complete to all $A_i$ and so $J$ is complete to $V(G)\less J$. Let $A_i^*=A_i\cup \{v_{i+1}\}$ for $i=0, 1, \dots ,2\alpha$. Then $A_i^*\not= \emptyset$ for all $i\in\{0,1,\dots, 2\alpha\}$ and $G\less J$ is an inflation of an odd cycle of length $2\alpha +1$. By Lemma \[inflation\], $oh(G\less J)\ge \chi(G\less J)$ and so $oh(G)\ge oh(G\less J)+|J|\ge\chi(G\less J)+|J|=\chi(G)$, as desired.
This completes the proof of Theorem \[t1\]. $\Box$
Finally we prove that every graph $G$ with $\alpha(G)\ge3$ and no hole of length between $4$ and $2\alpha(G)-1$ satisfies Hadwiger’s conjecture.
\[main\] Let $G$ be a graph with $\alpha(G)\ge 3$. If $G$ is $\{C_4, \, C_5, C_6, \dots, C_{2\alpha(G)-1}\}$-free, then $h(G)\ge \chi(G)$.
Suppose for a contradiction that $h(G)< \chi(G)$. Let $G$ be a counterexample with $|V(G)|$ as small as possible. Let $n :=|V(G)|$, $t :=\chi(G)$, and $\alpha :=\alpha(G)$. By Theorem \[spgt\] and the fact that $h(G)\ge\omega(G)$, $G$ is not perfect. Since $G$ is $\{C_4, \, C_5, C_7, \dots, C_{2\alpha-1}\}$-free, by Corollary \[spgt1\], we see that $G$ must contain an odd hole, say $C$, with $2\alpha+1$ vertices.
\[e:mindeg\] () $G$ is $t$-critical and so $\delta(G)\ge t-1$.
Suppose that there exists $x\in V(G)$ such that $\chi(G\less x)=t$. If $\alpha(G\less x)=\alpha$, then $G\less x$ is $\{C_4, \, C_5, C_6, \dots, C_{2\alpha(G\less x)-1}\}$-free. By the minimality of $G$, we have $h(G\less x)\ge \chi(G\less x)=t$ and so $h(G)\ge h(G\less x)\ge t$, a contradiction. Thus $\alpha(G\less x)=\alpha-1$. Then $G\less x$ is $\{C_4, \, C_5, C_7, \dots, C_{2\alpha(G\less x)+1}\}$-free. By Corollary \[spgt1\], $G\less x$ is perfect and so $h(G)\ge h(G\less x)\ge\omega(G\less x)=\chi(G\less x)=t$, a contradiction. Thus $G$ is $t$-critical and so $\delta(G)\ge t-1$. height3pt width6pt depth2pt
\[e:maxdeg\] () $\Delta(G)\le n-2$.
Suppose there exists a vertex $x$ in $G$ with $d(x)=n-1$. Then $\chi(G\less x)=t-1$ and by the minimality of $G$, $h(G\less x)\ge \chi(G\less x)=t-1$ and so $h(G)\ge h(G\less x) +1\ge (t-1)+1=t$, a contradiction. height3pt width6pt depth2pt
\[e:clique\] () $\omega(G)\le t-2$.
Suppose that $\omega(G)\ge t-1$. Since $h(G)< t$, we see that $ \omega(G)=t-1$. Let $H\subseteq G$ be isomorphic to $K_{t-1}$. Then $C$ and $H$ have at most two vertices in common, and if $|C\cap H|=2$, then the two vertices in $C\cap H$ must be adjacent on $C$. Let $P$ be a subpath of $C\less H$ on $2\alpha-1$ vertices. Then $P$ is an induced path in $G\less H$. Since $\alpha(G)=\alpha$, we see that every vertex in $H$ must have a neighbor on $P$. By contracting the path $P$ into a single vertex, we see that $h(G)\ge t$, a contradiction. height3pt width6pt depth2pt\
Let $ v_0, v_1, \dots, v_{2\alpha}$ be the vertices of $C$ in order. We next show that
\[e:nbr\] () For every $w\in V(G\less C)$, either $w$ is complete to $C$, or $w$ is adjacent to exactly three consecutive vertices on $C$, or $w$ is adjacent to exactly four consecutive vertices on $C$.
Since $\alpha(G)=\alpha$, we see that $w$ is adjacent to at least one vertex on $C$. Suppose that $w$ is not complete to $C$. We may assume that $wv_0\notin E(G)$ but $wv_1\in E(G)$. Then $w$ is not adjacent to $v_{2\alpha}, v_{2\alpha -1}, \dots , v_5$ because $G$ is $\{C_4, \,C_5, \, \dots , \, C_{2\alpha-1}\}$-free. If $wv_4\in E(G)$, then $w$ must be adjacent to $v_2, v_3$ because $G$ is $\{C_4, \, C_5\}$-free. If $wv_4\notin E(G)$, then again $w$ must be adjacent to $v_2, v_3$ because $\alpha(G)=\alpha$. Thus $w$ is adjacent to either $v_1, v_2, v_3$ or $v_1, v_2,v_3, v_4$ on $C$, as desired. height3pt width6pt depth2pt
Let $J$ denote the set of vertices in $G$ that are complete to $C$. For each $i\in I : =\{0,1,\dots, 2\alpha\}$, let $A_i\subseteq V(G\less C)$ (possibly empty) denote the set of vertices in $G$ adjacent to precisely $v_i, v_{i+1}, v_{i+2}$ on $C$, and let $B_i\subseteq V(G\less C)$ (possibly empty) denote the set of vertices in $G$ adjacent to precisely $v_i, v_{i+1}, v_{i+2}, v_{i+3}$ on $C$, where all arithmetic on indices here and henceforth is done modulo $2\alpha+1$. By , $\{J, V(C), A_0, A_1, \dots, A_{2\alpha}, B_0, B_1, \dots, B_{2\alpha}\}$ partitions $V(G)$.
\[e:J\] () $J=\emptyset$.
Suppose that $J\not=\emptyset$. Let $a\in J$. By , there exists $b\in V(G)\less (V(C)\cup J)$ such that $ab\notin E(G)$. By , we may assume that $b$ is adjacent to $v_0, v_1, v_2$. Then $G[\{a, v_0, b, v_2\}]$ is an induced $C_4$ in $G$, a contradiction. height3pt width6pt depth2pt
The fact that $\alpha(G) = \alpha$ implies that\
\[e:ABclique\] () For each $i\in I$, both $G[A_i]$ and $G[B_i]$ are cliques; and $A_i$ is complete to $A_{i-1}\cup A_{i+1}$.\
Since $G$ is $\{C_4, \,C_5, \, \dots , \, C_{2\alpha-1}\}$-free, one can easily check that\
\[e:Aac\] () For each $i\in I$, $A_i$ is anti-complete to each $A_j$, where $j\in I\less\{i-2, i-1, i, i+1, i+2\}$; and\
\[e:Bac\] () For each $i\in I$, $B_i$ is complete to $B_{i-1}\cup A_{i}\cup A_{i+1}\cup B_{i+1}$ and anti-complete to each $B_j$, where $j\in I\less\{ i-1, i, i+1\}$.\
We shall also need the following:\
\[e:oneneighbor\] () For each $i\in I$, if $B_i\not=\emptyset$, then $B_{j}=\emptyset$ for any $j\in I\less\{i-2, i-1, i, i+1, i+2\}$.
Suppose $B_j$ is not empty for some $j\in I\less\{ i+2, i+1, i, i-1, i-2\}$. We may assume that $j>i$. Let $a\in B_i$ and $b\in B_{j}$. Then $G[\{v_i, a, v_{i+3},\dots, v_j, b, v_{j+3}, \dots, v_{i-1}\}]$ is an odd hole with $2\alpha-1$ vertices if $ab\notin E(G)$, and $G[\{v_i, a, b, v_{j+3}, \dots, v_{i-1}\}]$ is a hole with $2\alpha+1-j+i\ge4$ vertices if $ab\in E(G)$. In both cases, we obtain a contradiction. height3pt width6pt depth2pt\
With an argument similar to that of , we see that\
\[e:Nneighbor\] () For each $i\in I$, if $B_i\not=\emptyset$, then $B_{i}$ is anti-complete to $A_j$ for any $j\in I\less\{i-1, i, i+1, i+2\}$.\
We next show that\
\[e:partition\] () For any $i\in I$, if $A_i\ne\emptyset$, then each vertex in $A_i$ is either anti-complete to $A_{i+2}$ or anti-complete to $A_{i-2}$.
Suppose there exists a vertex $x\in A_i$ such that $x$ is adjacent to a vertex $y\in A_{i-2}$ and a vertex $z\in A_{i+2}$. Then $G[ \{x, y,z\}\cup (V(C)\less \{v_{i-1}, v_i, v_{i+1}, v_{i+2}, v_{i+3}\})]$ is an odd hole with $2\alpha-1$ vertices, a contradiction. height3pt width6pt depth2pt\
\[e:A[j-1]{}A[j+2]{}\] () For any $i\in I$, if $B_i\ne\emptyset$, then every vertex in $B_i$ is either complete to $A_{i-1}$ or complete to $A_{i+2}$.
Suppose for a contradiction, say $B_2\ne\emptyset$, and there exists a vertex $b\in B_2$ such that $b$ is not adjacent to a vertex $a_1\in A_1$ and a vertex $a_4\in A_4$. By , $A_1$ is anti-complete to $A_4$. Thus $G$ contains a stable set $\{b, a_1, a_4,v_0\}$ of size four if $\alpha=3$ or stable set $\{b, a_1, a_4, v_0, v_7, v_9, \dots, v_{2\alpha-1}\}$ of size $\alpha+1$ if $\alpha\ge4$, a contradiction. height3pt width6pt depth2pt\
\[e:3B\] () There exists an $i\in I$ such that $B_j= \emptyset$ for any $j\in I\less \{i, i+1, i+2\}$.
This is obvious if $B_k=\emptyset$ for any $k\in I$. So we may assume that $B_k\ne\emptyset$ for some $k\in I$, say $B_2\neq \emptyset$. Then by , $B_j=\emptyset$ for all $j=5, 6, \dots, 2\alpha$. By again, either $B_0\ne \emptyset$ or $B_4\ne \emptyset$ but not both. By symmetry, we may assume that $B_4=\emptyset$. Similarly, either $B_0\ne \emptyset$ or $B_3\ne \emptyset$ but not both. Thus either $B_j=\emptyset$ for all $j\in I\less \{0,1,2\}$ or $B_j=\emptyset$ for all $j\in I\less \{1,2, 3\}$. height3pt width6pt depth2pt\
By , we may assume that $B_j=\emptyset$ for all $j\in I\less \{1,2,3\}$. For any $A_i\ne\emptyset$, where $i\in I$, let $A_i^1=\{a\in A_i: a \text{ has a neighbor in } A_{i-2}\}$, $A_i^3=\{a\in A_i: a\, \text{ has a neighbor in } A_{i+2}\}$, and $A_i^2=A_i\less (A_i^1\cup A_i^3)$. Then $A_i^2$ is anti-complete to $A_{i-2}\cup A_{i+2}$. By , $A_i^1$ is anti-complete to $A_{i+2}$ and $A_i^3$ is anti-complete to $A_{i-2}$. Clearly, $\{A_i^1, A_i^2, A_i^3\}$ partitions $A_i$. Next, for any $B_j\ne\emptyset$, where $j\in \{1,2,3\}$, by , let $B_j^1=\{b\in B_j: b\, \text{ is complete to } A_{j-1}\}$ and $B_j^2=\{b\in B_j: b\, \text{ is complete to } A_{j+2}\}$. Clearly, $B_j^1$ and $B_j^2$ are not necessarily disjoint. Note that $B_j^1$ and $B_j^2$ are not symmetrical because $B_j^1$ is complete to $A_{j-1}$ and $B_j^2$ is complete to $A_{j+2}$.\
\[e:BjA[j-1]{}\] () For any $j\in\{1,2,3\}$, $B_j$ is anti-complete to $A_{j-1}^1\cup A_{j+2}^3$.
Suppose there exist a vertex $b\in B_j$ and a vertex $a\in A_{j-1}^1\cup A_{j+2}^3$ such that $ba\in E(G)$. By the definitions of $A_{j-1}^1$ and $A_{j+2}^3$, we see that $a$ has a neighbor, say $c$, in $A_{j-3}$ if $a\in A_{j-1}^1$, or in $A_{j+4}$ if $a\in A_{j+2}^3$. Now $G[ \{b, a,c\}\cup (V(C)\less \{v_{j-2}, v_{j-1}, v_j, v_{j+1}, v_{j+2}\})]$ is an odd hole of length $2\alpha-1$ if $a\in A_{j-1}^1$, or $G[ \{b, a,c\}\cup (V(C)\less \{v_{j+1}, v_{j+2}, v_{j+3}, v_{j+4}, v_{j+5}\})]$ is an odd hole of length $2\alpha-1$ if $a\in A_{j+2}^3$. In either case, we obtain a contradiction. height3pt width6pt depth2pt\
\[e:quasi-line\] () $G$ is a quasi-line graph.
It suffices to show that for any $x\in V(G)$, $N(x)$ is covered by two cliques. By , $J=\emptyset$. Since $B_j^1$ and $B_j^2$ are not symmetrical for all $j\in\{1,2,3\}$, we consider the following four cases.\
[**Case 1:**]{} $x\in A_i$ for some $i\in I\less\{0, 1,2,3,4,5\}$.\
In this case, $x\in A_i^k$ for some $k\in\{1,2,3\}$. We first assume that $k=1$. Then $x\in A_i^1$. By and the definition of $A_i^1$, $x$ is anti-complete to $A_{i+2}$ and so $N[x]\subseteq A_{i-2}\cup A_{i-1}\cup A_i\cup A_{i+1}\cup\{v_{i}, v_{i+1}, v_{i+2}\}$. We see that $N(x)$ is covered by two cliques $G[(A_{i-2}\cap N(x))\cup A_{i-1}\cup \{v_i\}]$ and $G[(A_i\less x)\cup A_{i+1}\cup\{v_{i+1}, v_{i+2}\}]$. By symmetry, the same holds if $k=3$. So we may assume that $k=2$. By the definition of $A_i^2$, $x$ is anti-complete to $A_{i-2}\cup A_{i+2}$. Thus $N[x]=A_{i-1}\cup A_i\cup A_{i+1}\cup\{v_i, v_{i+1}, v_{i+2}\}$ and so $N(x)$ is covered by two cliques $G[A_{i-1}\cup (A_i\less x)\cup\{v_i\}]$ and $G[A_{i+1}\cup \{v_{i+1}, v_{i+2}\}]$.\
[**Case 2:**]{} $x\in A_i$ for some $i\in \{0,1,2,3,4,5\}$.\
In this case, we first assume that $i=0$. Then $x\in A_0^k$ for some $k\in\{1,2,3\}$. Assume that $x\in A_0^1$. Then $x$ is anti-complete to $B_1$ by and anti-complete to $A_2$ by . Thus $N[x]\subseteq A_{2\alpha-1}\cup A_{2\alpha}\cup A_0\cup A_1\cup \{v_0, v_1, v_2\}$. One can see that $N(x)$ is covered by two cliques $G[(A_{2\alpha-1}\cap N(x))\cup A_{2\alpha}\cup \{v_0\}]$ and $G[(A_0\less x)\cup A_1\cup\{v_1, v_2\}]$. It can be easily checked that $N(x)$ is covered by two cliques $G[A_{2\alpha}\cup (A_0\less x)\cup\{v_0\}]$ and $G[A_1\cup (B_1\cap N(x))\cup\{v_1, v_2\}]$ if $k=2$; and by two cliques $G[A_{2\alpha}\cup (A_0\less x)\cup\{v_0\}]$ and $G[A_1\cup (B_1\cap N(x))\cup(A_2\cap N(x))\cup\{v_1, v_2\}]$ if $k=3$.\
Next assume that $i=1$. Then $x\in A_1^k$ for some $k\in\{1,2,3\}$ and $N[x]\subseteq A_{2\alpha}\cup A_0\cup A_1\cup B_1\cup A_2\cup B_2\cup A_3\cup\{v_1, v_2, v_3\}$. One can see that $N(x)$ is covered by two cliques $G[(A_{2\alpha}\cap N(x))\cup A_0\cup \{v_1\}]$ and $G[(A_1\less x)\cup B_1\cup A_2\cup\{v_2, v_3\}]$ if $k=1$; by two cliques $G[A_0\cup (A_1\less x)\cup \{v_1\}]$ and $G[B_1\cup A_2\cup (B_2\cap N(x))\cup\{v_2, v_3\}]$ if $k=2$; and two cliques $G[A_0\cup (A_1\less x)\cup B_1^1\cup\{v_1, v_2\}]$ and $G[B_1^2\cup A_2\cup (B_2\cap N(x))\cup (A_3\cap N(x))\cup\{ v_3\}]$ if $k=3$.
Assume that $i=2$. Then $x\in A_2^k$ for some $k\in\{1,2,3\}$ and $N[x]\subseteq A_{0}\cup A_{1}\cup B_{1}\cup A_2\cup B_2\cup A_{3}\cup B_{3}\cup A_4\cup\{ v_2, v_3, v_4\}$. One can check that $N(x)$ is covered by two cliques $G[(A_0\cap N(x))\cup A_1\cup B_1^1\cup \{v_2\}]$ and $G[(A_2\less x)\cup B_1^2\cup B_2\cup A_3\cup\{v_3, v_4\}]$ if $k=1$; by two cliques $G[A_1\cup B_1\cup (A_2\less x)\cup \{v_2\}]$ and $G[B_2\cup A_3\cup (B_3\cap N(x))\cup\{v_3, v_4\}]$ if $k=2$; and by two cliques $G[A_1\cup B_1\cup (A_2\less x)\cup B_2^1\cup\{v_2, v_3\}]$ and $G[B_2^2\cup A_3\cup (B_3\cap N(x))\cup (A_4\cap N(x))\cup\{ v_4\}]$ if $k=3$.\
Assume that $i=3$. Then $x\in A_3^k$ for some $k\in\{1,2,3\}$ and $N[x]\subseteq A_1\cup B_{1}\cup A_2\cup B_2\cup A_{3}\cup B_{3}\cup A_4\cup A_5\cup\{ v_3, v_4, v_5\}$. One can check that $N(x)$ is covered by two cliques $G[(A_1\cap N(x)) \cup (B_1\cap N(x))\cup A_2\cup B_2^1\cup \{v_3\}]$ and $G[B_2^2\cup (A_3\less x)\cup B_3\cup A_4\cup\{v_4, v_5\}]$ if $k=1$; and by two cliques $G[(B_1\cap N(x))\cup A_2\cup B_2\cup \{v_3\}]$ and $G[(A_3\less x)\cup B_3 \cup A_4\cup\{v_4, v_5\}]$ if $k=2$. So we may assume that $x\in A_3^3$. By , we have $A_3^3$ is anti-complete to $B_1$. Thus $N(x)$ is covered by two cliques $G[A_2\cup B_2\cup (A_3\less x)\cup B_3^1\cup\{v_3, v_4\}]$ and $G[B_3^2\cup A_4\cup (A_5 \cap N(x))\cup\{ v_5\}]$ if $k=3$.\
Assume that $i=4$. Then $x\in A_4^k$ for some $k\in\{1,2,3\}$ and $N[x]\subseteq A_2\cup B_{2}\cup A_3\cup B_3\cup A_4\cup A_5\cup A_6\cup\{ v_4, v_5, v_6\}$. One can see that $N(x)$ is covered by two cliques $G[(A_2\cap N(x)) \cup (B_2\cap N(x))\cup A_3\cup B_3^1\cup \{v_4\}]$ and $G[B_3^2\cup (A_4\less x)\cup A_5\cup\{v_5, v_6\}]$ if $k=1$, and by two cliques $G[(B_2\cap N(x))\cup A_3\cup B_3\cup \{v_4\}]$ and $G[(A_4\less x) \cup A_5\cup\{v_5, v_6\}]$ if $k=2$. So we may assume that $x\in A_4^3$. By , we have $A_4^3$ is anti-complete to $B_2$. Thus $N(x)$ is covered by two cliques $G[A_3\cup B_3\cup (A_4\less x)\cup\{v_4, v_5\}]$ and $G[ A_5\cup (A_6 \cap N(x))\cup\{ v_6\}]$ if $k=3$.\
Finally assume that $i=5$. Then $x\in A_5^k$ for some $k\in\{1,2,3\}$ and $N[x]\subseteq A_3\cup B_{3}\cup A_4\cup A_5\cup A_6\cup A_7\cup\{ v_5, v_6, v_7\}$. One can check that $N(x)$ is covered by two cliques $G[(A_3\cap N(x)) \cup (B_3\cap N(x))\cup A_4\cup \{v_5\}]$ and $G[(A_5\less x)\cup A_6\cup\{v_6, v_7\}]$ if $k=1$, and by two cliques $G[(B_3\cap N(x))\cup A_4\cup \{v_5\}]$ and $G[(A_5\less x) \cup A_6\cup\{v_6, v_7\}]$ if $k=2$. So we may assume that $x\in A_5^3$. By , we have $A_5^3$ is anti-complete to $B_3$. Thus $N(x)$ is covered by two cliques $G[ A_4\cup (A_5\less x)\cup \{v_5, v_6\}]$ and $G[ A_6\cup (A_7\cap N(x))\cup\{v_7\}]$ if $k=3$.
This completes the proof of Case 2.\
[**Case 3:**]{} $x\in B_j$ for some $j\in \{1,2,3\}$.\
In this case, first assume that $j=1$. Then $x\in B_1^k$ for some $k\in\{1,2\}$, and $N[x]\subseteq A_0\cup A_1\cup B_1\cup A_2\cup B_2\cup A_3\cup\{v_1, v_2, v_3, v_4\}$. We see that $N(x)$ is covered by two cliques $G[A_0\cup A_1\cup (B_1^1\less x)\cup \{v_1, v_2\}]$ and $G[B_1^2\cup A_2\cup B_2\cup (A_3\cap N(x))\cup\{v_3, v_4\}]$ if $k=1$; and by two cliques $G[(A_0\cap N(x))\cup A_1\cup B_1^1\cup \{v_1, v_2\}]$ and $G[(B_1^2\less x)\cup A_2\cup B_2\cup A_3\cup\{v_3, v_4\}]$ if $k=2$.
Next assume that $j=2$. Then $x\in B_2^k$ for some $k\in\{1,2\}$, and $N[x]\subseteq A_1\cup B_1\cup A_2\cup B_2\cup A_3\cup B_3\cup A_4\cup \{ v_2, v_3, v_4, v_5\}$. One can see that $N(x)$ is covered by two cliques $G[A_1\cup B_1\cup A_2\cup (B_2^1\less x)\cup \{v_2, v_3\}]$ and $G[B_2^2\cup A_3\cup B_3\cup (A_4\cap N(x))\cup\{v_4, v_5\}]$ if $k=1$; and by two cliques $G[(A_1\cap N(x))\cup B_1\cup A_2\cup B_2^1\cup \{v_2, v_3\}]$ and $G[(B_2^2\less x)\cup A_3\cup B_3\cup A_4\cup\{v_4, v_5\}]$ if $k=2$.
Finally assume $j=3$. Then $x\in B_3^k$ for some $k\in\{1,2\}$, and $N[x]\subseteq A_2\cup B_2\cup A_3\cup B_3\cup A_4\cup A_5\cup \{v_3, v_4, v_5, v_6\}$. We see that $N(x)$ is covered by two cliques $G[A_2\cup B_2\cup A_3\cup (B_3^1\less x)\cup \{v_3, v_4\}]$ and $G[B_3^2\cup A_4\cup (A_5\cap N(x))\cup\{v_5, v_6\}]$ if $k=1$; and by two cliques $G[(A_2\cap N(x))\cup B_2\cup A_3\cup B_3^1\cup \{v_3, v_4\}]$ and $G[(B_3^2\less x)\cup A_4\cup A_5\cup\{v_5, v_6\}]$ if $k=2$.\
[**Case 4:**]{} $x\in V(C)$.\
In this case, let $x=v_i$ for some $i\in I$. First assume that $i\ne 1,2,3,4,5,6$. Then $N(v_{i})=A_{i-2}\cup A_{i-1}\cup A_{i}\cup\{v_{i-1}, v_{i+1}\}$ and so $N(v_{i})$ is covered by two cliques $G[A_{i-2}\cup A_{i-1}\cup\{v_{i-1}\}]$ and $G[A_{i}\cup\{ v_{i+1}\}]$. Next assume that $i\in\{ 1,2,3,4,5,6\}$. One can easily check that $N(v_1)$ is covered by two cliques $G[A_{2\alpha}\cup A_0\cup\{v_0\}]$ and $G[A_1\cup B_1\cup\{v_2\}]$; $N(v_2)$ by two cliques $G[A_{0}\cup A_1\cup\{v_1\}]$ and $G[B_1\cup A_2\cup B_2\cup\{v_3\}]$; $N(v_3)$ by two cliques $G[A_{1}\cup B_1\cup A_2\cup\{v_2\}]$ and $G[B_2\cup A_3\cup B_3\cup\{v_4\}]$; $N(v_4)$ by two cliques $G[ B_1\cup A_2\cup B_2\cup\{v_3\}]$ and $G[A_3\cup B_3\cup A_4\cup\{v_5\}]$; $N(v_5)$ by two cliques $G[ B_2\cup A_3\cup B_3\cup\{v_4\}]$ and $G[A_4\cup A_5\cup\{v_6\}]$; and $N(v_6)$ by two cliques $G[ B_3\cup A_4\cup\{v_5\}]$ and $G[A_5\cup A_6\cup\{v_7\}]$, respectively.
This proves that $G$ is a quasi-line graph. height3pt width6pt depth2pt\
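The quasi-line property established above — every neighborhood covered by two cliques — can also be tested mechanically on small graphs, since $N(x)$ splits into two cliques exactly when the complement of $G[N(x)]$ is bipartite. A sketch of such a check (helper names are ours, not from the paper):

```python
def is_quasi_line(adj):
    """Test whether every vertex's neighborhood is covered by two cliques.
    N(x) is the union of two cliques iff the complement of the induced
    subgraph on N(x) is bipartite (2-colorable)."""
    for x in adj:
        nbrs = list(adj[x])
        # Complement adjacency restricted to N(x): the non-edges of G[N(x)].
        comp = {v: {u for u in nbrs if u != v and u not in adj[v]} for v in nbrs}
        color = {}
        for start in nbrs:
            if start in color:
                continue
            color[start] = 0
            stack = [start]
            while stack:
                v = stack.pop()
                for u in comp[v]:
                    if u not in color:
                        color[u] = 1 - color[v]
                        stack.append(u)
                    elif color[u] == color[v]:
                        # Odd cycle in the complement: more than two cliques needed.
                        return False
    return True
```

The claw $K_{1,3}$ fails the test at its center vertex, while any cycle passes.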
By , $G$ is a quasi-line graph. By Theorem \[quasi\], $h(G)\ge \chi(G)=t$, a contradiction. This completes the proof of Theorem \[main\]. height3pt width6pt depth2pt\
[**Remark.**]{} We made no use of the fact $\omega(G)\le t-2$ in the proof of Theorem \[main\]. We kept it in the proof in the hope that one might be able to find a short proof of Theorem \[main\] without using the fact that Hadwiger’s conjecture is true for quasi-line graphs (namely Theorem \[quasi\]).
Acknowledgements {#acknowledgements .unnumbered}
================
The authors would like to thank the anonymous referee for many helpful comments.
[99]{}

B. Albar, D. Gonçalves, On triangles in $K_r$-minor free graphs, arXiv:1304.5468.

K. Appel and W. Haken, Every planar map is four colorable, Part I. Discharging, Illinois J. Math., 21 (1977) 429-490.

K. Appel, W. Haken and J. Koch, Every planar map is four colorable, Part II. Reducibility, Illinois J. Math., 21 (1977) 491-567.

J. Balogh, A. V. Kostochka, N. Prince and M. Stiebitz, The Erdős-Lovász Tihany conjecture for quasi-line graphs, Discrete Math., 309 (2009) 3985-3991.

P. A. Catlin, A bound on the chromatic number of a graph, Discrete Math., 22 (1978) 81-83.

M. Chudnovsky and A. O. Fradkin, Coloring quasi-line graphs, J. Graph Theory, 54 (2007) 41-50.

M. Chudnovsky and A. O. Fradkin, Hadwiger’s conjecture for quasi-line graphs, J. Graph Theory, 59 (2008) 17-33.

M. Chudnovsky and A. O. Fradkin, An approximate version of Hadwiger’s conjecture for claw-free graphs, J. Graph Theory, 63 (2010) 259-278.

M. Chudnovsky and P. D. Seymour, Claw-free graphs VII. Quasi-line graphs, J. Combin. Theory Ser. B, 102 (2012) 1267-1294.

M. Chudnovsky, N. Robertson, P. Seymour and R. Thomas, The strong perfect graph theorem, Annals of Math., 164 (2006) 51-229.

G. A. Dirac, A property of $4$-chromatic graphs and some remarks on critical graphs, J. London Math. Soc., 27 (1952) 85-92.

B. Guenin, Graphs without odd-$K_5$ minors are $4$-colorable, in preparation.

T. R. Jensen and B. Toft, Graph Coloring Problems, Wiley-Interscience, 1995.

I. T. Jakobsen, On certain homomorphism properties of graphs I, Math. Scand., 31 (1972) 379-404.

I. T. Jakobsen, On certain homomorphism properties of graphs II, Math. Scand., 52 (1983) 229-261.

H. Hadwiger, Über eine Klassifikation der Streckenkomplexe, Vierteljahrsschr. Naturforsch. Ges. Zürich, 88 (1943) 133-142.

K. Kawarabayashi and Z-X. Song, Some remarks on odd Hadwiger’s conjecture, Combinatorica, 27 (2007) 429-438.

K. Kawarabayashi and B. Toft, Any $7$-chromatic graph has a $K_7$ or $K_{4,4}$ as a minor, Combinatorica, 25 (2005) 327-353.

M. Kriesell, On Seymour’s strengthening of Hadwiger’s conjecture for graphs with certain forbidden subgraphs, Discrete Math., 310 (2010) 2714-2724.

M. D. Plummer, M. Stiebitz and B. Toft, On a special case of Hadwiger’s Conjecture, Discuss. Math. Graph Theory, 23 (2003) 333-363.

B. Reed and P. Seymour, Hadwiger’s conjecture for line graphs, European J. Combin., 25 (2004) 873-876.

N. Robertson, D. P. Sanders, P. D. Seymour and R. Thomas, The four-color theorem, J. Combin. Theory Ser. B, 70 (1997) 2-44.

N. Robertson, P. D. Seymour and R. Thomas, Hadwiger’s conjecture for $K_6$-free graphs, Combinatorica, 13 (1993) 279-361.

M. Rolek and Z-X. Song, Coloring graphs with forbidden minors, submitted. Available at arXiv:1606.05507.

P. Seymour, Hadwiger’s conjecture, to appear.

B. Toft, A survey on Hadwiger’s Conjecture, in: [*Surveys in Graph Theory*]{} (edited by G. Chartrand and M. Jacobson), Congr. Numer., 115 (1996) 249-283.

K. Wagner, Über eine Eigenschaft der ebenen Komplexe, Math. Ann., 114 (1937) 570-590.
[^1]: Corresponding Author. Email: [email protected]
[^2]: Supported by the UCF Research and Mentoring Program (RAMP) for undergraduate students.
[^3]: Current address: Department of Mathematics, University of Virginia, Charlottesville, VA 22904
---
abstract: 'The hemisphere soft function is calculated to order $\alpha_s^2$. This is the first multi-scale soft function calculated to two loops. The renormalization scale dependence of the result agrees exactly with the prediction from effective field theory. This fixes the unknown coefficients of the singular parts of the two-loop thrust and heavy-jet mass distributions. There are four such coefficients, for 2 event shapes and 2 color structures, which are shown to be in excellent agreement with previous numerical extraction. The asymptotic behavior of the soft function has double logs in the $C_FC_A$ color structure, which agree with non-global log calculations, but also has sub-leading single logs for both the $C_FC_A$ and $C_F T_F n_f$ color structures. The general form of the soft function is complicated, does not factorize in a simple way, and disagrees with the Hoang-Kluth ansatz. The exact hemisphere soft function will remove one source of uncertainty on the $\alpha_s$ fits from $e^+e^-$ event shapes.'
---
**The two-loop hemisphere soft function**
[Randall Kelley and Matthew D. Schwartz]{}\
\
\
[*Instituto de Física Teórica UAM/CSIC\
Universidad Autónoma de Madrid\
Cantoblanco, E-28049 Madrid, España*]{}\
\
\
Introduction {#sec:intro}
============
There has been significant activity in the last few years in the effective field theory community to perform accurate calculations of event shapes for $e^+ e^-$ colliders. At high energy, the hadronic final states in $e^+ e^-$ collisions are dominated by the formation of jets of particles and are described by perturbative QCD. Comparison of theoretical calculations of event shapes with the experimentally measured values has led to some of the most precise measurements of the strong coupling constant $\alpha_s$. The NNLO fixed-order calculations in [@GehrmannDeRidder:2005cm; @GehrmannDeRidder:2007bj; @GehrmannDeRidder:2007hr; @Weinzierl:2008iv] allow the prediction of many event shapes to order $\alpha_s^3$. Advances in Soft-Collinear Effective Theory (SCET) [@Bauer:2000yr; @Bauer:2001yt; @Beneke:2002ph; @Fleming:2007qr] have allowed for resummation of large logarithmic corrections to thrust [@Schwartz:2007ib; @Becher:2006qw; @Becher:2008cf] and heavy jet mass [@Chien:2010kc] to ${\rm N^3LL}$ accuracy, and non-perturbative effects were included for thrust in [@Abbate:2010xh]. These results have been used to extract a value of $\alpha_s$ that is competitive with the world average [@Yao:2006px].
Dijet event shapes such as thrust and heavy jet mass demonstrate singular behavior when calculated perturbatively at fixed order, due to the appearance of large logarithmic corrections. These large logarithms invalidate a naive expansion in $\alpha_s$ and must be resummed to provide accurate predictions in the dijet limit, where there is a clear separation between kinematic scales. Effective theory techniques exploit this separation: through renormalization group (RG) evolution, logarithms of the ratios of these scales can be resummed. Each of the relevant scales is governed by different physics and can be calculated in a different theory. The cross section can be shown to factorize into a hard function, due to physics at the center-of-mass energy $Q$, a jet function, due to physics at the jet scale, and a soft function, which describes soft gluon emission. The hard and jet functions are known to two loops. However, the soft function relevant for thrust or heavy jet mass is only partially known beyond one loop [@Hoang:2008fs; @Chien:2010kc]. In this paper, the perturbative soft function is computed analytically to order $\alpha_s^2$.
Soft functions have been studied for many years, not just in SCET. These soft functions are defined as matrix elements of Wilson lines. For resummation up to the next-to-leading logarithmic order (NLL), all that is needed about the soft function is its anomalous dimension. This can be extracted either from renormalization-group invariance or from the virtual graphs. For example, such calculations have been done for thrust [@Becher:2008cf], direct photon [@Becher:2009th], and dijet production [@Kidonakis:1997gm; @Aybat:2006mz; @Kelley:2010qs; @Kelley:2010fn]. To go beyond NLL, one needs the finite parts of these soft functions, which are more difficult to calculate because the real emission graphs are needed, and these involve often complicated phase-space cuts. In all cases calculated at 2-loops so far, such as Drell-Yan [@Korchemsky:1993uz; @Belitsky:1998tc] or $b\to s \gamma$ [@Becher:2005pd], the real emission graphs only involve one scale. Multi-scale soft functions, where different constraints are placed on gluons or quarks going in different directions, such as the hemisphere soft function, are likely to play an important role in hadron collisions [@Ellis:2010rw; @Kelley:2011tj]. At order $\alpha_s$, the multiple scales are irrelevant, since only one gluon can be emitted. At order $\alpha_s^2$ or beyond, there can be real emission graphs depending on multiple scales at the same time. It has been suggested [@Hoang:2008fs] that the soft function should depend only on logarithms of these scales, such as $\ln^2(k_L/k_R)$. Whether more complicated scale-independent terms, such as $\text{Li}_2(-{k_L/k_R}) + \text{Li}_2(-{k_R/k_L})$ might appear has been an open question. Understanding the form of these soft functions in more detail will be important for LHC precision jet physics at NNLL and beyond [@Kelley:2011tj].
The hemisphere soft function $S(k_L,k_R,\mu)$ is the probability to have soft radiation with small component $k_L$ going into the left hemisphere and soft radiation with small component $k_R$ going into the right hemisphere. More precisely, in $e^+e^- \to $ hadron events at center-of-mass energy $Q$, in the limit that all radiation is much softer than $Q$, the cross section is given by matrix elements of Wilson lines. These Wilson lines point in the direction of two back-to-back light-like quarks which come from the Born process $e^+e^- \to \bar{q} q$. Each quark direction defines a hemisphere, which we call left and right and denote with the light-like 4-vectors $n^\mu$ and $\bar{n}^\mu$. If the total radiation in the left (right) hemisphere is $P_L^\mu$ ($P_R^\mu$), then $S(k_L,k_R,\mu)$ is the matrix element squared to have $k_L= n \cdot P_L$ and $k_R = \bar{n} \cdot P_R$, with all other degrees of freedom integrated over.
The hemisphere soft function is known to have many interesting properties and is conjectured to have others. The factorization theorem for the full hemisphere mass distribution implies that the Laplace transform of the soft function should factorize into the form $$\label{softfact}
\tilde{s}(L_1, L_2, \mu ) = {\widetilde{s}_{\mu}}(L_1) {\widetilde{s}_{\mu}}(L_2) {\widetilde{s}_{f}}(L_1 - L_2)$$ where $L_1 = \ln x_L \mu$ and $L_2 = \ln x_R \mu$, with $x_R$ and $x_L$ the Laplace conjugate variables to $k_L$ and $k_R$. The anomalous dimension of the soft function and the function $\tilde{s}_\mu(L)$ are known exactly to 3-loop order. The function $\tilde{s}_{f}(L)$ is known exactly only to order $\alpha_s$. Hoang and Kluth [@Hoang:2008fs] argued that at order $\alpha_s^2$ the function $\tilde{s}_{f}(L)$ must be a polynomial of at most 2nd order in $L$, i.e. ${\widetilde{s}_{f}}(L) = c_{2}^S + c_{2L}^S L^2$. In this paper, we show that this Hoang-Kluth ansatz does not hold; ${\widetilde{s}_{f}}(L)$ is much more complicated. Certain moments of ${\widetilde{s}_{f}}(L)$ contribute to the coefficients of $\delta(\tau)$ and $\delta(\rho)$ in the thrust and heavy-jet mass distributions. These moments were fit numerically in [@Hoang:2008fs] and [@Chien:2010kc] using numerical calculations of the singular behaviour of these distributions in full QCD with the program [event 2]{}. In this paper, we produce these moments analytically and find that they are in excellent agreement with the most accurate available numerical fit [@Chien:2010kc].
Any $L$ dependence at large $L$ in ${\widetilde{s}_{f}}(L)$ turns into large logarithmic behavior of the hemisphere mass distribution ([*i.e.*]{} $\ln(M_L/M_R)$). Since all of the $\mu$ dependence is in ${\widetilde{s}_{\mu}}(L)$, these large logs are not determined by RG invariance and correspond to so-called “non-global logs”. Dasgupta and Salam calculated the non-global logs for the related left-hemisphere mass distribution in full QCD [@Dasgupta:2001sh] and found no non-global logs (up to order $L^2$) for the $C_F n_f T_F$ color structure and an $L^2$ term with coefficient $-\frac{4\pi^2}{3}$ for the $C_F C_A$ term. We show below that the asymptotic behavior of ${\widetilde{s}_{f}}(L)$ in the full soft function is indeed of the form $-\frac{4\pi^2}{3} L^2$ for the $C_F C_A$ color structure. We also find that both this color structure and the $C_F n_f T_F$ one have additional non-global single logs. These are especially interesting because the soft function is symmetric in $L \to -L$, which seems to forbid a linear term. The linear term appears through a complicated analytic function involving polylogarithms which actually asymptotes to $|L|$.
This paper is organized as follows. In section \[sec:shapes\] we review the factorization formula for the hemisphere mass distribution and its thrust and heavy-jet mass projections. Section \[sec:calculation\] computes the soft function in dimensional regularization. The calculation is complicated, so the results are summarized separately in Section \[sec:summary\]. Section \[sec:integrating\] discusses the result and presents the renormalized integrated soft function, which can be compared directly to the predictions from SCET. Section \[sec:thrust\] gives the previously missing terms in the singular parts of the 2-loop thrust and heavy jet mass distributions, and compares to previous numerical estimates. Section \[sec:hemi\] gives the full integrated hemisphere soft function, which is compared to previous conjectures. The asymptotic form of this distribution, which exhibits non-global logs, is discussed in Section \[sec:asym\]. Section \[sec:exp\] has some comments on predicting higher order terms with non-Abelian exponentiation. Conclusions and implications are discussed in Section \[sec:conc\].
Event Shapes and Factorization in SCET \[sec:shapes\]
=====================================================
The hemisphere soft function appears in the factorization theorem for the hemisphere mass distribution. The hemispheres are defined with respect to the thrust axis. Thrust itself is defined by $$T = \max_{\mathbf{n}} \left( \frac{\sum_{i} | \mathbf{p}_i \cdot \mathbf{n} |}{\sum_{i} | \mathbf{p}_i |} \right),$$ where the sum is over all momentum 3-vectors $\mathbf{p}_i$ in the event. The thrust axis is the unit 3-vector $\mathbf{n}$ that maximizes the expression in parentheses. We then define the light-like 4-vectors $n^\mu = (1,\mathbf{n})$ and $\bar{n}^\mu = (1,-\mathbf{n})$. In the dijet limit $T \to 1$ and it is therefore more convenient to define $\tau = 1 - T$ as the thrust variable so that $\tau$ is small in the dijet limit.
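Since $\sum_i |\mathbf{p}_i \cdot \mathbf{n}|$ is maximized when $\mathbf{n}$ is parallel to $\sum_i \epsilon_i\, \mathbf{p}_i$ for some choice of signs $\epsilon_i = \pm 1$, thrust can be computed exactly for a small event by brute force over the $2^n$ sign assignments. A minimal sketch (our own code, not from the paper):

```python
from itertools import product
from math import sqrt

def thrust(momenta):
    """Exact thrust T = max_n sum_i |p_i . n| / sum_i |p_i|, using the
    identity max_n sum_i |p_i . n| = max over sign vectors e of |sum_i e_i p_i|.
    `momenta` is a list of 3-vectors given as (px, py, pz) tuples."""
    norm = sum(sqrt(px * px + py * py + pz * pz) for px, py, pz in momenta)
    best = 0.0
    for signs in product((1, -1), repeat=len(momenta)):
        sx = sum(s * p[0] for s, p in zip(signs, momenta))
        sy = sum(s * p[1] for s, p in zip(signs, momenta))
        sz = sum(s * p[2] for s, p in zip(signs, momenta))
        best = max(best, sqrt(sx * sx + sy * sy + sz * sz))
    return best / norm
```

A back-to-back two-particle event gives $T=1$ (so $\tau=0$), while the symmetric three-jet configuration gives the well-known $T=2/3$.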
Once the thrust axis is known, we divide the event into two hemispheres defined by the plane perpendicular to the thrust axis. We define $P_L^{\mu}$ and $P_R^{\mu}$ to be the 4-vector sum of all of the radiation going into each hemisphere and $M_L = \sqrt{P_L^2}$ and $M_R = \sqrt{P_R^2}$ to be the hemisphere invariant masses. When both $M_{L}$ and $M_{R}$ are small compared to the center-of-mass energy, $Q$, the hemisphere mass distribution factorizes into [@Fleming:2007qr] $$\label{FacTheorem}
\frac{1}{\sigma_0}\frac{\rd^2 \sigma}{\rd M_L^2 \rd M_R^2}
= H(Q^2, \mu)
\int \rd k_L \rd k_R
J( M_L^2 - Qk_L, \mu) J( M_R^2 - Qk_R, \mu)
S(k_L, k_R, \mu)\,.$$ Here, $\sigma_0$ is the tree level total cross section. $H(Q^2, \mu)$ is the hard function which accounts for the matching between QCD and SCET. $J(p^2)$ is the inclusive jet function which accounts for the matching between an effective field theory with soft and collinear modes to a theory with only soft modes. Finally, the object of interest, $S(k_L, k_R, \mu)$ is the hemisphere soft function, which is derived by integrating out the remaining soft modes.
In the threshold limit (small hemisphere masses), the thrust axis aligns with the jet axis and thrust reduces to the sum of the two squared hemisphere masses divided by $Q^2$, $$\tau = \frac{M_L^2 + M_R^2}{Q^2} + \mathcal{O}\left( \frac{ M_{L,R}^4}{Q^4} \right).$$ Heavy jet mass $\rho$ is defined to be the larger of the two squared hemisphere masses, normalized to the square of the center-of-mass energy $Q$, $$\rho =
\frac{1}{Q^2}
\max( M_L^2 , M_R^2 ) .$$ When $\rho$ is small, both hemisphere masses are small and the event appears as two pencil-like, back to back jets.
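The relations among $\tau$, $\rho$ and the hemisphere masses can be checked numerically on a toy near-dijet event. In the sketch below (the `hemisphere_obs` helper is ours, not the paper's), two collinear massless partons recoil against a third along $-\hat z$; for this configuration the thrust axis is exactly the $z$-axis, $1-T = (E_1+E_2-1)/(E_1+E_2+1)$, and $\tau = (M_L^2+M_R^2)/Q^2$ holds without power corrections:

```python
import math

def hemisphere_obs(momenta, n=(0.0, 0.0, 1.0)):
    """Split massless 4-vectors (E, px, py, pz) into hemispheres by the
    sign of the 3-momentum projection onto n (the thrust axis, supplied
    by hand here), and return (tau, rho) built from hemisphere masses."""
    Q = sum(p[0] for p in momenta)
    PL, PR = [0.0] * 4, [0.0] * 4
    for p in momenta:
        dest = PR if p[1] * n[0] + p[2] * n[1] + p[3] * n[2] > 0 else PL
        for i in range(4):
            dest[i] += p[i]
    m2 = lambda P: P[0]**2 - P[1]**2 - P[2]**2 - P[3]**2
    ML2, MR2 = m2(PL), m2(PR)
    return (ML2 + MR2) / Q**2, max(ML2, MR2) / Q**2

# near-dijet event: two collinear partons recoiling against one along -z
kt, z = 0.1, 0.5
E1, E2 = math.hypot(kt, z), math.hypot(kt, 1 - z)
event = [(E1, kt, 0.0, z), (E2, -kt, 0.0, 1 - z), (1.0, 0.0, 0.0, -1.0)]
tau, rho = hemisphere_obs(event)
```

Here the left hemisphere contains a single massless parton, so $M_L = 0$ and $\rho = \tau$ for this event.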
The factorization formula can be used to calculate thrust and heavy jet mass in the dijet limit as integrals over the doubly differential hemisphere mass distribution. Explicitly, $$\label{def:thrust}
\frac{\rd \sigma}{\rd \tau}
= Q^2 \int \rd M_L^2 \rd M_R^2
\frac{\rd^2 \sigma}{\rd M_L^2 \rd M_R^2}
\delta (Q^2 \tau - M_L^2 - M_R^2 )$$ and $$\label{def:hjm}
\frac{\rd \sigma}{\rd \rho}
= Q^2 \int \rd M_L^2 \rd M_R^2
\frac{\rd^2 \sigma}{\rd M_L^2 \rd M_R^2}
\left[
\delta (Q^2 \rho - M_L^2)\theta( M_L^2 - M_R^2)
+
\delta (Q^2 \rho - M_R^2)\theta( M_R^2 - M_L^2)
\right]\,.$$ The thrust distribution can be written so that it depends not on the full hemisphere soft function but on the thrust-soft function, defined as $$S_T(k,\mu) = \int \rd k_L \rd k_R S(k_L ,k_R,\mu) \delta(k-k_L -k_R)\,.$$ Since the thrust soft function is dimensionless and its $\mu$ dependence is determined by renormalization group invariance, the $k$ dependence is also completely known. Thus at each order in $\alpha_s$ only one number, the constant part, is unknown. In contrast, for the heavy jet mass distribution, the full $k_L$ and $k_R$ dependence of the soft function is needed for the factorization theorem. In particular, for resummation to N$^3$LL order, only one number is needed for thrust (the constant in the 2-loop thrust soft function), which has been fit numerically, but for heavy-jet mass a function is needed [@Chien:2010kc]. In this paper we compute both the number and the function.
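The thrust projection simply fixes $k = k_L + k_R$. For a toy separable soft function (an illustrative stand-in, not the QCD soft function), the definition of $S_T$ can be evaluated in closed form:

```python
import sympy as sp

kL, k = sp.symbols('k_L k', positive=True)
# toy S(k_L, k_R) = exp(-k_L - k_R); the delta function sets k_R = k - k_L,
# leaving a single integral over k_L in [0, k]
ST = sp.integrate(sp.exp(-kL) * sp.exp(-(k - kL)), (kL, 0, k))
assert sp.simplify(ST - k * sp.exp(-k)) == 0  # S_T(k) = k exp(-k)
```

The factor of $k$ is the phase-space volume of the line $k_L + k_R = k$, the generic effect of the thrust projection.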
Calculation of the Soft Function \[sec:calculation\]
====================================================
The soft function is defined as $$S(k_L, k_R, \mu)
\equiv
\frac{1}{N_c} \sum_{X_s}
\delta( k_R - n \cdot P_s^{R} )
\delta( k_L - {{\bar{n}}}\cdot P_s^{L} )
{\left< 0 \right|} \overline{Y}_{{{\bar{n}}}} Y_n {\left| X_s \right>}
{\left< X_s \right|} Y^{\dagger}_n \overline{Y}^{\dagger}_{{{\bar{n}}}} {\left| 0 \right>} ,$$ where $P_s^{L,R}$ is the total momentum of the final state ${\left| X_s \right>}$ in the left and right hemisphere, respectively. The Wilson lines $Y_n$ and $\overline{Y}_{{{\bar{n}}}}$ are defined by $$\begin{aligned}
Y^{\dagger}_n(x)
= P \exp \left( ig \int_0^{\infty}\!\! ds\ n \cdot A_s(n s + x) \right)
&&
\overline{Y}^{\dagger}_{{{\bar{n}}}}(x)
= P \exp \left( ig \int_0^{\infty}\!\! ds\ {{\bar{n}}}\cdot \overline{A}_s({{\bar{n}}} s + x) \right) ,\end{aligned}$$ where $P$ denotes path ordering and $A_s = A^{a}_s T^a$ $(\overline{A}_s = A_s^{a} \overline{T}^{a})$ are gauge fields in the fundamental (anti-fundamental) representation. The soft function can be factorized into a perturbative (partonic) part and a non-perturbative part which has support of order $\Lambda_{QCD}$ [@Abbate:2010xh].
The authors of [@Hoang:2008fs] observed that the form of the soft function is constrained by the non-Abelian exponentiation theorem and RG invariance, which puts constraints on powers of logarithms of $\mu$. The theorem also restricts the $C_F^n$ color structure in the soft function to be completely determined by the one-loop result. Beyond this, however, the soft function is unconstrained. The one-loop calculation was done in [@Schwartz:2007ib; @Fleming:2007xt]. The main result of this paper is the calculation of the perturbative part of the hemisphere soft function to order $\alpha_s^2$. Since the order $\alpha_s^2$ color structure $C_F^2$ is given in [@Hoang:2008fs], we will only calculate the $C_F C_A$ and the $C_F n_f T_F$ terms.
$C_F C_A$ color structure
-------------------------
The order $\alpha_s^2$ calculation involves pure virtual graphs, pure real emission graphs, and interference between the two. The pure virtual contributions to the soft function give scaleless integrals which convert IR divergences to UV divergences, and are not explicitly written. The diagrams needed to compute the pure real emission contributions are shown in Figs. \[fig:1\]-\[fig:4\], whereas the interference graphs between the order $\alpha_s$ real and virtual emission amplitudes are shown in Fig. \[fig:5\].
The integrals corresponding to the two diagrams in Fig. \[fig:1\] (and the twin of diagram [[A]{}]{} obtained by interchanging $k$ and $q$) are given, in $d = 4-2{\epsilon}$ dimensions using the $\overline{\text{MS}}$ scheme, by $$\begin{aligned}
&I_{{{{\color{darkblue}}A}}}
= 2
(-4g^4) \left( C_F^2 - \frac{C_F C_A}{2}\right)
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{1}{k^- k^+ q^- q^+ }
F(k_L, k_R) \end{aligned}$$ and $$\begin{aligned}
&I_{{{{\color{darkblue}}B}}}
=
-4g^4
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{1}{(k^+ + q^+) (k^- + q^-) }
{\nonumber \\}& \qquad \times
\left\{
\left( C_F^2 - \frac{C_F C_A}{2}\right)
\left( \frac{1}{k^-k^+} + \frac{1}{q^- q^+} \right)
+
C_F^2
\left( \frac{1}{k^-q^+} + \frac{1}{q^- k^+} \right)
\right\}
F(k_L, k_R) , \end{aligned}$$ where $k^- = {{\bar{n}}}\cdot k$, $k^+ = n \cdot k$ and $F(k_L, k_R)$ contains the $\delta(q^2)$ and $\delta(k^2)$ factors which put the emitted gluons on shell and the phase-space restrictions in the definition of the hemisphere soft function. Explicitly, $F(k_L, k_R)$ is given by $$\begin{aligned}
\label{cut1}
F(k_L, k_R)
&= \frac{1}{2!}
(-2\pi i)^2 \delta(k^2) \delta(q^2)
{\nonumber \\}&\times
\Bigl[
\Theta( k^- - k^+ )
\Theta( q^+ - q^- )
\delta( k^+ - k_R)
\delta( q^- - k_L)
{\nonumber \\}&\ \ +
\Theta( k^+ - k^- )
\Theta( q^- - q^+ )
\delta( k^- - k_L)
\delta( q^+ - k_R)
{\nonumber \\}&\ \ +
\Theta( k^- - k^+ )
\Theta( q^- - q^+ )
\delta( k^+ + q^+ - k_R)
\delta(k_L)
{\nonumber \\}&\ \ +
\Theta( q^+ - q^- )
\Theta( k^+ - k^- )
\delta( k^- + q^- - k_L)
\delta(k_R)
\Bigr] .\end{aligned}$$ In each diagram, the momentum has been routed so that the 4-vectors $k$ and $q$ correspond to the final state gluons. The gluonic contribution to the $C_F C_A$ color factor will be symmetric in $k$ and $q$ due to the fact that the radiated gluons are identical particles. In the first diagram, a factor of two has been added since the integrand is unchanged after $k \leftrightarrow q$, whereas in the second diagram, the SCET Feynman rules for two gluon emission from a single soft Wilson line automatically account for $k\leftrightarrow q$. The factor of $1/2!$ needed for averaging over $k \leftrightarrow q$ is in $F(k_L, k_R)$. Not shown in Fig. \[fig:1\] is the graph that corresponds to the complex conjugate of diagram [[B]{}]{}. This diagram gives the same integral as diagram [[B]{}]{}. Since we are interested in the $C_F C_A$ contribution, the linear combination of interest is $I_{{{{\color{darkblue}}A}}} + 2I_{{{{\color{darkblue}}B}}}$. Diagram [[A]{}]{} and its identical twin are self-conjugate and only contribute once because they represent the squares of tree-level Feynman diagrams.
[Figs. \[fig:1\]–\[fig:3\]: Feynman diagrams [[A]{}]{}–[[F]{}]{} for the double real-emission contributions (axodraw picture source not reproduced).]
There are four classes of diagrams involving the triple gauge coupling. Diagrams [[C]{}]{} and [[D]{}]{}, shown in Fig. \[fig:2\], give the following integrals, $$\begin{aligned}
I_{{{{\color{darkblue}}C}}} &=
-g^4 C_A C_F
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{F(k_L, k_R)}{(k^- + q^-)(k+q)^2 }
\left(
\frac{k^- + 2q^-}{ k^+ q^- }
+
\frac{q^- + 2k^-}{ q^+ k^- }
\right)\end{aligned}$$ $$\begin{aligned}
I_{{{{\color{darkblue}}D}}}
&=
-g^4 C_A C_F
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{F(k_L, k_R)}{(k^+ + q^+) (k+q)^2 }
\left(
\frac{2k^+ + q^+}{k^+ q^- }
+
\frac{2q^+ + k^+}{q^+ k^- }
\right)\end{aligned}$$ whereas, diagrams [[E]{}]{} and [[F]{}]{}, shown in Fig \[fig:3\], give $$\begin{gathered}
I_{{{{\color{darkblue}}E}}}
=
g^4
C_F C_A
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
{\nonumber \\}\qquad \times
\left( \frac{1}{q^-} - \frac{1}{k^-} \right)
\frac{q^- - k^- }{(k^+ + q^+) (k^- + q^-) (k+q)^2 }
F(k_L, k_R)\end{gathered}$$ $$\begin{gathered}
I_{{{{\color{darkblue}}F}}}
=
g^4
C_F C_A
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
{\nonumber \\}\qquad \times
\left( \frac{1}{k^+} - \frac{1}{q^+} \right)
\frac{k^+ - q^+ }{(k^+ + q^+) (k^- + q^-) (k+q)^2 }
F(k_L, k_R)\end{gathered}$$ Each of these diagrams has a complex conjugate and so they contribute twice.
There are three self-energy topologies, shown in Fig. \[fig:4\]. The gluon and ghost self-energy graphs contribute to integrals $I_{{{{\color{darkblue}}G}}}, I_{{{{\color{darkblue}}H}}}$ and $I_{{{{\color{darkblue}}I}}}$ below. $$\begin{gathered}
I_{{{{\color{darkblue}}G}}}
=
g^4 C_F C_A
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{1}{(k^-+q^-)(k^++q^+)(k+q)^4 }
{\nonumber \\}\qquad \times
\Bigl[
q^+ [(d-6)q^- - (d+2)k^- ]
+k^+ [(d-6)k^- - (d+2)q^- ]
+16 k\cdot q
\Bigr]
F(k_L, k_R)\end{gathered}$$ $$\begin{gathered}
I_{{{{\color{darkblue}}H}}}
=
g^4 C_F C_A
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{1}{(k^-+q^-)^2(k+q)^4 }
{\nonumber \\}\qquad \times
\Bigl[
2(d+2) q^-k^-
-(d-6) (k^-)^2
-(d-6) (q^-)^2
\Bigr]
F(k_L, k_R)\end{gathered}$$ $$\begin{gathered}
I_{{{{\color{darkblue}}I}}}
=
g^4 C_F C_A
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{1}{(k^+ + q^+)^2(k+q)^4 }
{\nonumber \\}\qquad \times
\Bigl[
2(d+2) q^+k^+
-(d-6) (k^+)^2
-(d-6) (q^+)^2
\Bigr]
F(k_L, k_R)\end{gathered}$$ As usual, cutting Feynman diagrams removes any symmetry factors that were associated with the cut lines prior to cutting. It is also worth reminding the reader that, in order to consistently combine the ghost emission diagrams with the gluon emission diagrams, we have to double-count the ghosts (they do not have the $1/2!$ symmetry factor that the gluons do). Diagram [[G]{}]{} has a complex conjugate graph which must be included, but diagrams [[H]{}]{} and [[I]{}]{}, like diagram [[A]{}]{}, represent squares of tree-level Feynman diagrams and are therefore self-conjugate.
Adding all of these contributions together, we have $$\begin{aligned}
\label{int:CA}
&S^{R}_{C_A}(k_L, k_R)
= I_{{{{\color{darkblue}}A}}} + I_{{{{\color{darkblue}}H}}} + I_{{{{\color{darkblue}}I}}} + 2(I_{{{{\color{darkblue}}B}}}+I_{{{{\color{darkblue}}C}}}+I_{{{{\color{darkblue}}D}}}+I_{{{{\color{darkblue}}E}}}+I_{{{{\color{darkblue}}F}}}+I_{{{{\color{darkblue}}G}}}) \qquad (\text{$C_F C_A$ part})
{\nonumber \\}&\qquad
= g^4 C_F C_A
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\Bigl\{
\frac{2}
{
(k\cdot q)^2
k^{-} k^{+} q^{-} q^{+}
(k^{-}+q^{-}) (k^{+}+q^{+})
}
{\nonumber \\}&\qquad \times
\Bigl[
-k \cdot q
\Bigl(
(k^{-})^2 q^{+} ( 2 k^{+}+q^{+} )
+2 k^{-} q^{-}
\left(
(k^{+})^2 - k^{+} q^{+}+(q^{+})^2
\right)
+k^{+}(q^{-})^2 (k^{+}+2 q^{+})
\Bigr)
{\nonumber \\}&\qquad
+2 (k \cdot q)^2
\Bigl( k^{-} (2 k^{+}+q^{+})
+q^{-} (k^{+}+2q^{+})
\Bigr)
\Bigr]
{\nonumber \\}&\qquad
+
(\epsilon -1)
\frac{2(k^{+} q^{-}-k^{-} q^{+})^2}
{
(k\cdot q)^2
(k^{-}+q^{-})^2 (k^{+}+q^{+})^2
}
\Bigr\}
F(k_L, k_R) \,.\end{aligned}$$ Before presenting the result for the $C_F C_A$ color factor, we briefly describe our general computational strategy. Normally, one expects scaleless integrals to be simpler than single scale integrals. In this particular case, the single scale integrals (with scale $k_L/k_R$) are actually much less technically demanding. This is true primarily because these contributions (see the first two terms of Eq. (\[cut1\])) are integrable at $\epsilon = 0$. It turns out that this special feature of the problem more than makes up for the fact that single scale integrals are generically harder to evaluate than scaleless integrals.
The calculation proceeds as follows for a single scale integral. First there will be an integral over angles (the integrand depends non-trivially on $k \cdot q$) that can be done analytically to all orders in $\epsilon$. It is then convenient to Taylor series expand the resulting hypergeometric functions using the HypExp package [@Huber:2005yg] for [Mathematica]{}. In fact, the whole integrand can be expanded in a Taylor series in $\epsilon$ and integrated term-by-term, due to the fact that the integral converges at $\epsilon = 0$. With a modest amount of knowledge of the basic functional identities satisfied by the polylogarithm functions, it is possible to do the resulting two-fold one-parameter integral in [Mathematica]{} and express the final result in terms of a minimal basis of transcendental functions. The results of our single scale calculations for both non-trivial color factors are tabulated in the Appendix.
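The "expand in $\epsilon$ under the integral sign" step is legitimate precisely because these contributions converge at $\epsilon = 0$. A toy one-dimensional analogue (not one of the actual soft-function integrals): $f(\epsilon) = \int_0^1 x^{-2\epsilon}/(1+x)\,\rd x = \ln 2 + \epsilon\,\pi^2/6 + \mathcal{O}(\epsilon^2)$, using the standard result $\int_0^1 \ln x/(1+x)\,\rd x = -\pi^2/12$:

```python
from mpmath import mp, quad, log, pi, mpf

mp.dps = 30
# f(eps) = ∫_0^1 x^(-2 eps)/(1+x) dx converges at eps = 0, so the integrand
# may be Taylor expanded in eps and integrated term by term:
#   f(eps) = ln 2 + eps * pi^2/6 + O(eps^2)
eps = mpf('1e-3')
direct = quad(lambda x: x**(-2 * eps) / (1 + x), [0, 1])
series = log(2) + eps * pi**2 / 6
assert abs(direct - series) < 1e-4   # difference is O(eps^2)
```
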
The evaluation of a scaleless integral (originating from the last two terms of Eq. (\[cut1\])) begins in much the same way. Unfortunately, it quickly becomes clear that what remains after integrating over all angles has a non-trivial analytical structure (considered as a function of $\epsilon$). In particular, the integral diverges at $\epsilon = 0$. Expanding an integral of this class under the integral sign is significantly more complicated and requires new tools. To begin, one should transform all hypergeometric functions in the integrand and expose their singularity structure. In this fashion, one learns that there is a line of singularities within the region of integration. A well-known procedure called sector decomposition [@Heinrich:2008si] allows one to move singularities within the region of integration to singularities on the boundaries of the region of integration. Sector decomposition works as follows. Through a sequence of variable changes and interchanges of integration orders, all phase-space singularities are put into a canonical form. At this point, one can use an expansion in distributions to extract singularities in $\epsilon$ under the integral sign. Finally, the entire integrand can be expanded in $\epsilon$ in terms of distributions and ordinary functions and one can integrate the Laurent series term-by-term.
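A toy example of the procedure (again, not one of the soft-function integrals): $I(\epsilon) = \int_0^1\!\int_0^1 \rd x\,\rd y\,(x+y)^{\epsilon-2}$ diverges at $\epsilon = 0$ as $x, y \to 0$. Splitting the square at $x = y$ and rescaling $x = t y$ in the sector $x < y$ (the other sector is identical by symmetry) moves the singularity into an explicit boundary factor $\int_0^1 y^{\epsilon-1}\rd y = 1/\epsilon$, after which the remaining integrand can be expanded in $\epsilon$:

```python
from mpmath import mp, quad, log, mpf

mp.dps = 30
# After sector decomposition: I(eps) = (2/eps) ∫_0^1 dt (1+t)^(eps-2),
# with the 1/eps pole extracted analytically from ∫_0^1 y^(eps-1) dy.
eps = mpf('0.01')
I_sector = (2 / eps) * quad(lambda t: (1 + t)**(eps - 2), [0, 1])
# expanding under the integral sign gives I = 1/eps + (1 - ln 2) + O(eps)
assert abs(I_sector - (1 / eps + 1 - log(2))) < 0.01
```
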
Once we understood the computational procedure described above, it was straightforward to evaluate the integrals of interest for the $C_F C_A$ color factor. The result has the form $$\label{result:CA}
S^{R}_{C_A}(k_L, k_R)
=
\left(\frac{\alpha}{4\pi} \right)^2
C_F C_A
\left[
\frac{\mu^{4{\epsilon}}}{( k_R k_L)^{1+2{\epsilon}}}
f_{C_A}\left(\frac{k_L}{k_R},{\epsilon}\right)
+
\left(
\frac{\mu^{4{\epsilon}}}{k_L^{1+4{\epsilon}}} \delta(k_R)
+
\frac{\mu^{4{\epsilon}}}{k_R^{1+4{\epsilon}}} \delta(k_L)
\right) g_{C_A}({\epsilon})
\right]. {\nonumber}$$ The first term corresponds to the first two terms in Eq. (\[cut1\]), those that account for the possibility that exactly one gluon is radiated into each hemisphere. It depends on $f_{C_A}(r,{\epsilon})$, a dimensionless function of $r = k_L/k_R$ and ${\epsilon}$. It can be written as an expansion in ${\epsilon}$ as $$\begin{aligned}
\label{fca:exp}
f_{C_A}(r,{\epsilon})
&=
f_{C_A}^{(0)}(r)
+ {\epsilon}f_{C_A}^{(1)}(r)
+ {\epsilon}^2 f_{C_A}^{(2)}(r) + \cdots .\end{aligned}$$ The expressions for $f_{C_A}^{(n)}(r)$ are quite lengthy and are given in the appendix for $n = 0, 1, 2$. The second term in Eq. (\[result:CA\]) accounts for the fact that both gluons can propagate into the same hemisphere and it has no non-trivial $k_L$ or $k_R$ dependence. $g_{C_A}({\epsilon})$ is simply a constant with $\epsilon$ expansion $$\begin{aligned}
g_{C_A}({\epsilon}) \label{gca}
&=
\frac{4}{\epsilon^3}
+\frac{22}{3 \epsilon ^2}
+\frac{1}{\epsilon} \left( \frac{134}{9} -\frac{4 \pi ^2}{3} \right)
-\frac{116 \zeta_3}{3}
+\frac{11 \pi ^2}{9}
+\frac{772}{27}
{\nonumber \\}& \qquad
+\left(\frac{484 \zeta_3}{9}
+\frac{4784}{81}
+\frac{67 \pi ^2}{27}
-\frac{137 \pi^4}{90}\right) \epsilon . \end{aligned}$$ The interference between the one-loop and tree-level single gluon emission amplitudes is shown in diagrams [[J]{}]{} and [[K]{}]{} of Fig. \[fig:5\]. The integrals associated with diagram [[J]{}]{} are scaleless and are set to zero in dimensional regularization. Diagram [[K]{}]{} gives the integral $$\begin{aligned}
I_{{{{\color{darkblue}}{{{\color{darkblue}}K}}}}}
&=
4
(-g^4) C_A C_F
\int \frac{\rd^d q}{(2\pi)^d}
\frac{1}{q^- }
\int \frac{\rd^d k}{(2\pi)^d}
\frac{2q^- - k^-}{k^+ (q^- - k^-) (q-k)^2 k^2 }
{\nonumber \\}&\qquad \times
(-2\pi i) \delta(q^2)
[ \Theta( q^- - q^+ ) \delta( q^+ - k_R ) \delta( k_L )
+
\Theta( q^+ - q^- ) \delta( q^- - k_L ) \delta( k_R ) ].\end{aligned}$$ There are 2 diagrams with the topology of diagram [[K]{}]{}. When they are considered with single real emission phase-space cuts, they can easily be mapped into each other and therefore give identical results. Both diagrams also have a complex conjugate graph and these obviously give equal contributions as well. This is why $I_{{{{\color{darkblue}}{{{\color{darkblue}}K}}}}}$ has an overall factor of 4 out front. After evaluating this integral, the real-virtual interference contribution becomes $$\begin{aligned}
\label{result:virt}
S^{\rm V}_{C_A}(k_L, k_R)
&=
\left(\frac{\alpha}{4\pi} \right)^2
C_F C_A
\left(
\frac{\mu^{4{\epsilon}}}{k_L^{1+4{\epsilon}}} \delta(k_R)
+
\frac{\mu^{4{\epsilon}}}{k_R^{1+4{\epsilon}}} \delta(k_L)
\right) v_{C_A}({\epsilon}) ,\end{aligned}$$ where $v_{C_A}({\epsilon})$ can be expanded in ${\epsilon}$ as $$\begin{aligned}
\label{vca}
v_{C_A}({\epsilon})
&=
-\frac{4}{\epsilon^3}
+\frac{2\pi^2}{\epsilon}
+\frac{32 \zeta_3}{3}
-{\epsilon}\frac{\pi ^4}{30} .\end{aligned}$$ It is worth noting that, in this case, the application of the optical theorem for Feynman diagrams is a bit subtle; one finds an explicit factor of ${\rm exp}(\pm i \pi \epsilon)$ after doing the $k$ integral (the sign of the phase depends on the precise pole prescription). Cutkosky’s rules still apply provided that one keeps only the appropriate projection of the complex phase. After a moment’s thought it becomes clear that the real part, $\cos(\pi \epsilon)$ ([*independent*]{} of the pole prescription), is what one needs to keep to complete the calculation and derive the above result.
The result of diagram [[L]{}]{}, including the complex conjugate graph, is given by $$\begin{aligned}
\label{result:ct}
S^{\rm Ren }(k_L, k_R)
&=
-
\left(\frac{\alpha}{4\pi} \right)^2
C_F
\left(
\frac{\mu^{2{\epsilon}}}{k_L^{1+2{\epsilon}}} \delta(k_R)
+
\frac{\mu^{2{\epsilon}}}{k_R^{1+2{\epsilon}}} \delta(k_L)
\right)
\frac{4e^{\gamma_E}}{{\epsilon}^2\Gamma(1-{\epsilon})}
\beta_0\end{aligned}$$ where $\beta_0 = \frac{11}{3} C_A - \frac{4}{3} n_f T_F$ is the first expansion coefficient of the QCD $\beta$-function, $\beta(g)/g = \frac{\alpha_s}{4\pi} \beta_0$. Finally, the total contribution to the $C_F C_A$ color factor is given by $$\begin{aligned}
\label{CAtotal}
S_{C_A}(k_L, k_R)
&=
S^{R}_{C_A}(k_L, k_R)
+
S^{V}_{C_A}(k_L, k_R)
+
S^{\rm Ren}_{C_A}(k_L, k_R), \end{aligned}$$ where $S^{\rm Ren}_{C_A}$ is the $C_F C_A$ part of $S^{\rm Ren}$.
$C_F n_F T_F$ color structure
-----------------------------
The diagrams involving a fermion loop contribute to the $C_F n_f T_F$ color factor and give integrals $\tilde{I}_{{{{\color{darkblue}}G}}}, \tilde{I}_{{{{\color{darkblue}}H}}},$ and $\tilde{I}_{{{{\color{darkblue}}I}}}$. The first topology in Fig. \[fig:4\], where the blob now represents a fermion loop, gives $$\begin{aligned}
\tilde{I}_{{{{\color{darkblue}}G}}}
&=
g^4 C_F n_f T_F
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{4(k^+ q^- + k^- q^+ - 2k \cdot q)}{(k^+ + q^+) (k^- + q^-) (k+q)^4 }
F_{n_f}( k_L, k_R) .\end{aligned}$$ The phase-space cut is accounted for by $$\begin{gathered}
\label{cut2}
F_{n_f}(k_L, k_R)
=
(-2\pi i)^2 \delta(k^2) \delta(q^2)
{\nonumber \\}\times
\Bigl[
\Theta( k^- - k^+ )
\Theta( q^+ - q^- )
\delta( k^+ - k_R)
\delta( q^- - k_L)
+
\Theta( k^+ - k^- )
\Theta( q^- - q^+ )
\delta( k^- - k_L)
\delta( q^+ - k_R)
{\nonumber \\}\ \ +
\Theta( k^- - k^+ )
\Theta( q^- - q^+ )
\delta( k^+ + q^+ - k_R)
\delta(k_L)
+
\Theta( q^+ - q^- )
\Theta( k^+ - k^- )
\delta( k^- + q^- - k_L)
\delta(k_R)
\Bigr].\end{gathered}$$ The complex conjugate of this diagram gives the same result, so $\tilde{I}_{{{{\color{darkblue}}G}}}$ contributes twice. For the second and third topologies shown in Fig. \[fig:4\], we get $$\tilde{I}_{{{{\color{darkblue}}H}}}
=
g^4 C_F n_f T_F
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{-8 k^+ q^+}{(k^+ + q^+)^2 (k+q)^4 }
F_{n_f}( k_L, k_R)$$ and $$\tilde{I}_{{{{\color{darkblue}}I}}}
=
g^4 C_F n_f T_F
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
\frac{-8 k^- q^-}{(k^- + q^-)^2 (k+q)^4 }
F_{n_f}( k_L, k_R) \,.$$ The sum of these contributions is $$\begin{aligned}
\label{int:nf}
&S^{R}_{n_f}(k_L, k_R)
= 2\tilde{I}_{{{{\color{darkblue}}G}}} + \tilde{I}_{{{{\color{darkblue}}H}}} + \tilde{I}_{{{{\color{darkblue}}I}}}
{\nonumber \\}&\qquad
=
g^4 C_F n_f T_F
\left(
\frac{\mu^2 e^{\gamma_E}}{4\pi}
\right)^{2{\epsilon}}
\int \frac{\rd^d q}{(2\pi)^d}
\int \frac{\rd^d k}{(2\pi)^d}
{\nonumber \\}&\qquad \times
\frac{8}{(k+q)^4 }
\left(
\frac{k^+ q^- + k^- q^+ - 2k \cdot q}{(k^+ + q^+) (k^- + q^-)}
-
\frac{k^- q^- }{(k^- + q^-)^2}
-
\frac{k^+ q^+ }{(k^+ + q^+)^2}
\right)
F_{n_f}( k_L, k_R) \end{aligned}$$ Evaluating this integral gives $$\label{result:nf}
S^{R}_{n_f}(k_L, k_R)
=
\left(\frac{\alpha}{4\pi} \right)^2
C_F n_f T_F
\left[
\frac{\mu^{4{\epsilon}}}{( k_R k_L)^{1+2{\epsilon}}}
f_{n_f}\left(\frac{ k_L}{ k_R},{\epsilon}\right )
+
\left(
\frac{\mu^{4{\epsilon}}}{k_L^{1+4{\epsilon}}} \delta(k_R)
+
\frac{\mu^{4{\epsilon}}}{k_R^{1+4{\epsilon}}} \delta(k_L)
\right) g_{n_f}({\epsilon})
\right]. {\nonumber}$$ As in the $C_F C_A$ case, the first term corresponds to the quark and anti-quark propagating into different hemispheres and it depends on $r = k_L/k_R$ in a non-trivial way through a function $f_{n_f}(r,{\epsilon})$. $f_{n_f}(r,{\epsilon})$ can be expanded in a Taylor series in ${\epsilon}$ as $$\begin{aligned}
\label{fnf:exp}
f_{n_f}(r,{\epsilon})
&=
f_{n_f}^{(0)}(r)
+ {\epsilon}f_{n_f}^{(1)}(r) + \cdots .\end{aligned}$$ The expressions for $f_{n_f}^{(n)}(r)$ are given in the appendix for $n= 0, 1$. For the $C_F n_f T_F$ color factor $n = 2$ plays no role due to the fact that $f_{n_f}^{(n)}(0) = f_{n_f}^{(n)}(\infty) = 0$.
The second term in Eq. (\[result:nf\]) is present because both the quark and anti-quark may propagate into the same hemisphere as well. As before, this contribution has no non-trivial $k_L$ or $k_R$ dependence. The constant $g_{n_f}$ has a series expansion $$\label{gnf}
g_{n_f} ({\epsilon})
=
-\frac{8}{3\epsilon^2}
-\frac{40}{9\epsilon}
-\frac{152}{27}
-\frac{4\pi ^2}{9}
+\left(
-\frac{952}{81}
-\frac{20 \pi ^2}{27}
-\frac{176 \zeta_3}{9}
\right) \epsilon .$$ The final contribution to this color factor is from the charge renormalization, diagram [[L]{}]{}, the results of which were given in Eq. (\[result:ct\]). Adding this contribution to the real emission contributions yields the final result for the $C_F n_f T_F$ color factor. It is $$\begin{aligned}
\label{CFtotal}
S_{n_f}(k_L, k_R)
&=
S^{R}_{n_f}(k_L, k_R)
+
S^{\rm Ren}_{n_f}(k_L, k_R),\end{aligned}$$ where $S^{\rm Ren}_{n_f}$ is the $C_F n_f T_F$ part of $S^{\rm Ren}$.
Summary of the Calculation \[sec:summary\]
------------------------------------------
In summary, we found that the 2-loop hemisphere soft function in $d=4-2\epsilon$ dimensions has the form $$\begin{aligned}
\label{result:summ}
S(k_L, k_R,\mu)
&=
\left(\frac{\alpha}{4\pi} \right)^2
\left[
\frac{\mu^{4{\epsilon}}}{( k_R k_L)^{1+2{\epsilon}}} f\left(\frac{k_L}{k_R},{\epsilon}\right)
+
\left(
\frac{\mu^{4{\epsilon}}}{k_L^{1+4{\epsilon}}} \delta(k_R)
+
\frac{\mu^{4{\epsilon}}}{k_R^{1+4{\epsilon}}} \delta(k_L)
\right) h({\epsilon})
\right. {\nonumber}\\
&
\left.
- 4C_F\beta_0 \left(
\frac{\mu^{2{\epsilon}}}{k_L^{1+2{\epsilon}}} \delta(k_R)
+
\frac{\mu^{2{\epsilon}}}{k_R^{1+2{\epsilon}}} \delta(k_L)
\right)\frac{e^{\gamma_E}}{{\epsilon}^2 \Gamma(1-{\epsilon})}
\right]\,.\end{aligned}$$ Here $f(r,{\epsilon})=f(1/r,{\epsilon})$ is the opposite-direction contribution (where the two gluons or two quarks go into opposite hemispheres) and $h({\epsilon})$ is the same-direction contribution. Since all the $\mu$ dependence is shown explicitly, $h({\epsilon})$ cannot depend on $k_L$ or $k_R$ by dimensional analysis. The second line is the contribution that comes from the interference of the first non-trivial term in the expansion of the charge renormalization constant and the $\mathcal{O}(\alpha_s)$ hemisphere soft function. It is proportional to $\beta_0 = \frac{11}{3} C_A - \frac{4}{3} T_F n_f$.
There are 3 color structures, $C_F^2, C_F C_A$ and $C_F n_f T_F$. The $C_F^2$ color structure is trivial – by non-Abelian exponentiation it is the square of the one-loop result. For the other two color structures the function $f(r,{\epsilon})$ is complicated. In both cases it is finite at ${\epsilon}=0$, and in the $C_F n_f T_F$ case, $f_{n_f}(0,{\epsilon}) = f_{n_f}(\infty,{\epsilon}) = 0$. We write $$\begin{aligned}
f(r,{\epsilon}) &= f^{(0)}(r) + {\epsilon}f^{(1)}(r) + {\epsilon}^2 f^{(2)}(r) \label{fexp}\end{aligned}$$ The expansions in ${\epsilon}$ of $f(r,{\epsilon})$ for the two color structures are given in the Appendix. Due to the fact that $f_{n_f}(0,{\epsilon}) = f_{n_f}(\infty,{\epsilon}) = 0$, $f^{(2)}_{n_f}(r)$ does not contribute to the renormalized soft function and is not given.
For the same direction contribution, $h({\epsilon})$, there are contributions from the real-emission diagrams and, for the $C_F C_A$ color structure, interference between tree-level real emission and one-loop real-virtual graphs. The real emission contributions, which we called $g({\epsilon})$, are given in Eqs. (\[gca\]) and (\[gnf\]). The interference graphs are given by $v_{C_A}({\epsilon})$ in Eq. (\[vca\]). Adding these terms we get for the $C_F C_A$ color structure $$\begin{gathered}
h_{C_A}({\epsilon}) =
\frac{22}{3 {\epsilon}^2}
+\frac{\frac{134}{9}+\frac{2 \pi ^2}{3}}{{\epsilon}}
- 28 \zeta_3+\frac{11 \pi ^2}{9}+\frac{772}{27}
+\left(\frac{484 \zeta_3}{9}+\frac{4784}{81}+\frac{67 \pi ^2}{27} -\frac{14 \pi ^4}{9}\right) {\epsilon}\end{gathered}$$ and for completeness, copying Eq. (\[gnf\]), $$h_{n_f}({\epsilon})
=
-\frac{8}{3{\epsilon}^2}
-\frac{40}{9{\epsilon}}
-\frac{152}{27}
-\frac{4\pi ^2}{9}
+\left(
-\frac{952}{81}
-\frac{20 \pi ^2}{27}
-\frac{176 \zeta_3}{9}
\right) {\epsilon}.$$
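The constant $h_{C_A}({\epsilon})$ above is simply the sum of the real-emission series $g_{C_A}({\epsilon})$ of Eq. (\[gca\]) and the real-virtual series $v_{C_A}({\epsilon})$ of Eq. (\[vca\]); this bookkeeping can be verified coefficient by coefficient (a sympy transcription of the series quoted in the text):

```python
import sympy as sp

eps = sp.symbols('epsilon')
z3, pi, R = sp.Symbol('zeta3'), sp.pi, sp.Rational

g_CA = (4 / eps**3 + R(22, 3) / eps**2 + (R(134, 9) - 4 * pi**2 / 3) / eps
        - R(116, 3) * z3 + 11 * pi**2 / 9 + R(772, 27)
        + eps * (R(484, 9) * z3 + R(4784, 81) + 67 * pi**2 / 27 - 137 * pi**4 / 90))
v_CA = -4 / eps**3 + 2 * pi**2 / eps + R(32, 3) * z3 - eps * pi**4 / 30
h_CA = (R(22, 3) / eps**2 + (R(134, 9) + 2 * pi**2 / 3) / eps
        - 28 * z3 + 11 * pi**2 / 9 + R(772, 27)
        + eps * (R(484, 9) * z3 + R(4784, 81) + 67 * pi**2 / 27 - 14 * pi**4 / 9))
assert sp.simplify(g_CA + v_CA - h_CA) == 0
```

In particular, the $1/\epsilon^3$ poles cancel between the real and real-virtual pieces, and $-\frac{116}{3}\zeta_3 + \frac{32}{3}\zeta_3 = -28\,\zeta_3$.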
Integrating the soft function \[sec:integrating\]
=================================================
Now we would like to expand and renormalize the soft function. At one-loop, all that is necessary for the expansion is the relation $$\begin{aligned}
\label{stardist}
\frac{\mu^{2{\epsilon}} }{k^{1+2{\epsilon}}} =
-\frac{1}{2{\epsilon}} \delta(k)
+ \left[ \frac{1}{k} \right]_{\ast}
- 2{\epsilon}\left[ \frac{\ln \frac{k}{\mu} }{k} \right]_{\ast} + \cdots,\end{aligned}$$ where the $\ast$-distributions are defined, for example, in [@Schwartz:2007ib]. Unfortunately, this expansion cannot be used separately for $k_L$ and $k_R$, since it is ambiguous in the region where both variables go to zero. For example, what does $\delta(k_L)\delta(k_R)f(k_L/k_R)$ mean? If we take $k_L \to 0$ first and then $k_R \to 0$, we pick up $f(0)$. If we take $k_L,k_R \to 0$ holding $k_L = k_R$, we pick up $f(1)$. Unless $f(r)$ is constant, one must do the expansion more carefully.
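Such expansions can be checked by integrating over $k \in [0, X]$: for $\epsilon < 0$ the dimensionally consistent combination $\mu^{2\epsilon}/k^{1+2\epsilon}$ integrates to $-(1/2\epsilon)\,e^{-2\epsilon L}$ with $L = \ln(X/\mu)$, while the $\ast$-distributions integrate to $\int_0^X [1/k]_\ast\,\rd k = L$ and $\int_0^X [\ln(k/\mu)/k]_\ast\,\rd k = L^2/2$ (the standard conventions, assumed here). The two sides then agree order by order in $\epsilon$:

```python
import sympy as sp

eps, L = sp.symbols('epsilon L')  # L stands for ln(X/mu)
# left-hand side: -(1/(2 eps)) exp(-2 eps L), expanded through O(eps)
lhs = sp.expand(-sp.exp(-2 * eps * L).series(eps, 0, 3).removeO() / (2 * eps))
# star-distribution side, integrated term by term over [0, X]:
#   delta -> -1/(2 eps), [1/k]_* -> L, [ln(k/mu)/k]_* -> L^2/2
rhs = -1 / (2 * eps) + L - 2 * eps * (L**2 / 2)
assert sp.expand(lhs - rhs) == 0
```
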
A simple solution is just to expand in distributions of $p=k_L k_R$ and $r=k_L/k_R$. This expansion is well-defined, and can be used to integrate any observable, such as thrust or heavy jet mass against the hemisphere soft function. For example, consider the integrated soft function: $$\begin{aligned}
{ {\mathcal{R} }}(X,Y,\mu) \equiv
\int_{0}^{X} \rd k_L
\int_{0}^{Y} \rd k_R \
S(k_L, k_R,\mu) .\end{aligned}$$ This function contains the entire soft contribution to the integrated doubly differential hemisphere mass distribution. Since it is a function, rather than a distribution, we can use this integrated form to check the $\mu$-dependence and compare to previous predictions.
We can calculate ${ {\mathcal{R} }}(X,Y,\mu)$ using Eq. and the expansion in Eq. . For the same direction contribution (the real emission graphs, real/virtual interference graphs, and charge renormalization), the soft function is trivial to integrate in $d$ dimensions. For the opposite direction contribution, the integral of the distributions is complicated by the overlapping singularities. It is a straightforward exercise in sector decomposition [@Heinrich:2008si] to isolate the singularities and perform the integrations. The result can then be renormalized in $\overline{\text{MS}}$. We find, for the opposite direction contribution, $$\begin{gathered}
{ {\mathcal{R} }}(X,Y,\mu) = \left(\frac{\alpha_s}{4\pi}\right)^2 \Bigg\{
\frac{1}{4} f^{(2)}(0)
-\frac{1}{2} f^{(1)}(0) \ln \frac{XY}{\mu^2}
+\frac{1}{2} f^{(0)}(0) \ln^2 \frac{XY}{\mu^2}
\\
-\frac{1}{2} \int_0^1 \!\! \rd z\ \left[ \frac{1}{z} \right]_{+} f^{(1)}(z)
+ \int_0^1 \!\! \rd z\ \left[ \frac{\ln z}{z} \right]_{+} f^{(0)}(z)
+ \ln \frac{XY}{\mu^2} \int_0^1 \!\! \rd z\ \left[ \frac{1}{z} \right]_{+} f^{(0)}(z)
\\
-\frac{1}{2}
\int_{1}^{Y/X} \!\! \rd y
\int_{1}^{Y/X} \!\! \rd x \
\frac{f^{(0)}(x/y) - f^{(0)}(0) }{xy}
\Bigg\},\end{gathered}$$ where $f^{(n)}(r)$ denote the coefficients in the ${\epsilon}$ expansion of the opposite-direction contribution $f(r,{\epsilon})$. The final compiled results for ${ {\mathcal{R} }}(X,Y,\mu)$ for the different color structures, including the same-direction and opposite-direction contributions, are given in Sec. \[sec:hemi\].
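For orientation, the plus-distributions above act by the standard subtraction $\int_0^1 \rd z\,[\frac{1}{z}]_{+} g(z) = \int_0^1 \rd z\,\frac{g(z)-g(0)}{z}$. Below is a minimal numerical sketch of this definition; the test function $g(z)=\cos z$ is an arbitrary choice, checked against the convergent series $\sum_{k\ge 1}(-1)^k/(2k\,(2k)!)$ for $\int_0^1 \rd z\,(\cos z-1)/z$:

```python
import math

def plus_dist_integral(g, n=100000):
    """Integrate [1/z]_+ against g on [0, 1] via the defining subtraction:
    the integral of (g(z) - g(0))/z, whose integrand is finite at z = 0.
    Uses a simple midpoint rule."""
    h = 1.0 / n
    total = 0.0
    g0 = g(0.0)
    for i in range(n):
        z = (i + 0.5) * h
        total += (g(z) - g0) / z
    return total * h

numeric = plus_dist_integral(math.cos)

# For g = cos, expanding cos z - 1 in its Taylor series and integrating
# term by term gives an alternating series.
series = sum((-1) ** k / (2 * k * math.factorial(2 * k)) for k in range(1, 12))

print(numeric, series)  # both ~ -0.23981
```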
The integrated soft function directly gives us the $\alpha_s^2$ soft function contribution to the integrated order $\alpha_s^2$ heavy jet mass distribution, $$R_\rho(\rho,\mu) = \frac{1}{\sigma_0}\int_0^\rho \frac{\rd \sigma}{\rd \rho'} \rd\rho' = { {\mathcal{R} }}(\rho Q, \rho Q,\mu).$$ For thrust, the integrated distribution does not follow trivially from the integrated soft function. However, it differs from the heavy-jet mass distribution only by a single finite integral $$R_\tau(\tau,\mu) = \frac{1}{\sigma_0}\int_0^\tau \frac{\rd \sigma}{\rd \tau'} \rd\tau'
= R_\rho(\tau,\mu) - \left(\frac{\alpha_s}{4\pi}\right)^2 \int_0^1\rd x \int_{1-x}^1\rd y \frac{f^{(0)}(x/y)}{xy}$$ which we can now compute for the $C_F C_A$ and $C_F n_f T_F$ color structures. Adding also the $C_F^2$ terms, which were already known, the result is $$\begin{aligned}
R_\tau(\tau,\mu)&= R_\rho(\tau,\mu) + \left(\frac{\alpha_s}{4\pi}\right)^2 \left[
-\frac{8\pi^4}{45}C_F^2
+\left(\frac{8}{3} - 8\zeta_3 \right) C_F n_f T_F
\right. {\nonumber \\}&\left.
+ \left(32 \text{Li}_4\frac{1}{2}+22 \zeta_3+28 \zeta_3 \ln2-\frac{4}{3}-\frac{38 \pi^4}{45}
+\frac{4 \ln^42}{3}-\frac{4}{3} \pi ^2 \ln ^2 2\right)
C_F C_A \right].\end{aligned}$$
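As a cross-check of the relation above against the explicit constants quoted later in Eqs. (\[ctwoeq\]) and (\[ctworeq\]), the $C_F C_A$ and $C_F n_f T_F$ parts of the bracket must equal the corresponding parts of ${ { c^S_2} }- { { c^S_{2\rho}} }$ (the two sets of $\delta$-function coefficients differ beyond this only in their $C_F^2$ terms). A small numerical sketch, with $\zeta_3$ and $\text{Li}_4(1/2)$ hard-coded to standard double-precision values:

```python
import math

pi = math.pi
zeta3 = 1.2020569031595943      # zeta(3)
li4half = 0.5174790616738994    # Li_4(1/2)
ln2 = math.log(2.0)

# Brackets from the R_tau - R_rho relation above.
brk_nf = 8 / 3 - 8 * zeta3
brk_CA = (32 * li4half + 22 * zeta3 + 28 * zeta3 * ln2 - 4 / 3
          - 38 * pi ** 4 / 45 + 4 * ln2 ** 4 / 3 - 4 * pi ** 2 * ln2 ** 2 / 3)

# c_2^S and c_2rho^S per color structure (Eqs. (ctwoeq) and (ctworeq)).
c2_CA = -2140 / 81 - 871 * pi ** 2 / 54 + 14 * pi ** 4 / 15 + 286 * zeta3 / 9
c2r_CA = (-2032 / 81 - 871 * pi ** 2 / 54 + 16 * pi ** 4 / 9
          - 4 * ln2 ** 4 / 3 + 4 * pi ** 2 * ln2 ** 2 / 3
          - 28 * zeta3 * ln2 + 88 * zeta3 / 9 - 32 * li4half)
c2_nf = 80 / 81 + 154 * pi ** 2 / 27 - 104 * zeta3 / 9
c2r_nf = -136 / 81 + 154 * pi ** 2 / 27 - 32 * zeta3 / 9

print(brk_CA, c2_CA - c2r_CA)  # should agree
print(brk_nf, c2_nf - c2r_nf)  # should agree
```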
Numerical check for thrust and heavy jet mass \[sec:thrust\]
============================================================
As a check on our results, we can use the soft function to calculate the soft contribution to the differential thrust and heavy jet mass distributions. The singular parts of these distributions at $\mathcal{O}(\alpha_s^2)$ were previously determined up to four numbers: the coefficients of $\delta(\tau)$ and $\delta(\rho)$ for the $C_F C_A$ and $C_F n_f T_F$ color structures. Until now these four numbers were unknown and had to be fit numerically using the event 2 program. We can now use our results for the hemisphere soft function to replace these numerically fit numbers with analytical results. The coefficients of the $\delta$-functions are the same as the constant terms in $R_\rho(\rho)$ and $R_\tau(\tau)$, for which formulae were given in the previous section.
The unknown soft contributions to the coefficients of $\delta(\tau)$ and $\delta(\rho)$ were denoted $c_2^S$ and $c_{2\rho}^S$ in [@Chien:2010kc]. We find $$\begin{aligned}
{ { c^S_2} }\label{ctwoeq}
&=
\frac{\pi^4}{2}
C_F^2
+
\left(
-\frac{2140}{81}
-\frac{871 \pi ^2}{54}
+\frac{14 \pi^4}{15}
+\frac{286 \zeta_3}{9}
\right)
C_F C_A
{\nonumber \\}& \qquad
+
\left(
\frac{80}{81}
+\frac{154 \pi ^2}{27}
-\frac{104 \zeta_3}{9}
\right)
C_F n_f T_F,
\\
{ { c^S_{2\rho}} }\label{ctworeq}
&=
\frac{\pi^4}{2}
C_F^2
+
\left(
-\frac{2032}{81}
-\frac{871 \pi^2}{54}
+\frac{16 \pi^4}{9}
-\frac{4 \ln^42}{3}
+\frac{4}{3} \pi^2 \ln^2 2
-28 \zeta_3 \ln 2
\right.
{\nonumber \\}& \qquad
\left.
+\frac{88 \zeta_3}{9}
-32 \text{Li}_4\left(\frac{1}{2}\right)
\right)
C_F C_A
+
\left(
-\frac{136}{81}
+\frac{154 \pi ^2}{27}
-\frac{32 \zeta_3}{9}
\right)
C_F n_f T_F.\end{aligned}$$ These numbers were fit numerically in [@Becher:2008cf; @Chien:2010kc; @Hoang:2008fs] based on a method introduced in [@Becher:2008cf]. The procedure involves subtracting the singular parts of the thrust and heavy jet mass distributions, which are known analytically from SCET, up to delta-function terms, from the full QCD distributions for thrust and heavy jet mass calculated numerically with the program [event 2]{}. The difference is then integrated over and compared to the total cross section, which is known analytically, minus the analytic integral over the singular terms. The highest precision fits were done in [@Chien:2010kc] so we compare only to those. The result is $$\begin{aligned}
{ { c^S_2} }&= ( 48.7045 ) C_F^2
+
( -56.4990 ) C_F C_A
+
( 43.3905 ) C_F n_f T_F && (\text{analytic result} )
{\nonumber \\}&= ( 49.1 ) C_F^2
+
( -57.8 ) C_F C_A
+
( 43.4 ) C_F n_f T_F && (\text{fit result}~\cite{Chien:2010kc} )\end{aligned}$$ and $c_{2\rho}^{S}$ $$\begin{aligned}
{ { c^S_{2\rho}} }&= ( 48.7045 ) C_F^2
+
( -33.2286 ) C_F C_A
+
( 50.3403 ) C_F n_f T_F && (\text{analytic result} )
{\nonumber \\}&= ( 49.1 ) C_F^2
+
( -33.2) C_F C_A
+
( 50.2 ) C_F n_f T_F && (\text{fit result}~\cite{Chien:2010kc} ).\end{aligned}$$ The percent errors for these numbers are 0.8%, 2%, 0.02% for ${ { c^S_2} }$ and 0.8%, 0.08% and 0.2% for ${ { c^S_{2\rho}} }$ respectively, with an average error of around 0.5%. This is excellent agreement. Note that the $C_F^2$ terms were already known when the fits were done, so small errors were expected.
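The comparison above is easy to reproduce. The sketch below evaluates the closed forms of Eqs. (\[ctwoeq\]) and (\[ctworeq\]) numerically; the values of $\zeta_3$ and $\text{Li}_4(1/2)$ are hard-coded to their standard double-precision values:

```python
import math

pi = math.pi
zeta3 = 1.2020569031595943          # Riemann zeta(3)
li4half = 0.5174790616738994        # polylog Li_4(1/2)
ln2 = math.log(2.0)

# Coefficients of c_2^S from Eq. (ctwoeq), per color structure.
c2_CF2 = pi ** 4 / 2
c2_CFCA = -2140 / 81 - 871 * pi ** 2 / 54 + 14 * pi ** 4 / 15 + 286 * zeta3 / 9
c2_CFnf = 80 / 81 + 154 * pi ** 2 / 27 - 104 * zeta3 / 9

# Coefficients of c_2rho^S from Eq. (ctworeq).
c2r_CF2 = pi ** 4 / 2
c2r_CFCA = (-2032 / 81 - 871 * pi ** 2 / 54 + 16 * pi ** 4 / 9
            - 4 * ln2 ** 4 / 3 + 4 * pi ** 2 * ln2 ** 2 / 3
            - 28 * zeta3 * ln2 + 88 * zeta3 / 9 - 32 * li4half)
c2r_CFnf = -136 / 81 + 154 * pi ** 2 / 27 - 32 * zeta3 / 9

print(c2_CF2, c2_CFCA, c2_CFnf)     # ~ 48.7045, -56.4990, 43.3905
print(c2r_CF2, c2r_CFCA, c2r_CFnf)  # ~ 48.7045, -33.2286, 50.3403
```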
For completeness, the complete contributions of the soft function to $\delta(\rho)$ and $\delta(\tau)$, denoted by $D^{(\rho)}_{\delta}$ and $D^{(\tau)}_{\delta}$ at order $\alpha_s^2$ are $$\begin{aligned}
D^{(\tau)}_\delta &=
\left(\frac{\alpha_s}{4\pi}\right)^2 \Bigg\{
c^S_2 - \frac{4}{5}\pi^4 C^2_F + C_F C_A \left( \frac{352\zeta_3}{9}
+ \frac{268\pi^2}{27} - \frac{4\pi^4}{9}\right)
+ C_F T_F n_f \left(
-\frac{128\zeta_3}{9} - \frac{80\pi^2}{27} \right)\Bigg\} {\nonumber},
\\
D^{(\rho)}_\delta &=
\left(\frac{\alpha_s}{4\pi}\right)^2 \Bigg\{
c^S_{2\rho} - \frac{28}{45}\pi^4 C^2_F + C_F C_A \left( \frac{352\zeta_3}{9}
+ \frac{268\pi^2}{27} - \frac{4\pi^4}{9}\right)
+ C_F T_F n_f \left(
-\frac{128\zeta_3}{9} - \frac{80\pi^2}{27} \right)\Bigg\} .\end{aligned}$$ One can also use ${ { c^S_2} }$ and ${ { c^S_{2\rho}} }$ to get the complete coefficient of $\delta(\tau)$ and $\delta(\rho)$ including jet and hard function contributions, using Appendices C of Refs. [@Becher:2008cf] and [@Chien:2010kc].
Hemisphere mass distribution \[sec:hemi\]
=========================================
The numerical check performed in the previous section provides strong evidence that our analytical results are correct. With these results in hand, we can now compare to other features of the hemisphere mass distribution and the integrated hemisphere soft function, ${ {\mathcal{R} }}(X,Y,\mu)$.
The entire $\mu$-dependence of ${ {\mathcal{R} }}(X,Y,\mu)$ is predicted by SCET. Indeed, renormalization group invariance predicts that the differential soft function must factorize in Laplace space, as in Eq. (\[sfactb\]) below. The Laplace transform is defined by $$\tilde{s}(x_L, x_R, \mu ) = \int_0^\infty \rd k_L \int_0^\infty \rd k_R\, s(k_L,k_R,\mu)e^{- x_L k_L e^{-\gamma_E}} e^{- x_R k_R e^{-\gamma_E}} ,$$ where the $\gamma_E$ factors are added in the definition to avoid their appearance elsewhere. The factorization theorem then implies $$\label{sfactb}
\tilde{s}(x_L, x_R, \mu ) = {\widetilde{s}_{\mu}}(\ln x_L \mu) {\widetilde{s}_{\mu}}(\ln x_R \mu) {\widetilde{s}_{f}}(x_L,x_R).$$ The RG-kernel ${\widetilde{s}_{\mu}}(L)$ is determined by the renormalization group invariance of the factorization formula, and is expressible in terms of the anomalous dimensions of the hard and jet functions, which are known up to $\alpha_s^3$. The finite part ${\widetilde{s}_{f}}(L)$, until now, has been known only to $\alpha_s$. This Laplace form leads to a simple expression for the integrated soft function in SCET [@Chien:2010kc] $${ {\mathcal{R} }}(X,Y,\mu) = \tilde{s}(\partial_{\eta_1}, \partial_{\eta_2},\mu)
\left(\frac{X}{\mu}\right)^{\eta_1}
\frac{e^{-\gamma_E\eta_1}}{\Gamma(\eta_1 +1)}
\left(\frac{Y}{\mu}\right)^{\eta_2}
\frac{e^{-\gamma_E\eta_2}}{\Gamma(\eta_2 +1)}
\Big|_{\eta_1 =\eta_2=0}.$$ The $\mu$-dependent terms in the order $\alpha_s^2$ integrated soft function calculated in this way agree exactly with the $\mu$-dependent terms in ${ {\mathcal{R} }}(X,Y,\mu)$. In fact, it is helpful to separate out those terms. To that end, we write the $\alpha_s^2$ terms as $${ {\mathcal{R} }}(X,Y,\mu) =\left(\frac{\alpha_s}{4\pi}\right)^2\left[{ {\mathcal{R} }}_\mu\left(\frac{X}{\mu},\frac{Y}{\mu}\right)+ { {\mathcal{R} }}_f\left(\frac{X}{Y}\right)\right],$$ where ${ {\mathcal{R} }}_\mu(X/\mu, Y/\mu)$ is the part coming directly from the ${\widetilde{s}_{\mu}}(L)$ terms and ${ {\mathcal{R} }}_f(X/Y)$ is the remainder, which comes from ${\widetilde{s}_{f}}(x_L,x_R)$. The result for ${ {\mathcal{R} }}_\mu(X/\mu, Y/\mu)$ is $$\begin{aligned}
&{ {\mathcal{R} }}_\mu\left(\frac{X}{\mu},\frac{Y}{\mu}\right)
=
\Bigg[
8\ln^{4}\frac{X}{\mu}-\frac{20}{3}\pi^{2}\ln^{2}\frac{X}{\mu}+16\ln^{2}\frac{X}{\mu}\ln^{2}\frac{Y}{\mu}{\nonumber \\}&\quad+64\zeta_3\ln\frac{XY}{\mu^{2}}+8\ln^{4}\frac{Y}{\mu}-\frac{20}{3}\pi^{2}\ln^{2}\frac{Y}{\mu}-\frac{28\pi^{4}}{45}
\Bigg]C^2_F
{\nonumber \\}&\quad
+\Bigg[
\frac{88}{9}\ln^{3}\frac{X}{\mu}+\frac{4}{3}\pi^{2}\ln^{2}\frac{X}{\mu}-\frac{268}{9}\ln^{2}\frac{X}{\mu}-\frac{22}{9}\pi^{2}\ln\frac{XY}{\mu^{2}}+\frac{808}{27}\ln\frac{XY}{\mu^{2}}{\nonumber \\}&\quad-28\zeta_3\ln\frac{XY}{\mu^{2}}+\frac{88}{9}\ln^{3}\frac{Y}{\mu}+\frac{4}{3}\pi^{2}\ln^{2}\frac{Y}{\mu}-\frac{268}{9}\ln^{2}\frac{Y}{\mu}+\frac{352\zeta_3}{9}-\frac{4\pi^{4}}{9}+\frac{268\pi^{2}}{27}
\Bigg]C_F C_A
{\nonumber \\}&\quad
+\Bigg[
-\frac{32}{9} \ln ^3\frac{X}{\mu }+\frac{80}{9} \ln ^2\frac{X}{\mu
}+\frac{8}{9} \pi ^2 \ln \frac{X Y}{\mu ^2}-\frac{224}{27} \ln
\frac{X Y}{\mu ^2}
{\nonumber \\}&\quad
-\frac{32}{9} \ln ^3\frac{Y}{\mu
}+\frac{80}{9} \ln ^2\frac{Y}{\mu }-\frac{128 \zeta_3}{9}-\frac{80 \pi ^2}{27}
\Bigg]C_F T_F n_f.\end{aligned}$$ The part of the soft function not determined by RG-invariance is represented entirely by ${\widetilde{s}_{f}}(x_L,x_R)$. This function is $\mu$-independent and can only depend on the ratio $x_L/x_R$ by dimensional analysis. Moreover, it is symmetric in $x_L \leftrightarrow x_R$, since the hemisphere soft function is symmetric in $k_L \leftrightarrow k_R$. Hoang and Kluth claimed [@Hoang:2008fs] that it should only have logarithms, and up to order $\alpha_s^2$, only have $\ln^0$ and $\ln^2$ terms. Their ansatz was that $${\widetilde{s}_{f}}(x_L,x_R)^{\text{Hoang-Kluth}} =
1
+ \left( \frac{\alpha_s}{4\pi} \right) c_1^{S}
+ \left( \frac{\alpha_s}{4\pi} \right)^2[{ { c^S_2} }+ { { c^S_{2L}} }\ln^2\frac{x_L}{x_R} ],$$ with ${ { c^S_1} }=-C_F\pi^2$ already known.
To check the Hoang-Kluth ansatz, the easiest approach is to look at the contribution of ${\widetilde{s}_{f}}(x_L,x_R)$ to ${ {\mathcal{R} }}(X,Y,\mu)$, which we called ${ {\mathcal{R} }}_f(X/Y)$. For the Hoang-Kluth ansatz, the result is $${ {\mathcal{R} }}_f(z)^{\text{Hoang-Kluth}} = { { c^S_2} }+ { { c^S_{2L}} }( \ln^2 z - \frac{\pi^2}{3}) .$$ The values of ${ { c^S_2} }$ and ${ { c^S_{2L}} }$ which correctly reproduce the singular parts of the thrust and heavy jet mass distributions are given in Eqs. (\[ctwoeq\]) and (\[ctworeq\]), with ${ { c^S_{2L}} }= \frac{3}{\pi^2}({ { c^S_2} }- { { c^S_{2\rho}} })$.
The exact answer, at order $\alpha_s^2$ is $$\begin{aligned}
\label{sf:exact}
&{ {\mathcal{R} }}_f(z) =
\frac{\pi^4}{2}
C_F^2
+
\left[
-88 \text{Li}_3(-z)-16 \text{Li}_4\left(\frac{1}{z+1}\right)-16
\text{Li}_4\left(\frac{z}{z+1}\right)+16 \text{Li}_3(-z) \ln
(z+1)
\right.
{\nonumber \\}&\qquad
\left.
+\frac{88 \text{Li}_2(-z) \ln (z)}{3}-8 \text{Li}_3(-z) \ln
(z)-16 \zeta_3 \ln (z+1)+8 \zeta_3 \ln (z)-\frac{4}{3} \ln^4(z+1)
\right.
{\nonumber \\}&\qquad
\left.
+\frac{8}{3} \ln (z) \ln^3(z+1)+\frac{4}{3} \pi^2 \ln^2(z+1)-\frac{4}{3} \pi^2 \ln^2(z)-\frac{4 \left(3 (z-1)+11 \pi^2 (z+1)\right) \ln (z)}{9 (z+1)}
\right.
{\nonumber \\}&\qquad
\left.
-\frac{506 \zeta_3}{9}+\frac{16\pi^4}{9}-\frac{871 \pi^2}{54}-\frac{2032}{81}\right]C_F C_A
+
\left[
32 \text{Li}_3(-z)-\frac{32}{3} \text{Li}_2(-z) \ln (z)
\right.
{\nonumber \\}&\qquad
\left.
+\frac{8 (z-1) \ln (z)}{3 (z+1)}+\frac{16}{9} \pi^2 \ln (z)+\frac{184 \zeta_3}{9}+\frac{154 \pi^2}{27}-\frac{136}{81}
\right]
C_F n_f T_F
$$ This is clearly very different from the Hoang-Kluth form.
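As a sanity check on Eq. (\[sf:exact\]): at $z=1$, where $\ln z=0$, the expression should collapse to the constants ${ { c^S_{2\rho}} }$ of Eq. (\[ctworeq\]) for each color structure (the $C_F^2$ part is $\pi^4/2$ in both). A numerical sketch, using the standard values $\text{Li}_2(-1)=-\pi^2/12$, $\text{Li}_3(-1)=-\tfrac{3}{4}\zeta_3$ and $\text{Li}_4(1/2)\approx 0.5174791$:

```python
import math

pi = math.pi
zeta3 = 1.2020569031595943      # zeta(3)
li4half = 0.5174790616738994    # Li_4(1/2)
li2m1 = -pi ** 2 / 12           # Li_2(-1)
li3m1 = -0.75 * zeta3           # Li_3(-1)
ln2 = math.log(2.0)

z = 1.0
lz, l1 = 0.0, math.log(z + 1)   # ln z = 0 and ln(z+1) = ln 2 at z = 1

# C_F C_A structure of Eq. (sf:exact) at z = 1; note that Li_4(1/(z+1))
# and Li_4(z/(z+1)) both reduce to Li_4(1/2) there.
Rf_CA = (-88 * li3m1 - 16 * li4half - 16 * li4half + 16 * li3m1 * l1
         + 88 * li2m1 * lz / 3 - 8 * li3m1 * lz - 16 * zeta3 * l1
         + 8 * zeta3 * lz - 4 * l1 ** 4 / 3 + 8 * lz * l1 ** 3 / 3
         + 4 * pi ** 2 * l1 ** 2 / 3 - 4 * pi ** 2 * lz ** 2 / 3
         - 4 * (3 * (z - 1) + 11 * pi ** 2 * (z + 1)) * lz / (9 * (z + 1))
         - 506 * zeta3 / 9 + 16 * pi ** 4 / 9 - 871 * pi ** 2 / 54 - 2032 / 81)

# C_F n_f T_F structure at z = 1.
Rf_nf = (32 * li3m1 - 32 * li2m1 * lz / 3
         + 8 * (z - 1) * lz / (3 * (z + 1)) + 16 * pi ** 2 * lz / 9
         + 184 * zeta3 / 9 + 154 * pi ** 2 / 27 - 136 / 81)

# c_2rho^S from Eq. (ctworeq), per color structure.
c2r_CA = (-2032 / 81 - 871 * pi ** 2 / 54 + 16 * pi ** 4 / 9
          - 4 * ln2 ** 4 / 3 + 4 * pi ** 2 * ln2 ** 2 / 3
          - 28 * zeta3 * ln2 + 88 * zeta3 / 9 - 32 * li4half)
c2r_nf = -136 / 81 + 154 * pi ** 2 / 27 - 32 * zeta3 / 9

print(Rf_CA, c2r_CA)  # should agree
print(Rf_nf, c2r_nf)  # should agree
```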
Asymptotic behavior and non-global logs \[sec:asym\]
====================================================
The factorization theorem is valid in the dijet limit when the hemisphere masses are small compared to $Q$; however, there is no restriction on the relative size of the two masses. In addition to logarithms $\ln \frac{M_{L,R} }{\mu}$ required by RG invariance, there may be logarithms of the form $\ln \frac{M_{L}}{M_{R}}$ that enter at order $\alpha_s^2$. These logarithms cannot be predicted by RG invariance and are known as non-global logarithms. Dasgupta and Salam have shown [@Dasgupta:2001sh] that non-global logs appear in distributions such as the light jet mass. They argued that in the strongly-ordered soft limit, when $M_L \ll M_R \ll Q$, the leading non-global log should be $-(\frac{\alpha_s}{4\pi})^2\frac{4\pi^2}{3}
C_F C_A \ln^2\frac{M_L^2}{M_R^2}$ in full QCD. This double log was reproduced in [@scetCL].
Non-global logs must be present in SCET, since for small $M_L$ and $M_R$, the entire distribution is determined by soft and collinear degrees of freedom. The non-global logs cannot come from the hard function, which has no knowledge of either mass, or the jet function, since each jet function knows about only one mass. Thus, they must come from the soft function. Moreover, since by definition they are not determined by RG invariance, they must be present in the $\mu$-independent part, ${ {\mathcal{R} }}_f(X/Y)$, of the integrated hemisphere soft function, ${ {\mathcal{R} }}(X,Y,\mu)$. This function was given explicitly in Eq. (\[sf:exact\]).
To see the non-global logs in ${ {\mathcal{R} }}_f(z)$ we can simply take the limit $z \to \infty$. Note that ${ {\mathcal{R} }}_f(z)={ {\mathcal{R} }}_f(\frac{1}{z})$ so this is also the limit $z\to 0$. The asymptotic limit of ${ {\mathcal{R} }}_f(z)$ for large or small $z$ is $$\begin{aligned}
\label{sf:large}
&{ {\mathcal{R} }}_f^{z\gg 1}(z) =
\frac{\pi^4}{2}
C_F^2
+
\left[
\left(
\frac{8}{3} - \frac{16\pi^2}{9}
\right) |\ln z|
-\frac{136}{81}
+\frac{154 \pi ^2}{27}
+
\frac{184 \zeta_3}{9}
\right]
C_F n_f T_F \\
&+ \left[
-\frac{4}{3} \pi ^2 \ln^2 z
+
\left(
-8 \zeta_3
-\frac{4}{3}
+\frac{44 \pi ^2}{9}
\right)
|\ln z|
-\frac{506 \zeta_3}{9}
+\frac{8 \pi^4}{5}
-\frac{871 \pi ^2}{54}
-\frac{2032}{81}
\right]
C_F C_A {\nonumber}.\end{aligned}$$ There are two important features to note in this expansion. First of all, in the $C_F C_A$ color structure there is a term $-\frac{4\pi^2}{3} \ln^2 z$, which is the leading non-global log found by Dasgupta and Salam [@Dasgupta:2001sh] and reproduced in [@scetCL]. But we also see that there are sub-leading non-global logs, of the form $|\ln z|$. The absolute value is necessary to keep the expression symmetric in $z\to \frac{1}{z}$. It is interesting to see how this sign flip comes out of the full analytic expression.
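The expansion above can be checked against the exact Eq. (\[sf:exact\]) numerically. The sketch below uses a truncated-series polylogarithm (valid for $|x|<1$, which suffices at small $z$; the $z\gg 1$ side then follows from ${ {\mathcal{R} }}_f(z)={ {\mathcal{R} }}_f(1/z)$) and compares the exact and asymptotic forms at $z=10^{-4}$:

```python
import math

pi = math.pi
zeta3 = 1.2020569031595943  # zeta(3)

def polylog(n, x):
    """Truncated series Li_n(x) = sum_{k>=1} x^k / k^n, for |x| < 1.
    Slowly convergent as |x| -> 1, but adequate for this rough check."""
    total, xk = 0.0, 1.0
    for k in range(1, 1000000):
        xk *= x
        term = xk / k ** n
        total += term
        if abs(term) < 1e-16:
            break
    return total

def Rf_exact(z, color):
    """R_f(z) from Eq. (sf:exact); color = 'CA' (C_F C_A) or 'nf' (C_F n_f T_F)."""
    lz, l1 = math.log(z), math.log(z + 1)
    li2, li3 = polylog(2, -z), polylog(3, -z)
    if color == 'nf':
        return (32 * li3 - 32 * li2 * lz / 3
                + 8 * (z - 1) * lz / (3 * (z + 1)) + 16 * pi ** 2 * lz / 9
                + 184 * zeta3 / 9 + 154 * pi ** 2 / 27 - 136 / 81)
    li4a, li4b = polylog(4, 1 / (z + 1)), polylog(4, z / (z + 1))
    return (-88 * li3 - 16 * li4a - 16 * li4b + 16 * li3 * l1
            + 88 * li2 * lz / 3 - 8 * li3 * lz - 16 * zeta3 * l1
            + 8 * zeta3 * lz - 4 * l1 ** 4 / 3 + 8 * lz * l1 ** 3 / 3
            + 4 * pi ** 2 * l1 ** 2 / 3 - 4 * pi ** 2 * lz ** 2 / 3
            - 4 * (3 * (z - 1) + 11 * pi ** 2 * (z + 1)) * lz / (9 * (z + 1))
            - 506 * zeta3 / 9 + 16 * pi ** 4 / 9 - 871 * pi ** 2 / 54 - 2032 / 81)

def Rf_asym(z, color):
    """Asymptotic form of Eq. (sf:large), written with |ln z|."""
    L = abs(math.log(z))
    if color == 'nf':
        return ((8 / 3 - 16 * pi ** 2 / 9) * L
                - 136 / 81 + 154 * pi ** 2 / 27 + 184 * zeta3 / 9)
    return (-4 * pi ** 2 * L ** 2 / 3
            + (-8 * zeta3 - 4 / 3 + 44 * pi ** 2 / 9) * L
            - 506 * zeta3 / 9 + 8 * pi ** 4 / 5 - 871 * pi ** 2 / 54 - 2032 / 81)

z = 1e-4  # small z; the z >> 1 side follows from R_f(z) = R_f(1/z)
for color in ('CA', 'nf'):
    print(color, Rf_exact(z, color), Rf_asym(z, color))  # agree at the sub-percent level
```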
![The contribution of the part of the soft function not fixed by renormalization group invariance to the hemisphere mass distribution, ${ {\mathcal{R} }}_f(z)$, is shown. On the left is the $C_F n_f T_F$ color factor and on the right is $C_F C_A$, both as a function of $\ln z \equiv \ln \frac{X}{Y}$. The solid black curve is the exact result of Eq. (\[sf:exact\]). The dashed red curve is the plot of the small $\ln z$ expression of Eq. (\[sf:small\]) and the dotted blue curve gives the large $\ln z$ behavior of Eq. (\[sf:large\]). The kink is due to a sign flip, since the linear term appears as $|\ln z|$.[]{data-label="fig:sf"}](SfcompareNf "fig:"){width="45.00000%"} ![The contribution of the part of the soft function not fixed by renormalization group invariance to the hemisphere mass distribution, ${ {\mathcal{R} }}_f(z)$, is shown. On the left is the $C_F n_f T_F$ color factor and on the right is $C_F C_A$, both as a function of $\ln z \equiv \ln \frac{X}{Y}$. The solid black curve is the exact result of Eq. (\[sf:exact\]). The dashed red curve is the plot of the small $\ln z$ expression of Eq. (\[sf:small\]) and the dotted blue curve gives the large $\ln z$ behavior of Eq. (\[sf:large\]). The kink is due to a sign flip, since the linear term appears as $|\ln z|$.[]{data-label="fig:sf"}](SfcompareCA "fig:"){width="45.00000%"}
Next, let us look at $z\sim 1$. Here we find $$\begin{aligned}
\label{sf:small}
&{ {\mathcal{R} }}_f^{z\sim1}(z) =
\frac{\pi^4}{2}
C_F^2
+
\left[
\left(
-\frac{2}{3}
-\frac{4 \pi ^2}{3}
-4 \ln ^2 2
+\frac{44 \ln 2}{3}
\right)
\ln^2z
-32 \text{Li}_4\left(\frac{1}{2}\right)
+\frac{88 \zeta_3}{9}
\right.
{\nonumber \\}&\qquad
\left.
-28 \zeta_3 \ln (2)
-\frac{2032}{81}-\frac{871 \pi^2}{54}
+\frac{16 \pi ^4}{9}
-\frac{4 \ln ^4 2}{3}
+\frac{4}{3} \pi ^2 \ln ^2 2
\right]
C_F C_A
{\nonumber \\}&\qquad
+
\left[
\left(
\frac{4}{3}
-\frac{16 \ln 2}{3}
\right)
\ln^2z
+\frac{154 \pi ^2}{27}
-\frac{136}{81}
-\frac{32 \zeta_3}{9}
\right]
C_F n_f T_F + \mathcal{O}(\ln^3 z).\end{aligned}$$ We see there is a double logarithmic term for both the $C_F C_A$ and $C_F n_f T_F$ color structures. This is consistent with an analysis performed in [@Hoang:2008fs] of an observable $\rho^\alpha = \max(\alpha M_L^2,M_R^2)/Q^2$. They found that the integrated $\rho^\alpha$ distribution looked like $\ln^2\alpha$ for $\alpha\sim 1$. This quadratic behavior in $\ln \alpha$ corresponds exactly to the quadratic behavior in the $z\sim 1$ limit in Eq. (\[sf:small\]).
We show in Figure \[fig:sf\] the exact finite function ${ {\mathcal{R} }}_f(z)$ and a comparison to its asymptotic behavior at small $\ln z$ and large $\ln z$ for the $C_F C_A$ and $C_F n_f T_F$ color structures. For both color structures, the exact curve is well approximated by a parabola for small $\ln z$. At large $\ln z$, for the $C_F n_f T_F$ color factor, the exact result approaches a linear function, whereas the $C_F C_A$ color structure has $\ln^2 z$ dependence with a different coefficient than for the small $\ln z$ limit. The $C_F C_A$ structure has a subleading linear term as well.
As we have discussed, the integrated hemisphere soft function contributes directly to the doubly differential hemisphere mass distribution. In the limit where both hemisphere masses are small, and well separated, the soft function gives the dominant contribution. In this regime, we can read off that the leading non-global logarithms are given by ${ {\mathcal{R} }}_f^{z\gg 1}(M_L^2/M_R^2)$ in Eq. (\[sf:large\]). The $\ln^2$ term has an identical coefficient to that found in [@Dasgupta:2001sh]. The subleading non-global logarithm is a new result.
Exponentiation \[sec:exp\]
==========================
These non-global logarithms become important when one scale becomes parametrically larger than the other. The separation of scales suggests that at the higher of the two scales, one may be able to match onto a new effective theory and then run the matching coefficient between the two scales. In fact, the $-\frac{4 \pi^2}{3} \ln^2 z$ term in this calculation has its origin in $f(0,0)$, where $f(r,{\epsilon})$ is the opposite-direction contribution to the hemisphere soft function, as in the Appendix. Indeed, we found that
$$f_{C_A}(0,{\epsilon}) = \frac{8\pi^2}{3} + \mathcal{O}({\epsilon}), \qquad
f_{nf}(0,{\epsilon}) = \mathcal{O}({\epsilon}^2),$$
which is consistent with the leading non-global logarithm only having the $C_F C_A$ color structure. Since it is the ${\epsilon}^0$ part of this expression which contributes, and there are double soft poles, the full expansion also has terms like $f(0,0)\ln^2\mu$. Thus $f(0,0)$ can be thought of as an anomalous dimension, providing hope that these non-global logs might be resummed in an effective theory. A consistent framework may require some kind of refactorization, like the one found for a related event shape, $\tau_\omega$, in [@Kelley:2011tj]. Ideas along these lines were suggested in talks by Chris Lee [@scetCL; @boostCL]. Lee and collaborators proposed that the leading non-global logs might be resummed with effective field theory, although no details were given.
There is actually good reason to believe the resummation of non-global logs is more challenging than the types of resummation done in SCET. To see this, we first consider the predictions from non-Abelian exponentiation. Non-Abelian exponentiation applies only to the case of pure QCD, without fermion loops. In this case, it says that the full soft function, in Laplace space, can be written as an exponential of 2-particle irreducible diagrams. At order $\alpha_s^n$, new contributions can appear only to the maximally non-Abelian color structure, $C_F C_A^{n-1}$. For example, at two-loops, this tells us that the $C_F^2$ color structure is given entirely by the exponential of the one-loop $C_F$ color structure. At 3-loops it predicts the entire $C_F^3$ and $C_F^2 C_A$ color structures.
To be more specific, the soft function in Laplace space factorizes as in Eq. (\[sfactb\]), with the ${\widetilde{s}_{\mu}}$ terms and the ${\widetilde{s}_{f}}$ terms separately exponentiating, as explained in [@Hoang:2008fs]. So we can write $${\widetilde{s}_{f}}(x_L,x_R) = \exp \left[\frac{\alpha_s}{4\pi}( -\pi^2)C_F
+ \left(\frac{\alpha_s}{4\pi}\right)^2 \left( C_F n_f T_F{\widetilde{s}_{f}}^{(2,n_f)}(x_L,x_R)+ C_FC_A{\widetilde{s}_{f}}^{(2,C_A)}(x_L,x_R) \right) + \cdots \right]
\label{naeform}$$ where ${\widetilde{s}_{f}}^{(2,n_f)}$ and ${\widetilde{s}_{f}}^{(2,C_A)}$ are the Laplace transforms of the $C_F n_f T_F$ and $C_F C_A$ color structures in the 2-loop soft function.[^1] Such a rewriting has no content unless there is some restriction on the terms appearing in the exponent. Non-Abelian exponentiation tells us that the higher-order terms with $C_F$ and $C_A$’s only must be maximally non-Abelian, $C_F C_A^{n-1}$.
This implies, for example, that at 3-loops we know 2 color structures. Explicitly, $${\widetilde{s}_{f}}^{~\text{3-loop}}(x_L,x_R) = \left(\frac{\alpha_s}{4\pi}\right)^3\left[ C_F^3 \frac{(-\pi^2)^3}{6} + C_F^2 C_A (-\pi^2)
{\widetilde{s}_{f}}^{(2,C_A)}(x_L,x_R) + \cdots \right]$$ There are 4 remaining color structures, $C_F C_A^2$, $C_F n_f^2 T_F^2$, $C_F C_A n_f T_F$ and $C_F^2 n_f T_F$ which are still unknown. Actually, the $C_F n_f^2 T_F^2$ color structure at 3-loops should not be hard to compute, but there is no known general formula for how the $n_f$ color structures exponentiate (see [@Berger:2002sv] for some discussion).
From the exponentiation formula, one can read off the missing parts of the soft contribution to the 3-loop thrust and heavy-jet mass distributions. Indeed, for $n\ge 2$, we have $$c^S_n =C_F^n \frac{(-\pi^2)^n}{n!} +C_F^{n-1} C_A \frac{(-\pi^2)^{n-2}}{(n-2)!} \left[ { { c^S_2} }\Big| _{C_F C_A} \right] +\cdots \,,$$ and similarly for $c^S_{n\rho}$, with ${ { c^S_2} }$ and ${ { c^S_{2\rho}} }$ given in Eqs. (\[ctwoeq\]) and (\[ctworeq\]). These constants can be included in future $\alpha_s$ fits or, once the finite part of the 3-loop jet function is computed, compared to extractions from the full thrust distribution at NNLO [@Monni:2011gb].
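For concreteness, the two color structures of $c_3^S$ that this formula fixes can be evaluated numerically; a small sketch (the remaining four color structures are the unknown ones discussed above):

```python
import math

pi = math.pi
zeta3 = 1.2020569031595943  # zeta(3)

# C_F C_A part of c_2^S, Eq. (ctwoeq).
c2_CFCA = -2140 / 81 - 871 * pi ** 2 / 54 + 14 * pi ** 4 / 15 + 286 * zeta3 / 9

n = 3
# Coefficient of C_F^3 in c_3^S: (-pi^2)^n / n! at n = 3.
c3_CF3 = (-pi ** 2) ** n / math.factorial(n)
# Coefficient of C_F^2 C_A: (-pi^2)^(n-2) / (n-2)! times the C_F C_A part of c_2^S.
c3_CF2CA = (-pi ** 2) ** (n - 2) / math.factorial(n - 2) * c2_CFCA

print(c3_CF3)    # ~ -160.2
print(c3_CF2CA)  # ~ +557.6
```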
Returning to the exponentiation of non-global logs, recall that the leading non-global log comes from ${\widetilde{s}_{f}}^{(2,C_A)}(x_L,x_R) = -\frac{4\pi^2}{3} L^2$, with $L= \ln\frac{ x_L}{x_R}$. Thus non-Abelian exponentiation predicts a series with terms $(C_F C_A \alpha_s^2 L^2)^{n-1}$, as well as cross-terms with the $C_F$ one-loop color structure which are subleading. The question is whether this is the entire resummation of the leading non-global log. It seems like the answer is no, since there is no apparent reason why 3-loop graphs cannot produce terms which scale like $\alpha_s^3 C_F C_A^2 L^3$ (or even $\alpha_s^3 L^4$) for large $L$. A clue that these terms do exist comes from the numerical resummation of the leading non-global log at large $N_c$ in [@Dasgupta:2001sh]. These authors found that the resummed distribution could be fit by an exponential, but it is numerically different from the pure $(C_F C_A \alpha_s^2 L^2)^{n-1}$ terms predicted by Eq. (\[naeform\]). Since $C_F$ and $C_A$ both scale as $N_c$ at large $N_c$, this implies that there must be an $\alpha_s^3 C_F C_A^2 L^3$ term at 3-loops (and no $\alpha_s^3 L^4$ term). Thus the resummation of even the leading non-global log may require a way to predict arbitrarily complicated color structures. It would be exciting to see how this can be done in the effective field theory framework.
Conclusions \[sec:conc\]
========================
In this paper, we have presented the complete calculation of the hemisphere soft function to order $\alpha_s^2$. This is the first 2-loop calculation of a soft function which depends on two scales in addition to the renormalization group scale $\mu$. The hemisphere soft function, $S(k_L,k_R,\mu)$, depends on the components of the momenta going into the left and right hemispheres. In a one-scale soft function, such as the Drell-Yan soft function, $S_{DY}(k,\mu)$ [@Korchemsky:1993uz; @Belitsky:1998tc; @Becher:2007ty], the thrust soft function $S_T(k,\mu)$ [@Schwartz:2007ib; @Becher:2008cf] or the direct photon soft function $S_{\gamma}(k,\mu)$ [@Becher:2009th], all of the $k$ dependence is fixed once the $\mu$-dependence is known. Since the $\mu$-dependence is fixed by RG invariance, these functions are often completely determined. For multi-scale soft functions, like the hemisphere soft function, there can be additional dependence on the ratio $r= k_L/k_R$. We worked out this dependence explicitly at order $\alpha_s^2$, and the result is more complicated than previously anticipated.
We performed a number of checks on our calculation. The $\mu$-dependence of the result was entirely known by virtue of the factorization theorem in SCET, and we have confirmed that the $\mu$-dependence of our hemisphere soft function matches the result obtained from factorization analysis. In addition, the result allows us to produce analytic expressions for all of the singular terms in the 2-loop thrust and heavy jet mass distributions. The constant terms in the singular distributions were previously unknown and had to be extracted from numerical fits [@Becher:2008cf; @Hoang:2008fs; @Chien:2010kc]. We found our analytical results to be in excellent agreement with the very precise recent numerical fit of [@Chien:2010kc].
The full hemisphere soft function produces the leading and sub-leading non-global logs in the hemisphere mass distribution. Previously, only the leading double-log term was known, from a calculation in the soft limit of full QCD [@Dasgupta:2001sh]. In this work we reproduced that double logarithm and, furthermore, showed the existence of a sub-leading single logarithm. This single logarithm, of $M_L/M_R$, is interesting because $\ln M_L/M_R$ seems like it should be forbidden by the $M_L \leftrightarrow M_R$ symmetry. Curiously, we find that the complicated behavior of the hemisphere mass distribution when $M_L \sim M_R$ allows the single log to flip sign and it manifests itself as $\ln[ \max(M_L,M_R)/\min(M_L,M_R)] = |\ln M_L/M_R|$. Our calculation is the first to exhibit a sub-leading non-global logarithm of this type.
Besides being of formal interest, the hemisphere soft function at $\mathcal{O}(\alpha_s^2)$ is a crucial component of the resummed heavy-jet mass distribution at N$^3$LL order. Previous fits to $\alpha_s$ at this order assumed a simple form for the soft function, using the Hoang-Kluth ansatz. We have shown that this ansatz is valid only in the limit that $k_L \sim k_R$. With the exact $\mathcal{O}(\alpha_s^2)$ soft function in hand, one source of uncertainty in the $\alpha_s$ fits to event shapes can be removed.
This work also has implications for calculations of distributions at hadron colliders. At hadron colliders, there are necessarily many more scales in relevant observables than at $e^+e^-$ machines. For example, jet sizes and veto scales play a critical role in many analyses [@Ellis:2009wj; @Ellis:2010rw; @Kelley:2011tj]. For multi-scale observables to be computed in effective field theory, we need a better understanding of multi-scale soft functions, such as the exact 2-loop hemisphere soft function result provided here.
Acknowledgements
================
The authors would like to thank Y.-T. Chien, M. Dasgupta, C. Lee, K. Melnikov, G. Salam and I. Stewart for useful discussions and F. Petriello and I. Scimemi for collaboration on intermediate stages of this project. RK and MDS were supported in part by the Department of Energy, under grant DE-SC003916. HXZ was supported by the National Natural Science Foundation of China under grants No. 11021092 and No. 10975004 and the Graduate Student academic exchange program of Peking University.
Opposite direction contributions {#app:f}
=================================
The following are the first 3 terms in the ${\epsilon}$ expansion of $f_{C_A}$ from Eq. (\[fca:exp\]). $$\begin{aligned}
f_{C_A}^{(0)}(r)
&=
8
\left(
\frac{ r \left(11 r^2+21 r+12 \right) \ln (r)}{3 (r+1)^3}
+\frac{\pi^2 (r+1)^2+2 r}{3 (r+1)^2}
+\ln ^2(r+1)
\right.
{\nonumber \\}&\qquad
\left.
-\ln (r) \ln (r+1)
-\frac{11}{3} \ln(r+1)
\right),\end{aligned}$$ $$\begin{aligned}
f_{C_A}^{(1)}(r)
&=
\frac{8 \left(-11 r^3-9 r^2+9 r+11\right) \text{Li}_2(-r)}{3
(r+1)^3}+24 \text{Li}_3(-r)-16 \text{Li}_2(-r) \ln (r)
{\nonumber \\}&
+\frac{4 r
\left(11 r^2+21 r+12\right) \ln^2(r)}{3 (r+1)^3}-\frac{8 r
\left(67 r^2+141 r+60\right) \ln (r)}{9 (r+1)^3}
{\nonumber \\}&
-\frac{4 \left(r^3
\left(11 \pi^2-36 \zeta_3\right)+r^2 \left(-108 \zeta_3+32+21
\pi^2\right)+4 r \left(-27 \zeta_3+8+3 \pi^2\right)-36 \zeta_3\right)}{9 (r+1)^3}
{\nonumber \\}&
+\frac{8 \left(-11 r^3-9 r^2+9 r+11\right)
\ln (r+1) \ln (r)}{3 (r+1)^3}-4 \ln (r+1) \ln^2(r)+\frac{4}{9}
\left(134+3 \pi^2\right) \ln (r+1),\end{aligned}$$ $$\begin{aligned}
f_{C_A}^{(2)}(r)
&=
\frac{8 \left(67 r^3+81 r^2-81 r-67\right) \text{Li}_2(-r)}{9
(r+1)^3}-\frac{8 \left(55 r^3+117 r^2+81 r+11\right)
\text{Li}_3(-r)}{3 (r+1)^3}
{\nonumber \\}&
-\frac{32 \left(11 r^3+9 r^2-9
r-11\right) \text{Li}_3\left(\frac{1}{r+1}\right)}{3
(r+1)^3}-\frac{16 \left(11 r^3+9 r^2-9 r-11\right) \text{Li}_2(-r)
\ln (r+1)}{3 (r+1)^3}
{\nonumber \\}&
+\frac{8 \left(33 r^3+75 r^2+57 r+11\right)
\text{Li}_2(-r) \ln (r)}{3 (r+1)^3}-16
\text{Li}_4\left(\frac{1}{r+1}\right)-16
\text{Li}_4\left(\frac{r}{r+1}\right)
{\nonumber \\}&
-8 \text{Li}_2(-r) \ln^2(r)+16 \text{Li}_2(-r) \ln (r) \ln (r+1)+8 \text{Li}_3(-r) \ln
(r)+16 \text{Li}_3\left(\frac{1}{r+1}\right) \ln (r)
{\nonumber \\}&
+\frac{4
\left(-40 \left(4 r^2 (27 \zeta_3-2)+r (189 \zeta_3-8)+99 \zeta_3\right)+5 \pi^2 r \left(67 r^2+147 r+66\right)+33 \pi^4
(r+1)^3\right)}{135 (r+1)^3}
{\nonumber \\}&
-\frac{32 \left(12 r^2+21 r+11\right)
\ln^3(r+1)}{9 (r+1)^3}+\frac{4 r \left(11 r^2+21 r+12\right) \ln^3(r)}{9 (r+1)^3}-\frac{4}{3}
\ln^3(r) \ln (r+1)
{\nonumber \\}&
+\frac{16 \left(12 r^2+21 r+11\right) \ln (r) \ln^2(r+1)}{3 (r+1)^3}-\frac{4 r \left(67 r^2+141 r+60\right) \ln^2(r)}{9 (r+1)^3}
{\nonumber \\}&
+\frac{16 r \left(193 r^2+384 r+177\right) \ln
(r)}{27 (r+1)^3}+\frac{4 \left(-11 r^3-9 r^2+9 r+11\right) \ln^2(r) \ln (r+1)}{3 (r+1)^3}
{\nonumber \\}&
-\frac{8 \left(\pi^2 \left(66 r^3+90
r^2+9 r-33\right)+2 \left(193 r^3+561 r^2+561 r+193\right)\right)
\ln (r+1)}{27 (r+1)^3}
{\nonumber \\}&
+\frac{8 \left(67 r^3+69 r^2-93 r+3 \pi^2
(r+1)^3-67\right) \ln (r) \ln (r+1)}{9 (r+1)^3}-\frac{4}{3} \ln^4(r+1)
{\nonumber \\}&
+8 \ln^2(r) \ln^2(r+1)+16 \zeta_3 \ln
(r+1)-16 \zeta_3 \ln (r)+\frac{32 r \ln^2(r+1)}{3 (r+1)^2}.\end{aligned}$$ The following are the first 2 terms in the ${\epsilon}$ expansion of $f_{n_f}$ from Eq. (\[fnf:exp\]). $$\begin{aligned}
f_{n_f}^{(0)}(r)
&=
-\frac{16 \left(2 r (r+1)-2 (r+1)^3 \ln (r+1)+r (r (2 r+3)+3) \ln (r)\right)}{3 (r+1)^3},\end{aligned}$$ $$\begin{aligned}
f_{n_f}^{(1)}(r)
&= \frac{8}{9 (r+1)^3}
\left(
-12 \left(r^3-1\right)
\text{Li}_2\left(-\frac{1}{r}\right)
-32 r^3 \ln (r+1)
+3 \pi ^2 r^2
+20 r^2
\right.
{\nonumber \\}& \quad \left.
-96 r^2 \ln(r+1)
-3 \left(4 r^3+3 r^2+3 r-2\right) \ln ^2(r)
+4 \ln (r)
\left(
3 \left(r^3-1\right) \ln (r+1)
\right.
\right.
{\nonumber \\}& \quad \left. \left.
+r \left(8 r^2+21r+3\right)
\right)
+3 \pi ^2 r
+20 r
-96 r \ln (r+1)
-32 \ln (r+1)
+2 \pi ^2
\right) .\end{aligned}$$
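As a consistency check of the expression just given, $f_{n_f}^{(0)}(r)$ is invariant under the exchange $r \to 1/r$, as one would expect for a function of a ratio of hemisphere variables. A short sympy sketch confirming this numerically (the check itself is a sketch added for illustration, not part of the derivation):

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# f_{n_f}^{(0)}(r) exactly as printed above
f0 = -16*(2*r*(r + 1) - 2*(r + 1)**3*sp.log(r + 1)
          + r*(r*(2*r + 3) + 3)*sp.log(r)) / (3*(r + 1)**3)

# the expression is invariant under r -> 1/r
for v in (sp.Rational(1, 3), sp.Rational(2, 5), sp.Integer(2), sp.Integer(7)):
    assert abs(sp.N(f0.subs(r, v) - f0.subs(r, 1/v))) < 1e-12
print("f_nf^(0)(r) = f_nf^(0)(1/r) holds numerically")
```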
[99]{}
A. Gehrmann-De Ridder, T. Gehrmann and E. W. N. Glover, JHEP [**0509**]{}, 056 (2005) \[arXiv:hep-ph/0505111\].
A. Gehrmann-De Ridder, T. Gehrmann, E. W. N. Glover and G. Heinrich, Phys. Rev. Lett. [**99**]{}, 132002 (2007) \[arXiv:0707.1285 \[hep-ph\]\]. A. Gehrmann-De Ridder, T. Gehrmann, E. W. N. Glover and G. Heinrich, JHEP [**0712**]{}, 094 (2007) \[arXiv:0711.4711 \[hep-ph\]\]. S. Weinzierl, Phys. Rev. Lett. [**101**]{}, 162001 (2008) \[arXiv:0807.3241 \[hep-ph\]\].
C. W. Bauer, S. Fleming, D. Pirjol and I. W. Stewart, Phys. Rev. D [**63**]{}, 114020 (2001) \[arXiv:hep-ph/0011336\]. C. W. Bauer, D. Pirjol and I. W. Stewart, Phys. Rev. D [**65**]{}, 054022 (2002) \[arXiv:hep-ph/0109045\]. M. Beneke, A. P. Chapovsky, M. Diehl and T. Feldmann, Nucl. Phys. B [**643**]{}, 431 (2002) \[arXiv:hep-ph/0206152\]. S. Fleming, A. H. Hoang, S. Mantry and I. W. Stewart, Phys. Rev. D [**77**]{}, 074010 (2008) \[arXiv:hep-ph/0703207\]. M. D. Schwartz, Phys. Rev. D [**77**]{}, 014026 (2008) \[arXiv:0709.2709 \[hep-ph\]\]. T. Becher and M. Neubert, Phys. Lett. B [**637**]{}, 251 (2006) \[arXiv:hep-ph/0603140\]. T. Becher and M. D. Schwartz, JHEP [**0807**]{}, 034 (2008) \[arXiv:0803.0342 \[hep-ph\]\]. Y. T. Chien and M. D. Schwartz, JHEP [**1008**]{}, 058 (2010) \[arXiv:1005.1644 \[hep-ph\]\].
R. Abbate, M. Fickinger, A. H. Hoang, V. Mateu, I. W. Stewart, Phys. Rev. [**D83**]{}, 074021 (2011). \[arXiv:1006.3080 \[hep-ph\]\].
W. M. Yao [*et al.*]{} \[Particle Data Group\], J. Phys. G [**33**]{}, 1 (2006). A. H. Hoang and S. Kluth, arXiv:0806.3852 \[hep-ph\]. T. Becher and M. D. Schwartz, JHEP [**1002**]{}, 040 (2010) \[arXiv:0911.0681 \[hep-ph\]\].
N. Kidonakis, G. F. Sterman, Nucl. Phys. [**B505**]{}, 321-348 (1997). \[hep-ph/9705234\].
S. M. Aybat, L. J. Dixon, G. F. Sterman, Phys. Rev. [**D74**]{}, 074004 (2006). \[hep-ph/0607309\].
S. D. Ellis, C. K. Vermilion, J. R. Walsh, A. Hornig and C. Lee, arXiv:1001.0014 \[hep-ph\]. R. Kelley, M. D. Schwartz, Phys. Rev. [**D83**]{}, 033001 (2011). \[arXiv:1008.4355 \[hep-ph\]\].
R. Kelley, M. D. Schwartz, Phys. Rev. [**D83**]{}, 045022 (2011). \[arXiv:1008.2759 \[hep-ph\]\].
G. P. Korchemsky and G. Marchesini, Phys. Lett. B [**313**]{}, 433 (1993). A. V. Belitsky, Phys. Lett. B [**442**]{}, 307 (1998) \[arXiv:hep-ph/9808389\]. T. Becher and M. Neubert, Phys. Lett. B [**633**]{}, 739 (2006) \[arXiv:hep-ph/0512208\].
R. Kelley, M. D. Schwartz and H. X. Zhu, arXiv:1102.0561 \[hep-ph\]. M. Dasgupta and G. P. Salam, Phys. Lett. B [**512**]{}, 323 (2001) \[arXiv:hep-ph/0104277\].
S. Fleming, A. H. Hoang, S. Mantry and I. W. Stewart, Phys. Rev. D [**77**]{}, 114003 (2008) \[arXiv:0711.2079 \[hep-ph\]\]. T. Huber and D. Maitre, Comput. Phys. Commun. [**175**]{}, 122 (2006) \[arXiv:hep-ph/0507094\]. G. Heinrich, Int. J. Mod. Phys. A [**23**]{}, 1457 (2008) \[arXiv:0803.4177 \[hep-ph\]\]. C. Lee, A. Hornig, I. W. Stewart, J. R. Walsh, and S. Zuberi, “[Non-Global Logs in SCET]{}.” Talk presented at SCET 2011 Workshop, March 6–8, 2011, Carnegie Mellon University.
C. Lee, A. Hornig, I. W. Stewart, J. R. Walsh, and S. Zuberi, “[Non-Global Logs in SCET]{}.” Talk presented at BOOST 2011 Workshop, May 22-26, 2011, Princeton NJ.
T. Becher, M. Neubert and G. Xu, JHEP [**0807**]{}, 030 (2008) \[arXiv:0710.0680 \[hep-ph\]\]. S. D. Ellis, A. Hornig, C. Lee, C. K. Vermilion, J. R. Walsh, Phys. Lett. [**B689**]{}, 82-89 (2010). \[arXiv:0912.0262 \[hep-ph\]\].
A. Hornig, C. Lee, I. W. Stewart, J. R. Walsh, S. Zuberi, \[arXiv:1105.4628 \[hep-ph\]\].
C. F. Berger, Phys. Rev. D [**66**]{}, 116002 (2002) \[arXiv:hep-ph/0209107\].
P. F. Monni, T. Gehrmann, G. Luisoni, \[arXiv:1105.4560 \[hep-ph\]\].
[^1]: Although we have not computed the Laplace-space soft function directly, it was calculated by another group after the first version of this paper appeared [@Hornig:2011iu]. It has a qualitatively similar form to ${ {\mathcal{R} }}_f(\frac{x_L}{x_R})$.
**Vardan Oganesyan**
**Abstract**
In this paper we propose a very effective method for constructing matrix commuting differential operators of rank 2 and vector rank (2,2). We find new matrix commuting differential operators $L$, $M$ of orders $2$ and $2g$ respectively.\
**Introduction**
Let us consider two differential operators $$L_n= \sum\limits^{n}_{i=0} u_i(x)\partial_x^i, \quad L_m= \sum\limits^{m}_{i=0} v_i(x)\partial_x^i,$$ where the coefficients $u_i(x)$ and $v_i(x)$ are scalar or matrix-valued functions. The commutativity condition $L_nL_m = L_mL_n$ is equivalent to a very complicated system of nonlinear differential equations. The theory of commuting ordinary differential operators was first developed at the beginning of the twentieth century in the works of Wallenberg [@Wal] and Schur [@Schur].
If two differential operators with scalar or matrix valued coefficients commute, then there exists a nonzero polynomial $R(z,w)$ such that $R(L_n,L_m)=0$ (see [@Chaundy], [@Grin]). The curve $\Gamma$ defined by $R(z,w)=0$ is called the *spectral curve*. If $$L_n \psi=z\psi, \quad L_m \psi=w\psi,$$ then $(z,w) \in \Gamma$.
If coefficients are scalar functions, then for almost all $(z,w) \in \Gamma$, the dimension of the space of common eigenfunctions $\psi$ is the same. The dimension of the space of common eigenfunctions of two commuting scalar differential operators is called the *rank* of this pair. The rank is a common divisor of $m$ and $n$. The genus of the spectral curve of a pair of commuting operators is called the genus of this pair.
If the rank of two commuting scalar differential operators equals 1, then there are explicit formulas for coefficients of commutative operators in terms of Riemann theta-functions (see [@theta]).
The case when rank of scalar commuting operators is greater than $1$ is much more difficult. The first examples of commuting scalar differential operators of the nontrivial rank 2 and the nontrivial genus $g=1$ were constructed by Dixmier [@Dixmier] for the nonsingular elliptic spectral curve.
A general classification of commuting scalar differential operators was obtained by Krichever [@ringkrichever]. The general form of commuting scalar operators of rank 2 for an arbitrary elliptic spectral curve was found by Krichever and Novikov [@novkrich]. The general form of scalar commuting operators of rank 3 with an arbitrary elliptic spectral curve was found by Mokhov [@Mokhov1], [@Mokhov2]. In [@Mironov] Mironov developed the theory of self-adjoint scalar operators of rank 2 and found examples of commuting scalar operators of rank 2 and arbitrary genus. Using Mironov’s method many examples of scalar commuting operators of rank 2 and arbitrary genus were found (see [@Mironov2], [@Vartan], [@Vartan3], [@Davl], [@Zegl]). Moreover, examples of commuting scalar differential operators of arbitrary genus and arbitrary rank with polynomial coefficients were constructed by Mokhov in [@Mokhov4], [@Mokhov3].
The theory of commuting differential operators helps to find solutions of nonlinear partial differential equations from mathematical physics (see [@Smirnov], [@Smirnov4], [@Dubrovin3], [@Dubrovin4]). There are also deep connections between the theory of commuting scalar differential operators and the Schottky problem (see [@Dubrovin2], [@Shiota]). The theory of commuting differential operators with polynomial coefficients has connections with the Dixmier conjecture and the Jacobian conjecture (see [@belov], [@Tsushimoto]).
A general classification of commuting matrix differential operators was obtained by Grinevich [@Grin]. Grinevich considered two differential operators $$L= \sum\limits_{i=0}^{m}U_i\partial_x^i, \quad M =\sum\limits_{i=0}^{n}V_i\partial_x^i,$$ where $U_i$ and $V_i$ are smooth and complex-valued $s\times s$ matrices. Let us suppose the following conditions\
\
1) $det(U_m)\neq 0$.\
2) Eigenvalues $\lambda_1(x), ...,\lambda_s(x)$ of $U_m$ are distinct.\
3) Matrix $V_n$ is diagonalizable. Let $\mu_1(x),...,\mu_s(x)$ be eigenvalues of matrix $V_n$. Suppose that functions $\dfrac{\mu_i^m}{\lambda_i^n}$ are distinct constants for all $i=1,...,s$.\
\
It is easy to see that if $L$ and $M$ commute, then $U_m$ and $V_n$ commute. If the operators $L$ and $M$ commute, then $FLF^{-1}$ and $FMF^{-1}$ commute, where $F$ is an invertible matrix. We can also change the variable. So, without loss of generality we can suppose that $$(U_m)_{ij} =\delta_{ij}\lambda_i, \quad (V_n)_{ij} = \delta_{ij}\mu_i, \quad tr(U_{m-1})=0$$ Let $\Gamma$ be the spectral curve of the commuting matrix operators $L, M$. The spectral curve of commuting matrix operators can be reducible. Let $\Gamma_i$ be an irreducible component of the spectral curve. The dimension of the space of common eigenfunctions $$L\psi=z\psi, \quad M\psi=w\psi, \quad (z,w)\in \Gamma_i$$ is called the rank of the commuting pair on $\Gamma_i$. Grinevich discovered that the spectral curve $\Gamma$ has $s$ points at infinity. So, $\Gamma = \bigcup_{i=1}^k\Gamma_i$, where $k\leqslant s$. Let $l_i$ be the rank of the operators on $\Gamma_i$. The operators $L, M$ are called commuting operators of vector rank $(l_1,...,l_k)$, where $k\leqslant s$. The numbers $l_i$ are common divisors of $m$ and $n$. For more details see [@Grin]. Also see [@weikard3], [@weikard4], [@Dubrovin1].
If the rank of commuting matrix differential operators equals $1$, then there exist explicit formulas for the coefficients in terms of Riemann theta-functions [@novkrich2].\
In this paper we propose a very effective method for constructing matrix commuting operators of rank 2 and vector rank (2,2). We find new commuting operators $L$, $M$ of orders $2$ and $2g$ respectively.\
**Acknowledgments**
The author wishes to express gratitude to Professor O. I. Mokhov for advices and help in writing this paper.
**Explicit examples of commuting matrix differential operators of rank 2**
Let us consider the operator $$L = E(x) \partial_x^2 + R(x)\partial + Q(x),$$ where $$E = \begin{pmatrix}
\lambda_1(x) & \lambda_3(x) \\
0 & \lambda_2(x) \\
\end{pmatrix}
, \quad
R = \begin{pmatrix}
r_1(x) & r_3(x) \\
r_2(x) & -r_1(x) \\
\end{pmatrix}
, \quad
Q = \begin{pmatrix}
q_1(x) & q_3(x) \\
q_2(x) & q_4(x)
\end{pmatrix}$$ We want to find an operator $M$ of order $2g$ such that $[L,M]=0$. Let us consider the operator $$\begin{gathered}
M = B_0(x)L^{g} + \left(A_1\left(x\right)\partial_x + B_1(x)\right)L^{g-1} + (A_2(x)\partial_x + B_2(x))L^{g-2} + ... +
\\
+(A_{g-1}(x)\partial_x + B_{g-1}(x))L + A_g(x)\partial_x + B_g(x),
\end{gathered}$$ where $$A_{g-k} = \begin{pmatrix}
a_1^{g-k}(x) & a_3^{g-k}(x) \\
a_2^{g-k}(x) &a_4^{g-k}(x) \\
\end{pmatrix}
, \quad
B_{g-k} = \begin{pmatrix}
b_1^{g-k}(x) & b_3^{g-k}(x) \\
b_2^{g-k}(x) & b_4^{g-k}(x) \\
\end{pmatrix}
, \quad
A_{0} = \begin{pmatrix}
0 & 0 \\
0 & 0 \\
\end{pmatrix}.$$ We note that in the formulas above the number $g-k$ is an **index, not a degree**. The index ${g-k}$ tells us that the functions $a_i^{g-k}$ and $b_i^{g-k}$ are elements of the matrices $A_{g-k}$ and $B_{g-k}$ respectively.
Let us note that if $[L,N]=0$, then $[L,MN]=LMN - MNL = LMN -MLN= [L,M]N$, where $L,M,N$ are matrix differential operators. So, we see that $$\begin{gathered}
\left[L,M\right] = [L,\sum\limits^{g}_{k=0}(A_{g-k}\partial_x + B_{g-k})L^k] = \sum\limits^{g}_{k=0}[L,(A_{g-k}\partial_x + B_{g-k})L^k] =\\ =\sum\limits^{g}_{k=0}[L,(A_{g-k}\partial_x + B_{g-k})]L^k.
\end{gathered}$$ Direct calculations show that $$\begin{gathered}
\left[L, \enskip A_{g-k}\partial_x + B_{g-k}] = [E\partial_x^2 + R\partial + Q, \enskip A_{g-k}\partial_x + B_{g-k}\right] = \\
\\
(EA_{g-k} - A_{g-k}E)\partial_x^3 + (2EA_{g-k}' +EB_{g-k} + RA_{g-k} - A_{g-k}E' - A_{g-k}R -B_{g-k}E)\partial_x^2 + \\
+(EA_{g-k}'' + 2EB_{g-k}' + RA_{g-k}' + RB_{g-k} + QA_{g-k} - A_{g-k}R' - A_{g-k}Q - B_{g-k}R)\partial_x + \\
+(EB_{g-k}'' + RB_{g-k}' + QB_{g-k} - A_{g-k}Q' - B_{g-k}Q) = \\
\\
=K_{g-k}\partial_x^3 + P_{g-k}\partial_x^2 + T_{g-k}\partial_x + F_{g-k},
\end{gathered}$$ where $$\begin{gathered}
K_{g-k} = EA_{g-k} - A_{g-k}E,\\
P_{g-k} = 2EA_{g-k}' +EB_{g-k} + RA_{g-k} - A_{g-k}E' - A_{g-k}R -B_{g-k}E,\\
T_{g-k} = EA_{g-k}'' + 2EB_{g-k}' + RA_{g-k}' + RB_{g-k} + QA_{g-k} - A_{g-k}R' - A_{g-k}Q - B_{g-k}R,\\
F_{g-k} = EB_{g-k}'' + RB_{g-k}' + QB_{g-k} - A_{g-k}Q' - B_{g-k}Q.
\end{gathered}$$ Using the fact that $\partial_xL = E\partial_x^3 + (E' + R)\partial_x^2 + (R' + Q)\partial_x + Q'$ we get $$\begin{gathered}
K_{g-k}\partial_x^3 + P_{g-k}\partial_x^2 + T_{g-k}\partial_x + F_{g-k} = \\
\\
K_{g-k}E^{-1}\partial_xL + \left(P_{g-k} - K_{g-k}E^{-1}(E' + R)\right)\partial_x^2 + \\
\left(T_{g-k} - K_{g-k}E^{-1}(R' + Q)\right)\partial_x + (F_{g-k} - K_{g-k}E^{-1}Q' )=\\
\end{gathered}$$ $$\begin{gathered}
K_{g-k}E^{-1}\partial_xL + \left(P_{g-k} - K_{g-k}E^{-1}(E' + R)\right)E^{-1}L + \\
\left(T_{g-k} - K_{g-k}E^{-1}(R' + Q) -\left(P_{g-k} - K_{g-k}E^{-1}(E' + R)\right)E^{-1}R\right)\partial_x + \\
\left(F_{g-k} - K_{g-k}E^{-1}Q' - \left(P_{g-k} - K_{g-k}E^{-1}(E' + R)\right)E^{-1}Q \right) = \\
\\
\widetilde{K}_{g-k}\partial_xL + \widetilde{P}_{g-k}L + \widetilde{T}_{g-k}\partial_x + \widetilde{F}_{g-k},
\end{gathered}$$ where $$\begin{gathered}
\widetilde{K}_{g-k} = K_{g-k}E^{-1}\\
\widetilde{P}_{g-k} = \left(P_{g-k} - K_{g-k}E^{-1}(E' + R)\right)E^{-1}\\
\widetilde{T}_{g-k} = \left(T_{g-k} - K_{g-k}E^{-1}(R' + Q) - K_{g-k}E^{-1}(E' + R)E^{-1}R\right)\\
\widetilde{F}_{g-k} = \left(F_{g-k} - K_{g-k}E^{-1}Q' - K_{g-k}E^{-1}(E' + R)E^{-1}Q \right).
\end{gathered}$$ Finally we obtain $$\begin{gathered}
\left[L,M\right] = \sum\limits^{g}_{k=0}[L,(A_{g-k}\partial_x + B_{g-k})]L^k=\\
\\
=\left(\widetilde{K}_{0}\partial_x + \widetilde{P}_{0}\right)L^{g+1} + \left((\widetilde{T}_{0} + \widetilde{K}_{1} )\partial_x + (\widetilde{F}_{0} + \widetilde{P}_{1})\right)L^{g}+\\
+\left((\widetilde{T}_{1} + \widetilde{K}_{2} )\partial_x + (\widetilde{F}_{1} + \widetilde{P}_{2})\right)L^{g-1}+...+\\
+\left((\widetilde{T}_{g-1} + \widetilde{K}_{g} )\partial_x + (\widetilde{F}_{g-1} + \widetilde{P}_{g})\right)L + \widetilde{T}_g\partial_x + \widetilde{F}_g.
\end{gathered}$$ So, if $$\begin{gathered}
\widetilde{K}_0=0, \widetilde{P}_0=0, \widetilde{K}_1 = -\widetilde{T}_0, \widetilde{P}_1=-\widetilde{F}_0,...,\\
\widetilde{K}_m = -\widetilde{T}_{m-1}, \widetilde{P}_m = -\widetilde{F}_{m-1},...,\\
\widetilde{K}_g=-\widetilde{T}_{g-1}, \widetilde{P}_g=-\widetilde{F}_{g-1},\\
\widetilde{T}_g = 0, \widetilde{F}_g = 0,
\end{gathered}$$ then $[L,M]=0$.
Let us calculate $\widetilde{K}_{g-k}, \widetilde{P}_{g-k}, \widetilde{T}_{g-k}, \widetilde{F}_{g-k}$ from (3). These formulas are too cumbersome to analyze in general, so in the sequel only special cases are considered. We are going to show that in some cases formulas (4) give a very effective method for finding commuting operators. Let us describe the main idea. We know that $A_0=0$ because $M$ is an operator of order $2g$. Let us take $B_0$ such that $\widetilde{K}_0 = 0$ and $\widetilde{P}_0 = 0$. Then we can calculate $\widetilde{T}_0, \widetilde{F}_0$. Using (4) we can find $\widetilde{K}_1, \widetilde{P}_1$, and then $a^{1}_2, a^{1}_3, a^{1}_4, b^{1}_1, b^{1}_2, b^{1}_3, b^{1}_4$. Using (4) again we can find $\widetilde{T}_1$ and $\widetilde{F}_1$. So, we get the recurrence relations $$\begin{cases}
a_i^{m+1} = g_i(a_1^m,a_2^m,a_3^m,a_4^m,b_1^m,b_2^m,b_3^m,b_4^m, r_1,r_2,r_3,q_1,q_2,q_3,q_4), \quad i=1,2,3,4\\
b_i^{m+1} = h_i(a_1^m,a_2^m,a_3^m,a_4^m,b_1^m,b_2^m,b_3^m,b_4^m, r_1,r_2,r_3,q_1,q_2,q_3,q_4) \quad i=1,2,3,4.
\end{cases}$$ We will see that if there exists $g$ such that $$\begin{cases}
a_i^{g+1} = 0\\
b_i^{g+1} = 0
\end{cases}
i=1,...,4$$ then the operator $L$ commutes with operator $M$.\
Let us suppose that $\lambda_1=1$, $\lambda_2 = -1$, $\lambda_3 = 0$, $r_2(x)=r_3(x)=0$, $q_3 = q_2$ and $q_4=-q_1$. Then we have $$\begin{gathered}
\widetilde{K}_{g-k} = \begin{pmatrix}
0 & -2a_3^{g-k} \\
-2a_2^{g-k} & 0
\end{pmatrix}, \quad
\widetilde{P}_{g-k} = \begin{pmatrix}
2(a_1^{g-k})' \quad & -2b_3^{g-k} - 2(a_3^{g-k})' \\
-2b_2^{g-k} - 2(a_2^{g-k})' \quad & 2(a_4^{g-k})' \\
\end{pmatrix},
\\
\widetilde{T}_{g-k} = \begin{pmatrix}
\widetilde{T}_{g-k,1}^1 \quad & \widetilde{T}_{g-k,2}^1 \\
\widetilde{T}_{g-k,1}^2 \quad & \widetilde{T}_{g-k,2}^2 \\
\end{pmatrix}, \quad
\widetilde{F}_{g-k} = \begin{pmatrix}
\widetilde{F}_{g-k,1}^1 \quad & \widetilde{F}_{g-k,2}^1 \\
\widetilde{F}_{g-k,1}^2 \quad & \widetilde{F}_{g-k,2}^2\\
\end{pmatrix},
\\
\\
\widetilde{T}_{g-k,1}^1 = a_2^{g-k}q_2 + a_3^{g-k}q_2 - r_1(a_1^{g-k})' + 2(b_1^{g-k})' - a_1^{g-k}r_1' + (a_1^{g-k})'',\\
\widetilde{T}_{g-k,2}^1 = -a_1^{g-k}q_2 + a_4^{g-k}q_2 - r_1(a_3^{g-k})' + 2(b_3^{g-k})' - a_3^{g-k}r_1' + (a_3^{g-k})'',\\
\widetilde{T}_{g-k,1}^2 = a_1^{g-k}q_2 - a_4^{g-k}q_2 + r_1(a_2^{g-k})' - 2(b_2^{g-k})' + a_2^{g-k}r_1' - (a_2^{g-k})'',\\
\widetilde{T}_{g-k,2}^2 = a_2^{g-k}q_2 + a_3^{g-k}q_2 + r_1(a_4^{g-k})' - 2(b_4^{g-k})' + a_4^{g-k}r_1' - (a_4^{g-k})'',
\end{gathered}$$ $$\begin{gathered}
\\
\widetilde{F}_{g-k,1}^1 = b_2^{g-k}q_2 + b_3^{g-k}q_2 - 2q_1(a_1^{g-k})' + 2q_2(a_3^{g-k})' + r_1(b_1^{g-k})' - q_1'a_1^{g-k} + q_2'a_3^{g-k} + (b_1^{g-k})'',\\
\widetilde{F}_{g-k,2}^1 = -b_1^{g-k}q_2 + b_4^{g-k}q_2 - 2q_2(a_1^{g-k})' - 2q_1(a_3^{g-k})' + r_1(b_3^{g-k})' - q_1'a_3^{g-k} - q_2'a_1^{g-k} + (b_3^{g-k})'',\\
\widetilde{F}_{g-k,1}^2 = b_1^{g-k}q_2 - b_4^{g-k}q_2 + 2q_1(a_2^{g-k})' - 2q_2(a_4^{g-k})' - r_1(b_2^{g-k})' + q_1'a_2^{g-k} - q_2'a_4^{g-k} -(b_2^{g-k})'',\\
\widetilde{F}_{g-k,2}^2 = b_2^{g-k}q_2 + b_3^{g-k}q_2 + 2q_2(a_2^{g-k})' + 2q_1(a_4^{g-k})' - r_1(b_4^{g-k})' + q_1'a_4^{g-k} + q_2'a_2^{g-k} -(b_4^{g-k})''
\end{gathered}$$ From (4) we get $$\widetilde{K}_{g-k+1} = -\widetilde{T}_{g-k}, \widetilde{P}_{g-k+1} = -\widetilde{F}_{g-k}.$$ So, we obtain recurrence relations $$\begin{gathered}
a_i^0(x) \equiv 0, \qquad i=1,...4,\\
b^0_2(x) = b^0_3 \equiv 0
\end{gathered}$$
$$\begin{gathered}
\widetilde{T}_{g-k,1}^1 = 0 \Leftrightarrow \\
b_1^{g-k} = -\dfrac{1}{2}\int\left(a_2^{g-k}q_2 + a_3^{g-k}q_2 - (a_1^{g-k})'r_1 - a_1^{g-k}r_1' + (a_1^{g-k})''\right)dx + C^{g-k}_1,
\end{gathered}$$
$$\begin{gathered}
\widetilde{T}_{g-k,2}^2 = 0 \Leftrightarrow \\
b_4^{g-k} = \dfrac{1}{2}\int\left( a_2^{g-k}q_2 + a_3^{g-k}q_2 + (a_4^{g-k})'r_1 + a_4^{g-k}r_1' - (a_4^{g-k})'' \right)dx + C^{g-k}_2,
\end{gathered}$$
$$\begin{gathered}
-2a_3^{g-k+1} = -\widetilde{T}_{g-k,2}^1 \Leftrightarrow \\
a_3^{g-k+1} = \dfrac{1}{2}\left(-a_1^{g-k}q_2 + a_4^{g-k}q_2 - (a_3^{g-k})'r_1 + 2(b_3^{g-k})' - a_3^{g-k}r_1' + (a_3^{g-k})''\right),
\end{gathered}$$
$$\begin{gathered}
-2a_2^{g-k+1} = -\widetilde{T}_{g-k,1}^2 \Leftrightarrow \\
a_2^{g-k+1} = \dfrac{1}{2}\left(a_1^{g-k}q_2 - a_4^{g-k}q_2 + (a_2^{g-k})'r_1 - 2(b_2^{g-k})' + a_2^{g-k}r_1' - (a_2^{g-k})'' \right),
\end{gathered}$$
$$\begin{gathered}
2(a_1^{g-k+1})' = -\widetilde{F}_{g-k,1}^1 \Leftrightarrow\\
a_1^{g-k+1} = -\dfrac{1}{2}\int\left(b_2^{g-k}q_2 + b_3^{g-k}q_2 - 2(a_1^{g-k})'q_1 + 2q_2(a_3^{g-k})' + (b_1^{g-k})'r_1 - \right.\\ \left.
- a_1^{g-k}q_1' + a_3^{g-k}q_2' + (b_1^{g-k})'' \right)dx + C^{g-k+1}_3,
\end{gathered}$$
$$\begin{gathered}
2(a_4^{g-k+1})' = -\widetilde{F}_{g-k,2}^2 \Leftrightarrow \\
a_4^{g-k+1} = -\dfrac{1}{2}\int\left(b_2^{g-k}q_2 + b_3^{g-k}q_2 + 2(a_2^{g-k})'q_2 + 2(a_4^{g-k})'q_1 - (b_4^{g-k})'r_1
+ \right. \\ \left. + a_4^{g-k}q_1' + a_2^{g-k}q_2' - (b_4^{g-k})'' \right)dx + C^{g-k+1}_4,
\end{gathered}$$
$$\begin{gathered}
-2b_3^{g-k+1} - (a_3^{g-k+1})' = -\widetilde{F}_{g-k,2}^1 \Leftrightarrow \\
b_3^{g-k+1} = \frac{1}{2}\left(-b_1^{g-k}q_2 + b_4^{g-k}q_2 - 2(a_1^{g-k})'q_2 - 2(a_3^{g-k})'q_1 + (b_3^{g-k})'r_1- \right. \\ \left. - a_3^{g-k}q_1' - a_1^{g-k}q_2' + (b_3^{g-k})''\right) - (a_3^{g-k+1})',
\end{gathered}$$
$$\begin{gathered}
-2b_2^{g-k+1} - (a_2^{g-k+1})' = -\widetilde{F}_{g-k,1}^2 \Leftrightarrow \\
b_2^{g-k+1} =\dfrac{1}{2}\left( b_1^{g-k}q_2 - b_4^{g-k}q_2 + 2(a_2^{g-k})'q_1 - 2(a_4^{g-k})'q_2 - (b_2^{g-k})'r_1+ \right. \\ \left. + a_2^{g-k}q_1' - a_4^{g-k}q_2' -(b_2^{g-k})'' \right) - (a_2^{g-k+1})'.
\end{gathered}$$
We see that $$\begin{cases}
\widetilde{T}_g = 0\\
\widetilde{F}_g = 0
\end{cases} \quad
\Leftrightarrow \quad
\begin{cases}
\widetilde{K}_{g+1} = 0\\
\widetilde{P}_{g+1} = 0
\end{cases}$$ where $$\begin{gathered}
\widetilde{K}_{g+1} = \begin{pmatrix}
0 & -2a_3^{g+1} \\
-2a_2^{g+1} & 0
\end{pmatrix}, \quad
\widetilde{P}_{g+1} = \begin{pmatrix}
2(a_1^{g+1})' \quad & -2b_3^{g+1} - 2(a_3^{g+1})' \\
-2b_2^{g+1} - 2(a_2^{g+1})' \quad & 2(a_4^{g+1})' \\
\end{pmatrix}
\end{gathered}$$ where $C_i^j$ are arbitrary constants. Let us note that if $a_i^0(x) = 0$, $b^0_2(x)= 0, b^0_3(x)= 0$ for all $i=1,...,4$, then from (6) and (7) we get that $b_1^0(x) = const$ and $b_4^0(x) = const$.\
\
We obtain the following theorem\
**Theorem 1.** *If there exists number $g$ and constants of integration $C_j^m$ such that $a_i^{g+1}=0, b^{g+1}_2=0, b^{g+1}_3 = 0$ for all $i=1,...,4$, then the operator* $$L = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\partial_x^2 +
\begin{pmatrix}
r_1(x) & 0 \\
0 & -r_1(x)
\end{pmatrix}
\partial_x +
\begin{pmatrix}
q_1(x) & q_2(x) \\
q_2(x) & -q_1(x)
\end{pmatrix}$$ *commutes with operator* $$M=B_0L^g + (A_1\partial_x + B_1)L^{g-1} + ... + A_g\partial_x + B_g,$$ $$\begin{gathered}
B_0=\begin{pmatrix}
\mu_1 & 0 \\
0 & \mu_2
\end{pmatrix}, \quad
B_1 = \begin{pmatrix}
C_1^1 + \dfrac{C^1_3r_1(x)}{2} & -\dfrac{(\mu_1 - \mu_2)q_2}{2} \\
\dfrac{(\mu_1 -\mu_2)q_2}{2} & C_1^4 + \dfrac{C^1_4r_1(x)}{2}
\end{pmatrix},
\end{gathered}$$ *where $\mu_1, \mu_2$ are arbitrary constants and $C_i^j$ are some constants. We see that if $\mu_1 \neq \mu_2$ and $q_2\neq const$, then $B_1$ is not a constant matrix, and hence $M$ is not a polynomial in $L$*.\
\
**Theorem 2.** *The operator* $$\begin{gathered}
L = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\partial_x^2 +
\begin{pmatrix}
\alpha_2x^2 + \alpha_0 & 0 \\
0 & -\alpha_2x^2 - \alpha_0
\end{pmatrix}
\partial_x +
\begin{pmatrix}
\beta x^2 + \alpha_2x & \gamma x \\
\gamma x & -\beta x^2 - \alpha_2x
\end{pmatrix} ,
\end{gathered}$$ where $$\gamma^2 = -n^2\alpha^2_2, \quad n \in \mathbb{N}$$ *and $\alpha_2, \alpha_0, \beta$ are arbitrary constants, commutes with a differential operator $M$ of the form (14) with $g=2n$. The order of the operator $M$ equals $4n$.*\
\
**Remark.** Calculations show that if $n \leqslant 3$, then the spectral curve of the operators $L, M$ from Theorem 2 is nonsingular for almost all $\alpha_0, \alpha_2, \beta$ and is hyperelliptic. Hence $L$ and $M$ are operators of rank 2. In some cases the spectral curve is reducible and we get commuting operators of vector rank $(2,2)$. Note that the operators from Theorem 2 cannot be operators of rank 1. Also note that from Theorem 1 we see that the matrix operator $M$ from Theorem 2 is an operator with polynomial coefficients.\
\
**Example 1.** If $n=1$ and $\mu_1=1, \mu_2=-1$, then the operator $$L = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\partial_x^2 +
\begin{pmatrix}
\alpha_2x^2 + \alpha_0 & 0 \\
0 & -\alpha_2x^2 - \alpha_0
\end{pmatrix}
\partial_x +
\begin{pmatrix}
\beta x^2 + \alpha_2x & i\alpha_2 x \\
i\alpha_2 x & -\beta x^2 - \alpha_2x
\end{pmatrix}$$ commutes with operator $M = B_0L^2 + A_1\partial_xL + B_1L + A_2\partial_x + B_2$. Calculations show that $$\begin{gathered}
M = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\partial_x^4 +
\begin{pmatrix}
2 (\alpha_2x^2 + \alpha_0) & 0 \\
0 & -2 (\alpha_2x^2 + \alpha_0)
\end{pmatrix}
\partial_x^3 +\\
+\begin{pmatrix}
\alpha_2^2x^4 + 2(\alpha_0\alpha_2 + \beta)x^2 + 6\alpha_2x + \alpha_0^2 & i\alpha_2x \\
i\alpha_2x & -\alpha_2^2x^4 - 2(\alpha_0\alpha_2 + \beta)x^2 - 6\alpha_2x - \alpha_0^2
\end{pmatrix}
\partial_x^2 +\\
+\begin{pmatrix}
m_1 & m_2 \\
m_2 & -m_1
\end{pmatrix}
\partial_x +
\begin{pmatrix}
h_1 & h_2 \\
h_2 & -h_1 + 2\beta - \alpha_0\alpha_2
\end{pmatrix} + C_1L +
\begin{pmatrix}
C_0 & 0 \\
0 & C_0
\end{pmatrix},\\
\\
m_1 = 2\alpha_2\beta x^4 + 4\alpha_2^2x^3 + 2 \alpha_0\beta x^2 + 4(\alpha_0\alpha_2 + \beta)x +4\alpha_2,\\
m_2 = i\alpha_2^2 x^3 + i\alpha_0\alpha_2x + i\alpha_2,\\
h_1 = \beta^2x^4 + 4\alpha_2\beta x^3 + \dfrac{3\alpha_2^2}{2}x^2 + 2\alpha_0\beta x + 4\beta,
\end{gathered}$$ $$\begin{gathered}
h_2 = i\alpha_2\beta x^3 + \dfrac{3}{2}i\alpha_2^2x^2 + \frac{i\alpha_2\alpha_0}{2},
\end{gathered}$$ where $C_1$ and $C_0$ are arbitrary constants. The spectral curve of operators $L,M$ has the form $$\left(w - C_1z -( C_0 - \dfrac{\alpha_2\alpha_0 - 2\beta}{2})\right)^2 = z^4 - (\alpha_0\alpha_2 - 2\beta)z^2 - \alpha_2\alpha_0\beta + \beta^2.$$ If we take $C_0=\dfrac{\alpha_2\alpha_0 - 2\beta}{2}$, $C_1 =0$, then we get $$w^2 = z^4 - (\alpha_0\alpha_2 - 2\beta)z^2 - \alpha_2\alpha_0\beta + \beta^2.$$ This spectral curve is nonsingular if $\alpha_2\alpha_0\beta(\alpha_2\alpha_0 - \beta) \neq 0$. So in the nonsingular case we get that the operators $L,M$ are operators of rank 2. If $\alpha_0=0$, then the spectral curve has the form $$w^2 = (z^2 + \beta)^2 \Leftrightarrow (w - z^2 - \beta)(w + z^2 + \beta)=0$$ We see that if $\alpha_0=0$, then the spectral curve is reducible. Note that $M\neq L^2 + \beta$ and $M\neq-L^2-\beta$ but $(M - L^2 - \beta)(M + L^2 + \beta) = 0$ and we have operators of vector rank (2,2).\
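The recurrence scheme $(6)$–$(13)$ underlying Theorem 1 can be iterated directly with a computer algebra system. The following sympy sketch (the helper names `step`, `lvl1`, etc. are ours; the value of the constant $C_1^2$ is obtained by setting the linear coefficient of $b_2^3$, cf. (20), to zero) verifies the termination conditions $a_i^3 = b_2^3 = b_3^3 = 0$ of Theorem 1 for the data of Example 1 with $g = 2$:

```python
import sympy as sp

x, al0, al2, be, mu1, mu2, C12 = sp.symbols('x alpha0 alpha2 beta mu1 mu2 C12')

# data of Example 1 (n = 1): gamma = i*alpha2, so gamma^2 + alpha2^2 = 0
gam = sp.I*al2
r1 = al2*x**2 + al0        # r_1(x)
q1 = be*x**2 + al2*x       # q_1(x)
q2 = gam*x                 # q_2(x)

d = lambda f: sp.diff(f, x)
d2 = lambda f: sp.diff(f, x, 2)
integ = lambda f: sp.integrate(sp.expand(f), x)
half = sp.Rational(1, 2)

def step(a1, a2_, a3, a4, b2, b3, C1, C2):
    """One pass of (6)-(13): level-m data -> level-(m+1) data."""
    # (6), (7): diagonal entries b_1, b_4 at the current level
    b1 = -half*integ(a2_*q2 + a3*q2 - d(a1)*r1 - a1*d(r1) + d2(a1)) + C1
    b4 =  half*integ(a2_*q2 + a3*q2 + d(a4)*r1 + a4*d(r1) - d2(a4)) + C2
    # (8), (9): off-diagonal entries of A at the next level
    a3n = half*(-a1*q2 + a4*q2 - d(a3)*r1 + 2*d(b3) - a3*d(r1) + d2(a3))
    a2n = half*( a1*q2 - a4*q2 + d(a2_)*r1 - 2*d(b2) + a2_*d(r1) - d2(a2_))
    # (10), (11): diagonal entries of A at the next level (C_3 = C_4 = 0)
    a1n = -half*integ(b2*q2 + b3*q2 - 2*d(a1)*q1 + 2*q2*d(a3) + d(b1)*r1
                      - a1*d(q1) + a3*d(q2) + d2(b1))
    a4n = -half*integ(b2*q2 + b3*q2 + 2*d(a2_)*q2 + 2*d(a4)*q1 - d(b4)*r1
                      + a4*d(q1) + a2_*d(q2) - d2(b4))
    # (12), (13): off-diagonal entries of B at the next level
    b3n = half*(-b1*q2 + b4*q2 - 2*d(a1)*q2 - 2*d(a3)*q1 + d(b3)*r1
                - a3*d(q1) - a1*d(q2) + d2(b3)) - d(a3n)
    b2n = half*( b1*q2 - b4*q2 + 2*d(a2_)*q1 - 2*d(a4)*q2 - d(b2)*r1
                 + a2_*d(q1) - a4*d(q2) - d2(b2)) - d(a2n)
    return [sp.expand(t) for t in (a1n, a2n, a3n, a4n, b2n, b3n)]

# level 0: A_0 = 0, B_0 = diag(mu1, mu2), i.e. C_1^0 = mu1, C_2^0 = mu2
lvl1 = step(0, 0, 0, 0, 0, 0, mu1, mu2)
assert sp.simplify(lvl1[4] - (mu1 - mu2)*q2/2) == 0   # b_2^1 = (mu1 - mu2)/2 q_2

lvl2 = step(*lvl1, 0, 0)       # odd-level constant C_1^1 = 0
lvl3 = step(*lvl2, C12, 0)     # even-level constant C_1^2 kept symbolic

# killing the linear term of b_2^3 fixes C_1^2; Theorem 1 then terminates at g = 2
C12_val = -(mu1 - mu2)*(al0*al2 - 2*be)/2
assert all(sp.simplify(t.subs(C12, C12_val)) == 0 for t in lvl3)
print("a_i^3 = b_2^3 = b_3^3 = 0: L of Example 1 commutes with an M of order 4")
```

The same loop, run to higher levels with the constants $C_1^{2k}$ left symbolic, reproduces the scheme used in the proof of Theorem 2.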
\
**Example 2.** If $n=1$ and $\mu_1=1, \mu_2=2$, then the operator $$L = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\partial_x^2 +
\begin{pmatrix}
\alpha_2x^2 + \alpha_0 & 0 \\
0 & -\alpha_2x^2 - \alpha_0
\end{pmatrix}
\partial_x +
\begin{pmatrix}
\beta x^2 + \alpha_2x & i\alpha_2 x \\
i\alpha_2 x & -\beta x^2 - \alpha_2x
\end{pmatrix}$$ commutes with operator $M = B_0L^2 + A_1\partial_xL + B_1L + A_2\partial_x + B_2$. Direct calculations show that $$\begin{gathered}
M = \begin{pmatrix}
1 & 0 \\
0 & 2
\end{pmatrix}
\partial_x^4 +
\begin{pmatrix}
2(\alpha_2x^2 + \alpha_0) & 0 \\
0 & 4(\alpha_2x^2 + \alpha_0)
\end{pmatrix}
\partial_x^3 +\\
+\begin{pmatrix}
\alpha_2^2x^4 + 2(\alpha_0\alpha_2 + \beta)x^2 + 6\alpha_2x + \alpha_0^2 & -\dfrac{i\alpha_2x}{2} \\
-\dfrac{i\alpha_2x}{2} & 2\alpha_2^2x^4 + 4(\alpha_0\alpha_2 + \beta)x^2 + 12\alpha_2x + 2\alpha_0^2
\end{pmatrix}
\partial_x^2 +\\
+\begin{pmatrix}
m_1 & m_3 \\
m_2 & m_4
\end{pmatrix}
\partial_x +
\begin{pmatrix}
h_1 & h_3 \\
h_2 & h_4
\end{pmatrix} + C_1L +
\begin{pmatrix}
C_0 & 0 \\
0 & C_0
\end{pmatrix},\\
\\
m_1 = 2\alpha_2\beta x^4 + 4\alpha_2^2x^3 + 2 \alpha_0\beta x^2 + 4(\alpha_0\alpha_2 + \beta)x +4\alpha_2,\\
m_2 = -\dfrac{i\alpha_2^2 x^3}{2} - \dfrac{i\alpha_0\alpha_2x}{2} - \frac{7i\alpha_2}{2},\\
m_3 = -\dfrac{i\alpha_2^2 x^3}{2} - \dfrac{i\alpha_0\alpha_2x}{2} + \frac{5i\alpha_2}{2},
\end{gathered}$$ $$\begin{gathered}
m_4 = 4\alpha_2\beta x^4 + 8\alpha_2^2x^3 + 4\alpha_0\beta x^2 + 8(\alpha_0\alpha_2 + \beta)x + 8\alpha_2,\\
h_1 = \beta^2x^4 + 4\alpha_2\beta x^3 + \dfrac{3\alpha_2^2}{4}x^2 + 2\alpha_0\beta x + \beta + \frac{3\alpha_2\alpha_0}{2},\\
h_2 = -\dfrac{i\alpha_2\beta x^3}{2}- \dfrac{9i\alpha_2^2x^2}{4} - \frac{7i\alpha_2\alpha_0}{4},
\end{gathered}$$ $$\begin{gathered}
h_3 = -\dfrac{i\alpha_2\beta x^3}{2} + \dfrac{3i\alpha_2^2x^2}{4} + \frac{5i\alpha_2\alpha_0}{4}, \\
h_4 = 2\beta^2x^4 + 8\alpha_2\beta x^3 + \dfrac{9\alpha_2^2}{4}x^2 + 4\alpha_0\beta x + 4\beta + 2\alpha_2\alpha_0,
\end{gathered}$$ where $C_1$ and $C_0$ are arbitrary constants. If we take $C_1=0$ and $C_0 = 0$, then the spectral curve of operators $L,M$ has the form $$16w^2 - 8w(\alpha_2\alpha_0 - 2\beta + 6z^2) + 32z^4 + 16 (\alpha_2\alpha_0 - 2\beta)z^2 + \alpha_2^2\alpha_0^2 = 0$$ We see that the spectral curve is nonsingular for almost all $\alpha_2, \alpha_0, \beta$ and $L, M$ are operators of rank 2. If $\alpha_0=0$, then the spectral curve has the form $$(w-2z^2)(w-z^2 + \beta)=0$$ and $L, M$ are operators of vector rank $(2,2)$.\
\
**Theorem 3.** *Let $\wp(x)$ be the Weierstrass elliptic function satisfying the equation $(\wp'(x))^2 = 4\wp^3(x) + g_2\wp(x) $. The operator* $$\begin{gathered}
L = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\partial_x^2 +
\begin{pmatrix}
0 & \alpha\wp(x) \\
\alpha\wp(x) & 0
\end{pmatrix} ,\\
\alpha^2 = 64n^4 - 4n^2, \quad n\in \mathbb{N}
\end{gathered}$$ *commutes with a differential operator $M$ of the form (14) with $g=2n$. The order of the operator $M$ equals $4n$.*
**Proofs of Theorem 2 and Theorem 3**
We prove Theorem 2 and Theorem 3 using Theorem 1. Let us suppose that $C^k_2=C^k_3=C^k_4=0$ and $C_1^{2k+1}=0$ for all $k$. We know that $$\begin{gathered}
A_0 \equiv 0, \quad
B_0=\begin{pmatrix}
\mu_1 & 0 \\
0 & \mu_2
\end{pmatrix}.
\end{gathered}$$ Direct calculations using $(6) - (13)$ show that $$\begin{gathered}
a_1^1=a_2^1=a_3^1=a_4^1=0, \\
b^1_2 = \dfrac{\mu_1 - \mu_2}{2}q_2 , \quad b^1_3 = -\dfrac{\mu_1 - \mu_2}{2}q_2 =-b_2^1.
\end{gathered}$$ Then $$\begin{gathered}
a_1^2=a_4^2=0, \quad a^2_2=a^2_3 = -\dfrac{\mu_1 - \mu_2}{2}q_2' = -(b^1_2)' , \\
b^2_2 = b^2_3 = \dfrac{r_1a_2^{2} - (a_2^{2})'}{2}\\
a_1^3=a_2^3=a_3^3=a_4^3=0,\\
b^3_2 = -b^3_3
\end{gathered}$$ **Lemma 1.** If $k=2m+1$, then\
$$a_1^k=a_2^k=a^k_3=a^k_4=0, \quad b_2^k =-b_3^k$$ *If $k=2m$, then* $$a_1^k=a_4^k=0, \quad a^k_2=a^k_3=-(b_2^{k-1})', \quad b_3^k =b_2^k= \dfrac{r_1a_2^{k} - (a_2^{k})'}{2}$$ **Proof**\
We see that relations (18) and (19) are true when $k=1$ and $k=2$. Let us suppose that (18) and (19) hold for some $k$.\
If $k=2m+1$, then using $(6) - (13)$ we get $$\begin{gathered}
a_1^{2m+2}=a_4^{2m+2}=0, \quad a^{2m+2}_2=a^{2m+2}_3=-(b_2^{2m+1})', \quad b_3^{2m+2} =b_2^{2m+2}= \dfrac{r_1a_2^{2m+2} - (a_2^{2m+2})'}{2}.
\end{gathered}$$ If $k=2m$, then again using $(6) - (13)$ we have $$a_1^{2m+1}=a_2^{2m+1}=a^{2m+1}_3=a^{2m+1}_4=0, \quad b_3^{2m+1} = -b_2^{2m+1}.$$ **The Lemma is proved.**\
**Proof of Theorem 2**
From (16) and (17) we get $$\begin{gathered}
a_1^1=a_2^1=a_3^1=a_4^1=0, \\
b^1_2 = \dfrac{\mu_1 - \mu_2}{2}q_2 = \dfrac{\mu_1 - \mu_2}{2}\gamma x, \quad b^1_3 = -\dfrac{\mu_1 - \mu_2}{2}q_2 = -\dfrac{\mu_1 - \mu_2}{2}\gamma x.
\end{gathered}$$ Then $$\begin{gathered}
a_1^2=a_4^2=0, \quad a^2_2=a^2_3 = -\dfrac{\mu_1 - \mu_2}{2}\gamma = -(b^1_2)' , \\
b^2_2 = b^2_3 = -\dfrac{\mu_1 - \mu_2}{4}\alpha_0\gamma - \dfrac{\mu_1 - \mu_2}{4}\alpha_2\gamma x^2,\\
a_1^3=a_2^3=a_3^3=a_4^3=0,\\
b^3_2 = -b^3_3 =\dfrac{2C^2_1 + (\mu_1 - \mu_2)(\alpha_0\alpha_2 - 2\beta)}{4}\gamma x + \dfrac{\mu_1 - \mu_2}{4}\gamma(\gamma^2 + \alpha_2^2) x^3
\end{gathered}$$ We want to prove that $L$ commutes with a differential operator $(14)$, where $g=2n$. From Theorem 1 and Lemma 1 we know that we must prove that there exist constants $C_1^{2k}$ such that $b_2^{2n+1}\equiv 0$. Let us note that the recurrence relations $(6) - (13)$ are linear in $a_i^{k+1}$ and $b_i^{k+1}$. Assume that $b_2^{2m-1} = x^{2m-1}$. Then we have $$\begin{gathered}
b_2^{2m} = (2m-1)(m-1)x^{2m-3} - \dfrac{\alpha_0 (2m-1)x^{2m-2}}{2} - \dfrac{\alpha_2(2m-1)x^{2m}}{2} = -b_3^{2m},\\
a_2^{2m} = -(b_2^{2m-1})' = -(2m-1)x^{2m-2} = a_3^{2m}.
\end{gathered}$$ Again using $(6) - (13)$ we obtain $$\begin{gathered}
a^{2m+1}_1=a^{2m+1}_2=a^{2m+1}_3=a^{2m+1}_4=0,\\
b_2^{2m+1} = \dfrac{(2m-1)(\alpha_2^2m^2 + \gamma^2)}{2m}x^{2m+1} + \\
+\dfrac{(\alpha_2\alpha_0 - 2\beta)(2m-1)^2}{2}x^{2m-1} + \dfrac{\alpha_0^2(2m-2)(2m-1)}{4}x^{2m-3} - \\
-\dfrac{(2m-1) (2m - 2) (2m-3) (2m-4)}{4}x^{2m-5} + \dfrac{C^{2m}_1\beta}{2}x.
\end{gathered}$$ From (20) we see that $b_2^3 = K_1^3x + K_3^3x^3$, where $K_1^3 = K_1^3(C_1^2)$ is constant and depends on $C_1^2$, $K_3^3 = \dfrac{\mu_1 - \mu_2}{4}\gamma(\gamma^2 + \alpha_2^2)$. Let us suppose that for some $m$ $$b^{2m-1}_2 = K_1^{2m-1}x + K_3^{2m-1}x^3 + ... + K_{2m-1}^{2m-1}x^{2m-1},$$ where $$K^{2m-1}_{2m-1} = -\dfrac{(\mu_1 - \mu_2)\prod\limits_{j=1}^{m-1}\left((2j-1)(\alpha_2j^2 + \gamma^2)\right)}{2^{m}(m-1)!},$$ $K_{2m-3}^{2m-1}$ is constant and depends on $C_1^2$, $K_{2m-5}^{2m-1}$ is constant and depends on $C_1^2, C_1^4$, $K_{2m-2j-1}^{2m-1}$ depends on $C_1^2,...,C_1^{2j}$ and $K_1^{2m-1}$ depends on $C_1^{2}, C_1^4,..., C_1^{2m-2}$. We see that it is true when $m=2$. Using (21) we get $$\begin{gathered}
b_2^{2m+1} = K_1^{2m+1}x + K_3^{2m+1}x^3 + ...+ K_{2m-1}^{2m+1}x^{2m-1} + K_{2m+1}^{2m+1}x^{2m+1},\\
K_1^{2m+1} = \dfrac{C^{2m}_1\beta}{2} - 30K_5^{2m-1} + \dfrac{3\alpha_0^2}{2}K_3^{2m-1} + \dfrac{\alpha_2\alpha_0 - 2\beta}{2}K_1^{2m-1}\\
K_3^{2m+1} = \dfrac{\alpha_2^2 + \gamma^2}{2}K_1^{2m-1} + \dfrac{9(\alpha_2\alpha_0 - 2\beta)}{2}K_3^{2m-1} + 5\alpha_0^2 K_5^{2m-1} - 210K_7^{2m-1}\\
...
\end{gathered}$$ $$\begin{gathered}
K_{2m-1}^{2m+1} = \dfrac{(2m-3)(\alpha_2^2(m-1)^2 + \gamma^2)}{2m-2}K_{2m-3}^{2m-1} + \dfrac{(\alpha_2\alpha_0 - 2\beta)(2m-1)^2}{2}K_{2m-1}^{2m-1}\\
K_{2m+1}^{2m+1} = \dfrac{(2m-1)(\alpha_2^2m^2 + \gamma^2)}{2m}K_{2m-1}^{2m-1}.
\end{gathered}$$ It is easy to see that $K_{2m-1}^{2m+1}$ depends on the constant of integration $C_1^2$, because $K_{2m-3}^{2m-1}$ depends on $C_1^2$, and $K_{2m-3}^{2m+1}$ depends on $C_1^2, C_1^4$. The last coefficient $K_1^{2m+1}$ depends on the constants of integration $C_1^2,..., C_1^{2m}$.\
Now let us consider $$b_2^{2n+1} = K_1^{2n+1}x + K_3^{2n+1}x^3 + ...+ K_{2n-1}^{2n+1}x^{2n-1} + K_{2n+1}^{2n+1}x^{2n+1}.$$ We know that $\gamma^2 + n^2\alpha_2^2 = 0$ and hence $K_{2n+1}^{2n+1} = 0$. To prove Theorem 2 we must find constants $C_1^2,...,C_1^{2n}$ such that $K_1^{2n+1} = K_3^{2n+1}=...=K_{2n-1}^{2n+1}=0$. This is always possible because $K_{2n-1}^{2n+1}$ depends on $C_1^2$, $K_{2n-3}^{2n+1}$ depends on $C_1^2, C_1^4$, etc. The last coefficient $K_1^{2n+1}$ depends on the constants of integration $C_1^2,..., C_1^{2n}$.
**Theorem 2 is proved.**
**Proof of Theorem 3**
The proof of Theorem 3 parallels the proof of Theorem 2. Let us prove that the principal parts of the functions $a_i^{2n+1}$ and $b_i^{2n+1}$ equal zero. We see from $(6) - (13)$ that $a_i^j$ and $b_i^j$ are elliptic functions for any $i,j$. Hence if the principal parts of $a_i^{2n+1}$ and $b_i^{2n+1}$ equal zero, then $a_i^{2n+1}$ and $b_i^{2n+1}$ have no poles and hence are constants. In our case these constants are zero.
From (16) and (17) we get $$\begin{gathered}
a_1^1=a_2^1=a_3^1=a_4^1=0, \\
b^1_2 = \dfrac{\mu_1 - \mu_2}{2}q_2 = \dfrac{\mu_1 - \mu_2}{2x^2}\alpha + O(x^2), \quad b^1_3 = -\dfrac{\mu_1 - \mu_2}{2x^2}\alpha + O(x^2).
\end{gathered}$$ Then $$\begin{gathered}
a_1^2=a_4^2=0, \quad a^2_2=a^2_3 = \dfrac{\mu_1 - \mu_2}{x^3}\alpha + O(x) = -(b^1_2)' , \\
b^2_2 = b^2_3 = \dfrac{3(\mu_1 - \mu_2)}{2x^4}\alpha + \dfrac{(\mu_1 - \mu_2)g_2\alpha}{40} + O(x^4),\\
a_1^3=a_2^3=a_3^3=a_4^3=0,\\
b^3_2 = -b^3_3 =\dfrac{(\mu_1 - \mu_2)\alpha(\alpha^2 - 60)}{4x^6} + \dfrac{\alpha(80C_1^2 + 6(\mu_1 - \mu_2)g_2\alpha^2)}{160x^2} + O(x^2)
\end{gathered}$$ We mentioned before that the recurrence relations $(6) - (13)$ are linear in $a_i^{k+1}$ and $b_i^{k+1}$. Assume that $b_2^{2m-1} = -b_3^{2m-1}= \dfrac{1}{x^{4m - 2}}$ and $a_1^{2m-1}=a_2^{2m-1}=a_3^{2m-1}=a_4^{2m-1}=0$. Then we have $$\begin{gathered}
b_2^{2m}=b_3^{2m} = \dfrac{8m^2 - 6m + 1}{x^{4m}},\\
a_2^{2m} = -(b_2^{2m-1})' = \dfrac{4m-2}{x^{4m-1}}.
\end{gathered}$$ Again using $(6) - (13)$ we obtain $$\begin{gathered}
a^{2m+1}_1=a^{2m+1}_2=a^{2m+1}_3=a^{2m+1}_4=0,\\
b_2^{2m+1} = \dfrac{(2m-1)(\alpha^2 + 4m^2 - 64m^4)}{2mx^{4m+2}} + \dfrac{const}{x^{4m-2}} + \dfrac{const}{x^{4m-6}}+...+\dfrac{C_1^{2m}\alpha}{2x^2} + O(x^2)
\end{gathered}$$ From (22) we see that $b_2^3 = \dfrac{K_6^3}{x^6} + \dfrac{K_2^3}{x^2} + O(x^2)$, where $K_2^3 = K_2^3(C_1^2)$ is a constant depending on $C_1^2$, and $K_6^3 = \dfrac{(\mu_1 - \mu_2)\alpha(\alpha^2 - 60)}{4}$. Let us suppose that for some $m$ $$b^{2m-1}_2 = \dfrac{K_{4m-2}^{2m-1}}{x^{4m-2}} + \dfrac{K_{4m-6}^{2m-1}}{x^{4m-6}} + ... + \dfrac{K_{2}^{2m-1}}{x^2} + O(x^2),$$ where $$K^{2m-1}_{4m-2} = \dfrac{(\mu_1 - \mu_2)\alpha\prod\limits_{j=1}^{m-1}\left((2j-1)(\alpha^2 + 4j^2 - 64j^4)\right)}{2^{m}(m-1)!},$$ $K_{4m-6}^{2m-1}$ is a constant depending on $C_1^2$, $K_{4m-10}^{2m-1}$ is a constant depending on $C_1^2, C_1^4$, $K^{2m-1}_{4m-4j-2}$ depends on $C_1^2,...,C_1^{2j}$, and $K_2^{2m-1}$ depends on $C_1^{2}, C_1^4,...,C_1^{2m-2}$. We see that this is true when $m=2$. Using (23) we get $$b_2^{2m+1} = \dfrac{K_{4m+2}^{2m+1}}{x^{4m+2}} + \dfrac{K_{4m-2}^{2m+1}}{x^{4m-2}} + ... + \dfrac{K_{2}^{2m+1}}{x^2} + O(x^2).$$ It is easy to see that $K_{4m-2}^{2m+1}$ depends on the constant of integration $C_1^2$, and $K_{4m-6}^{2m+1}$ depends on $C_1^2, C_1^4$. The last coefficient $K_2^{2m+1}$ depends on the constants of integration $C_1^2,..., C_1^{2m}$.\
Now let us consider $$b_2^{2n+1} = \dfrac{K_{4n+2}^{2n+1}}{x^{4n+2}} + \dfrac{K_{4n-2}^{2n+1}}{x^{4n-2}} + ... + \dfrac{K_{2}^{2n+1}}{x^2} + O(x^2).$$ We know that $\alpha^2 + 4n^2 - 64n^4 = 0$ and hence $K_{4n+2}^{2n+1} = 0$. To prove Theorem 3 we must find constants $C_1^2,...,C_1^{2n}$ such that $K_2^{2n+1} = K_6^{2n+1}=...=K_{4n-2}^{2n+1}=0$. This is always possible because $K_{4n-2}^{2n+1}$ depends on $C_1^2$, $K_{4n-6}^{2n+1}$ depends on $C_1^2, C_1^4$, etc. The last coefficient $K_2^{2n+1}$ depends on the constants of integration $C_1^2,..., C_1^{2n}$.
**Theorem 3 is proved.**
[10]{} Wallenberg, G.: Uber die Vertauschbarkeit homogener linearer Differentialausdrucke. Arch. Math. Phys. 4 (1903), 252–268. Schur, J.: Uber vertauschbare lineare Differentialausdrucke. Sitzungsber. der Berliner Math. Gesell. 4 (1905), 2–8. Burchnall J.-L., Chaundy T.W. Commutative ordinary differential operators. Proc. London Math. Soc. 21 (1923), 420-440; Proc. Royal Soc. London (A) 118 (1928), 557-583. Grinevich, P. G.: Vector rank of commuting matrix differential operators. Proof of S. P. Novikov’s criterion, Math USSR IZV, 1987, 28 (3), 445–465. Krichever, I. M.: Integration of nonlinear equations by the methods of algebraic geometry, Functional Analysis and Its Applications, 11: 1 (1977), 12–26. Dixmier J.: Sur les algebres de Weyl. Bulletin de la Societe Mathematique de France 96, 209–242 (1968) Krichever, I.M.: Commutative rings of ordinary linear differential operators, Functional Functional Analysis and Its Applications, 12:3 (1978), 175–185. Krichever, I. M., Novikov, S.P.: Holomorphic bundles over algebraic curves and non-linear equations, Russian Mathematical Surveys, 1980, 35:6, 53–79. Mokhov, O. I.: Commuting ordinary differential operators of rank 3 corresponding to an elliptic curve, Russian Math. Surveys, 37:4 (1982), 129–130. Mokhov, O. I.: Commuting differential operators of rank 3, and nonlinear differential equations, Math. USSR, Izvestiya,35:3 (1990), 629–655. Mironov, A. E.: Self-adjoint commuting differential operators and commutative subalgebras of the Weyl algebra, Invent. math. (2014) 197:417-431. Mironov, A. E.: Periodic and rapid decay rank two self-adjoint commuting differential operators, Amer. Math. Soc. Transl. Ser. 2, V. 234, 2014, P. 309–322. Oganesyan, V.: Commuting differential operators of rank 2 with polynomial coefficients, Functional Analysis and Its Applications, 2016, 50:1, 54–61. 
Oganesyan, V.: Commuting differential operators of rank 2 and arbitrary genus g with polynomial coefficients, International Mathematics Research Notices (2016), doi:10.1093/imrn/rnw085. Davletshina V.N.: Commuting differential operators of rank 2 with trigonometric coeffcients, Siberian Mathematical Journal, 56, 405-410 (2015) Mironov, A. E., Zheglov, A. B.: Commuting Ordinary Differential Operators with Polynomial Coefficients and Automorphisms of the First Weyl Algebra, International Mathematics Research Notices 2015 : rnv218. Mokhov, O. I.: Commuting ordinary differential operators of arbitrary genus and arbitrary rank with polynomial coefficients, American Mathematical Society Translations, Volume 234 (2014), 323-336. Mokhov, O. I.: On Commutative Subalgebras of the Weyl Algebra Related to Commuting Operators of Arbitrary Rank and Genus, Mathematical Notes, vol. 94:2 (2013), 298-300. Smirnov, A. O.: Finite-gap elliptic solutions of the KdV equation. Acta Appl. Math., 36 (1994), 125-166. Smirnov, A. O.: Real finite-gap regular solutions of the Kaup-Boussinesq equation, Theoret. and Math. Phys., 66 (1986), 19-31. Dubrovin, B. A, Natanzon, S. M.: Real theta-function solutions of the Kadomtsev–Petviashvili equation, Mathematics of the USSR-Izvestiya, 1989, 32:2, 269–288. Dubrovin, B. A, Natanzon, S. M.: Real two-zone solutions of the sine-Gordon equation, Functional Analysis and Its Applications, 1982, 16:1, 21–33. Dubrovin, B. A.: The kadomcev-petviasvili equation and the relations between the periods of holomorphic differentials on riemann surfaces, Mathematics of the USSR-Izvestiya, 1982, 19:2, 285–296. Shiota, T., Characterization of Jacobian varieties in terms of soliton equations. Invent. Math. 83 (1986), no. 2, 333–382. A. Belov-Kanel, M. Kontsevich, The Jacobian conjecture is stably equivalent to the Dixmier conjecture, Moscow Mathematical Journal 7 (2): 209–218. Y. Tsuchimoto, Endomorphisms of Weyl algebra and p-curvatures, Osaka J. Math. 42: 435–452. 
Gesztesy, F., Weikard, R.: A characterization of all elliptic algebro-geometric solutions of the AKNS hierarchy, Acta Mathematica, September 1998, Volume 181, Issue 1, pp 63-108. Weikard, R.: On commuting matrix differential operators, New York J. Math., vol. 8, 2002, pp. 9-30. Dubrovin, B. A.: Completely integrable Hamiltonian systems associated with matrix operators and Abelian varieties, Functional Analysis and Its Applications, 1977, 11:4, 265–277. Krichever, I.M.: Algebraic curves and commuting matricial differential operators, Functional Analysis and Its Applications, 1976, 10:2, 144–146.
Department of Geometry and Topology, Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, 119991 Russia.\
\
E-mail address: [email protected]
[^1]: This research was supported by the Russian Science Foundation under grant 16-11-10260 and was done at the Faculty of Mechanics and Mathematics, Department of Geometry and Topology of Lomonosov Moscow State University
---
author:
- Abdujappar Rusul
- Ali Esamdin
- Alim Kerim
- Dilnur Abdurixit
- Hongguang Wang
- 'Xiao-Ping Zheng'
title: 'Recovering the Pulse Profiles and Polarization Position Angles of Some Pulsars from Interstellar Scattering $^*$ '
---
Introduction {#sect:intro}
============
Interstellar medium (ISM) scattering broadens the intrinsic lower-frequency pulse profiles of a pulsar and causes flattening and distortion of the PPA curves ([@Li03]; [@Karastergiou09]) to an extent that depends on the observing frequency and on the distribution scale of the ISM located between the pulsar and the observer. The scattering of pulse profiles has been studied extensively since the first observation of pulsar scintillation ([@scheuer68]). Since then, pulsar researchers have developed several ISM scattering models: the thin screen model ([@rankin70]; [@komersaroff72]), and the thick screen and extended screen models ([@wiliamson72]), based on the observable temporal broadening of pulse profiles and the assumed scale of a scattering screen in the ISM. The scattering effects of pulse broadening and PPA-curve flattening have been studied frequently by many authors ([@komersaroff72]; [@Rickett77]; [@Li03], etc.), and the distortion of PPA curves with orthogonal jumps by [@Karastergiou09]. They used the higher-frequency mean pulse profile without obvious scattering as the intrinsic pulse profile and convolved it with scattering models to obtain pulse shapes and PPA curves similar to the observed scattered lower-frequency ones. Only a few authors have performed deconvolution to recover the total intensity pulse profiles at lower frequencies from scattering ([@Weisberg90]; [@Kuzmin93]; [@Bhat03]). This paper revisits the method of [@Kuzmin93] to restore total intensity pulse profiles of pulsars and extends it to the restoration of linear intensity profiles and PPA curves for another five pulsars. The scattering broadening time scales of those pulsars have been obtained from best fits for three different scattering models. The scattering time scale is a key parameter in all scattering models, and it depends on the observing frequency and the dispersion measure (DM) ([@Ramachandran97]).
Descattering compensation for the first Stokes parameter $I(t)$ of the scattered pulse signal was performed by [@Kuzmin93]; their method worked well for recovering the original low-frequency pulse profiles of the Crab pulsar, but when discussing the restoration of the remaining Stokes parameters, the question arises whether all the Stokes parameters scatter in the same way as $I(t)$. In the early works of [@komersaroff72] and [@Rickett77], it was assumed that the scattering effect may be approximated by convolving each of the time-dependent Stokes parameters of the unscattered pulse with a scattering model under certain assumptions. In the research note of [@Li03], based on the work of [@Macquart00], it was simply assumed that the scattering process acts similarly on all Stokes parameters. Using that assumption in their convolution method, they explained well the scattering effect on pulse broadening and on PPA-curve flattening, but their approach does not work properly when applied in the deconvolution method of [@Kuzmin93] to recover the shape of the PPA curve. This paper uses the same method as [@Kuzmin93] to recover the total intensity profile $I(t)$; to recover the linear intensity and the PPA curve, it is assumed that the complex-number form of the Stokes parameters $Q, U$ scatters in the same way as $I(t)$: the Stokes parameters $Q$ and $U$ are treated as the real and imaginary components of a complex number, respectively ([@Jaap96]). Such a treatment is also applied in PSRCHIVE, in the section on the Complex-valued Rotating Vector Model. This assumption and method are applied to recover the pulse intensity profiles and PPA curves of some pulsars in Section 3; the results are discussed in Section 4 and the conclusions are presented in Section 5.
Scattering models and method {#sect:Obs}
============================
This paper tests three different kinds of scattering models to examine the scattering phenomena in pulse profiles and in PPA curves: the thin screen model (Eq. 1), in which the signal is assumed to be scattered approximately mid-way between source and observer by irregularities in the ISM; the thick screen model (Eq. 2), in which the ISM irregularities are distributed on a larger scale than in the thin screen model, and the screen can be near the observer or near the source; and the extended screen model (Eq. 3), in which the signal is scattered along the whole path of its propagation, so that the irregularities spread over the whole space between the source and the observer ([@wiliamson72]). The functions of these models are as follows:
$$g_{thin}=\exp(-t/\tau_{s}) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(t \geq 0)$$
$$g_{thick}=(\frac{\pi\tau_{s}}{4t^3})^{1/2}\exp(\frac{-\pi^2\tau_{s}}{16t}) ~~~~~~~~~~~~~~~(t>0)$$
$$g_{extend}=(\frac{\pi^5\tau_{s}^3}{8t^5})^{1/2}\exp(\frac{-\pi^2\tau_{s}}{4t}) ~~~~~~~~~~~(t>0)$$
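As an illustration only (not code from the paper), the three screen responses above can be written directly as NumPy functions, with `tau` standing for the scattering time scale $\tau_s$ and `t` an array of time samples:

```python
import numpy as np

def g_thin(t, tau):
    """Thin screen, Eq. (1): g(t) = exp(-t/tau) for t >= 0, else 0."""
    out = np.zeros_like(t, dtype=float)
    m = t >= 0
    out[m] = np.exp(-t[m] / tau)
    return out

def g_thick(t, tau):
    """Thick screen, Eq. (2): g(t) = sqrt(pi tau / (4 t^3)) exp(-pi^2 tau / (16 t)), t > 0."""
    out = np.zeros_like(t, dtype=float)
    m = t > 0
    out[m] = np.sqrt(np.pi * tau / (4.0 * t[m]**3)) * np.exp(-np.pi**2 * tau / (16.0 * t[m]))
    return out

def g_extend(t, tau):
    """Extended screen, Eq. (3): g(t) = sqrt(pi^5 tau^3 / (8 t^5)) exp(-pi^2 tau / (4 t)), t > 0."""
    out = np.zeros_like(t, dtype=float)
    m = t > 0
    out[m] = np.sqrt(np.pi**5 * tau**3 / (8.0 * t[m]**5)) * np.exp(-np.pi**2 * tau / (4.0 * t[m]))
    return out
```

All three responses are causal, so they are set to zero for non-positive times before being sampled onto the pulse-phase grid.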
$\tau_{s}$ is the scattering broadening time scale, which can be determined through an empirical relation between wavelength ($\lambda$) and DM ([@Ramachandran97])
$$\tau_{s}= 4.5\times10^{-5}DM^{1.6}\times(1+3.1\times10^{-5}\times DM^{3})\times\lambda^{4.4}$$
This relation is used as a reference to set an upper limit in the best-fit process, except for PSR B1946+35.
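Equation (4) translates into a small helper (a hypothetical function, not the authors' code); following [@Ramachandran97], taking $\lambda$ in metres yields $\tau_s$ in milliseconds:

```python
import numpy as np

def tau_scatter_ms(dm, freq_ghz):
    """Empirical scattering time scale of Eq. (4).

    dm is in pc cm^-3, and the wavelength lambda = c/f is taken in metres,
    which yields tau_s in milliseconds.
    """
    lam = 0.299792458 / freq_ghz  # wavelength in metres
    return 4.5e-5 * dm**1.6 * (1.0 + 3.1e-5 * dm**3) * lam**4.4
```

With the DM and frequency values of Table 1 this approximately reproduces the $\tau_{em}$ column: for example, `tau_scatter_ms(235.8, 0.408)` gives about 29.6 ms for PSR B1831$-$03.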
This paper employs the compensation method of [@Kuzmin93] with the three different scattering models. That method works efficiently to recover the original shape of pulse profiles $x(t)$ from observed pulse profiles $y(t)$; here, the same method is used to recover the total intensity pulse profiles, and the recovery of the linear intensity and PPA curve is introduced at the end of this section. When $g(t)$ is taken to be the scattering model, or response function, the observed $y(t)$ is the convolution of $x(t)$ and $g(t)$, and the spectrum of the recovered (original) pulse can be written as ([@Kuzmin93]) $$X(f)=Y(f)/G(f) \\$$
$$Y(f)=\int y(t)\exp(-j2\pi ft)dt$$
$$G(f)=\int g(t)\exp(-j2\pi ft)dt$$
$Y(f)$ is the spectrum of the observed pulse, $G(f)$ is the frequency response of the scattering screen, and the descattered, restored pulse $x(t)$ is obtained by the inverse Fourier transformation: $$x(t)=\int X(f)\exp(j2\pi ft)df$$ By using the above equations, the total intensity pulse profiles can be recovered from the scattered pulse profiles $y(t)$. It is also possible to study the effect of scattering on pulse profiles and on PPA curves by reversing the descattering procedure above, as in Eq. (9) and Eq. (10). The author has repeated the work of [@Li03] by using Eq. (9) and Eq. (10) with the complex treatment of the Stokes parameters $Q, U$, and obtained the same results in explaining the pulse profile broadening and PPA curve flattening caused by ISM scattering. The following are modified functions of [@Kuzmin93] performing the convolution for repeating the work of [@Li03]. The spectrum of the scattered pulse is calculated by $$Y(f)=X(f)\,G(f)$$ and the scattered pulse $y(t)$ is written $$y(t)=\int Y(f)\exp(j2\pi ft)df$$
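Both directions amount to a multiplication or division of spectra. A minimal NumPy sketch (the function names are ours, not from the paper): the forward model applies Eqs. (9)-(10), and the descattering applies Eqs. (5)-(8); with noisy data, small values of $G(f)$ must be regularized before dividing, which the `eps` floor hints at:

```python
import numpy as np

def scatter(x, g):
    """Forward model, Eqs. (9)-(10): y = IFFT( X(f) G(f) ), a circular convolution."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(g)))

def descatter(y, g, eps=1e-8):
    """Deconvolution, Eqs. (5)-(8): x = IFFT( Y(f) / G(f) ).

    eps floors |G(f)| so that noise is not amplified without bound.
    """
    G = np.fft.fft(g)
    G = np.where(np.abs(G) < eps, eps, G)
    return np.real(np.fft.ifft(np.fft.fft(y) / G))
```

For noise-free data the round trip `descatter(scatter(x, g), g)` recovers the input to machine precision.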
The above equations (5$-$8) were used to recover the total intensity profiles of the Crab pulsar from ISM scattering ([@Kuzmin93]). Here they are used to recover the total intensity and linear intensity profiles and the PPA curves of a few other pulsars. In the process of restoration of the Stokes parameters $Q, U$, it is assumed that the observed scattered Stokes parameters $Q, U$ form a complex number $z(t)=Q(t)+iU(t)$; $p(t)$ is the descattered complex number recovered from $z(t)$. The spectrum of the observed $z(t)$ is $$Z(f)=\int z(t)\exp(-j2\pi ft)dt$$ The spectrum of the descattered $p(t)$ is $$P(f)=Z(f)/G(f)$$ so the descattered, recovered linear intensity and PPA can be obtained from $$p(t)=\int P(f)\exp(j2\pi ft)df$$ The complex treatment of the Stokes parameters $Q, U$ is more practical than treating them separately as scalars: when $Q, U$ are tested separately as scalars using the method for recovering $I(t)$, the results for descattering of the PPA curves in all three models are not as expected, and fail to produce smooth swing curves ([@Radhakrishnan69]), PPA jumps, smooth flat curves, or similarity with the PPA curves of the higher-frequency pulses; but when $Q, U$ are expressed as a vector over the complex plane and the method of [@Kuzmin93] is used, it produces better results, and the linear intensity and the PPA curve are recovered more satisfactorily.
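Under the paper's assumption that $z = Q + iU$ scatters the same way as $I(t)$, the whole restoration can be sketched in one hypothetical helper (not the authors' code; it assumes $G(f)$ has no near-zero values, which real data would require regularizing):

```python
import numpy as np

def descatter_stokes(I, Q, U, g):
    """Deconvolve I(t) and z(t) = Q(t) + iU(t) by the screen response g(t)
    (Eqs. 5-8 and 11-13); returns recovered I, linear intensity L, and PPA."""
    G = np.fft.fft(g)
    I_rec = np.real(np.fft.ifft(np.fft.fft(I) / G))
    z_rec = np.fft.ifft(np.fft.fft(Q + 1j * U) / G)   # p(t), Eq. (13)
    Q_rec, U_rec = np.real(z_rec), np.imag(z_rec)
    L_rec = np.hypot(Q_rec, U_rec)                    # recovered linear intensity
    ppa = 0.5 * np.degrees(np.arctan2(U_rec, Q_rec))  # recovered PPA, in degrees
    return I_rec, L_rec, ppa
```

Because the kernel $g(t)$ is real, deconvolving $Q + iU$ as one complex series is equivalent to deconvolving $Q$ and $U$ with the same $G(f)$, while keeping their relative phase (and hence the PPA) consistent.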
Simulation and practical application {#sect:data}
====================================
Simulation of scattering and descattering of pulse profiles and PPA curves
--------------------------------------------------------------------------
This paper presents a simple simulation of scattering and descattering of intensity profiles and PPA curves with the three different models; as an example, the thick screen model is used. The simulated pulse profiles have a Gaussian shape, with PPA ($\psi$) curves following the rotating-vector model (RVM) ([@Radhakrishnan69]). Coherent, 100$\%$ polarized radiation is assumed, in which the degree of linear polarization in total intensity is 0.8, so that the Stokes parameters $Q, U$ can be expressed in terms of the degree of linear polarization and the PPA by $Q=0.8I\cos(2\psi)$, $U=0.8I\sin(2\psi)$ ([@van57]). The simulated plots from (a) to (l) are shown in Figure 1; the left-panel plots (a), (b), (g), (h) are the original pulse profiles and PPA curves, the middle-panel plots (c), (d), (i), (j) are the scattered pulse profiles and PPA curves, and the right-panel plots (e), (f), (k), (l) are the descattered, restored pulse profiles and PPA curves; the solid lines and dotted lines in the normalized intensity profiles (a), (c), (e), (g), (i), (k) are total and linear intensity profiles respectively; the dotted lines in the plots (b), (d), (f), (h), (j), (l) are PPA curves. It can be seen from the simulation that the scattering does cause pulse broadening and PPA-curve flattening, and that if the scattering screen $g(t)$ is definite, it is in principle possible to recover the actual features of the emitted signal.
\[Fig1\]
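The simulation described above can be reproduced in outline as follows (a sketch with our own parameter choices, not the paper's actual figure values): a Gaussian profile, an RVM position-angle swing, Stokes $Q, U$ built from the 0.8 linear polarization fraction, and scattering by a thick-screen response.

```python
import numpy as np

def rvm_ppa(phi, alpha, zeta, phi0=0.0):
    """Rotating-vector model PPA (Radhakrishnan & Cooke 1969), in radians."""
    num = np.sin(alpha) * np.sin(phi - phi0)
    den = np.sin(zeta) * np.cos(alpha) - np.cos(zeta) * np.sin(alpha) * np.cos(phi - phi0)
    return np.arctan2(num, den)

n = 1024
phase = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
phi0 = np.pi / 2.0
I = np.exp(-0.5 * ((phase - phi0) / 0.1)**2)             # Gaussian pulse profile
psi = rvm_ppa(phase, np.deg2rad(45.0), np.deg2rad(50.0), phi0)
Q = 0.8 * I * np.cos(2.0 * psi)                           # Q = 0.8 I cos(2 psi)
U = 0.8 * I * np.sin(2.0 * psi)                           # U = 0.8 I sin(2 psi)

# Thick-screen response (Eq. 2), sampled on the same grid (tau in samples).
t = np.arange(n, dtype=float)
tau = 40.0
g = np.zeros(n)
g[1:] = np.sqrt(np.pi * tau / (4.0 * t[1:]**3)) * np.exp(-np.pi**2 * tau / (16.0 * t[1:]))

# Scatter each Stokes profile by circular convolution with g.
conv = lambda a: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(g)))
I_s, Q_s, U_s = conv(I), conv(Q), conv(U)
psi_s = 0.5 * np.arctan2(U_s, Q_s)                        # scattered PPA
```

Plotting `I_s` and `psi_s` against `I` and `psi` shows the qualitative behaviour of Figure 1: the scattered profile is broader and peaks later than the intrinsic one, and the scattered PPA swing is flattened.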
Recovering the intensity pulse profiles and PPA curves
-------------------------------------------------------
To compensate the scattered pulse profiles and PPA curves, the author searched the European Pulsar Network (EPN) online database ([@Lorimer98]) for sample pulsars ([@Gould98]) which show obvious pulse profile broadening; data for five pulsars were downloaded, four of which were previously used by [@Li03] for studying the effect of scattering on pulse profiles and on PPA curves. The calculation uses the lower-frequency pulse profiles with obvious scattering as the profiles to be compensated for scattering; in all figures from Fig. 2 to Fig. 6, the higher-frequency profiles without obvious scattering (intrinsic pulse profiles) and their PPA curves (intrinsic PPA curves) are given for comparison. The scattering time scales and some related data of the five pulsars are given in Table 1; the three different time scales for the three different scattering models are obtained by the best fit for the pulse peak (Col. 7, Col. 8, Col. 9), and the time scales in Col. 6 are calculated by Eq. (4) (see Table 1). In the best fits, the precision of those scattering time scales is controlled approximately to within 1 $ms$ for almost all pulsars.
[ccccccccccc]{} PSR Name & $P$ & $DM$ & $Freq$ &$Freq$ &$\tau_{em}$ &$\tau_{thin}$ &$\tau_{thick}$ &$\tau_{extend}$\
&$(ms)$ &$(pc$ $cm^{-3})$ &$(GHz)$ &$(GHz)$ &$(ms)$ &$(ms)$ &$(ms)$ &$(ms)$\
B1356$-$60 & 127.503335 & 294.133 &1.56 &0.659594 &9.88 &1.0 &0.5 &0.6\
B1831$-$03 & 686.676816 & 235.800 &0.610 &0.408 &29.63 &15.0 &10.0 &6.5\
B1838$-$04 & 186.145156 & 324.000 &0.925 &0.606 &22.38 &15.0 &3.0 &1.0\
B1859$+$03 & 655.445115 & 402.900 &0.925 &0.606 & 60.97 &13.0 &7.4 &5.5\
B1946$+$35 & 717.306765 & 129.050 &0.61 &0.408 &1.87 &10.0 &6.0 &4.0\
Actually, because of the uncertainty of the scattering screen, none of the scattering models is strictly appropriate on its own to explain the scattering effect on pulse signals, even if we take the pulse width evolution with observing frequency into account ([@lorimer05]). In general, this study has tested all three scattering models to recover the pulse profiles and PPA curves; as an example, Fig. 2 to Fig. 6 present the application of one of the three models for restoring pulse profiles and PPA curves; the intensity profiles are normalized to their own peak intensity; the error of the PPA is calculated in the same way as in [@von97], by $$\triangle \psi = \frac{\sqrt{(Q\cdot rms_{U})^2+(U\cdot rms_{Q})^2}}{2L^2}$$ In all figures from 2$-$6 the solid lines in the plots are the recovered total intensity (c) and linear intensity profiles (d), the dotted lines in (a), (b) are the scattered total intensity profiles (upper panel) and linear intensity profiles (lower panel), the dotted lines in (c), (d) are the total intensity (upper panel) and linear intensity (lower panel) profiles of the intrinsic pulse for comparison, and the plots on the right are the scattered PPA curves (upper panel (e)), the PPA curves of the intrinsic pulse signals (upper panel (f)), and the recovered PPA curves (lower panel (g)). $5\triangle \psi$ error bars are presented in all plots of (e), (f), (g).
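Equation (14) translates directly into a small helper (illustrative, not the authors' code); with $L^2 = Q^2 + U^2$ the returned uncertainty is in radians:

```python
import numpy as np

def ppa_error(Q, U, rms_q, rms_u):
    """PPA uncertainty of Eq. (14) (von Hoensbroech & Xilouris 1997)."""
    L2 = Q**2 + U**2                      # squared linear intensity
    return np.sqrt((Q * rms_u)**2 + (U * rms_q)**2) / (2.0 * L2)
```

For equal rms noise in both parameters this reduces to the familiar $\triangle\psi = \sigma/(2L)$.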
For PSR B1356$-$60, Figure 2 shows the descattered pulse profiles and PPA curve observed at 0.659594 GHz; the intrinsic pulse profile is observed at 1.56 GHz. All three models were tested and produce similar results; the application of the thick screen model is presented here. The recovered intensity pulse profiles (c), (d) and PPA curve (g) are similar to the characteristics of the intrinsic pulse signal.
For PSR B1831$-$03, Figure 3 shows the descattered pulse profiles and PPA curve observed at 0.408 GHz; the intrinsic pulse profile is observed at 0.61 GHz. All three models were tested and produce similar results with different scattering time scales; the application of the thin screen model is given here. The total intensity and linear intensity profiles match well with those of the intrinsic pulse profiles, and the recovery of the PPA curve produces a jump-like feature in it.
For PSR B1838$-$04, Figure 4 demonstrates the recovered pulse intensity profiles and PPA curve observed at 0.606 GHz; the intrinsic pulse profile is observed at 0.925 GHz. All three models were checked; the extended and thick screen models produce the same results in intensity profiles and PPA curves, both yielding flat PPA curves, so the thin screen model has been applied here. The recovered intensity profiles do not match very well with the intrinsic profile, but the PPA curve shows an S-curve-like feature ([@lorimer05]). For this pulsar another high-frequency profile observed at 1.4 GHz was tried for comparison with the recovered pulse profile, but the results were not good.
For PSR B1859$+$03, Figure 5 plots the descattered pulse profiles and PPA curve observed at 0.606 GHz; the intrinsic pulse profile is observed at 0.925 GHz. All three models were tried, and all of them worked well; the intensity profiles and PPA curves all have the same features. The plots presented in Fig. 5 are obtained using the thin screen model. The recovered intensity profiles are similar to the intrinsic profiles, and the PPA curve (g) is much the same as the PPA curve (f) of the intrinsic one.
For PSR B1946$+$35, Figure 6 gives the descattered pulse profiles and PPA curve observed at 0.408 GHz; the intrinsic pulse profile is observed at 0.61 GHz. All three models were tested, and all of them give good results. In the extended screen model the PPA curve shows a flat pattern; in the other two models the PPA curves show a jump-like structure. Shown here is the application of the thick screen model. The recovered pulse profiles are quite similar to the intrinsic pulse profiles, and the recovered PPA curve shows a jump-like feature, as shown in Fig. 6. It is acceptable to add 90 degrees to the last four points of curve (g) ([@Phrudth09]), making the PPA curve much like the positive part of plot (f). Interestingly, the higher-frequency observations at 0.925 GHz and 1.408 GHz show orthogonal jumps in their PPA curves ([@Lorimer98]); this indicates that the recovered PPA curve (g) may be plausible. A fuller discussion is presented in the next section.
Discussion {#sect:discussion}
==========
Section 3.1 has shown the simulation of scattering and descattering results; as indicated in Fig. 1, scattering can cause pulse broadening and flattening of the PPA curve; when there is a jump in the PPA of the original pulse, the scattered PPA curve can be much more complicated, but the PPA curves can still be descattered, as shown in plots (f), (l). In Section 3.2, devoted to the practical application, the intensity pulse profiles and PPA curves of five pulsars have been descattered; for each pulsar the frequency of the intrinsic pulse profile used for comparison is below 1.4 GHz. In almost all pulsars the recovered profiles and PPA curves are quite similar to the features of the intrinsic pulse profiles and their PPA curves (see Fig. 2, Fig. 3, Fig. 5, and Fig. 6); these results support the previous assumption that the original pulse characteristics are substantially frequency-invariant below 1.4 GHz ([@Radhakrishnan69]).
When carrying out the descattering process, the pulse width evolution with frequency has been ignored ([@lorimer05]) because of the small difference in frequency between the scattered pulse profiles and the intrinsic pulse profiles. Figures (2$-$6) show that all the descattering compensations of the scattered pulse profiles are good except for the intensity pulse profiles of PSR B1838$-$04 (see Fig. 4); this may arise from the roughness of the scattered pulse profiles. For PSR B1356$-$60, the frequency of the chosen intrinsic pulse profile for comparison is 1.56 GHz, because no other observed frequencies are available below 1.4 GHz. The recovered PPA curves of all pulsars also agree with our expectation: some of them have features similar to their intrinsic PPA curves (see Figures 2, 5), the PPA curve in Fig. 4 shows an S-curve-like feature, and some of them have jump-like features (see Figures 3, 6). The jumps in Figures 3 and 6 can be understood through the simulation in Section 3.1: if the original PPA curve has a jump feature which was distorted or flattened by scattering ([@Karastergiou09]), recovering such a PPA curve with the true scattering model will naturally reproduce the jumps. Fortunately, the higher-frequency observations at 0.9 GHz and 1.4 GHz of the pulsar B1946$+$35 show orthogonal jumps in their PPA curves ([@Lorimer98]); these are the intrinsic pulse profiles compared with the scattered pulse profile observed at 0.408 GHz. The observed jumps are therefore very likely an intrinsic character of the PPA curves of this pulsar's signal that can be observed below 1.4 GHz.
According to the simulation, the observational evidence, and the empirical assumption of frequency invariance of pulse characteristics below 1.4 GHz ([@Radhakrishnan69]), it can be said that the occurrence of jumps in our descattering compensation of PSR B1946$+$35 is acceptable and would be an intrinsic feature of the scattered PPA curve observed at 0.408 GHz. For pulsar B1831$-$03 there is no observational result of a PPA curve with an orthogonal jump in the EPN database, so the recovered curve may be easier to interpret when 90 degrees are added to its last five points.
Conclusions {#sect:conclusion}
===========
We have shown the descattering compensation of the pulse profiles and PPA curves of five pulsars; the compensation for the scattered pulse profiles and PPA curves gave good results. Through simulation and practical application, it is found that all the intrinsic characteristics of the pulse signal can be recovered if the scattering model is clear enough. The recovery of pulse characteristics is an important issue in pulse studies, in rotation measure (RM) determination, and in setting the arrival time of impulse signals from pulsars; the recovered S-curve-like PPA curves of a pulsar such as B1838$-$04 would give us an opportunity to apply the RVM ([@Radhakrishnan69]) to determine the magnetic inclination angle $\alpha$ and the impact parameter $\beta$. In a word, the recovery of the pulse profiles and PPA curves may improve our understanding of pulse emission regions and emission mechanisms. In this paper three Stokes parameters $I, Q, U$ have been restored; the Stokes parameter $V$ is left for later discussion. The author hopes to continue studying the Stokes phase portraits ([@Chung10]) of these five pulsars, which may be useful for research into pulsar emission properties.
We thank the referee for the helpful comments. This work was funded by the National Natural Science Foundation of China (NSFC) under No.10973026 and the key program project of Joint Fund of Astronomy by NSFC and CAS under No. 11178001.
[99]{}
Bhat, N. D. R., Cordes, J. M., & Chatterjee, S. 2003, ApJ, 584, 782
Chung, C. T. Y., Melatos, A. 2010, MNRAS, 411, 2471
Gould, D. M. & Lyne, A. G. 1998, MNRAS, 301, 235
Jaap Tinbergen. 1996, Astronomical Polarimetry (Cambridge: Cambridge Univ. Press)
Karastergiou, A. 2009, Mon. Not. R. Astron. Soc., 392, L60$-$L64
Komersaroff, M. M., Hamilton, P. A., Abels, J. G. 1972, Australian J. Phys., 25, 759
Kuzmin, A. D. & Izvekova, V. A. 1993, MNRAS, 260, 724
Li, X. H., & Han, J. L. 2003, A & A, 410, 253
Lorimer, D. R., Jessner, A., Seiradakis, J. H. et al. 1998, A & AS, 128, 541
Lorimer, D. R., Kramer, M. 2005, Handbook of Pulsar Astronomy (Cambridge: Cambridge Univ. Press)
Macquart, J. -P. Melrose, D. B. 2000, Phys. Rev. E, 62, 4177
Phrudth Jaroenjittichai. 2009, The Rotating Vector Model and Applications (Univesity of Manchester, School of Physics and Astronomy)
Radhakrishnan, V., & Cooke, D. J. 1969, Astrophys. Lett, 3, 225
Ramachandran, R., Mitra, D., Deshpande, A. A., McConnell, D. M., & Ables, J. G. 1997, MNRAS, 290, 260
Rankin, J. M., Comella, J. M., Craft, H. D., Richards, D. W., Campbell, D. B., & Counsellman, C. C. 1970, ApJ, 162, 707
Rickett, B. J. 1977, ARA & A, 15, 479
Scheuer, P. A. G.1968, nat, 218, 920
Van de Hulst, H. C. 1957, Light scattering by small particles, Wiley, New York
von Hoensbroech, A., Xilouris, K. M. 1997, Astron. Astrophys. Suppl. Ser. 126, 121-149
Weisberg, J. M., R. A., Cordes, J. M., Spangler, S. R., & Clifton, T. R. 1990, BAAS, 22, 1244
Williamson, I. P. 1972, MNRAS, 157, 55
\[lastpage\]
---
abstract: |
The rapid localization of GRB 021004 by the HETE-2 satellite allowed nearly continuous monitoring of its early optical afterglow decay, as well as high-quality optical spectra that determined a redshift of $z_{3}$=2.328 for its host galaxy, an active starburst galaxy with strong Lyman-$\alpha$ emission and several absorption lines. Spectral observations show multiple absorbers at $z_{3A}= 2.323$, $z_{3B}= 2.317$, and $z_{3C}= 2.293$, blueshifted by $\sim$ 450, $\sim$ 990, and $\sim$ 3,155 km s$^{-1}$, respectively, relative to the host galaxy Lyman-$\alpha$ emission. We argue that these correspond to a fragmented shell nebula that has been radiatively accelerated by the gamma-ray burst (GRB) afterglow at a distance $\gax$ 0.3 pc from a Wolf-Rayet star GRB progenitor. The chemical abundance ratios indicate that the nebula is overabundant in carbon and silicon. The high level of carbon and silicon is consistent with a swept-up shell nebula gradually enriched by a WCL progenitor wind over the lifetime of the nebula prior to the GRB onset. The detection of statistically significant fluctuations and color changes about the jet-like optical decay further supports this interpretation, since fluctuations must be present at some level due to irregularities in a clumpy stellar wind medium or if the progenitor has undergone massive ejection prior to the GRB onset. This evidence suggests that the mass-loss process in a Wolf-Rayet star might lead naturally to an iron-core collapse with sufficient angular momentum that could serve as a suitable GRB progenitor. Even though we cannot definitively rule out the alternatives of a dormant QSO, large-scale superwinds, or a several-hundred-year-old supernova remnant responsible for the blueshifted absorbers, these findings point to the possibility of a likely signature for a massive-star GRB progenitor.
author:
- 'N. Mirabal, J. P. Halpern, Ryan Chornock, Alexei V. Filippenko, D. M. Terndrup, E. Armstrong, J. Kemp, J. R. Thorstensen, M. Tavarez, & C. Espaillat'
title: 'GRB 021004: A Possible Shell Nebula around a Wolf-Rayet Star Gamma-Ray Burst Progenitor'
---
Introduction
============
Considerable evidence exists connecting long-duration GRBs to star-forming regions and consequently to a massive-star origin. For instance, optical spectroscopy of well-calibrated emission lines has been used to derive star-formation rates (SFRs) that place GRB host galaxies slightly above the field galaxy population at comparable redshifts, in terms of SFR (Djorgovski et al. 2001). GRB locations within their host galaxies also seem to follow closely the galactic light distribution and are hard to reconcile with coalescing compact objects in a galactic halo (Bloom, Kulkarni, & Djorgovski 2002). Additional clues have come from secondary peaks observed in the late-time optical light curves of a few GRBs that have been interpreted as supernova (SN) emission associated with the GRB formation (Bloom et al. 2002; Garnavich et al. 2003). Recently, spectra of the GRB 030329 afterglow have shown an emergence of broad features characteristic of the peculiar type-Ic supernovae (Stanek et al. 2003; Chornock et al. 2003). Driven by the observational evidence and detailed calculations, two models have emerged as the leading massive-star GRB progenitors, namely, collapsars and supranovae. The collapsar model (Woosley 1993; MacFadyen & Woosley 1999) corresponds to a black hole formed promptly in a massive-star core-collapse (typically a Wolf-Rayet star) that fails to produce a successful outgoing shock (Type I), or in the less extreme case a “delayed black hole” results by fallback after a weak outgoing shock (Type II). In the supranova model, a GRB takes place once the centrifugal support of a “supramassive” neutron star, formed months or years prior to the event, weakens and the neutron star collapses to form a black hole (Vietri & Stella 1998).
Although an association with massive-star collapse was among the first theories proposed to explain GRBs (Colgate 1974), a definite local signature of the GRB progenitor is still being sought. The recent detection of blueshifted H, C IV, and Si IV absorbers in the spectrum of the GRB 021004 afterglow (Chornock & Filippenko 2002), coupled with the irregularities observed in its optical light curve, has been interpreted as evidence of a clumpy wind from a massive-star progenitor, such as a WC Wolf-Rayet star (Mirabal et al. 2002a; Schaefer et al. 2003). In this paper, we discuss what might constitute the first detection of a fragmented shell nebula around a GRB progenitor. Our basic approach in this analysis is to begin with simple models consistent with the photometry and spectroscopy of the GRB 021004 afterglow. We then consider the physical parameters for each model and introduce modifications that best fit the GRB 021004 data. The outline of the paper is as follows: §2 describes the optical photometry and spectroscopy, while §3 describes the temporal decay, broadband modeling of the afterglow, absorption-line identification, and abundance analysis. In §4 and §5, we detail the evolution of a massive-star shell nebula and radiative acceleration models. An in-depth analysis of alternative explanations is given in §6. Finally, the implications of our results for GRB progenitors are presented in §7, and §8 summarizes our conclusions.
Observations
============
Optical Photometry
------------------
GRB 021004 is to date the fastest localized long-duration GRB detected by the HETE-2 satellite (Shirasaki et al. 2002). The HETE-2 FREGATE, WXM, and SXC instruments detected the event on 2002 Oct. 4.504 (UT dates are used throughout this paper) with a duration of $\approx$100 seconds. The improved flight localization software in the WXM instrument produced a reliable position only 49 seconds after the beginning of the burst, that was later refined by the ground analysis. Rapid follow-up detected a bright optical transient (OT) inside the 90$\%$ WXM confidence circle only 10 minutes after the initial HETE-2 notice (Fox 2002).
We began optical observations of the OT 14.7 hr after the burst by obtaining an equal number of well-sampled, high signal-to-noise ratio $B$, $V$, $R$, and $I$ images using the 1.3 m and 2.4 m telescopes at the MDM Observatory (Halpern et al. 2002). Nearly nightly observations were carried out in the $B$ and $R$ bands until 2002 Oct. 25 with additional late-time measurements on 2002 Nov. 25-27. We placed all the optical observations on a common $BVRI$ system using the latest calibration of nearby field stars acquired by Henden (2002). The MDM photometric measurements including errors are listed in Table 1 and shown in Figure 1. For clarity in Figure 1 we have omitted the early-time observations, $t \lax$ 14.7 hr after the burst (refer to Fox et al. 2003 for details).
Optical Spectroscopy
--------------------
Optical spectra were obtained with the dual-beam Low Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) on the Keck-I 10 m telescope on 2002 Oct. 8.426-8.587 (Chornock & Filippenko 2002). The spectra were taken in five individual 1200 s exposures using a $1^{\prime\prime}$ wide slit. The skies were variably cloudy, so the first three exposures were of noticeably higher quality than the last two. We used a 400 lines/mm grating blazed at 8500 Å on the red side and a 400 lines/mm grism blazed at 3400 Å on the blue side. The effective spectral resolution is $\sim$ 6 Å on both the blue and red sides. The data were trimmed, bias-subtracted, and flat-fielded using standard procedures. Extraction of the spectra was performed using IRAF [^1]. The wavelength scale was established by fitting polynomials to Cd-Zn and Hg-Ne-Ar lamps. Flux calibration was accomplished using our own IDL procedures (Matheson et al. 2000) and comparison exposures of the spectrophotometric standard stars BD +28$^{\circ}$ 4211 and BD +17$^{\circ}$ 4708 on the blue and red sides, respectively (Stone 1977; Oke & Gunn 1983). We removed the atmospheric absorption bands through division by the intrinsically smooth spectra of the same standard stars (Matheson et al. 2000). The two halves of the spectrum were averaged in the 5650-5700 Å overlap region.
Analysis
========
Temporal Decay and Environment
------------------------------
Early analysis of the OT revealed statistically significant fluctuations about a simple power-law decay (Bersier et al. 2003; Halpern et al. 2002). Although the general trend of the early optical decay can be fitted by a simple power-law fit, shown in Figure 2, significant deviations about the mean decay are present on time scales from minutes to hours. Figure 3 also shows a distinct color change starting around 1.6 days after the burst in agreement with the results reported by Bersier et al. (2003). It has been postulated that deviations from a simple power-law behavior might be induced by inhomogeneities in the circumburst medium (Wang & Loeb 2000), structure within a jet (Kumar & Piran 2000), and/or if the afterglow is “refreshed” by collisions among separate shells (Rees & Mészáros 1998). The possible causes of the deviations and color changes in the GRB 021004 OT will be discussed at greater length in §7.
By day 9, the gradual decay of the OT became clearly inconsistent with the early-time power-law fit and turned steeper in its decay slope. In order to describe the steepening of the afterglow decay, we fitted the data with a smooth function taking into account a constant host-galaxy contribution and a broken power-law behavior of the form $$F(t)\ =\ {2\,F_b\,(t/t_b)^{\alpha_1} \over
1+(t/t_b)^{(\alpha_1-\alpha_2)}}\ +\ F_0,$$ where $\alpha_1$ and $\alpha_2$ represent the asymptotic early and late-time slopes, $F_0$ is the constant galaxy contribution, and $F_b$ is the OT flux at the break time $t_b$ (Halpern et al. 2000). The best fit to the data is found for $\alpha_{1} = -0.72$, $\alpha_{2} = -2.9$, and $t_{b}$ = 9 days. In Figure 1, we draw the fit including the constant contribution of the host galaxy which contaminates the OT at late times. The host galaxy contribution was determined from deep $B$ and $R$ imaging obtained on 2002 Nov. 25-27 under good seeing conditions. The images reveal a relatively blue host galaxy, $(B-R)_{host} \approx$ 0.65 mag, with estimated magnitudes $R_{host}$ = 23.95 $\pm 0.08$ and $B_{host}$ = 24.60 $\pm 0.06$, measured in an aperture that includes the total contribution of the host galaxy. The estimated host galaxy color is bluer than the OT itself \[$(B-R)_{OT} \approx$ 1.05 mag\] and bluer than nearby field galaxies. Figure 4 shows images of the GRB 021004 OT at early ($t \approx$ 19.8 hr) and late ($t \approx$ 52 days) times when the host galaxy dominates. A recently released (HST Program 9405, PI: Fruchter) high-resolution image of the OT obtained with the Advanced Camera for Surveys (ACS) on [*HST*]{} with the F606W filter, shown in Figure 5, confirms the emergence of an underlying host galaxy by 2002 Nov. 26. Unfortunately, it is difficult to resolve the contribution from the OT cleanly (Levan et al. 2003).
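As a quick numerical check, the smoothly broken power law above can be evaluated directly; the Python sketch below (with placeholder flux units, since $F_b$ and $F_0$ are instrument dependent and only the shape is meaningful here) verifies that the function reduces to $F_b + F_0$ at the break and approaches the asymptotic slopes $\alpha_1$ and $\alpha_2$ far from it:

```python
def broken_power_law(t, F_b, t_b, alpha1, alpha2, F0):
    """Smoothly broken power law plus constant host flux (Halpern et al. 2000).

    F(t) = 2 F_b (t/t_b)^alpha1 / (1 + (t/t_b)^(alpha1 - alpha2)) + F0
    """
    x = t / t_b
    return 2.0 * F_b * x ** alpha1 / (1.0 + x ** (alpha1 - alpha2)) + F0

# Best-fit shape parameters quoted in the text; F_b = 1 and F0 = 0 are
# placeholders so that the curve can be inspected in normalized units.
alpha1, alpha2, t_b = -0.72, -2.9, 9.0

# At t = t_b the function returns F_b + F0 by construction, and the local
# logarithmic slope tends to alpha1 (alpha2) well before (after) the break.
F_break = broken_power_law(t_b, 1.0, t_b, alpha1, alpha2, 0.0)
```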
The early-time optical photometry of the OT, in comparison with the X-ray flux obtained 0.85–1.86 days after the burst (Sako & Harrison 2002), can be used to derive the broadband optical-to-X-ray slope $\beta_{ox} = -1.05$. Remarkably, this is similar to the X-ray spectral index itself, $\beta_{x} \approx -1.1$ $\pm$ 0.1. However, a smooth extrapolation through the $BVRI$ photometric points yields $\beta_{o}$ $\approx -1.29$ and an even steeper slope, $\beta_{o} \approx -1.66$, using the full range of the LRIS spectral continuum. Although there is no significant excess absorption in the X-ray afterglow spectrum (Sako & Harrison 2002), this type of discrepancy is common in afterglow spectra and is normally understood as requiring additional dereddening of the optical spectrum to account for local extinction in the host galaxy (Mirabal et al. 2002b). Alternatively, the broadband spectrum can be described as having an X-ray excess due to inverse-Compton scattering (Sari & Esin 2001).
The temporal decay described thus far is consistent with the predicted adiabatic evolution of a jet-like afterglow (Rhoads 1999). A gradual steepening of the optical decay is expected when the jet angle begins to spread into a larger angle. Under the assumption that the GRB is collimated initially, we estimate a half-opening angle of the jet $\theta_0 \approx 11^{\circ}\!n^{1/8}$ (Sari, Piran, & Halpern 1999) for an isotropic energy E$_{iso}\approx 5.6 \times 10^{52}$ ergs (Malesani et al. 2002). For frequencies $\nu$ $<$ $\nu_{c}$, where $\nu_{c}$ is the “cooling frequency” at which the electron energy loss time scale is equal to the age of the shock, the assumption of a synchrotron model in an uniform-density medium predicts $\alpha$ = (3/2)$\beta$ = $-3(p-1)/4$. Here $p$ is the index of the power-law electron energy distribution. For $\alpha_{o} = -0.72$, this implies $\beta_{o} = -0.48$ and $p$ = 1.96, which is consistent with the optical data only if extinction at the host galaxy is significant (Holland et al. 2003).
On the other hand, a model in which the afterglow expands into a pre-existing wind medium of density $n \propto r^{-2}$ can reproduce the slow decay at early times followed by steepening caused by the synchrotron minimum characteristic frequency $\nu_{m}$ passing through the optical band (Li & Chevalier 2003). The decay can be described by $\alpha$ = $-(3p - 2)/4$ = $(3\beta + 1)/2$ for $\nu$ $<$ $\nu_{c}$ (Chevalier & Li 2000). A fit in the wind scenario yields $\alpha = -0.72$, with a steeper index $\beta = -0.81$ and $p$ = 1.63. Although an electron index $p <$ 2 seems rather hard for a power-law electron energy distribution, this type of electron distribution has been encountered in other GRB afterglows (Panaitescu & Kumar 2002). It is important to note that a wind-like behavior seems to be supported by the radio and X-ray observations assuming $\alpha = -1.0$ and $p$ = 2.1 (Li & Chevalier 2003). It is difficult to determine a definite value for $\alpha$ because of the ubiquitous fluctuations in the early optical light curve. The fact that the broadband wind-interaction model provides a reasonable fit to the early temporal decay $\alpha$, as well as to the spectral index $\beta$ without substantial reddening, makes this model attractive for a circumstellar medium with stellar-like density $n \propto$ $r^{-2}$.
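The electron indices and spectral slopes quoted for the two scenarios follow from inverting the closure relations above; a short numeric check (Python; the relations are taken verbatim from the text, and $n = 1$ cm$^{-3}$ is assumed for the jet-angle estimate):

```python
# Closure relations for nu < nu_c, as quoted in the text, inverted to give
# p and beta from the observed early-time decay index alpha = -0.72.
alpha = -0.72

# Uniform-density (ISM) medium: alpha = -3(p-1)/4 and alpha = (3/2) beta
p_ism = 1.0 - 4.0 * alpha / 3.0        # electron index -> 1.96
beta_ism = 2.0 * alpha / 3.0           # spectral slope -> -0.48

# Wind medium (n ~ r^-2): alpha = -(3p-2)/4 and alpha = (3 beta + 1)/2
p_wind = (2.0 - 4.0 * alpha) / 3.0     # -> ~1.63
beta_wind = (2.0 * alpha - 1.0) / 3.0  # -> ~-0.81

# Jet half-opening angle theta_0 ~ 11 deg * n^(1/8) (Sari, Piran, & Halpern 1999)
n = 1.0                                # ambient density in cm^-3 (assumed)
theta_0 = 11.0 * n ** (1.0 / 8.0)      # -> 11 deg for n = 1
```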
Absorption System Identifications and Line Variability
------------------------------------------------------
We used the full-range optical continuum of the GRB 021004 afterglow to derive a function of the form $F_{\nu}$ $\propto$ $\nu^{\beta}$ with $\beta = -1.66$ $\pm$ 0.26, in agreement with the value reported by Matheson et al. (2003). As pointed out by these authors, a shallower power-law index results from fitting only the red end of the spectrum. Three absorption systems are spectroscopically identified along the blue continuum at $z_{1}=1.380$, $z_{2}=1.602$, and $z_{3}=2.328$ that have been independently confirmed (Chornock & Filippenko 2002; Salamanca et al. 2002; Matheson et al. 2003). In addition, the spectrum reveals three distinct blueshifted absorbers at $z_{3A}$=2.323, $z_{3B}$=2.317, and $z_{3C}$=2.293 within 3,155 km s$^{-1}$ of the Lyman-$\alpha$ emission-line redshift of the $z_{3}=2.328$ system (Chornock & Filippenko 2002; Salamanca et al. 2002; Savaglio et al. 2002).
Figures 6 and 7 show the normalized LRIS spectrum including emission and absorption-line systems, as well as identified blueshifted absorbers. Table 2 lists the line identifications including vacuum wavelengths, observed wavelengths, redshift, oscillator strengths $f_{ij}$, equivalent widths ($W_{\lambda}$) in the rest frame, and error estimates on the equivalent widths. In order to compute the errors on the equivalent width for each line we used the IRAF [*splot*]{} task, which allows error estimates based on a Poisson model for the noise. For blended lines, IRAF [*splot*]{} fits and deblends each line separately using predetermined line profiles. Error estimates for blended lines are computed directly in [*splot*]{} by running a number of Monte Carlo simulations based on preset instrumental parameters.
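The Monte Carlo error estimate performed by [*splot*]{} can be illustrated with a toy example: repeatedly perturb a synthetic absorption line with noise and take the scatter of the measured equivalent widths as the uncertainty. The line parameters and noise level below are hypothetical, not values from Table 2:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Gaussian absorption line on a unit continuum (wavelengths in Angstroms).
wav = np.arange(4000.0, 4060.0, 0.5)
center, sigma, depth = 4030.0, 2.5, 0.4   # hypothetical line parameters
profile = 1.0 - depth * np.exp(-0.5 * ((wav - center) / sigma) ** 2)

def equivalent_width(flux, dwav=0.5):
    # W = integral of (1 - F/F_c) dlambda, with continuum F_c = 1 here
    return np.sum(1.0 - flux) * dwav

# Monte Carlo: perturb the spectrum with Gaussian noise (a stand-in for the
# Poisson noise model used by splot) and take the spread as the error.
noise_sigma = 0.02
trials = np.array([equivalent_width(profile + rng.normal(0.0, noise_sigma, wav.size))
                   for _ in range(1000)])
W, W_err = equivalent_width(profile), trials.std()
```

For a Gaussian line the noiseless measurement recovers the analytic value $W = \mathrm{depth} \times \sigma \sqrt{2\pi} \approx 2.51$ Å, and the Monte Carlo scatter tracks the per-pixel noise as expected.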
There has been a recent suggestion of additional Lyman-$\alpha$ blueshifted absorbers located at 27,000 and 31,000 km s$^{-1}$ from the host galaxy (Wang et al. 2003). Lines consistent with the reported positions are present in the LRIS spectrum; however, we believe that the proposed identifications are not straightforward. Apart from being structured at the LRIS resolution, the lines lack matching C IV or Si IV blueshifted absorbers at the proposed velocities. An alternative identification is also plausible if the lines arise from Mg II doublets in systems located at redshifts $z$ $\approx$ 0.293 and 0.313, respectively. However, the line ratios are inconsistent with this interpretation unless the lines are strongly saturated. Given the uncertainty surrounding the nature of these lines, for the remainder of this work we will characterize them as unidentified and will refrain from including them in the analysis. We suspect that high-resolution spectroscopy of the optical afterglow of GRB 021004 obtained by other groups (Salamanca et al. 2002) might provide more clues about these lines.
The prominence of the Lyman-$\alpha$ line emission and the presence of Al II (1670.79 Å) in absorption at the same redshift as the Lyman-$\alpha$ emission, $z_{3}$ = 2.328, confirms the highest system as the host galaxy of GRB 021004. The host galaxy is an active starburst galaxy with SFR $\approx$ 15 M$_{\odot}$ yr$^{-1}$ (Djorgovski et al. 2002). The detection of a lone low-ionization absorption line (Al II) at $z_{3}$ = 2.328 seems plausible because of its large oscillator strength, $f_{Al~II}$ = 1.83. All other absorption lines (Lyman series, C IV, and Si IV) have velocity components blueshifted with respect to $z_{3}$. These components are crucial to the analysis since intrinsic blueshifted absorbers located physically near the burst should be sensitive to time-dependent photoionization due to the decaying GRB photoionizing flux (Mirabal et al. 2002b; Perna & Lazzati 2002). Although many of the absorption lines are not fully resolved, the C IV and Si IV doublet ratios suggest that the $z_{3C}$ absorber is not strongly saturated. Other absorbers are blended, but do not show flat profiles reaching zero intensity, which is a distinct indication of strong saturation.
Direct comparison of the equivalent-width measurements presented in this work with published results (M$\o$ller et al. 2002; Matheson et al. 2003) shows no definite evidence for time-dependent absorption-line variability on timescales of hours to days after the burst. In addition, there are no strong observable signatures of immediate production of vibrationally excited H$_{2}$ levels in the region 912 Å $\leq$ $\lambda_{\rm rest}$ $\leq$ 1650 Å (Draine 2000), and reradiated fluorescent emission in a similar range. The recent report of spectropolarimetric variations seen across some Lyman-$\alpha$ absorption features, and the increasing polarization near the blue continuum of the GRB 021004 afterglow (Wang et al. 2003), are reminiscent of the effects reported in broad absorption-line QSOs (Goodrich & Miller 1995). If real, the spectropolarimetric results would favor the proximity of the absorbers to the burst. This possibility may be reinforced by the suggestion of a “line-locking” effect (Scargle 1973) in the C IV doublet (Savaglio et al. 2002).
Abundance Analysis
------------------
In order to derive the abundances of the identified absorbers, we estimated the column density $N_{j}$ for each identified line $j$ following the linear part of the curve of growth (Spitzer 1978) written in the form $$N_{j}({\rm cm}^{-2})= 1.13 \times 10^{17}\frac{W_{\lambda}({\rm m\AA})}{f_{ij}\lambda^{2}({\rm \AA})}.$$ The resulting column densities derived for single absorption lines are listed in Table 3. A visual inspection of the lines does not reveal strongly saturated profiles; however, most lines are not fully resolved. Comparison with Table 3 shows that the hydrogen column densities inferred from Lyman-$\alpha$ are less than those inferred from Lyman-$\beta$. This might be the result of line blending or simply implies that Lyman-$\alpha$ is somewhat saturated.
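The curve-of-growth relation above is straightforward to apply; as an illustration (the equivalent width below is a hypothetical value rather than an entry from Table 2, with $f \approx 0.19$, approximately the oscillator strength of C IV $\lambda$1548):

```python
def column_density(W_mA, f_ij, wav_A):
    """Linear curve-of-growth column density (Spitzer 1978), in cm^-2.

    N = 1.13e17 * W[mA] / (f_ij * lambda^2[A])
    """
    return 1.13e17 * W_mA / (f_ij * wav_A ** 2)

# Illustrative: a C IV 1548.20 line with a rest-frame equivalent width of
# 500 mA (hypothetical) and oscillator strength f ~ 0.190.
N_CIV = column_density(500.0, 0.190, 1548.20)   # -> ~1.2e14 cm^-2
```

Note that the linear relation is only valid for unsaturated lines, which is why the saturation of Lyman-$\alpha$ discussed above matters for the inferred hydrogen columns.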
Resulting total ionic concentrations are given in Table 4. In order to determine the total abundances of each element, we assumed the observed ionic concentrations and upper limits for various states of ionization in the spectral range. Therefore, the abundances obtained are an underestimate of true abundances since ionic abundances of other species are required. However, we justify this simplified scheme by pointing out the approximate coincidence in ionization potential of various species (Si, C, N) and the detection of the dominant ions for each element. Particularly interesting are the measurements of C and Si since they exhibit enhanced abundances compared to solar abundances (Anders & Grevesse 1989). This is discussed further in §3.4. The largest uncertainty is that of oxygen due to the large ionization potential of its high-ionization states. Since no O VI was detected, it is impossible to predict what ionization states of oxygen should have been present along this line of sight (Spitzer 1996; Savage et al. 2000).
In general, ionization effects depend on the conditions of the environment. For this reason, in most elements, ionization processes are complex and layered around the GRB host galaxy. Accordingly, the dominant sources of error in the total abundances are the uncertainty in the temperature of the medium and the errors in the measured equivalent widths (Savage & Sembach 1996). We note that the observed blueshifted absorbers range in ionization from Lyman-$\alpha$ $\lambda$1215.67 Å to C IV $\lambda\lambda$1548.20 Å, 1550.77 Å. The presence of Lyman-$\alpha$ in absorption indicates a low-ionization gas component that cannot survive in the highly ionized C IV/Si IV region unless hydrogen is shielded from external photoionization or is dense enough to recombine. One plausible scenario is that we are probing a shielded low-ionization region that has been enriched by physically adjacent C IV and Si IV.
Kinematics and Abundances of the Blueshifted Absorbers
------------------------------------------------------
The next step in our analysis is to explore a connection between the chemical abundances and the physical mechanism responsible for accelerating the blueshifted absorbers. Starting with the hypothesis that the absorbers are intrinsic to the host galaxy, we recall that multiple blueshifted absorbers at similar velocities have been detected in massive stellar winds (Abbott & Conti 1987), as well as in QSO absorption-line systems (Anderson et al. 1987). The former are understood to be driven by the pressure of the stellar radiation (Castor, Abbott, & Klein 1975), while the latter are thought to arise either in chance intervening neighboring systems or as part of QSO gas outflows (Aldcroft, Bechtold, & Foltz 1997). One important distinction in this instance is the absence of any obvious spectroscopic evidence for an active QSO associated with the host galaxy of GRB 021004. We shall consider the likelihood of chance intervening systems in §6.
Having argued against a QSO-related origin, we focus on the possibility of a massive stellar wind around the GRB progenitor. This scenario is highly relevant in connection to massive-star progenitors in GRB models (Woosley 1993). Current stellar models predict that a massive star loses most of its original hydrogen envelope via stellar winds exposing elements like carbon, nitrogen, and oxygen (Abbott & Conti 1987). This stage marks the beginning of the Wolf-Rayet phase. According to the observed chemical composition, Wolf-Rayet stars are classified into subtypes WC, WN, and WO (Crowther, De Marco, & Barlow 1998). For instance, in a few WN stars, hydrogen appears to be present along with helium and nitrogen lines, while the majority of WC and WO Wolf-Rayet stars display hydrogen-free spectra. The notable absence of helium, nitrogen and oxygen in the spectrum of GRB 021004 seemingly rules out a straightforward connection with WN and WO subtype stellar winds. A bigger burden for a smooth stellar wind scenario results from the uncomfortable task of placing sufficient low-ionization species in the same region as highly ionized species like C IV and Si IV once the photoionization front from the GRB has made its way through the wind. This is because most species in a stellar wind, following a $n \propto r^{-2}$ profile, are completely photoionized within a few parsecs of the GRB almost instantly (see also Lazzati et al. 2002).
Based on the previous reasoning, it appears unlikely that the observed absorbers are produced directly within a smooth stellar wind. However, we have yet to consider the interaction of a stellar wind with its neighboring ISM and material shed during previous stellar phases. A massive stellar wind carries not only mass but kinetic energy that produces shocks in the wind-ISM interaction (Castor, McCray, & Weaver 1975; Ramirez-Ruiz et al. 2001). The interaction leads naturally to the formation of overdense shells or shell nebulae along the wind profile, as seen in numerous examples (Moore, Hester, & Scowen 2000). These observations suggest that shell nebulae are common around Wolf-Rayet stars. Indeed, narrow-band surveys indicate that shell nebulae are present around 35$\%$ of all Wolf-Rayet stars (Marston 1997). A study of the optical morphologies of shell nebulae shows distinctions between different stages of formation and physical conditions of their interior (Chu 1991).
Apart from providing a complex circumstellar environment, a shell nebula configuration enables natural mixing of low-ionization hydrogen species from the ISM and prior main sequence/supergiant phases, with high-ionization C IV and Si IV from an adjacent Wolf-Rayet wind. For instance, nebular structures observed around the explosion site of SN 1987A (Panagia et al. 1996 and references therein) are believed to have been enriched by the progenitor material prior to the explosion (Wang 1991). A number of spectroscopic observations confirm that shell nebulae around Wolf-Rayet stars are mainly material from the massive star rather than the ISM (Parker 1978; Kwitter 1984). The absence of strong nitrogen and oxygen lines, and the presence of C IV and Si IV in the GRB 021004 afterglow spectrum are consistent with a WCL Wolf-Rayet star (Mirabal et al. 2002a), in which the bulk of the wind has a composition characteristic of He-burning and $\alpha$-capture products (Crowther, De Marco, & Barlow 1998). This line of argument is thus far consistent with a shell nebula observed as chemical enrichment in the blueshifted absorbers, but let us explore its kinematic evolution.
The Expansion of a Shell Nebula
===============================
The free expansion of a massive stellar wind is thought to end when the mass of the swept-up shell nebula is comparable to the mass driven by the wind (Castor, McCray, & Weaver 1975). Figure 8 shows the theoretical structure of a stellar wind bubble and shell nebula formed at the termination of a free-expanding wind. The swept-up shell nebula mass becomes equal to the mass driven by the wind at a time $\tau$ set by $$\tau=\sqrt{3\dot{M} \over 4 \pi v_{t}^3 n m_{\rm p} \mu } \approx 300~{\rm yr},$$ for a typical mass-loss rate $\dot{M}=10^{-5}~{\rm M}_{\odot}$ yr$^{-1}$, density of the surrounding medium $n$ = 1 cm$^{-3}$, and terminal velocity $v_{t}$ = 1000 km s$^{-1}$. The mass conservation relation implies that during this time a stellar wind moving at $v_{t}$ = 1000 km s$^{-1}$ has reached a radius $R_{\rm s}$ given by $$R_{\rm s} = v_{t} \tau \approx 0.3~{\rm pc}.$$ This radius $R_{\rm s}$ is in agreement with the modeling of Wolf-Rayet stars using detailed stellar tracks (Ramirez-Ruiz et al. 2001). After the swept-up shell nebula is formed, it proceeds to expand adiabatically because the pressure of the hot gas inside the wind bubble is higher than in the circumwind environment (Castor, McCray, & Weaver 1975). As it expands, a low-ionization swept-up shell nebula formed around a massive-star bubble will be gradually enriched and fragmented as it is subject to Rayleigh-Taylor and Vishniac instabilities (Ryu & Vishniac 1988; García-Segura & Mac Low 1995a,b). The onset of instabilities would explain naturally the presence of multiple dense-shell fragments along this line of sight that could give rise to the individual blueshifted absorbers observed in the spectrum of the GRB 021004 afterglow.
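The $\sim$ 300 yr and $\sim$ 0.3 pc estimates can be reproduced directly in cgs units; the sketch below takes $\mu \approx 1$ (an assumption on our part, since the text does not specify the mean molecular weight):

```python
import math

# cgs constants
M_SUN = 1.989e33   # solar mass, g
M_P = 1.673e-24    # proton mass, g
YR = 3.156e7       # year, s
PC = 3.086e18      # parsec, cm

Mdot = 1e-5 * M_SUN / YR   # mass-loss rate, g/s
v_t = 1.0e8                # terminal wind velocity (1000 km/s), cm/s
n, mu = 1.0, 1.0           # ambient density (cm^-3) and mu ~ 1 (assumed)

# Sweep-up timescale and free-expansion radius from the equations above
tau = math.sqrt(3.0 * Mdot / (4.0 * math.pi * v_t ** 3 * n * M_P * mu))  # s
R_s = v_t * tau                                                          # cm

tau_yr, R_pc = tau / YR, R_s / PC   # -> ~300 yr, ~0.3 pc
```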
The expansion of the shell nebula in the adiabatic phase can be described by the momentum equation, or $${d \over dt}[{M_{\rm s}(t)v(t)}]=4\pi R_{\rm s}^{2} P_{w},$$ where $M_{\rm s}(t)$ is the mass of the swept-up shell nebula, $v(t)$ is the rate of expansion of the bubble, and $P_{w}$ is the internal pressure caused by the wind. In the adiabatic regime, the internal pressure due to the wind can be written as $P_{w}$ = $L_{w}t/(2\pi R_{\rm s}^{3})$, where $L_{w}$ is the wind luminosity.
Using this expression for the internal pressure gives $${R \over t} {d \over dt} \left(R^{3} {d \over dt} R\right) = {3 L_{w} \over
2 \pi n m_{p}},$$ where we have used $v(t)=dR/dt$ and $M_{\rm s}(t) = (4\pi/3) R_{\rm s}^{3} n m_{p}$. The expression has a solution of the form
$$R_{\rm s}(t) = \left({25 L_{w} \over 14 \pi n m_{p}}\right)^{1/5} t^{3/5},$$
which can be rewritten as $$R_{\rm s}(t) = 33 \left({L_{36} \over n_{0}}\right)^{1/5} t_{6}^{3/5} {\rm pc},$$ with $L_{36}$ in units of $10^{36}$ erg s$^{-1}$, $n_{0}$ in units of cm$^{-3}$, and $t_{6}$ in units of $10^{6}$ yrs. The velocity of expansion of the shell nebula is given in the same terms by $$v(t) = 19.8 \left({L_{36} \over n_{0}}\right)^{1/5}
t_{6}^{-2/5}~{\rm km~s^{-1}}.$$
A key result here is that over the duration of the Wolf-Rayet phase, shell nebulae can reach radii of order $R_{\rm s} \approx$ 10 pc and expansion velocities $v \approx$ 40 km s$^{-1}$. Evidently the derived expansion velocity of a swept-up shell nebula is nowhere near the observed $\sim$ 450, $\sim$ 990, and $\sim$ 3,155 km s$^{-1}$ blueshifted absorbers. If instead of an energy-conserving expansion, we invoke large radiation losses and assume that the wind bubble is undergoing momentum conservation and hence expanding as $R_{\rm s}(t) \propto t^{1/2}$ (Steigman, Strittmatter, & Williams 1975), the approximation yields a radius and expansion velocity similar to the energy-conserving solution and is still inconsistent with the observed velocities. We call this inconsistency with the blueshifted absorbers the [*kinematic problem*]{}.
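The quoted late-time figures follow directly from the scaling solutions above; for illustration we assume a Wolf-Rayet phase lasting $\sim 2 \times 10^{5}$ yr (the lifetime is not specified in the text):

```python
# Adiabatic (energy-conserving) bubble expansion, using the prefactors from
# the equations above:
#   R_s = 33 (L36/n0)^(1/5) t6^(3/5) pc
#   v   = 19.8 (L36/n0)^(1/5) t6^(-2/5) km/s
def shell_radius_pc(L36=1.0, n0=1.0, t6=1.0):
    return 33.0 * (L36 / n0) ** 0.2 * t6 ** 0.6

def shell_velocity_kms(L36=1.0, n0=1.0, t6=1.0):
    return 19.8 * (L36 / n0) ** 0.2 * t6 ** (-0.4)

# Assumed Wolf-Rayet phase duration of 2e5 yr (t6 = 0.2):
R = shell_radius_pc(t6=0.2)       # -> ~13 pc
v = shell_velocity_kms(t6=0.2)    # -> ~38 km/s
```

Even for generous choices of $L_{36}$, $n_{0}$, and lifetime, $v$ stays at tens of km s$^{-1}$, which is the kinematic problem stated above.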
Radiative Acceleration of a Shell Nebula
========================================
Faced with a theoretical expansion velocity much too slow to explain the blueshifted absorbers, we reexamined the velocity profiles that we obtained for Lyman-$\alpha$, Lyman-$\beta$, C IV, and Si IV. If the blueshifted components are associated with the host galaxy of GRB 021004, these must originate in an expanding outflow or alternatively might have been accelerated radiatively by the GRB. The absence of noticeable absorption-line variability and deceleration in the absorber velocities could be an argument against an expanding outflow near the GRB afterglow. An outflow leading the GRB afterglow would most likely be subject to rapid photoionization and even disappear as the shock overruns it. An alternative model assumes that radiative acceleration by the GRB afterglow plays a crucial role in the kinematics of the wind bubble and shell nebula surrounding a Wolf-Rayet progenitor. The advantage here is that radiative acceleration provides more flexibility in the discreteness and velocity structure of the blueshifted absorbers. Radiative acceleration effects in absorption could also lead to “line-locking” as suggested by high-resolution spectroscopy (Savaglio et al. 2002).
We can directly model the radiative history of a wind-bubble/shell-nebula system by using photoionization models with a fixed prescription for the density profile. For these particular simulations, we have used the photoionization code IONIZEIT (Mirabal et al. 2002b), which includes time-dependent photoionization processes taking place under a predetermined GRB afterglow ionizing flux. Recombination processes are not included since the densities to be considered are not sufficiently high to produce a recombination timescale comparable to the duration of the GRB afterglow. This is a major assumption since the densities at which the recombination timescales become comparable to the duration of the bright phase of the afterglow, $10^{10}-10^{12}$ cm$^{-3}$ (Perna & Loeb 1998), are still allowed on the basis of high-resolution X-ray spectra of GRB afterglows (Mirabal, Paerels, & Halpern 2003). Moreover, observations of water masers in circumstellar envelopes suggest densities of $\sim 5 \times 10^{9}$ cm$^{-3}$ within discrete clumps (Richards, Yates, & Cohen 1998), which would significantly reduce the recombination timescale within overdensities.
In each case the densities and physical regions are chosen to match the observed column densities (Mirabal et al. 2002b). The models used here include elemental abundances of H, He, C, and Si. The input flux $F_{\nu'}(r,t')$ was approximated from the broadband observations of GRB 021004. The functional form for the flux $F_{\nu'}(r,t')$ has two components to accommodate the observed “rise” in the optical light curve at $t_{\rm rise} \approx 0.08$ day (Fox et al. 2003). For $t' \leq 0.08$ day,
$$F_{\nu'}(r_0,t')= 2.21 \times 10^{-26}
\left({\nu' \over 4.55 \times 10^{14}(1+z) {\rm Hz}}\right)^{-1.05}
\left({d^{2}\over (1 + z)r_{0}^{2}}\right)
\left({t'(1+z)\over 0.0066\, {\rm day}}\right)^{-0.8}
{\rm {ergs~cm^{-2}~s^{-1}~Hz^{-1}}};$$
otherwise
$$F_{\nu'}(r_0,t')= 4.66 \times 10^{-28}
\left({\nu' \over 4.55 \times 10^{14}(1+z) {\rm Hz}}\right)^{-1.05}
\left({d^{2}\over (1 + z)r_{0}^{2}}\right)
\left({t'(1+z)\over 1.37\, {\rm day}}\right)^{-0.72}
{\rm {ergs~cm^{-2}~s^{-1}~Hz^{-1}}},$$
where $d$ is the luminosity distance to the burst at $z=2.328$ (assuming $H_0 \simeq 65$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m \simeq
0.3$, $\Omega_{\Lambda} \simeq 0.7$), and $r_{0}$ is the inner radius of the photoionized region set by the shock evolution $r_{0}=2.85 \times 10^{16}\,t_{\rm days}^{1/2}$ cm (Chevalier & Li 2000). The simulations also take into account the effect of synchrotron self-absorption during the initial seconds (Piran 1999).
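The two-component flux prescription above can be transcribed directly; the luminosity distance $D$ below is an assumed value for the stated cosmology, and the function is otherwise a minimal sketch of the two equations:

```python
Z = 2.328      # redshift of GRB 021004
D = 6.3e28     # cm; assumed luminosity distance at z = 2.328 for the stated cosmology

def flux(nu_prime_hz, r0_cm, t_prime_day):
    """Two-component afterglow flux F_nu'(r0, t') in erg cm^-2 s^-1 Hz^-1,
    following the prescription above with its break at t' = 0.08 day."""
    spec = (nu_prime_hz / (4.55e14 * (1 + Z))) ** -1.05   # spectral power law
    geom = D ** 2 / ((1 + Z) * r0_cm ** 2)                # geometric dilution at r0
    if t_prime_day <= 0.08:
        return 2.21e-26 * spec * geom * (t_prime_day * (1 + Z) / 0.0066) ** -0.8
    return 4.66e-28 * spec * geom * (t_prime_day * (1 + Z) / 1.37) ** -0.72
```

The second branch exceeds the first at the break, reproducing the observed rise, while the temporal and spectral slopes ($-0.8$ pre-break, $-1.05$) are preserved.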
Throughout we adopted a standard $n \propto r^{-2}$ scaling and shell-nebula fragments with a density $n_{\rm s}\approx 10^{3}-10^{6}$ cm$^{-3}$, motivated by observations (Moore, Hester, & Scowen 2000). Initially, we considered the simplest smooth wind model for the density profile with overdense shell-nebula fragments superposed. The parameters of the IONIZEIT models were then varied to maximize the agreement with the observed blueshifted absorbers. In order to avoid overionization, the absorbers must be dense with the appropriate filling factor, or alternatively the shell-nebula fragments must be shielded from the GRB emission by attenuating optically thick material at the base of the wind bubble. Satisfactory photoionization models require the shell-nebula fragments to be placed at a distance $R_{\rm s} \gax$ 0.3 pc to reproduce the non-detection of absorption-line variability in GRB 021004. Using the derived column densities and assuming that we are looking at a typical line of sight, we can estimate the physical mass of each fragment $\Delta M$, where $$\Delta M= 4\pi R_{\rm s}^{2} \Delta R n_{\rm s} m_{p} \mu \gax 10^{-4}~{\rm M_{\odot}}.$$ In the case $R_{\rm s} \gax$ 90 pc, this implies $\Delta M \gax$ 10 M$_{\odot}$, which sets a tentative upper limit on the shell-nebula radius simply based on the mass-loss rate.
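The fragment-mass estimate can be sketched as follows; the column density $N = n_{\rm s}\,\Delta R$ adopted below is an illustrative value, not the measured one:

```python
import math

MSUN = 1.989e33   # g
PC = 3.086e18     # cm
MP = 1.673e-24    # g

def fragment_mass_msun(R_s_pc, N_cm2, mu=1.4):
    """Delta M = 4 pi R_s^2 (n_s Delta R) m_p mu in solar masses, where the
    product n_s * Delta R is the hydrogen column density N of the fragment."""
    R = R_s_pc * PC
    return 4.0 * math.pi * R ** 2 * N_cm2 * MP * mu / MSUN

# N ~ 1e16 cm^-2 is an illustrative column, not the measured value:
m_inner = fragment_mass_msun(0.3, 1e16)    # ~1e-4 Msun at 0.3 pc
m_outer = fragment_mass_msun(90.0, 1e16)   # ~10 Msun at 90 pc, since Delta M scales as R_s^2
```

The fixed-column scaling $\Delta M \propto R_{\rm s}^{2}$ carries the $\gax 10^{-4}$ M$_{\odot}$ estimate at 0.3 pc to $\gax$ 10 M$_{\odot}$ at 90 pc.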
With these initial constraints, we proceeded to use the IONIZEIT code to calculate the radiative momentum acquired within individual shell-nebula fragments. The fine-tuning for any configuration derives from the balance required to prevent extreme overionization of the blueshifted absorbers while still allowing efficient radiative acceleration. In particular, the radiative acceleration $g(r,t)$ as a function of time can be expressed as $$g(r,t) = {\kappa(r,t) L(t) \over 4 \pi r^{2} c},$$ where $L(t)$ corresponds to the total luminosity and $\kappa(r,t)$ represents the opacity at a distance $r$. The radiative flux as a function of time can be estimated directly within each shell-nebula fragment by following the prescription in Mirabal et al. (2002b): $$F_{\nu}(r_{i+1},t) = F_{\nu}(r_{i},t) e^{-\tau_{\nu,i}}
\left({r_{i} \over r_{i+1}}\right)^{2},$$ where $\tau_{\nu,i}$ stands for the photoionization optical depth estimated within each shell-nebula fragment $i$. The product of the radiative acceleration and the interval between time steps $\Delta t$ yields the total velocity acquired by a shell-nebula fragment as a function of time, $$v(t) = v_{o} + \sum_{r,t} g(r,t) \Delta t,$$ where $v_{o}$ is the initial velocity in the shell nebula. This calculation assumes that the blueshifted absorbers are driven mainly by bound-free absorption transferred to the shell-nebula fragments. Additional mechanisms that can contribute to the radiative acceleration term are bound-bound processes, free-electron scattering, and line driving. Generally, spectral lines can play an important role in enhancing the electron scattering coefficient (Castor, Abbott, & Klein 1975; Gayley 1995). However, the available time for scattering after the GRB is much shorter than in long-lived stellar winds or active galactic nuclei, where line driving might be most efficient (Proga, Stone, & Kallman 2000). A full two-dimensional, time-dependent simulation of a radiation-driven wind around a GRB is imperative to determine the contribution from different mechanisms.
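A toy version of this velocity build-up, with a constant opacity and an assumed power-law luminosity decay in place of the full IONIZEIT treatment, might look like the following; all of the numerical inputs here are illustrative assumptions, not fitted values:

```python
import math

C = 2.998e10   # cm/s
PC = 3.086e18  # cm
DAY = 86400.0  # s

def radiative_velocity(L0, t0_day, T_day, kappa, r_pc, v0_kms=40.0, nsteps=100000):
    """Toy version of v(t) = v0 + sum g(r,t) dt with g = kappa L / (4 pi r^2 c),
    for a fragment held at fixed radius r, constant opacity kappa, and an
    assumed L(t) = L0 (t/t0)^-1 luminosity decay."""
    r = r_pc * PC
    t0, T = t0_day * DAY, T_day * DAY
    dlnt = math.log(T / t0) / nsteps
    v = v0_kms * 1e5
    t = t0
    for _ in range(nsteps):
        L = L0 * (t / t0) ** -1.0
        g = kappa * L / (4.0 * math.pi * r ** 2 * C)
        v += g * t * dlnt          # dt = t dln(t) on a logarithmic time grid
        t *= math.exp(dlnt)
    return v / 1e5                 # km/s

# kappa ~ 100 cm^2/g and an early luminosity ~2.4e50 erg/s push a fragment at
# 0.3 pc from ~40 km/s to ~3e3 km/s within ~10 days:
print(radiative_velocity(2.4e50, 0.0066, 10.0, 100.0, 0.3))
```

For $L \propto t^{-1}$ the integral reduces analytically to $\Delta v = \kappa L_0 t_0 \ln(T/t_0)/(4\pi r^2 c)$, which the logarithmic grid reproduces and which shows how velocities of order $10^{3}$ km s$^{-1}$ can plausibly be reached at sub-parsec distances.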
Figure 9 illustrates the total velocity acquired by a fragmented shell nebula as a function of time. The model assumes that the shell nebula is distributed over a thick annulus located $\gax$ 0.3 pc from the GRB and that the fragments are overdense at 0.3 pc, 0.54 pc, and 0.8 pc. Clearly, the radiative acceleration model shown in Figure 9 reproduces the total velocity required to accelerate individual blueshifted absorbers to the observed velocities. These results are in agreement with the discussion by Schaefer et al. (2003). In order to reach the observed velocities and avoid major absorption-line variability, the bulk of the radiative acceleration needs to take place during the early stages of the afterglow, which is consistent with the model. The faster-moving fragments are those exposed to a larger flux, and hence acquire more radiative acceleration. The slower fragments can be explained reasonably if they are more distant or less opaque than the fragment closest to the GRB. In general, shell nebulae can present low opacities to radiative flux. This seems to be confirmed by observations of the NGC 6888 nebula where only 2$\%$ of the ionizing photons are thought to be processed within the shell nebula (Moore, Hester, & Scowen 2000). Alternatively, the slower fragments might have been subject to deceleration as these encountered the surrounding medium. Although our simulations can reproduce the velocities of the absorbers, we cannot rule out that the absorbers are very distant and completely unrelated to the GRB event. However, the spectropolarimetric results (Wang et al. 2003) hint at an intrinsic origin for the absorbers.
For simplicity, processes such as multiple photon scatterings, density gradients within each fragment, and dust destruction/acceleration have been ignored but warrant consideration in more detailed modeling of radiative acceleration processes around GRBs. Because the true broadband GRB photoionizing flux at early times is not directly accessible, the models described thus far should be considered tentative. While it can be argued that the actual GRB photoionizing flux, density structure, and opacity within the shell-nebula fragments could be quite different, we believe that variations about the initial estimates can be accommodated by modifying the placement and density structure within each shell-nebula fragment without altering our overall picture. It is important to note that observed shell nebulae span diameters ranging from 0.3 pc to 180 pc (Marston 1997; Chu, Weis, & Garnett 1999) and that only about 35$\%$ of all Wolf-Rayet stars seem to be surrounded by overdense shell nebulae (Marston 1997). Furthermore, shell nebulae typically display intrinsic expansion velocities $v \approx 40$ km s$^{-1}$ that can only be resolved with high-resolution spectroscopy. Taken together, these facts imply that shell nebulae around GRBs might have been missed in the past either because they were absent, too slow, or completely photoionized by the GRB emission. Another important factor is the morphology of shell nebulae, which might have a decisive effect on the angular geometry of the absorbing material (Chu 1991). GRB 021004 could be a fortunate instance where the shell nebula around a GRB progenitor was located at an ideal distance from the GRB to avoid complete photoionization and simply acquire sufficient radiative acceleration to produce resolved individual blueshifted absorbers.
Alternative Explanations for the Blueshifted Absorbers
======================================================
Supernova Remnant
-----------------
Having made an argument for accelerated shell-nebula fragments to explain the abundances and kinematics of the blueshifted absorbers, we now evaluate whether the observations could still be compatible with a different origin. Of the numerous models for GRB progenitors, the supranova model (Vietri & Stella 1998) and the magnetar models (Wheeler, Meier, & Wilson 2002) predict a possible association with a supernova remnant (SNR) that would already be in place prior to the GRB onset. This possibility has been raised to explain the blueshifted absorbers in the GRB 021004 afterglow spectrum (Wang et al. 2003) and its deviations about the light curve (Lazzati et al. 2002).
Assuming that the observed velocities reflect the mechanical momentum acquired during the free expansion of the SNR, together with the distance constraint obtained from the photoionization simulations ($R_{\rm s} \gax$ 0.3 pc), yields a minimum age for the remnant of $t_{\rm SNR}$ $\gax$ 100 yrs. The estimated age, $t_{\rm SNR}$, appears high relative to simulations of neutron stars, which show major difficulties maintaining differential rotation for more than a few minutes (Shapiro 2000). However, $t_{\rm SNR}$ is still barely consistent with the analytical supranova model which assumes magnetic fields of $\approx 10^{12}-10^{13}$ G, and a SNR age of a few weeks to several years ($\sim$ 100 yrs) (Vietri & Stella 1999). Possibly a bigger difficulty facing the SNR scenario is the absence of strong blueshifted Al, Fe, and O absorbers that should be evident in the remnant of a core-collapse SN (Hughes et al. 2000; Patat et al. 2001).
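The minimum age quoted above is simply the free-expansion crossing time of the fastest absorber out to the minimum photoionization distance:

```python
PC_KM = 3.086e13   # km per parsec
YR_S = 3.156e7     # s per year

def free_expansion_age_yr(R_pc, v_kms):
    """Crossing time t = R / v, a lower limit on the age of freely expanding ejecta."""
    return R_pc * PC_KM / v_kms / YR_S

# Fastest absorber (3,155 km/s) at the minimum photoionization distance of 0.3 pc:
t_min = free_expansion_age_yr(0.3, 3155.0)   # ~90 yr, hence t_SNR >~ 100 yr
```

The slower absorbers at 450 and 990 km s$^{-1}$ would require correspondingly longer free-expansion times at the same radius.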
Considering that the observed abundances are those around a GRB progenitor, then a massive star that is part of a binary system embedded within the old SNR of its companion is also a possibility (Fryer et al. 2002). In that scenario, the hydrogen envelope of the actual GRB progenitor might have been lost via mass transfer to a companion that exploded as a SN following mass transfer. Only after removal via mass transfer of the shear created by a hydrogen envelope, the actual GRB progenitor might have retained sufficient angular momentum ($j \gax$ $10^{16}$ cm$^{2}$ s$^{-1}$) to produce a collapsar (MacFadyen & Woosley 1999). Apart from envelope stripping, an additional advantage of a binary system is the collision of stellar winds that can produce turbulence (Kallrath 1991; Stevens, Blondin, & Pollock 1992) and could account for the clumpy structure observed in the optical decay. This latter scenario is still consistent with a Wolf-Rayet star GRB progenitor.
QSO Absorption-Line Systems
---------------------------
QSO absorption-line systems provide a more obvious connection to blueshifted absorbers. There are numerous QSO observations displaying prominent high-velocity blueshifted absorbers (Weymann et al. 1979; Anderson et al. 1987). These narrow lines are thought to form either in ejecta or infall near the QSO or in intervening systems that coincidentally fall along the line of sight to the QSO. An examination of the GRB 021004 afterglow spectrum reveals no definite evidence that the host galaxy is an active QSO, hence a connection with intrinsic QSO gas outflows is not implied. Nevertheless, we cannot rule out the possibility that a QSO accelerated the absorbers and became dormant after a duty cycle of $\sim 10^{7}$ yrs (Wyithe & Loeb 2002). The scenario does require that the QSO outflow took place nearly aligned with the line of sight to the GRB, which seems highly improbable.
Supershells and Superwinds
--------------------------
The inferred SFR $\approx 15$ M$_{\odot}$ yr$^{-1}$ for the host galaxy of GRB 021004 (Djorgovski et al. 2002) is well above the average rate at that redshift. Interestingly, a number of powerful extragalactic starbursts show emission-line outflows at velocities around $10^{2}-10^{3}$ km s$^{-1}$ (Heckman, Armus, & Miley 1990). The majority of these “superwind” measurements are made from emission-line widths. In the case of GRB 021004, the blueshifted absorbers are resolved and span a larger velocity range than the wind velocity inferred from the Lyman-$\alpha$ emission-line profile. If a large-scale superwind venting into the halo of the host galaxy is responsible for the blueshifted absorbers, one might expect Al II from interstellar gas to be blueshifted with respect to the Lyman-$\alpha$ emission as part of the expanding outflow (Heckman et al. 2000). This is not the case in the GRB 021004 afterglow spectrum (§2). A different possibility is a chance interception of three local supershells associated with star-forming regions within the host galaxy driven by SNe and stellar winds in starburst bubbles (Heiles 1979). In theory, the large SFR could lead naturally to multiple energetic OB associations ($\gax$ 1000 stars); however, velocities $\geq$ 500 km s$^{-1}$ are rarely observed in individual shells around our Galaxy (Heiles 1979).
Outflowing Systems
------------------
In addition to the well-established intrinsic absorbers, there is a possible association with intervening gas extended over 3,155 km s$^{-1}$ and observed in projection along this line of sight. The system could be a very high-velocity analog of local outflowing systems (Savage et al. 2003). However, an extension of structure over 3,155 km s$^{-1}$ in velocity space appears highly unlikely based on the observed velocity distribution through the Milky Way. Moreover, the host galaxy would have to spill metals within the Lyman-$\alpha$ clouds to create the observed metal enrichment. Finally, a distant origin would be ruled out if the reported polarization changes across the Lyman-$\alpha$ absorption and continuum are intrinsic to the host galaxy (Wang et al. 2003).
Implications for the GRB Progenitors
====================================
Even though we cannot yet definitively rule out some of the alternative explanations, it is apparent from the analysis that a shell nebula around a massive-star progenitor is likely to give rise to the blueshifted absorbers in the spectrum of the GRB 021004 afterglow. The large deviations in the optical decay of the GRB 021004 afterglow (see §3.1) are unusual and suggest that additional effects such as small-scale inhomogeneities in the circumburst medium (Wang & Loeb 2000; Mirabal et al. 2002a), structure within a jet (Kumar & Piran 2000), and/or “refreshed” collisions among separate shells of ejecta are taking place (Rees & Mészáros 1998). Different groups have fitted the $R$-band data (Lazzati et al. 2002; Nakar et al. 2003), as well as the broadband data (Heyl & Perna 2003), to explore each possibility. Although several models provide reasonable fits to the $R$-band data, the broadband modeling finds that a clumpy medium produced by density fluctuations provides a more reasonable fit to the data (Heyl & Perna 2003). The interpretation of density fluctuations in the GRB 021004 circumburst medium is entirely consistent with the predicted density bumps that arise when stellar winds sweep up the ISM or the material shed by the star in previous stages of evolution (Mirabal et al. 2002a; Ramirez-Ruiz et al. 2001). It is also possible that a cocoon from a progenitor stellar envelope can be displaced along the direction of the GRB relativistic jet (Ramirez-Ruiz, Celotti, & Rees 2002). A number of observations of Wolf-Rayet stars confirm that stellar winds are indeed not homogeneous but rather clumpy (Nugis, Crowther, & Willis 1998; Lépine et al. 1999).
Upon examination of Figure 3, it is clear that the OT also exhibits a distinct color evolution over time (Bersier et al. 2003; Heyl & Perna 2003). On its way to the Wolf-Rayet phase, a main-sequence star is thought to evolve through a supergiant phase (Abbott & Conti 1987). The mass loss in the supergiant phase leads to the formation of a dense supergiant material shell. After the star enters its Wolf-Rayet phase, the Wolf-Rayet wind slowly starts sweeping up the supergiant material, eventually overtaking the main-sequence material from the star. The streaming of winds, and wind collisions taking place throughout the mass-loss history of the star, result in a complex morphology that might lead to distinct color changes and a spectrum redder than the typical synchrotron spectrum (Ramirez-Ruiz et al. 2001) as seen in Figure 3, especially if these are dusty winds accelerated by the stellar luminosity. We postulate that if the color changes are external to the afterglow/jet evolution, the changes might be intrinsically related to the mass-loss history and dust patterns within a massive stellar wind (Garc[[í]{}]{}a-Segura & Mac Low 1995a,b). Two-dimensional gasdynamical wind simulations including dust are necessary to explore this possibility.
The suggestion of a fragmented shell nebula around the GRB 021004 progenitor accompanied by a clumpy wind medium partially meets the conditions required by the collapsar model (Woosley 1993). It is associated with a massive star and a star-forming region (MacFadyen & Woosley 1999). The main theoretical difficulty with the collapsar model has been the requirement for retaining sufficient angular momentum (MacFadyen, Woosley, & Heger 2001). Possible solutions include metal-deficient stars and/or Wolf-Rayet stars that have lost most of their envelope through an efficient progenitor wind or to a binary companion (MacFadyen & Woosley 1999). These solutions remove the torques induced by an outer envelope and conserve adequate rotation. The interpretation of an enriched shell-nebula around the GRB 021004 progenitor hints at the possibility that a massive-star GRB progenitor might have lost most of its envelope prior to collapse. If this were the case, a stripped core would ease conservation of angular momentum requirements prior to iron-core collapse and support a connection with the collapsar GRB model. Unfortunately, because we probe only a single line of sight towards the GRB, there is little information about the three-dimensional geometry and evolution of the collapse. Therefore, it is crucial to complement time-variability studies with contemporary polarization measurements that might provide information about the evolution of the jet (Sari 1999).
Conclusions and Future Work
===========================
The presence of blueshifted absorbers in the spectrum of the GRB 021004 afterglow presents possible evidence for a fragmented shell nebula located $\gax$ 0.3 pc from the GRB site that has been radiatively accelerated by the GRB afterglow emission. While at this stage we cannot rule out an origin related to a dormant QSO, large-scale superwinds, or an old supernova remnant, these alternative explanations present some problems. The mass-loss process in certain massive stars might conserve sufficient angular momentum to induce an efficient iron-core collapse or collapsar. If this interpretation is correct, the observational data on GRB 021004 might be the first direct evidence of a Wolf-Rayet star GRB progenitor. Additional spectroscopy of high-ionization absorbers such as C IV, Si IV, N V, and O VI along with associated low-ionization species will clarify this possibility, with the caveat that nearby shell nebulae might be rapidly photoionized by the GRB and that only 35$\%$ of all Wolf-Rayet stars show evidence of overdense shell nebulae. In this context, the advent of the [*Swift*]{} satellite (Gehrels 2000) should provide unique access to early multiwavelength observations of GRB afterglows that will be fundamental for determining the photoionization history and radiative acceleration evolution of absorbers.
Interestingly, the inhomogeneities about the optical decay of the GRB 021004 afterglow imply that overdensities in a clumpy medium might be responsible for bumps in the OT decay. This finding motivates the need to model highly structured circumburst media beyond the simplest uniform and wind-like profiles. It also calls for dedicated observatories and observers to provide continuous coverage for a bigger sample of GRB afterglows. It is possible that overdensities might explain the presence of some late-time secondary peaks seen in other GRBs (Bloom et al. 2002; Garnavich et al. 2003) if SN spectral signatures are missing in the late-time spectrum. In fact, a consequence of the shell nebula model is that a rebrightening in the light curve should occur once the shock overruns the shell-nebula fragments (Ramirez-Ruiz et al. 2001). In addition, blueshifted absorbers from a shell nebula should disappear as the shock reaches that point. Unfortunately, by the time this happens in the GRB 021004 afterglow decay ($\gax$ 1 yr after the burst), the light would be completely dominated by the host galaxy. Continued late-time photometry and spectroscopy are urged in order to search for this definite signature in other GRBs. Finally, if some GRBs are produced by core-collapse in Wolf-Rayet stars, type Ib or Ic supernovae might be a viable consequence after the violent event (Smartt et al. 2002). The recent discovery of SN 2003dh associated with GRB 030329 (Stanek et al. 2003; Chornock et al. 2003) could provide further constraints on the nature of the GRB progenitors and another link between Wolf-Rayet stars and GRBs.
We would like to thank Sebastiano Novati and Vincenzo Cardone for obtaining observations at the MDM Observatory, and the Keck Observatory staff for their assistance. We thank Jim Applegate, Orsola De Marco, and Mordecai-Mark Mac Low for useful conversations. We also acknowledge Eric Gotthelf for allowing us to use his Alpha computer. This material is based upon work supported by the National Science Foundation under Grants AST-0206051 to J. P. H. and AST-9987438 to A. V. F.
Abbott, D. C., & Conti, P. S. 1987, ARA&A, 25, 113
Aldcroft, T., Bechtold, J., & Foltz, C. 1997, in ASP Conf. Ser. 128, Mass Ejection from Active Galactic Nuclei, ed. R. Weymann, I. Shlosman, & N. Arav (San Francisco: ASP), 25
Anders, E., & Grevesse, N. 1989, Geochim. Cosmochim. Acta 53, 197
Anderson, S. F., Weymann, R. J., Foltz, C. B., & Chaffee, F. H., Jr. 1987, AJ, 94, 278
Bersier, D., et al. 2003, ApJ, 548, L43
Bloom, J. S., Kulkarni, S. R., & Djorgovski, S. G. 2002, AJ, 123, 1111
Bloom, J. S., et al. 2002, ApJ, 572, L45
Castor, J., McCray, R., & Weaver, R. 1975, ApJ, 200, L107
Castor, J. I., Abbott, D. C., & Klein, R. I. 1975, ApJ, 195, 157
Chevalier, R. A., & Li, Z.-Y. 2000, ApJ, 536, 195
Chornock, R., & Filippenko, A. V. 2002, GCN Circular 1605
Chornock, R., Foley, R. J., Filippenko, A. V., Papenkova, M., & Weisz, D. 2003, GCN Circular 2131
Colgate, S. A. 1974, ApJ, 187, 333
Crowther, P. A., De Marco, O., & Barlow, M. J. 1998, MNRAS, 296, 367
Chu, Y.-H. 1991, in Wolf-Rayet Stars Interrelations with Other Massive Stars in Galaxies, in Proc. IAU Symposium No. 143, ed. van der Hucht, K. A., & Hidayat, B. (Dordrecht: Kluwer), 349
Chu, Y.-H., Weis, K., & Garnett, D. R. 1999, AJ, 117, 1433
Djorgovski, S. G., et al. 2001, in Gamma Ray Bursts in the Afterglow Era, ed. E. Costa, et al. (Springer: Berlin), 218
Djorgovski, S. G., et al. 2002, GCN Circular 1620
Draine, B. T. 2000, ApJ, 532, 273
Fox, D. W. 2002, GCN Circular 1564
Fox, D. W., et al. 2003, Nature, 422, 284
Fryer, C. L., Heger, A., Langer, N., & Wellstein, S. 2002, ApJ, 578, 335
Garc[[í]{}]{}a-Segura, G., & Mac Low, M.-M. 1995a, ApJ, 455, 145
Garc[[í]{}]{}a-Segura, G., & Mac Low, M.-M. 1995b, ApJ, 455, 160
Garnavich, P. M., et al. 2003, ApJ, 582, 924
Gayley, K. G. 1995, ApJ, 454, 410
Gehrels, N. A. 2000, Proc. SPIE 4140, 42
Goodrich, R. W., & Miller, J. S. 1995, ApJ, 448, L73
Halpern, J. P., Armstrong, E. K., Espaillat, C. C., & Kemp, J. 2002, GCN Circular 1578
Halpern, J. P., et al. 2000, ApJ, 543, 697
Heckman, T. M., Armus, L., & Miley, G. K. 1990, ApJS, 74, 833
Heckman, T. M., Lehnert, M. D., Strickland, D. K., & Armus, L. 2000, ApJS, 129, 493
Heiles, C. 1979, ApJ, 229, 533
Henden, A. A. 2002, GCN Circular 1583
Heyl, J. S., & Perna, R. 2003, ApJL, accepted (astro-ph/0211256)
Holland, S. T., et al. 2003, AJ, submitted (astro-ph/0211094)
Hughes, J. P., Rakowski, C. E., Burrows, D. N., & Slane, P. O. 2000, ApJ, 528, L109
Kallrath, J. 1991, MNRAS, 248, 653
Kumar, P., & Piran, T. 2000, ApJ, 535, 152
Kwitter, K. B. 1984, ApJ, 287, 840
Lazzati, D., Rossi, E., Covino, S., Ghisellini, G., & Malesani, D. 2002, A&A, 396, L5
Lépine, S., et al. 2000, AJ, 120, 3201
Levan, A., Fruchter, A., Fynbo, J., Vreeswijk, P., & Gorosabel, J. 2003, GCN Circular 2240
Li, Z.-Y., & Chevalier, R. A. 2003, ApJ, submitted (astro-ph/0303650)
MacFadyen, A. I., & Woosley, S. E. 1999, ApJ, 524, 262
MacFadyen, A. I., Woosley, S. E., & Heger, A. 2001, ApJ, 550, 410
Malesani, D., et al. 2002, GCN Circular 1607
Marston, A. P. 1997, ApJ, 475, 188
Matheson, T., Filippenko, A. V., Ho, L. C., Barth, A. J., & Leonard, D. C. 2000, AJ, 120, 1499
Matheson, T., et al. 2003, ApJ, 582, L5
Mirabal, N., Halpern, J. P., Chornock, R., & Filippenko, A. V. 2002a, GCN Circular 1618
Mirabal, N., et al. 2002b, ApJ, 578, 818
Mirabal, N., Paerels, F., & Halpern, J. P. 2003, ApJ, 587, 128
Møller, P., et al. 2002, A&A, 396, L21
Moore, B. D., Hester, J. J., & Scowen, P. A. 2000, AJ, 119, 2991
Nakar, E., Piran, T., & Granot, J. 2003, New Astr., submitted (astro-ph/0210631)
Nugis, T., Crowther, P. A., & Willis, A. J. 1998, A&A, 333, 956
Oke, J. B., & Gunn, J. E. 1983, ApJ, 266, 713
Oke, J. B., et al. 1995, PASP, 107, 375
Panagia, N., Scuderi, S., Gilmozzi, R., Challis, P. M., Garnavich, P. M., & Kirshner, R. P. 1996, ApJ, 459, L17
Panaitescu, A., & Kumar, P. 2002, ApJ, 571, 779
Parker, R. A. R. 1978, ApJ, 224, 873
Patat, F., et al. 2001, ApJ, 555, 900
Perna, R., & Lazzati, D. 2002, ApJ, 580, 261
Perna, R., & Loeb, A. 1998, ApJ, 501, 467
Piran, T., 1999, Phys. Rep., 314, 575
Proga, D., Stone, J. M., & Kallman, T. R. 2000, ApJ, 543, 686
Ramirez-Ruiz, E., Celotti, A., & Rees, M. J. 2002, MNRAS, 337, 1349
Ramirez-Ruiz, E., Dray, L. M., Madau, P., & Tout, C. A. 2001, MNRAS, 327, 829
Rees, M. J., & Mészáros, P. 1998, ApJ, 496, L1
Rhoads, J. E. 1999, ApJ, 525, 737
Richards, A. M. S., Yates, J. A., & Cohen, R. J. 1998, MNRAS, 299, 319
Ryu, D., & Vishniac, E. T. 1988, ApJ, 331, 350
Sako, M., & Harrison, F. A. 2002, GCN Circular 1624
Salamanca, I., Rol, E., Wijers, R., Ellison, S., Kaper, L., & Tanvir, N. 2002, GCN Circular 1611
Sari, R. 1999, ApJ, 524, L43
Sari, R., & Esin, A. A. 2001, ApJ, 548, 787
Sari, R., Piran, T., & Halpern, J. P. 1999, ApJ, 519, L17
Savage, B. D., & Sembach, K. R. 1996, ARA&A, 34, 279
Savage, B. D., et al. 2000, ApJ, 538, L27
Savage, B. D., et al. 2003, ApJS, 146, 125
Savaglio, S., et al. 2002, GCN Circular 1633
Scargle, J. D. 1973, ApJ, 179, 705
Schaefer, B., et al. 2003, ApJ, submitted (astro-ph/0211189)
Shapiro, S. L. 2000, ApJ, 544, 397
Shirasaki, C., et al. 2002, GCN Circular 1565
Smartt, S. J., Vreeswijk, P. M., Ramirez-Ruiz, E., Gilmore, G. F., Meikle, W. P. S., Ferguson, A. M. N., & Knapen, J. H. 2002, ApJ, 572, L147
Spitzer, L. 1978, in Physical Processes in the Interstellar Medium (New York: Wiley), 51
Spitzer, L. 1996, ApJ, 458, L29
Stanek, K. Z., et al. 2003, ApJ, submitted (astro-ph/0304173)
Steigman, G., Strittmatter, P. A., & Williams, R. E. 1975, ApJ, 198, 575
Stevens, I. R., Blondin, J. M., & Pollock, A. M. T. 1992, ApJ, 386, 265
Stone, R. P. S. 1977, ApJ, 218, 767
Vietri, M., & Stella, L. 1998, ApJ, 507, L45
Vietri, M., & Stella, L. 1999, ApJ, 527, L43
Wang, X., & Loeb, A. 2000, ApJ, 535, 788
Wang, L. 1991, A&A, 246, L69
Wang, L., Baade, D., Höflich, P., & Wheeler, J. C. 2003, ApJL, submitted (astro-ph/0301266)
Weymann, R. J., Williams, R. E., Peterson, B. M., & Turnshek, D. A. 1979, ApJ, 234, 33
Wheeler, J. C., Meier, D. L., & Wilson, J. R. 2002, ApJ, 568, 807
Woosley, S. E. 1993, ApJ, 405, 273
Wyithe, J. S. B., & Loeb, A. 2002, ApJ, 581, 886
[lcrc]{} UT Date & Filter & Magnitude & Telescope\
2002 Oct 5.118 & $B$ & $19.95 \pm 0.10$ & MDM 1.3 m\
2002 Oct 5.143 & $B$ & $19.90 \pm 0.10$ & MDM 1.3 m\
2002 Oct 5.169 & $B$ & $20.09 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.195 & $B$ & $20.12 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.211 & $B$ & $20.17 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.227 & $B$ & $20.21 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.248 & $B$ & $20.22 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.265 & $B$ & $20.23 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.280 & $B$ & $20.18 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.297 & $B$ & $20.32 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.313 & $B$ & $20.22 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.329 & $B$ & $20.27 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.345 & $B$ & $20.23 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.360 & $B$ & $20.15 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.376 & $B$ & $20.24 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.396 & $B$ & $20.28 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.411 & $B$ & $20.22 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.426 & $B$ & $20.25 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.453 & $B$ & $20.34 \pm 0.05$ & MDM 1.3 m\
2002 Oct 5.469 & $B$ & $20.27 \pm 0.05$ & MDM 1.3 m\
2002 Oct 5.485 & $B$ & $20.35 \pm 0.05$ & MDM 1.3 m\
2002 Oct 6.325 & $B$ & $21.03 \pm 0.02$ & MDM 1.3 m\
2002 Oct 7.318 & $B$ & $21.26 \pm 0.02$ & MDM 1.3 m\
2002 Oct 8.359 & $B$ & $21.66 \pm 0.03$ & MDM 1.3 m\
2002 Oct 8.484 & $B$ & $21.72 \pm 0.05$ & MDM 1.3 m\
2002 Oct 9.224 & $B$ & $21.90 \pm 0.03$ & MDM 1.3 m\
2002 Oct 11.303 & $B$ & $22.27 \pm 0.04$ & MDM 1.3 m\
2002 Oct 12.316 & $B$ & $22.52 \pm 0.11$ & MDM 1.3 m\
2002 Nov 27.19 & $B$ & $24.53 \pm 0.06$ & MDM 2.4 m\
2002 Oct 5.123 & $V$ & $19.39 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.147 & $V$ & $19.42 \pm 0.07$ & MDM 1.3 m\
2002 Oct 5.176 & $V$ & $19.52 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.199 & $V$ & $19.53 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.215 & $V$ & $19.56 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.231 & $V$ & $19.57 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.253 & $V$ & $19.57 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.269 & $V$ & $19.62 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.285 & $V$ & $19.61 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.301 & $V$ & $19.69 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.318 & $V$ & $19.60 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.333 & $V$ & $19.62 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.349 & $V$ & $19.66 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.365 & $V$ & $19.59 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.380 & $V$ & $19.58 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.400 & $V$ & $19.66 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.415 & $V$ & $19.75 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.442 & $V$ & $19.67 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.457 & $V$ & $19.70 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.473 & $V$ & $19.73 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.490 & $V$ & $19.73 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.126 & $R$ & $18.91 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.150 & $R$ & $18.89 \pm 0.06$ & MDM 1.3 m\
2002 Oct 5.185 & $R$ & $19.12 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.202 & $R$ & $19.16 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.218 & $R$ & $19.17 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.235 & $R$ & $19.13 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.257 & $R$ & $19.20 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.274 & $R$ & $19.18 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.289 & $R$ & $19.19 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.305 & $R$ & $19.19 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.321 & $R$ & $19.18 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.337 & $R$ & $19.20 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.353 & $R$ & $19.19 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.368 & $R$ & $19.16 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.384 & $R$ & $19.17 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.403 & $R$ & $19.22 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.419 & $R$ & $19.21 \pm 0.02$ & MDM 1.3 m\
2002 Oct 5.445 & $R$ & $19.19 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.461 & $R$ & $19.27 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.476 & $R$ & $19.29 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.493 & $R$ & $19.24 \pm 0.03$ & MDM 1.3 m\
2002 Oct 6.112 & $R$ & $19.84 \pm 0.03$ & MDM 1.3 m\
2002 Oct 6.294 & $R$ & $19.91 \pm 0.02$ & MDM 1.3 m\
2002 Oct 6.485 & $R$ & $20.00 \pm 0.02$ & MDM 1.3 m\
2002 Oct 7.110 & $R$ & $20.19 \pm 0.03$ & MDM 1.3 m\
2002 Oct 7.276 & $R$ & $20.14 \pm 0.02$ & MDM 1.3 m\
2002 Oct 7.472 & $R$ & $20.21 \pm 0.04$ & MDM 1.3 m\
2002 Oct 8.295 & $R$ & $20.47 \pm 0.02$ & MDM 1.3 m\
2002 Oct 8.427 & $R$ & $20.52 \pm 0.03$ & MDM 2.4 m\
2002 Oct 9.182 & $R$ & $20.85 \pm 0.04$ & MDM 1.3 m\
2002 Oct 9.334 & $R$ & $20.79 \pm 0.02$ & MDM 2.4 m\
2002 Oct 10.298 & $R$ & $21.03 \pm 0.02$ & MDM 2.4 m\
2002 Oct 11.258 & $R$ & $21.23 \pm 0.04$ & MDM 1.3 m\
2002 Oct 11.401 & $R$ & $21.30 \pm 0.03$ & MDM 2.4 m\
2002 Oct 12.267 & $R$ & $21.40 \pm 0.04$ & MDM 1.3 m\
2002 Oct 12.330 & $R$ & $21.44 \pm 0.03$ & MDM 2.4 m\
2002 Oct 15.297 & $R$ & $22.18 \pm 0.07$ & MDM 2.4 m\
2002 Oct 16.330 & $R$ & $22.33 \pm 0.10$ & MDM 2.4 m\
2002 Oct 25.270 & $R$ & $23.10 \pm 0.06$ & MDM 2.4 m\
2002 Nov 25.125 & $R$ & $23.85 \pm 0.08$ & MDM 2.4 m\
2002 Nov 26.177 & $R$ & $23.87 \pm 0.08$ & MDM 2.4 m\
2002 Oct 5.130 & $I$ & $18.42 \pm 0.07$ & MDM 1.3 m\
2002 Oct 5.155 & $I$ & $18.40 \pm 0.08$ & MDM 1.3 m\
2002 Oct 5.191 & $I$ & $18.46 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.208 & $I$ & $18.45 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.223 & $I$ & $18.55 \pm 0.05$ & MDM 1.3 m\
2002 Oct 5.240 & $I$ & $18.55 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.261 & $I$ & $18.54 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.278 & $I$ & $18.54 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.293 & $I$ & $18.56 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.302 & $I$ & $18.57 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.325 & $I$ & $18.57 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.341 & $I$ & $18.53 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.357 & $I$ & $18.55 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.372 & $I$ & $18.53 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.392 & $I$ & $18.54 \pm 0.03$ & MDM 1.3 m\
2002 Oct 5.408 & $I$ & $18.59 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.423 & $I$ & $18.54 \pm 0.05$ & MDM 1.3 m\
2002 Oct 5.449 & $I$ & $18.67 \pm 0.04$ & MDM 1.3 m\
2002 Oct 5.465 & $I$ & $18.68 \pm 0.05$ & MDM 1.3 m\
2002 Oct 5.481 & $I$ & $18.65 \pm 0.05$ & MDM 1.3 m\
2002 Oct 5.497 & $I$ & $18.61 \pm 0.05$ & MDM 1.3 m\
[ccccc]{} Ly $\delta$(949.74 Å) & 3261.08 & 2.328 & 1.39$\times 10^{-2}$ & ...\
Ly $\gamma$(972.54 Å) & 3203.94 & 2.294 & 2.90$\times 10^{-2}$ & 2.04 $\pm$ 0.60\
C III(977.02 Å) & 3214.88 & 2.290 & 7.62$\times 10^{-1}$ & 4.07 $\pm$ 0.84\
Ly $\gamma$(972.54 Å) & 3231.41 & 2.323 & 2.90$\times 10^{-2}$ & 3.11 $\pm$ 0.55\
C III(977.02 Å) & 3247.57 & 2.324 & 7.62$\times 10^{-1}$ & 3.68 $\pm$ 0.62\
Ly $\beta$(1025.72 Å) & 3376.11 & 2.292 & 7.91$\times 10^{-2}$ & 1.95 $\pm$ 0.55\
Ly $\beta$(1025.72 Å) & 3406.40 & 2.321 & 7.91$\times 10^{-2}$ & 7.17 $\pm$ 0.52\
O VI(1031.93 Å) & 3398.15 & 2.293 & 1.33$\times 10^{-1}$ & $\leq$ 1.02\
+O VI(1037.62 Å) & 3416.88 & 2.293 & 6.61 $\times 10^{-2}$ &\
Si II(1194.75 Å) & 3975.35 & 2.327 & 6.23$\times 10^{-1}$ & 2.37 $\pm$ 0.31\
+Al II(1670.79 Å) & & 1.379 & 1.83 & 3.37 $\pm$ 0.43\
– & 3613.91 & – & – & –\
– & 3626.13 & – & – & –\
– & 3667.12 & – & – & –\
– & 3680.84 & – & – & –\
Ly $\alpha$(1215.67 Å) & 4006.11 & 2.295 & 4.16$\times 10^{-1}$ & 3.91 $\pm$ 0.57\
Ly $\alpha$(1215.67 Å) & 4006.11 & 2.295 & 4.16$\times 10^{-1}$ & 3.91 $\pm$ 0.57\
Ly $\alpha$(1215.67 Å) & 4034.87 & 2.319 & 4.16$\times 10^{-1}$ & 4.82 $\pm$ 0.60\
Ly $\alpha$(1215.67 Å) & 4046.24 & 2.328 & 4.16$\times 10^{-1}$ & emission line\
N V (1238.82 Å) & 4079.37 & 2.293 & 1.57$\times 10^{-1}$ & $\leq$ 0.37\
+N V (1242.80 Å) & 4092.54 & 2.293 & 7.82$\times 10^{-2}$ &\
Al II(1670.79 Å) & 4345.80 & 1.601 & 1.83 & 0.80 $\pm$ 0.24\
Si IV(1393.76 Å) & 4590.26 & 2.293 & 5.14$\times 10^{-1}$ & 0.46 $\pm$ 0.10\
Si IV(1393.76 Å) & 4623.72 & 2.317 & 5.14$\times 10^{-1}$ & 1.33 $\pm$ 0.30\
+Si IV(1402.77 Å) & & 2.296 & 2.55$\times 10^{-1}$ &\
Si IV(1393.76 Å) & 4632.06 & 2.323 & 5.14$\times 10^{-1}$ & 1.14 $\pm$ 0.47\
Si IV(1402.77 Å) & 4653.64 & 2.317 & 2.55$\times 10^{-1}$ & 0.81 $\pm$ 0.15\
Si IV(1402.77 Å) & 4662.02 & 2.323 & 2.55$\times 10^{-1}$ & 1.01 $\pm$ 0.33\
C IV(1548.20 Å) & 5096.29 & 2.292 & 1.91$\times 10^{-1}$ & 0.96 $\pm$ 0.22\
C IV(1550.77 Å) & 5105.29 & 2.292 & 9.52$\times 10^{-2}$ & 0.75 $\pm$ 0.16\
[ccccc]{} C IV(1548.20 Å) & 5134.77 & 2.317 & 1.91$\times 10^{-1}$ & 1.71 $\pm$ 0.46\
C IV(1548.20 Å) & 5143.23 & 2.322 & 1.91$\times 10^{-1}$ & 2.02 $\pm$ 0.51\
+C IV(1550.77 Å) & & 2.317 & 9.52$\times 10^{-2}$ &\
C IV(1550.77 Å) & 5152.37 & 2.322 & 9.52$\times 10^{-2}$ & 1.71 $\pm$ 0.45\
Al II(1670.79 Å) & 5559.70 & 2.328 & 1.83 & 0.72 $\pm$ 0.16\
Fe II(2374.46 Å) & 5652.60 & 1.381 & 3.26$\times 10^{-2}$ & 0.64 $\pm$ 0.23\
Fe II(2344.21 Å) & 6101.00 & 1.603 & 1.10$\times 10^{-1}$ & 0.56 $\pm$ 0.17\
Fe II(2586.65 Å) & 6156.23 & 1.380 & 6.84$\times 10^{-2}$ & 0.82 $\pm$ 0.21\
Fe II(2374.46 Å) & 6175.49 & 1.601 & 3.26$\times 10^{-2}$ & 0.99 $\pm$ 0.17\
+ Al III(1854.72 Å) & & 2.330 & 5.60$\times 10^{-1}$ & 0.77 $\pm$ 0.13\
Fe II(2600.17 Å) & 6188.73 & 1.380 & 2.24$\times 10^{-1}$ & 0.94 $\pm$ 0.29\
Fe II(2382.77 Å) & 6201.15 & 1.602 & 3.01$\times 10^{-1}$ & 1.12 $\pm$ 0.32\
+ Al III(1862.79 Å) & & 2.329 & 2.79$\times 10^{-1}$ & 0.88 $\pm$ 0.25\
Mg II(2796.35 Å) & 6656.10 & 1.380 & 6.12$\times 10^{-1}$& 1.81 $\pm$ 0.37\
Mg II(2803.53 Å) & 6672.88 & 1.380 & 3.05$\times 10^{-1}$ & 1.47 $\pm$ 0.32\
Fe II(2586.65 Å) & 6731.85 & 1.603 & 6.84$\times 10^{-2}$ & 0.68 $\pm$ 0.16\
Fe II(2600.17 Å) & 6766.45 & 1.602 & 2.24$\times 10^{-1}$ & 0.83 $\pm$ 0.26\
Mg II(2796.35 Å) & 7276.72 & 1.602 & 6.12$\times 10^{-1}$& 1.53 $\pm$ 0.37\
Mg II(2803.53 Å) & 7295.44 & 1.602 & 3.05$\times 10^{-1}$ & 1.30 $\pm$ 0.32\
Mg I(2852.96 Å) & 7423.74 & 1.602 & 1.83 & 0.45 $\pm$ 0.13\
Fe II(2344.21 Å) & 7801.53 & 2.328 & 1.10$\times 10^{-1}$ & $\leq$ 0.39\
Fe II(2382.77 Å) & 7929.86 & 2.328 & 3.01$\times 10^{-1}$ & $\leq$ 0.59\
Fe II(2600.17 Å) & 8653.37 & 2.328 & 2.24$\times 10^{-1}$ & $\leq$ 0.88\
[ccccc]{} Ly $\gamma$ & 972.54 & 2.90$\times 10^{-2}$ & 16.11 $\pm$ 0.07 & $z_{3A,B}$\
& & & 15.92 $\pm$ 0.12 & $z_{3C}$\
Ly $\beta$ & 1025.72 & 7.91$\times 10^{-2}$ & 15.99 $\pm$ 0.03 & $z_{3A,B}$\
& & & 15.42 $\pm$ 0.11 & $z_{3C}$\
Ly $\alpha$ & 1215.67 & 4.16$\times 10^{-1}$ & 14.95 $\pm$ 0.05 & $z_{3A,B}$\
& & & 14.86 $\pm$ 0.06 & $z_{3C}$\
C III & 977.02 & 7.62$\times 10^{-1}$ & 14.76 $\pm$ 0.06 & $z_{3A,B}$\
& & & 14.80 $\pm$ 0.08 & $z_{3C}$\
C IV & 1548.20 & 1.91$\times 10^{-1}$ & 14.46 $\pm$ 0.15 & $z_{3A}$\
& & & 14.63 $\pm$ 0.09 & $z_{3B}$\
& & & 14.38 $\pm$ 0.08 & $z_{3C}$\
C IV & 1550.77 & 9.52$\times 10^{-2}$ & 14.93 $\pm$ 0.10 & $z_{3A}$\
& & & 14.63 $\pm$ 0.21 & $z_{3B}$\
& & & 14.57 $\pm$ 0.08 & $z_{3C}$\
N V & 1238.82 & 1.57$\times 10^{-1}$ & $\leq$ 14.23 & $z_{3C}$\
N V & 1242.80 & 7.82$\times 10^{-2}$ & $\leq$ 14.54 & $z_{3C}$\
O VI & 1031.93 & 1.33$\times 10^{-1}$ & $\leq$ 14.91 & $z_{3C}$\
O VI & 1037.62 & 6.61 $\times 10^{-2}$ & $\leq$ 15.21 & $z_{3C}$\
Si IV & 1393.76 & 5.14$\times 10^{-1}$ & 14.11 $\pm$ 0.15 & $z_{3A}$\
& & & 14.10 $\pm$ 0.10 & $z_{3B}$\
& & & 13.72 $\pm$ 0.08 & $z_{3C}$\
Si IV & 1402.77 & 2.55$\times 10^{-1}$ & 14.36 $\pm$ 0.12 & $z_{3A}$\
& & & 14.26 $\pm$ 0.07 & $z_{3B}$\
& & & 13.71 $\pm$ 0.16 & $z_{3C}$\
[ccc]{} H$^{0}$ & 16.11 $\pm$ 0.31 & $z_{3A,B}$\
& 15.92 $\pm$ 0.26 & $z_{3C}$\
C$^{+3}$ & $\geq$ 15.05 & $z_{3A}$\
& $\geq$ 14.93 & $z_{3B}$\
& 15.09 $\pm$ 0.08 & $z_{3C}$\
N$^{+4}$ & $\leq$ 14.71 & $z_{3C}$\
O$^{+5}$ & $\leq$ 15.39 & $z_{3C}$\
Si$^{+3}$ & 14.55 $\pm$ 0.13 & $z_{3A}$\
& 14.49 $\pm$ 0.08 & $z_{3B}$\
& 14.02 $\pm$ 0.12 & $z_{3C}$\
\[lris\]
[^1]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
[ **Gravitation equations,\
and space-time relativity**]{}\
\
[*Kharkov, 61103*]{}\
[*Ukraine*]{}
[**Abstract**]{}
[In contrast to electrodynamics, Einstein’s gravitation equations are not invariant with respect to a wide class of mappings of the field variables which leave the equations of motion of test particles in a given coordinate system invariant. It seems obvious enough that just these mappings should play the role of gauge transformations of the variables in the differential equations of the gravitational field. We consider here, in short, a gauge-invariant bimetric generalisation of the Einstein equations which does not contradict available observational data. A physical interpretation of the bimetricity, based on the relativity of space-time with respect to the reference frame used and following conceptually from Poincaré’s old fundamental ideas, is proposed. ]{}
The relativistic differential equations of motion of charges in an electromagnetic field are invariant with respect to certain transformations of the field four-potential. For this reason it is natural that the Maxwell equations are also invariant with respect to these transformations. Similarly, the differential equations of motion of test particles in a gravitational field in Einstein’s theory in a given coordinate system are invariant with respect to the following transformations of the Christoffel symbols $\Gamma_{\beta\gamma}^{\alpha}$ [^1] [@Weyl]-[@Eisenhart] of a Riemannian space-time $V$: $$\label{GammaGeodesTransformations}
\overline{\Gamma}_{\beta\gamma}^{\alpha
}(x)=\Gamma_{\beta\gamma}^{\alpha}(x)+\delta_{\beta}^{\alpha}\ \phi_{\gamma
}(x)+\delta_{\gamma}^{\alpha} \phi_{\beta}(x),$$ where $\phi_{\beta}(x)$ are an arbitrary differentiable vector-function. It is the most easier for seeing, if the geodesic equations are written in the form $$\ddot{x}^{\alpha}+(\Gamma_{\beta\gamma}^{\alpha}-c^{-1}\Gamma_{\beta\gamma
}^{0}\dot{x}^{\alpha})\dot{x}^{\beta}\dot{x}^{\gamma}=0.
\label{EqMotionOfTestPart}$$ where the dot denotes differentiation with respect to $t=x^{0}/c$.
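This invariance is easy to check numerically. The sketch below (assuming Python with NumPy; the connection coefficients, gauge vector and velocity are random illustrative values, not physical data) contracts the bracket of Eq.(\[EqMotionOfTestPart\]) with a random symmetric connection before and after the transformation (\[GammaGeodesTransformations\]); the key point is that $\dot{x}^{0}=c$ since $t=x^{0}/c$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 4, 1.0  # space-time dimension; units with speed of light c = 1

# random connection coefficients, symmetric in the lower indices
Gamma = rng.normal(size=(n, n, n))               # Gamma[alpha, beta, gamma]
Gamma = (Gamma + Gamma.transpose(0, 2, 1)) / 2.0
phi = rng.normal(size=n)                          # arbitrary gauge vector phi_alpha

# velocity with xdot^0 = c (differentiation is with respect to t = x^0 / c)
xdot = np.empty(n)
xdot[0] = c
xdot[1:] = rng.normal(size=n - 1)

def acceleration(G):
    """xddot^a from Eq.(2): -(G^a_bc - c^{-1} G^0_bc xdot^a) xdot^b xdot^c."""
    q = np.einsum('abc,b,c->a', G, xdot, xdot)    # G^a_bc xdot^b xdot^c
    q0 = np.einsum('bc,b,c->', G[0], xdot, xdot)  # G^0_bc xdot^b xdot^c
    return -(q - q0 * xdot / c)

# geodesic (gauge) transformation of Eq.(1)
delta = np.eye(n)
Gamma_bar = (Gamma
             + np.einsum('ab,c->abc', delta, phi)   # delta^a_b phi_c
             + np.einsum('ac,b->abc', delta, phi))  # delta^a_c phi_b

# the connection changes, but the accelerations do not
assert not np.allclose(Gamma, Gamma_bar)
assert np.allclose(acceleration(Gamma), acceleration(Gamma_bar))
```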
The Ricci and metric tensors are also not invariant under the above self-mappings of Riemannian space-time which leave geodesic lines invariant. (These are named geodesic mappings.)
In contrast to Maxwell theory, Einstein’s equations are not invariant under these transformations [@Petrov], although it seems reasonable to suppose that just these transformations have to play the role of gauge transformations of the field variables in the differential equations of gravitation. This is a very strange fact, especially taking into account that the physical consequences resulting from Einstein’s equations agree very closely with all observational data.
The most natural explanation of this situation is that if there are more correct gravitation equations than the Einstein ones, they may differ markedly from the latter only at very strong field, close to the Schwarzschild radius, where we do not yet have firm evidence of the validity of the physical consequences of the Einstein equations. [^2]
The simplest geodesic-invariant objects are the Thomas symbols [@Thomas]: $$\Pi_{\alpha\beta}^{\gamma}=\Gamma_{\alpha\beta}^{\gamma}-(n+1)^{-1}\left[
\delta_{\alpha}^{\gamma}\ \Gamma_{\beta}+\delta_{\beta}^{\gamma}\ \Gamma_{\alpha}\right]\;, \label{ThomasSymbols}$$ where $\Gamma_{\alpha}=\Gamma_{\beta\alpha}^{\beta}$.
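The geodesic invariance of the Thomas symbols follows from $\overline{\Gamma}_{\alpha}=\Gamma_{\alpha}+(n+1)\phi_{\alpha}$, which exactly cancels the extra terms in (\[GammaGeodesTransformations\]). A minimal numerical verification (assuming Python with NumPy; random illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # space-time dimension

def thomas(Gamma):
    """Thomas symbols of Eq.(3): Pi^g_ab = Gamma^g_ab - (n+1)^{-1}(d^g_a G_b + d^g_b G_a),
    with G_a = Gamma^b_{ba}; index order is Gamma[gamma, alpha, beta]."""
    Ga = np.einsum('bba->a', Gamma)  # contracted connection Gamma_alpha
    d = np.eye(n)
    return Gamma - (np.einsum('ga,b->gab', d, Ga)
                    + np.einsum('gb,a->gab', d, Ga)) / (n + 1)

# random symmetric connection and arbitrary gauge vector
Gamma = rng.normal(size=(n, n, n))
Gamma = (Gamma + Gamma.transpose(0, 2, 1)) / 2.0
phi = rng.normal(size=n)

# geodesic transformation of Eq.(1)
d = np.eye(n)
Gamma_bar = Gamma + np.einsum('gb,c->gbc', d, phi) + np.einsum('gc,b->gbc', d, phi)

# the Thomas symbols are unchanged
assert np.allclose(thomas(Gamma), thomas(Gamma_bar))
```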
The simplest geodesic-invariant generalisation of the vacuum Einstein equations is $${\mathcal{R}}_{\alpha\beta} =0,
\label{equations0}$$ where ${\mathcal{R}}_{\beta\gamma}$ is an object formed from the gauge-invariant Thomas symbols in the same way as the Ricci tensor is formed from the Christoffel symbols.
However, the problem is that $\Pi_{\alpha\beta}^{\gamma}$, as well as ${\mathcal{R}}_{\beta\gamma}$, is not a tensor.
This problem can be solved if we consider all geometrical objects in $V$ as objects in the Minkowski space-time, by analogy with Rosen’s bimetric theory [@Rosen]. It means that we must replace all derivatives in the geometrical objects of the Riemannian space-time by the covariant ones defined in the Minkowski space-time. After that, in an arbitrary coordinate system we obtain, instead of $\Gamma_{\beta\gamma}^{\alpha}$, a tensor object $D_{\beta\gamma
}^{\alpha}=\Gamma_{\beta\gamma}^{\alpha}-\overset{\circ}{\Gamma}_{\beta\gamma
}^{\alpha}$ , where $\overset{\circ}{\Gamma}_{\beta\gamma
}^{\alpha} $ are the Christoffel symbols of the Minkowski space-time $E$ in the coordinate system used. In like manner we obtain, instead of the Thomas symbols, a geodesic-invariant (i.e. gauge-invariant) tensor $$B_{\beta\gamma}^{\alpha}=\Pi_{\beta\gamma}^{\alpha}-\overset{\circ}{\Pi
}_{\beta\gamma}^{\alpha},$$ where $\overset{\circ}{\Pi
}_{\beta\gamma}^{\alpha} $ are the Thomas symbols in the Minkowski space-time. This tensor must play the role of a strength tensor of the gravitational field. Now, using the identity $B_{\alpha\beta}^{\beta}=0 $, we obtain, instead of (\[equations0\]), a geodesic-invariant bimetric equation which can be written in the form $$\nabla_{\alpha}B_{\beta\gamma}^{\alpha}-B_{\beta\delta}^{\epsilon}B_{\epsilon\gamma}^{\delta}=0, \label{MyVacuumEqs}$$ where $\nabla_{\alpha}$ denotes a covariant derivative in $E$. A generalisation of the Einstein equations can also be obtained for the case where matter is present.
Evidently, these bimetric equations may be true if both space-times, $V$ and $E$, have some physical meaning. But how can these two physical space-times coexist? An attempt to answer this question leads us to a discussion of a fundamental problem: the relativity of space-time with respect to the properties of the measuring instruments used. A fresh look at Poincaré’s old, well-known results allows one to obtain conclusions which revise our understanding of the geometrical properties of space-time.
At the beginning of the 20th century Poincaré showed [@Poincare] that only the aggregate “geometry + measuring instruments” has a physical meaning verifiable by experiment, and that it makes no sense to assert that one or another geometry of physical space is in itself true. In fact, Einstein’s equations are the first attempt to implement the Berkeley-Leibnitz-Mach ideas about space-time relativity. Einstein’s equations clearly show that there is a relationship between the properties of space-time and the matter distribution. However, Poincaré’s ideas testify that space and time relativity is not restricted to the dependence of the space-time geometry on the matter distribution. The space-time geometry also depends on the properties of the measuring instruments. But a choice of certain properties of the measuring instruments is nothing more than the choice of a certain frame of reference, which is just the physical device by means of which we test the properties of space-time. Consequently, one can expect that there is a relationship between the metric of space-time and the reference frame used.
A step towards the implementation of this idea is considered in [@Verozub08a]. By a non-inertial frame of reference (NIFR) we mean a frame whose body of reference is formed by point masses moving in an inertial frame of reference (IFR) under the effect of a force field. By the proper frame of reference (PFR) of a given force field we mean the NIFR whose reference body is formed by point masses moving under the effect of the force field. We postulate that space-time in IFRs is the Minkowski one, in accordance with special relativity. Then the above definition of NIFRs allows one to find the line element of space-time in PFRs.
Let $\mathcal{ L}(x,\dot{x})$ be the Lagrangian describing in an IFR the motion of point particles with masses $m$ forming the reference body of a NIFR. In this case it can be argued sufficiently clearly [@Verozub08a] that the line element $ds$ of space-time is given by $$ds=-(mc)^{-1} \, dS(x,dx),
\label{dsMain}$$ where $S=\int{ \mathcal{ L}(x,\dot{x}) dt}$ is the action describing the motion of particles of the reference body in the Minkowski space-time. Therefore, the properties of space-time in PFRs are entirely determined by the properties of the frames used, in accordance with the Berkeley-Leibnitz-Mach-Poincaré ideas of the relativity of space and time.
We can illustrate the above result by some examples.
1\. The reference body consists of noninteracting electric charges in a constant homogeneous electromagnetic field. The Lagrangian describing the motion of charges with masses $m$ is of the form $$L=-mc^{2}(1-v^{2}/c^{2})^{1/2} - \phi_{\alpha}(x)\,\dot{x}^{\alpha},
\label{LagrangGarge}$$ where $\phi_{\alpha}$ is a vector function, $c$ is the speed of light, and $v$ is the spatial velocity. Then, according to (\[dsMain\]), the line element of space-time in the PFR is given by $$ds= d\sigma+f_{\alpha}(x)dx^{\alpha} \label{dsRanders}$$ where $f_{\alpha}=\phi_{\alpha}/m$ is a vector field, and $d\sigma $ is the line element of the Minkowski space-time. Consequently, space-time in the PFRs of an electromagnetic field is Finslerian. In principle, we can use both the traditional and the geometrical description, although the latter in this case is rather complicated.
2\. Motion of an ideal isentropic fluid can be considered as the motion of macroscopic small elements (“particles”) of an arbitrary mass $m$, which is described by the Lagrangian [@Verozub08b]$$L=-mc \left( G_{\alpha\beta} \dot{x}^{\alpha} \dot{x}^{\beta}\right)^{1/2}
\label{Lagrangian_in_V}$$ where $w$ is the enthalpy per unit volume, $G_{\alpha\beta}=\varkappa^{2}\eta_{\alpha\beta}$, $\varkappa=w/\rho c^{2}$, $\rho=m n$, $m$ is the mass of the particles, $n$ is the particle number density, and $\eta_{\alpha\beta}$ is the metric tensor in the Minkowski space-time. According to (\[dsMain\]) the line element of space-time in the NIFR is given by $$ds^{2}=G_{\alpha\beta}dx^{\alpha}dx^{\beta} . \label{ds2}$$ Therefore, the motion of the particles can be considered as occurring under the effect of a force field. (In the non-relativistic case it is the pressure gradient.) Space-time in the PFR of this force field is Riemannian, and conformal to the Minkowski space-time. The motion of the above particles does not depend on their masses. We can use both the traditional and the geometrical description. In some cases the geometrical description is preferable.
3\. Suppose that in the Minkowski space-time the Lagrangian describing the motion of test particles of mass $m$ in a tensor field $g_{\alpha\beta}$ is of the form $$L=-mc[g_{\alpha\beta} \;\dot{x}^{\alpha}\;\dot{x}^{\beta}]^{1/2},
\label{LagrangianThirr}$$ where $\dot{x}^{\alpha}=dx^{\alpha}/dt$. According to (\[dsMain\]), the line element of space-time in the PFR is given by $$ds^{2}=g_{\alpha\beta}\;dx^{\alpha}\;dx^{\beta}.$$ Space-time in PFRs of this field is Riemannian, and the motion of test particles does not depend on their masses. It is natural to assume that in this case we deal with a gravitational field.
The bimetricity in this case has a simple physical meaning. Disregarding the rotation of the Earth, a reference frame rigidly connected with the Earth’s surface can be considered as an IFR. An observer located in this frame can describe the motion of freely falling identical point masses as taking place in Minkowski space-time under the effect of a force field. However, for another observer, who is located in the PFR whose reference body is formed by these freely falling particles, the situation is different. Let us assume that this observer is deprived of the possibility of seeing the Earth and stars. Then, from his point of view, the point masses forming the reference body of the PFR are points of his physical space, and all events occur in his space-time. Consequently, the accelerations of these point masses must be equal to zero in both the nonrelativistic and the relativistic sense. However, instead of this, he observes a change in the distances between these point masses in time. Evidently, the only reasonable explanation for him is to interpret this observed phenomenon as a manifestation of the deviation of geodesic lines in some Riemannian space-time of nonzero curvature. Thus, while the first observer, located in the IFR, can postulate that space-time is flat, the second observer, located in a PFR of the force field, who proceeds from the relativity of space and time, already in the Newtonian approximation *is forced* to consider space-time as Riemannian with curvature other than zero.
To obtain physical consequences from (\[MyVacuumEqs\]) it is convenient to select the gauge condition $$Q_{\alpha}=\Gamma_{\alpha\beta}^{\beta}-\overset{\circ}{\Gamma^{\beta}}_{\alpha\beta}=0.
\label{AdditionalConditions}$$ Under this gauge condition (which does not depend on the coordinate system) eqs. (\[MyVacuumEqs\]) coincide with the vacuum Einstein equations. Therefore, for solving many problems it is sufficient to find a solution of the vacuum Einstein equations in the Minkowski space-time (in which $g_{\alpha\beta}(x)$ is simply a tensor field) under the condition $Q_{\alpha}=0$.
From the point of view of an observer located in an IFR and studying the gravitational field of a remote compact object of mass $M$, the space-time is flat. The spherically-symmetric solution of the equations (\[MyVacuumEqs\]) for a point central object differs very little from the solution in general relativity if the distance $r$ from the centre is much larger than the Schwarzschild radius $r_{g}$. However, these solutions differ essentially when $r$ is of the order of $r_{g}$ or less. The solution in flat space has no singularity at the centre and no event horizon at $r=r_{g}$.
![The gravitational force (arbitrary units) affecting freely particles (curve 1) and rest-particles (curve 2) near an attractive point mass.[]{data-label="Force"}](f.eps){width="8cm" height="6cm"}
Fig. 1 shows the plots of the gravitational force $F=m \ddot{x}^{\alpha}$ acting on a particle at rest of mass $m$ and on a freely falling test particle as functions of $r$. It follows from the figure that in the first case $F$ tends to zero as $r\rightarrow 0$. In the second case, as the particle approaches the Schwarzschild radius, the force changes sign and becomes repulsive.
These unexpected peculiarities of the gravitational force can be tested by observations. The peculiarity of the static force leads to the possibility of the existence of supermassive compact objects without an event horizon. Such objects can be identified with the supermassive compact objects at the centres of galaxies [@Verozub06a].
The unusual properties of the force acting on freely moving particles near the Schwarzschild radius give rise to some observable effects in cosmology because it is well-known that the radius of an observable part of the Universe is of the order of the Schwarzschild radius of all observed mass. It yields a natural explanation of a deceleration of the Universe expansion [@Verozub08a].
[ ]{}
H. Weyl, Göttinger Nachr., 90 (1921).
T. Thomas, The differential invariants of generalized spaces, (Cambridge, Univ. Press) (1934).
L. Eisenhart, Riemannian geometry, (Princeton, Univ. Press) (1950).
A. Petrov, Einstein Spaces , (New-York-London, Pergamon Press. (1969).
H. Poincaré, Dernières pensées, (Paris, Flammarion) (1913)
L. Verozub, Ann. Phys. (Berlin), **17**, 28 (2008)
L. Verozub, Int. J. Mod. Phys. D, **17**, 337 (2008)
N. Rosen, Gen. Relat. Grav., **4**, 435 (1973).
L. Verozub, Astr. Nachr., **327**, 355 (2006)
[^1]: Greek indexes run from $0$ to $3$
[^2]: It follows from (\[GammaGeodesTransformations\]) that the components $\overline{\Gamma}^{i}_{00}=\Gamma^{i}_{00}$. Therefore, in the Newtonian limit geodesic invariance is not an essential fact; we deal here with a relativistic effect.
---
abstract: 'We introduce and motivate the method of effective charges, and consider how to implement an all-orders resummation of large kinematical logarithms in this formalism. Fits for QCD $\Lambda$ and power corrections are performed for the ${e}^{+}{e}^{-}$ event shape observables 1-thrust and heavy-jet mass, and somewhat smaller power corrections are found than in the usual approach employing the “physical scale” choice.'
author:
- 'C.J. Maxwell'
title: 'On Effective Charges, Event Shapes and the size of Power Corrections '
---
Introduction
============
In this talk I will describe some recent work together with Michael Dinsdale concerning the relative size of non-perturbative power corrections for QCD event shape observables [@r1; @r1b]. For ${e}^{+}{e}^{-}$ event shape [*means*]{} the DELPHI collaboration have found in a recent analysis that, if the next-to-leading order (NLO) perturbative corrections are evaluated using the method of effective charges [@r2], then one can obtain excellent fits to data without including any power corrections [@r3; @r3b]. In contrast, fits based on the use of standard fixed-order perturbation theory in the $\overline{MS}$ scheme with a physical choice of renormalization scale equal to the c.m. energy require additional power corrections ${C}_{1}/Q$ with ${C}_{1}\sim 1\;\rm{GeV}$. Power corrections of this size are also predicted by a model based on an infrared finite coupling [@r4], which is able to fit the data reasonably well in terms of a single parameter. Given the DELPHI result it is interesting to consider how to extend the method of effective charges to event shape [*distributions*]{} rather than means.\
The method of effective charges
===============================
Consider an ${e}^{+}{e}^{-}$ observable ${\cal{R}}(Q)$, e.g. an event shape observable such as thrust or heavy-jet mass, $Q$ being the c.m. energy. $${\cal{R}}(Q)=
a(\mu,{{\rm{RS}}})+\sum_{n>0}r_{n}(\mu/Q,{{\rm{RS}}}) a^{n+1}(\mu,{{\rm{RS}}}).$$ Here $a\equiv{\alpha}_{s}/\pi$. Normalised with the leading coefficient unity, such an observable is called an [*effective charge*]{}. The couplant $a(\mu,{{\rm{RS}}})$ satisfies the beta-function equation $$\frac{da(\mu,{{\rm{RS}}})}{d\ln(\mu)}=\beta(a)=-b a^2 (1 + c a + c_2 a^2 + c_3 a^3 + \cdots)\;.$$ Here $b=(33-2{N}_{f})/6$ and $c=(153-19{N}_{f})/12b$ are universal, the higher coefficients ${c}_{i}$, $i\ge{2}$, are RS-dependent and may be used to label the scheme, together with dimensional transmutation parameter $\Lambda$ [@r5]. The [*effective charge*]{} ${\cal{R}}$ satisfies the equation $$\frac{d{{\cal{R}}}(Q)}{d\ln(Q)}=\rho({{\cal{R}}}(Q))=-b {{\cal{R}}}^2 (1 + c {{\cal{R}}}+ \rho_2 {{\cal{R}}}^2 + \rho_3 {{\cal{R}}}^3 + \cdots)\;.$$ This corresponds to the beta-function equation in an RS where the higher-order corrections vanish and ${\cal{R}}=a$, the beta-function coefficients in this scheme are the RS-invariant combinations $$\begin{aligned}
\rho_2 & = & c_2 + r_2 - r_1 c - r_1^2
\nonumber \\
\rho_3 & = & c_3 + 2r_3 - 4r_1 r_2 - 2 r_1 \rho_2 - r_1^2 c + 2r_1^3.\end{aligned}$$ Eq.(3) for $d{\cal{R}}/d{\ln{Q}}$ can be integrated to give $$b\ln\frac{Q}{{\Lambda}_{\cal{R}}}=
\frac{1}{{\cal{R}}}+c{\ln}\left[\frac{c{\cal{R}}}{1+c{\cal{R}}}\right]+
\int_{0}^{{\cal{R}}(Q)}{dx}\left[\frac{b}{\rho(x)}+\frac{1}{{x}^{2}(1+cx)}\right]\;.$$ The dimensionful constant ${\Lambda}_{\cal{R}}$ arises as a constant of integration. It is related to the dimensional transmutation parameter ${\tilde{\Lambda}}_{\overline{MS}}$ by the exact relation, $${\Lambda}_{\cal{R}}={e}^{r/b}{\tilde{\Lambda}}_{\overline{MS}}\;.$$ Here ${r}\equiv{r}_{1}(1,\overline{MS})$ with $\mu=Q$, is the NLO perturbative coefficient. Eq.(5) can be recast in the form $${\Lambda}_{\overline{MS}}=Q{\cal{F}}({\cal{R}}(Q)){\cal{G}}({\cal{R}}(Q)){e}^{-r/b}{(2c/b)}^{c/b}\;.$$ The final factor converts to the standard convention for $\Lambda$. Here ${\cal{F}}({\cal{R}})$ is the [*universal*]{} function $${\cal{F}}({\cal{R}})={e}^{-1/b{\cal{R}}}{(1+1/c{\cal{R}})}^{c/b}\;,$$ and ${\cal{G}}({\cal{R}})$ is $${\cal{G}}({\cal{R}})=1-\frac{{\rho}_{2}}{b}{\cal{R}}+O({\cal{R}}^{2})+{\ldots}\;.$$ Here ${\rho}_{2}$ is the NNLO ECH RS-invariant. If only a NLO calculation is available, as is the case for ${e}^{+}{e}^{-}$ jet observables, then ${\cal{G}}({\cal{R}})=1$, and $${\Lambda}_{\overline{MS}}=Q{\cal{F}}({\cal{R}}(Q)){e}^{-r/b}{(2c/b)}^{c/b}\;.$$ Eq.(10) can be used to convert the measured data for the observable ${\cal{R}}$ into a value of ${\Lambda}_{\overline{MS}}$ bin-by-bin. Such an analysis was carried out in Ref. [@r6] for a number of ${e}^{+}{e}^{-}$ event shape observables, including thrust and heavy jet mass which we shall focus on here. It was found that the fitted $\Lambda$ values exhibited a clear plateau region, away from the two-jet region, and the region approaching $T=2/3$ where the NLO thrust distribution vanishes. The result for 1-thrust corrected for hadronization effects is shown in Fig. 1.

Another way of motivating the effective charge approach is the idea of “complete renormalization group improvement” (CORGI) [@r6a]. One can write the NLO coefficient ${r}_{1}(\mu)$ as $${r_1}({\mu})=b{\ln}\frac{\mu}{{\tilde{\Lambda}}_{\overline{MS}}}-b{\ln}\frac{Q}{{\Lambda}_{\cal{R}}}\;.$$ Hence one can identify scale-dependent $\mu$-logs and RS-invariant “physical” UV $Q$-logs. Higher coefficients are polynomials in ${r}_{1}$. $$\begin{aligned}
{r_2}&=&{r}_{1}^{2}+{r}_{1}c+({\rho}_{2}-{c_2})
\nonumber \\
{r_3}&=&{r}_{1}^{3}+\frac{5}{2}c{r}_{1}^{2}+(3{\rho_2}-2{c_2}){r_1}+\left(\frac{{\rho}_{3}}{2}-\frac{c_3}{2}\right)\;.\end{aligned}$$ Given a NLO calculation of ${r}_{1}$, parts of ${r}_{2},{r_3},\ldots$ are “RG-predictable”. One usually chooses ${\mu}=xQ$; then $r_1$ is $Q$-independent, and so are all the $r_n$. The $Q$-dependence of ${\cal{R}}(Q)$ then comes entirely from the RS-dependent coupling $a(Q)$. However, if we insist that $\mu$ is held constant [*independent of $Q$*]{}, the only $Q$-dependence resides in the “physical” UV $Q$-logs in $r_1$. Asymptotic freedom then arises only if we resum these $Q$-logs to [*all-orders*]{}. Given only a NLO calculation, and assuming for simplicity that we have a trivial one-loop beta-function ${\beta}(a)=-b{a}^{2}$ so that $a(\mu)=1/b{\ln}(\mu/{\tilde{\Lambda}}_{\overline{MS}})$, the RG-predictable terms will be $${\cal{R}}=a({\mu})\left(1+{\sum_{n>0}}{(a({\mu}){r}_{1}({\mu}))}^{n}\right)\;.$$ Summing the geometric progression one obtains $$\begin{aligned}
{\cal{R}}(Q)&=&a({\mu})/\left[1-\left(b{\ln}\frac{{\mu}}{{\tilde{\Lambda}}_{\overline{MS}}}
-b{\ln}\frac{Q}{{\Lambda}_{\cal{R}}}\right)a({\mu})\right]
\nonumber \\
&=&1/b{\ln}(Q/{\Lambda}_{\cal{R}}).\end{aligned}$$ The $\mu$-logs “eat themselves” and one arrives at the NLO ECH result ${\cal{R}}(Q)=1/b{\ln}(Q/{\Lambda}_{\cal{R}})$.\
As we noted earlier [@r3; @r3b], use of NLO effective charge perturbation theory (Renormalization Group invariant (RGI) perturbation theory) leads to excellent fits for ${e}^{+}{e}^{-}$ event shape [*means*]{} consistent with zero power corrections, as illustrated in Figure 2, taken from Ref. [@r3].

Given this result it would seem worthwhile to extend the effective charge approach to event shape [*distributions*]{}. It is commonly stated that the method of effective charges is inapplicable to exclusive quantities which depend on multiple scales. However, given an observable ${\cal{R}}({Q}_{1},{Q}_{2},{Q}_{3},\ldots,{Q}_{n})$ depending on $n$ scales, it can always be written as $${\cal{R}}={\cal{R}}({Q}_{1},{Q}_{2}/{Q}_{1},\ldots,{Q}_{n}/{Q}_{1}){\equiv}{\cal{R}}_{{x}_{2}{x}_{3}\ldots{x}_{n}}({Q}_{1})\;.$$ Here the ${x}_{i}{\equiv}{Q}_{i}/{Q}_{1}$ are [*dimensionless*]{} quantities that can be held fixed, allowing the ${Q}_{1}$ evolution of ${\cal{R}}$ to be obtained as before. In the 2-jet region for ${e}^{+}{e}^{-}$ observables large logarithms $L={\ln}(1/{x}_{i})$ arise and need to be resummed to all orders.
Resumming large logarithms for event shape distributions
=========================================================
Event shape distributions for thrust ($T$) or heavy-jet mass (${\rho}_{h}$) contain large kinematical logarithms, $L={\ln}(1/y)$, where $y=(1-T),\;{\rho}_{h},\cdots$. $$\frac{1}{\sigma} \frac{d\sigma}{dy} = A_{LL}(aL^2) + L^{-1} A_{NLL}(aL^2) + \cdots\;.$$ Here $LL$, $NLL$, denote leading logarithms, next-to-leading logarithms, etc. For thrust and heavy-jet mass the distributions [*exponentiate*]{} [@r7] $$\begin{aligned}
R_y(y')& \equiv& \int_0^{y'} dy \frac{1}{\sigma} \frac{d\sigma}{dy}
= C(a\pi) \exp(L g_1(a\pi L)
\nonumber \\
&+& g_2(a\pi L)
+ a g_3(a\pi L)
+ \cdots) + D(a\pi ,y)\;.\end{aligned}$$ Here $g_1$ contains the LL and $g_2$ the NLL. $C=1+O(a)$ is independent of $y$, and $D$ contains terms that vanish as $y\rightarrow{0}$. It is natural to define an effective charge ${\cal{R}}
(y')$ so that $$R_y(y') = \exp(r_0(y'){\cal{R}}(y'))\;.$$ This effective charge will have the expansion $$r_0(L){\cal{R}}(L) = r_0(L) (a + r_1(L) a^2 + r_2(L) a^3 + \cdots)\;.$$ Here ${r}_{0}(L)\sim{L}^{2}$, and the higher coefficients ${r}_{n}(L)$ have the structure $$r_n = r_n^{\rm LL} L^n + r_n^{\rm NLL} L^{n-1} + \cdots$$ Usually one resums these logarithms to all-orders using the known closed-form expressions for ${g}_{1}(aL)$ and ${g}_{2}(aL)$, where $a$ is taken to be the ${\overline{MS}}$ coupling with a “physical” scale choice $\mu=Q$ (${\overline{MS}}$PS). Instead we want to resum logarithms to all-orders in the ${\rho}({\cal{R}})$ function (ECH). The form of the ${\rho}_{n}$ RS-invariants (Eq.(4)) means that the ${\rho}_{n}$ have the structure $$\rho_n = \rho_n^{\rm LL} L^n + \rho_n^{\rm NLL} L^{n-1} + \cdots\;.$$ One can then define all-orders RS-invariant $LL$ and $NLL$ approximations to ${\rho}({\cal{R}})$, $$\begin{aligned}
\rho_{\rm LL}({\cal{R}})& = &-b{\cal{R}}^{2} (1 + c{\cal{R}} + \sum_{n=2}^{\infty} \rho_n^{\rm LL} L^n {\cal{R}}^{n})
\nonumber \\
\rho_{\rm NLL}({\cal{R}})& = &-b {\cal{R}}^{2} (1 + c {\cal{R}}
\nonumber \\
&+& \sum_{n=2}^{\infty} (\rho_n^{\rm LL} L^n
+ \rho_n^{\rm NLL} L^{n-1}){\cal{R}}^{n} )\;.\end{aligned}$$ The resummed ${\rho}_{\rm NLL}({\cal{R}})$ can then be used to solve for ${\cal{R}}_{\rm NLL}$ by inserting it in Eq.(5). Notice that since ${\Lambda}_{\cal{R}}$ involves the [*exact*]{} value of ${r}_{1}(1,\overline{MS})$ there is no matching problem as in the standard $\overline{MS}$PS approach. The resummed ${\rho}_{LL}({\cal{R}})$ can be straightforwardly numerically computed using $${\rho}_{\rm LL}(x) = \beta(a) \frac{d\cal{R}_{\rm LL}}{da} = -ba^2 \frac{d\cal{R}_{\rm LL}}{da}\;,$$ with $a$ chosen so that ${\cal{R}}_{\rm LL}(a)=x$. The same relation with ${\beta}(a)=-b{a}^{2}(1+ca)$ suffices for ${\rho}_{NLL}({\cal{R}})$, although in this case one needs to remove $NNLL$ terms, e.g. an ${L}^{0}$ term which would otherwise be included in ${\rho}_{2}$. This can be accomplished by numerically taking limits ${L}\rightarrow{\infty}$ with ${L}{\cal{R}}$ fixed.\
As we have noted a crucial feature of the effective charge approach is that it resums to all-orders [*RG-Predictable*]{} pieces of the higher-order coefficients, thus the NLO ECH result (assuming $c=0$ for simplicity) corresponds to an RS-invariant resummation (c.f. Eq.(13).) $$a+{r}_{1}{a}^{2}+{r}_{1}^{2}{a}^{3}+\cdots+{r}_{1}^{n}{a}^{n+1}+\cdots\;.$$ Thus even at fixed-order without any resummation of large logs in ${\rho}({\cal{R}})$ a [*partial*]{} resummation of large logs is automatically performed. Furthermore one might expect that the LL ECH result contains already NLL pieces of the standard ${\overline{MS}}$PS result.\


In Figure 3 we show various NLO approximations. Notice that the solid curve, which corresponds to the exponentiated NLO ECH result, is a surprisingly good fit even in the 2-jet region, whereas the dashed curve, which is the NLO $\overline{MS}$PS result, has a badly misplaced peak. The all-orders partial resummation of large logs in Eq.(15) gives a reasonable 2-jet peak. Figure 4 shows that the NLL ${\overline{MS}}$PS coefficients “predicted” from the LL ECH result by re-expanding it in the $\overline{MS}$PS coupling are in good agreement with the exact coefficients out to O(${a}^{10}$).\
Fits for ${\Lambda}_{\overline{MS}}$ and power corrections
==========================================================
We now turn to fits simultaneously extracting ${\Lambda}_{\overline{MS}}$ and the size of power corrections ${C}_{1}/Q$ from the data. To facilitate this we use the result that inclusion of power corrections effectively shifts the event shape distributions, which can be motivated by considering simple models of hadronization, or through a renormalon analysis [@r8]. Thus we define $${R}_{PC}(y)={R}_{PT}(y-{C}_{1}/Q)\;.$$ This shifted result is then fitted to the data for 1-thrust and heavy jet mass. ${e}^{+}{e}^{-}$ data spanning the c.m. energy range from $44-189$ GeV was used (see [@r1] for the complete list of references). The resulting fits for 1-thrust and heavy-jet mass are shown in Figures 5 and 6.\
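The shift ansatz and the one-parameter fit for $C_1$ can be sketched as follows. The cumulant $R_{PT}$ below is a toy function and the “data” are generated from a known shift (all values are illustrative assumptions), so the grid fit must recover that shift:

```python
import math

# Sketch of R_PC(y) = R_PT(y - C1/Q) and a least-squares grid fit for C1.
Q = 91.2          # illustrative c.m. energy (GeV)
C1_true = 1.0     # GeV; the shift the fit should recover

def R_PT(y):      # TOY perturbative cumulant, rising smoothly from 0 to 1
    return 1.0 - math.exp(-8.0 * max(y, 0.0))

ys = [0.05 + 0.01 * i for i in range(25)]
data = [R_PT(y - C1_true / Q) for y in ys]   # synthetic "data"

def chi2(C1):
    return sum((R_PT(y - C1 / Q) - d) ** 2 for y, d in zip(ys, data))

# scan C1 in [0, 3] GeV in 0.01 GeV steps
best = min((chi2(c / 100.0), c / 100.0) for c in range(301))[1]
```

In a real fit $R_{PT}$ would of course be the resummed perturbative cumulant and the data the measured distributions.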
The ECH fits for thrust and heavy jet mass show great stability going from NLO to LL to NLL, presumably because at each stage a partial resummation of higher logs is automatically performed. The power corrections required with ECH are somewhat smaller than those found with ${\overline{MS}}$PS, but we do not find as dramatic a reduction as DELPHI find for the means. This may be because their analysis corrects the data for bottom quark mass effects which we have ignored. The fitted value of ${\Lambda}_{\overline{MS}}$ for ECH is much smaller than that found with ${\overline{MS}}$PS, (${\alpha}_{s}(M_Z)=0.106$ (thrust) and $0.109$ (heavy-jet mass)). Similarly small values are found with the Dressed Gluon Exponentiation (DGE) approach [@r9]. A problem with the effective charge resummations is that the ${\rho}({\cal{R}})$ function contains a branch cut which limits how far into the 2-jet region one can go. We are limited to $1-T>0.05{M}_{Z}/Q$ in the fits we have performed. This branch cut mirrors a corresponding branch cut in the resummed $g_1(aL)$ function. Similarly as $1-T$ approaches $1/3$ the leading coefficient ${r}_{0}(L)$ vanishes and the Effective Charge formalism breaks down. We need to restrict the fits to $1-T<0.18$. From the “RG-predictability” arguments we might expect that these difficulties would also become apparent for a NNLL ${\overline{MS}}$PS resummation. One will be able to check this expectation when a result for ${g}_{3}(a\pi L)$ becomes available.


Extension to event shape means at HERA
======================================
Event shape means have also been studied in DIS at HERA [@r10]. For such processes one has a convolution of proton pdf’s and hard scattering cross-sections, $$\frac{d{\sigma}(ep\rightarrow X,Q)}{dX}=\sum_{a}\int{d\xi}{f}_{a}(\xi,M)\frac{d{\hat{\sigma}}(e a\rightarrow X,Q,M)}{dX}\;.$$ There is no way to directly relate such quantities to effective charges. The DIS cross-sections will depend on a [*factorization scale*]{} $M$, and a renormalization scale $\mu$ at NLO. In principle one could identify unphysical scheme-dependent ${\ln}(M/{\tilde{\Lambda}}_{\overline{MS}})$ and ${\ln}({\mu}/{\tilde{\Lambda}}_{\overline{MS}})$, and physical UV $Q$-logs, and then by all-orders resummation get the $M$ and $\mu$-dependence to “eat itself”. The pattern of logs is far more complicated than the geometrical progression in the effective charge case, and a CORGI result for DIS has not been derived so far. Instead one can use the Principle of Minimal Sensitivity (PMS) [@r5], and for an event shape mean $\langle{y}\rangle$ look for a stationary saddle point in the $(\mu,M)$ plane [@r11]. It turns out that there are large cancellations between the NLO corrections for quark and gluon initiated subprocesses. One can distinguish between two approaches, ${PMS}_{1}$ where one seeks a saddle point in the $(\mu,M)$ plane for the sum of parton subprocesses, and ${PMS}_{2}$ where one introduces two separate scales ${\mu}_{q}$ and ${\mu}_{g}$ and finds a saddle point in $({\mu}_{q},{\mu}_g,M)$. ${PMS}_{1}$ gives power corrections fits comparable to ${\overline{MS}}$PS with $M={\mu}=Q$. ${PMS}_{2}$ in contrast gives substantially reduced power corrections. This is shown in Figure 7 for a selection of HERA event shape means. Given large cancellations of NLO corrections, RG-improvement should be performed [*separately*]{} for the $q$ and $g$-initiated subprocesses, and so ${PMS}_{2}$, which indeed fits the data best, is to be preferred.
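The PMS step can be sketched as a search for a stationary point in the $(\mu,M)$ plane. The approximant $A(\mu,M)$ below is an invented toy function with a saddle at $\mu=M=Q$ (an assumption purely for illustration; it is not the DIS cross-section):

```python
import math

# Toy PMS_1-style scan: locate the point of the (mu, M) grid where the
# numerical gradient of a scheme-dependent approximant A is smallest.
Q = 10.0

def A(mu, M):   # TOY approximant, stationary at mu = M = Q
    lm, lM = math.log(mu / Q), math.log(M / Q)
    return 0.2 - 0.01 * (lm ** 2 - lM ** 2)

best = None
h = 1e-4
for i in range(1, 80):
    for j in range(1, 80):
        mu, M = 0.25 * i, 0.25 * j
        dmu = (A(mu + h, M) - A(mu - h, M)) / (2.0 * h)
        dM = (A(mu, M + h) - A(mu, M - h)) / (2.0 * h)
        g2 = dmu ** 2 + dM ** 2          # squared gradient norm
        if best is None or g2 < best[0]:
            best = (g2, mu, M)
# best[1], best[2] give the grid point closest to stationarity
```

The scan recovers the saddle at $(\mu,M)=(Q,Q)$; for a genuine observable one would scan the NLO approximant itself rather than a toy.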

Conclusions
===========
Event shape means in ${e}^{+}{e}^{-}$ annihilation are well-fitted by NLO perturbation theory in the effective charge approach, without any power corrections being required. With the usual ${\overline{MS}}$PS approach power corrections $C_1/Q$ are required with $C_1\sim{1}$ GeV. Similarly sized power corrections are predicted in the model of Ref.[@r4]. It would be interesting to modify this model so that its perturbative component matched the effective charge prediction, but this has not been done. We showed how resummation of large logarithms in the effective charge beta-function $\rho({\cal{R}})$ could be carried out for ${e}^{+}{e}^{-}$ event shape distributions. If the distributions are represented by an exponentiated effective charge then even at NLO a partial resummation of large logarithms is performed. As shown in Figure 3 this results in good fits to the 1-thrust distribution, with the peak in the 2-jet region in rough agreement with the data. In contrast the ${\overline{MS}}$PS prediction has a badly misplaced peak in the 2-jet region, and is well below the data for the realistic value of ${\Lambda}_{\overline{MS}}=212$ MeV assumed. We further showed in Figure 4 that the LL ECH result contains already a large part of the NLL ${\overline{MS}}$PS result. We found unfortunately that $\rho({\cal{R}})$ contains a branch point mirroring that in the resummed ${g}_{1}(aL)$ function. This limited the fit range we could consider. We fitted for power corrections and ${\Lambda}_{\overline{MS}}$ to the 1-thrust and heavy-jet mass distributions, finding somewhat reduced power corrections for the ECH fits compared to ${\overline{MS}}$PS, with good stability going from NLO to LL to NLL. The suggestion of the “RG-predictability” manifested in Figure 4 would be that the NLL ECH result contains a large part of the NNLL ${\overline{MS}}$PS result.
This suggests that the branch point problem which limits the ability to describe the 2-jet peak, would also show up given a NNLL analysis. This can be checked once the ${g}_{3}(aL)$ function becomes available. Recent work on event shape means in DIS was briefly mentioned and seemed to indicate that greatly reduced power corrections are found when a correctly optimised PMS approach is used.
Yasaman Farzan and the rest of the organising committee of the IPM LPH-06 meeting are thanked for their painstaking organisation of this stimulating and productive school and conference. Many thanks are also due to Abolfazl Mirjalili for organising my wonderful post-conference visits to Esfahan, Yazd and Shiraz, and to all those whose welcoming hospitality made my first visit to Iran so extremely enjoyable.
[99]{}
M.J. Dinsdale and C.J. Maxwell, Nucl. Phys. [**B713**]{} (2005) 465.
C.J. Maxwell, talk delivered at the FRIF workshop on First Principles QCD of Hadron Jets \[hep-ph/0607039\].
G. Grunberg, Phys. Lett. [**B95**]{} (1980) 70; Phys. Rev. [**D29**]{} (1984) 2315.
DELPHI Collaboration (J. Abdallah et al.), Eur. Phys. J. [**C29**]{} (2003) 285.
K. Hamacher, talk delivered at the FRIF Workshop on First Principles QCD of Hadron Jets \[hep-ex/0605123\].
Y.L. Dokshitzer and B.R. Webber, Phys. Lett. [**B404**]{} (1997) 321.
P.M. Stevenson, Phys. Rev. [**D23**]{} (1981) 2916.
S.J. Burby and C.J. Maxwell, Nucl. Phys. [**B609**]{} (2001) 193.
C.J. Maxwell \[hep-ph/9908463\]; C.J. Maxwell and A. Mirjalili, Nucl. Phys. [**B611**]{} (2001) 423.
S. Catani, G. Turnock, B.R. Webber and L. Trentadue, Phys. Lett. [**B263**]{} (1991) 491.
B.R. Webber \[hep-ph/9411384\].
E. Gardi and J. Rathsman, Nucl. Phys. [**B638**]{} (2002) 243.
C. Adloff [*et al.*]{} \[H1 Collaboration\], Eur. Phys. J. C [**14**]{} (2000) 255 \[Erratum-ibid. C [**18**]{} (2000) 417\] \[arXiv:hep-ex/9912052\].
M.J. Dinsdale \[arXiv:hep-ph/0512069\].
---
abstract: 'We study Manneville–Pomeau maps on the unit interval and prove that the set of points whose forward orbits miss an interval with left endpoint 0 is strong winning for Schmidt’s game. Strong winning sets are dense, have full Hausdorff dimension, and satisfy a countable intersection property. Similar results were known for certain expanding maps, but these did not address the nonuniformly expanding case. Our analysis is complicated by the presence of infinite distortion and unbounded geometry.'
author:
- Jason Duvall
bibliography:
- 'MP.bib'
title: 'Schmidt’s Game and Nonuniformly Expanding Interval Maps'
---
[^1] [^2] [^3]
Introduction and statement of results {#sec:intro}
=====================================
Let $X$ be a compact metric space, $f$ a countably-branched piecewise-continuous map, and $\mu$ an $f$-invariant measure on $X$. There are broad conditions under which $\mu$-almost every point in $X$ has dense forward orbit under $f$. This is the case, for example, if $\mu$ is ergodic and fully supported on $X$. The “exceptional sets” of points with nondense orbits, despite being $\mu$-null, are nevertheless often large in a different sense. In particular they are often winning for Schmidt’s game, which implies that they are dense in $X$, have full Hausdorff dimension (if $X \subset \mathbb{R}^n$), and remain winning when intersected with countably many suitable winning sets in $X$.[^4] Examples of systems possessing winning exceptional sets include surjective endomorphisms of the torus [@MR2818688; @MR980795], beta transformations [@MR2660561; @MR3206688], the Gauss map [@MR195595], and $C^2$ (uniformly) expanding maps of compact connected manifolds [@MR2480100].
In this article we add to this list the Manneville–Pomeau map $f \colon \left[ 0,1 \right] \to \left[ 0,1 \right]$ defined by $$f(x) = \begin{cases}
x+x^{1+\gamma} & \text{if } 0 \leq x < r_1 \\
x+x^{1+\gamma} - 1 & \text{if } r_1 \leq x \leq 1,
\end{cases}$$ where $\gamma > 0$ is a fixed parameter and $r_1$ is the unique solution of $x+x^{1+\gamma} = 1$ (see Figure \[fig:MP\]). Our main result is the following theorem, which we prove in §\[sec:corollary\].
\[thm:maintheoremf\] The set $$\mathcal{E}_f := \big\{ x \in \left[ 0,1 \right] \colon \left. \left[ 0,\epsilon \right. \right) \cap \left\{ f^n x \right\}_{n \geq 0} = \emptyset \text{ for some } \epsilon > 0 \big\}$$ is strong winning for Schmidt’s game.
As the proof of Theorem \[thm:maintheoremf\] will demonstrate, the strong winning dimension of $\mathcal{E}_f$, i.e., the supremum of all $\alpha$ for which $\mathcal{E}_f$ is $\alpha$-strong winning, depends on $\gamma$.
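For concreteness, the map $f$ and the branch point $r_1$ are straightforward to evaluate numerically. In the sketch below $\gamma = 1/2$ is an illustrative choice, and $r_1$ is obtained by bisection:

```python
# The Manneville-Pomeau map for an illustrative gamma, with the branch
# point r1 (the unique root of x + x^(1+gamma) = 1) found by bisection.
gamma = 0.5
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mid + mid ** (1.0 + gamma) < 1.0 else (lo, mid)
R1 = 0.5 * (lo + hi)    # approximately 0.5698 for gamma = 0.5

def f(x):
    y = x + x ** (1.0 + gamma)
    return y if x < R1 else y - 1.0
```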
It is well-known that $\operatorname{Leb}\left( \mathcal{E}_f \right) = 0$. Indeed, we may express $\mathcal{E}_f$ as a countable union of nested Cantor sets: $$\mathcal{E}_f = \bigcup_{n=1}^\infty \mathcal{C}_n, \quad \mathcal{C}_n := \bigcap_{k=0}^\infty f^{-k} \left( \left[ r_n, 1 \right] \right).\footnote{See Definition \ref{def:rn} for the definition of the sequence $r_n$.}$$ The sets $\mathcal{C}_n$ are compact and $f$-invariant. By suitably modifying $f$ on the interval $\left[ 0,r_n \right]$, the fact that $\operatorname{Leb}\left( \mathcal{C}_n \right) = 0$ now follows from the standard result that proper compact invariant sets of $C^2$ expanding circle maps are Lebesgue-null.
One consequence of Theorem \[thm:maintheoremf\] concerns the set $S$ of points having positive lower Lyapunov exponent for $f$. Recall that for $x \in \left[ 0,1 \right]$ the lower Lyapunov exponent of $x$ is the number $\liminf_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \log \big| \big( f^k \big)' \left( x \right) \big|$ (using one-sided derivatives as necessary). We prove the following corollary in §\[sec:minorresults\].
\[cor:Lyap\] The set of points with positive lower Lyapunov exponent for $f$ is strong winning for Schmidt’s game.
It was known [@MR2505322; @MR1764931] that $S$ has full Hausdorff dimension for all values of $\gamma > 0$; Corollary \[cor:Lyap\] greatly strengthens this. In the case that $\gamma < 1$, $f$ possesses a fully supported absolutely continuous (with respect to Lebesgue measure) ergodic probability measure $\mu$, so that Lebesgue-almost every point has positive lower Lyapunov exponent since $\operatorname{Lyap}(\mu) > 0$ (see [@MR599464] and references therein). Note that even sets with full Lebesgue measure are not necessarily winning (the complement of a Lebesgue-null winning set is never winning by Theorem \[thm:schmidt\] below; an example is the set of reals normal to a given base [@MR195595]). When $\gamma \geq 1$, however, $\operatorname{Leb}(S) = 0$ [@MR599464], and so Corollary \[cor:Lyap\] is the strongest available result concerning the “largeness” of the set $S$ in this case, and gives another example of a Lebesgue-null winning set.
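The finite-time averages behind the lower Lyapunov exponent are easy to compute. The sketch below (with the illustrative choice $\gamma = 1/2$) evaluates the Birkhoff average $\frac{1}{n} \sum_{k<n} \log f' \left( f^k x \right)$ along an orbit. Since $f'(x) = 1 + (1+\gamma) x^\gamma > 1$ for $x > 0$, every finite-time average is positive; the content of Corollary \[cor:Lyap\] is that the liminf itself stays positive on a strong winning set.

```python
import math

# Finite-time Birkhoff average of log f' along an orbit of the MP map
# (gamma = 0.5 is an illustrative choice).
gamma = 0.5
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mid + mid ** (1.0 + gamma) < 1.0 else (lo, mid)
R1 = 0.5 * (lo + hi)    # branch point r1

def f(x):
    y = x + x ** (1.0 + gamma)
    return y if x < R1 else y - 1.0

def lyap_avg(x, n):
    s = 0.0
    for _ in range(n):
        s += math.log(1.0 + (1.0 + gamma) * x ** gamma)   # log f'(x)
        x = f(x)
    return s / n

avg = lyap_avg(0.3, 20000)   # positive; long laminar phases drag it down
```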
![The graph of the Manneville–Pomeau map $f$.[]{data-label="fig:MP"}](figure1a.jpg){width="\textwidth"}
![The graph of $F$, the first return map to $\left[ r_1,1 \right]$ induced by $f$.[]{data-label="fig:MPinduced"}](figure1b.jpg){width="\textwidth"}
Method of proof
===============
The primary difficulty in studying $f$ is the nonuniformity of expansion near the indifferent fixed point 0, which gives rise to infinite distortion. The map $f$ also exhibits unbounded geometry, by which we mean that the ratio of the longest to the shortest Markov partition element of successive generations tends to infinity. We address the problem of infinite distortion by inducing $f$ on $\left[ r_1,1 \right]$ to get a uniformly expanding first return map $F$. This induced map satisfies a bounded distortion estimate, which is a key property of expanding systems that features prominently in the articles mentioned above. The issue of unbounded geometry is overcome using the notion of “commensurate,” introduced in [@MR3021798].
The bulk of this paper involves analyzing the induced map $F \colon \left[ r_1, 1 \right] \to \left[ r_1, 1 \right]$ given by the rule $$F x := f^{\tau \left( x \right)} \left( x \right), \quad \tau \left( x \right) := \min \left\{ n \geq 1 \colon f^n x \in \left[ r_1,1 \right] \right\}.$$ See Figure \[fig:MPinduced\]. We will show that Theorem \[thm:maintheoremf\] is a straightforward consequence of the following analogous result for $F$, which we prove in §\[sec:maintheoremF\]:
\[thm:maintheoremF\] The set $$\mathcal{E}_F := \left\{ x \in \left[ r_1,1 \right] \colon \left. \left[ r_1,r_1+\epsilon \right. \right) \cap \left\{ F^n x \right\}_{n \geq 0} = \emptyset \text{ for some } \epsilon > 0 \right\}$$ is strong winning for Schmidt’s game.
Our proof of Theorem \[thm:maintheoremF\] works for any map topologically conjugate to $F$ and satisfying the estimates concerning the Markov structure of $F$ in Proposition \[prop:C5\].
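The induced map is equally simple to iterate. The following sketch (illustrative $\gamma = 1/2$ again) evaluates $F$ together with the return time $\tau$ by applying $f$ at least once and then repeatedly until the orbit re-enters $\left[ r_1, 1 \right]$:

```python
# First return map F on [r1, 1]: apply f once, then keep applying f while
# the orbit sits in the laminar region [0, r1).  (gamma = 0.5 illustrative.)
gamma = 0.5
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mid + mid ** (1.0 + gamma) < 1.0 else (lo, mid)
R1 = 0.5 * (lo + hi)    # branch point r1

def f(x):
    y = x + x ** (1.0 + gamma)
    return y if x < R1 else y - 1.0

def F(x):
    """Return (F(x), tau(x)) for x in the open interval (R1, 1)."""
    y, tau = f(x), 1
    while y < R1:            # still below the return interval
        y, tau = f(y), tau + 1
    return y, tau

y1, t1 = F(0.95)   # lands back in [R1, 1] immediately: tau = 1
y2, t2 = F(0.60)   # falls into the laminar region first: tau > 1
```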
In proving Theorem \[thm:maintheoremF\] we follow the approach of Mance and Tseng in [@MR3021798]. In that article the authors studied Lüroth expansions, whose associated dynamical system is piecewise linear. This linear structure permitted a precise computation of the lengths of intervals in the natural Markov partition. In this paper we cannot obtain closed-form expressions for these lengths; instead we derive estimates (Corollary \[cor:kfromzeta\]) derived from a distortion result (Proposition \[prop:C5\]).
We note that in [@MR3206688] Hu and Yu considered the class of piecewise locally $C^{1+\delta}$ expanding maps, a class that includes the Gauss map. At first glance the induced map $F$ looks quite similar to the Gauss map; however, the authors in [@MR3206688] required a Hölder-type distortion estimate that $F$ does not satisfy.
Schmidt’s Game {#sec:Schmidt}
==============
We describe a simplified version of a set-theoretic game introduced by Schmidt in [@MR195595]. The game is played on the unit interval $\left[ 0,1 \right]$. Fix two constants $\alpha, \beta \in \left( 0,1 \right)$ and a set $S \subset \left[ 0,1 \right]$. Two players, Alice and Bob, alternately choose nested closed intervals $B_1 \supset A_1 \supset B_2 \supset A_2 \supset \dots$ with Bob choosing first. These intervals must satisfy the relations $\left| B_{n+1} \right| = \beta \left| A_n \right|$ and $\left| A_n \right| = \alpha \left| B_n \right|$ for all $n \in \mathbb{N}$ ($\left| B_1 \right|$ is arbitrary). Then $\bigcap A_n = \bigcap B_n$ consists of a single point, $\omega$. Alice wins the game if and only if $\omega \in S$.
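The length bookkeeping is worth making explicit: with $\left| A_n \right| = \alpha \left| B_n \right|$ and $\left| B_{n+1} \right| = \beta \left| A_n \right|$ the intervals contract by the factor $\alpha\beta$ each round, so the nested intersection is indeed a single point $\omega$. A minimal sketch (no strategy is modeled; both players simply take the leftmost admissible subinterval):

```python
# Interval bookkeeping for Schmidt's game on [0,1] with exact-length rules:
# after n full rounds |B_{n+1}| = (alpha*beta)^n |B_1|.
alpha, beta = 0.25, 0.5
B = (0.0, 1.0)
for _ in range(60):
    lB = B[1] - B[0]
    A = (B[0], B[0] + alpha * lB)              # Alice's interval
    B = (A[0], A[0] + beta * (A[1] - A[0]))    # Bob's next interval
# the common point omega of all the B's is approached geometrically
```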
If Alice has a winning strategy by which she can win regardless of Bob’s choices, $S$ is said to be *$\left( \alpha, \beta \right)$-winning*. $S$ is called *$\alpha$-winning* if it is $\left( \alpha, \beta \right)$-winning for all $\beta \in \left( 0,1 \right)$. $S$ is called *winning* if it is $\alpha$-winning for some $\alpha \in \left( 0,1 \right)$. The following result lists important properties of winning sets; the proof may be found in [@MR195595].
\[thm:schmidt\] A winning set in $\left[ 0,1 \right]$ is dense, uncountable, and has full Hausdorff dimension. A countable intersection of $\alpha$-winning sets is $\alpha$-winning. A cocountable subset of an $\alpha$-winning set is $\alpha$-winning.
In [@MR2720230] McMullen introduced a modification of Schmidt’s game in which the length restrictions are loosened to $\left| B_{n+1} \right| \geq \beta \left| A_n \right|$ and $\left| A_n \right| \geq \alpha \left| B_n \right|$. This results in *strong winning* sets. As the name implies, strong winning sets are winning. In addition, the strong winning property is preserved under quasisymmetric homeomorphisms, which is not generally true of the winning property.
Proofs of Minor Results {#sec:minorresults}
=======================
Notation
--------
Let $B \subset \left[ 0,1 \right]$ be a closed interval. The expression $[B)$ denotes the interior of $B$ union its left endpoint; $(B]$ is similarly defined. $\partial^\ell B$ and $\partial^r B$ denote the left and right endpoints of $B$, respectively. The notations $\overline{B}$ and $B^\circ$ denote the closure and interior of $B$, respectively. $|B|$ denotes the diameter of $B$, and we call $B$ *nontrivial* if $0 < |B| < 1$. Henceforth all closed intervals are assumed to be nontrivial.
Technical results
-----------------
\[def:rn\] Define $\left\{ r_n \right\}_{n=0}^\infty \subset \left. \left( 0,1 \right. \right]$ recursively by $r_0 = 1$ and $\left\{ r_{n+1} \right\} = f^{-1} \left( r_n \right) \cap \left( 0,r_n \right)$; thus $r_n \searrow 0$.
Define $\left\{ p_n \right\}_{n=0}^\infty \subset \left. \left( r_1,1 \right. \right]$ recursively by $p_0 = 1$ and $\left\{ p_n \right\} := f^{-1} \left( r_n \right) \cap \left( r_1,1 \right)$; thus $p_n \searrow r_1$.
The asymptotics of these sequences will play a crucial role. Proofs of the next two results may be found in §6.2 of [@MR1750438].
\[thm:rnyoung\] There exists a constant $C_1 > 1$ such that for all $n \in \mathbb{N}$, $$\begin{gathered}
C_1^{-1} n^{-\frac{1}{\gamma}} \leq r_n \leq C_1 n^{-\frac{1}{\gamma}}, \\
C_1^{-1} n^{-1-\frac{1}{\gamma}} \leq r_{n-1}-r_n \leq C_1 n^{-1-\frac{1}{\gamma}}.
\end{gathered}$$
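These asymptotics are easy to observe numerically. In the sketch below (illustrative $\gamma = 1/2$, so $1/\gamma = 2$) each $r_{n+1}$ is the bisection root of $x + x^{1+\gamma} = r_n$ on $\left( 0, r_n \right)$, and the quantity $n^{1/\gamma} r_n$ is seen to stabilize, consistent with the first estimate:

```python
# Numerical check that r_n scales like n^(-1/gamma) (here gamma = 0.5).
gamma = 0.5

def preimage(r):            # root of x + x^(1+gamma) = r on (0, r)
    lo, hi = 0.0, r
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid + mid ** (1.0 + gamma) < r else (lo, mid)
    return 0.5 * (lo + hi)

r = [1.0]
for _ in range(2000):
    r.append(preimage(r[-1]))

scaled = [n ** (1.0 / gamma) * r[n] for n in (500, 1000, 2000)]
# the three scaled values nearly coincide, as the bound
# C_1^{-1} <= n^{1/gamma} r_n <= C_1 suggests
```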
\[thm:youngdist\] There exists a constant $C_2 > 1$ such that for all integers $1 \leq m \leq n$, and for all points $x,y \in \left. \left[ r_{n+1}, r_n \right. \right)$, $$\left| \log \frac{\left( f^m \right)'x}{\left( f^m \right)'y} \right| \leq \frac{C_2}{r_{n-m} - r_{n-m+1}} \left| f^m x - f^m y \right|.$$
\[cor:C3\] There exists a constant $C_3 > 1$ such that for all integers $1 \leq m \leq n$, and for all points $x,y \in \left. \left[ p_n, p_{n-1} \right. \right)$, $$\left| \log \frac{\left( f^m \right)'x}{\left( f^m \right)'y} \right| \leq \frac{C_3}{r_{n-m} - r_{n-m+1}} \left| f^m x - f^m y \right|.$$
First assume that $m > 1$. Observe that $$\left| \log \frac{\left( f^m \right)'x}{\left( f^m \right)'y} \right| \leq \left| \log \frac{\left( f^{m-1} \right)' \left( fx \right)}{\left( f^{m-1} \right)' \left( fy \right)} \right| + \left| \left( \log f' \right) x - \left( \log f' \right) y \right|.$$ Because $fx,fy \in \left. \left[ r_n, r_{n-1} \right. \right)$, Theorem \[thm:youngdist\] applies to the first term on the right-hand side above. Now use the Mean Value Theorem to find $\xi \in \left( x,y \right)$ such that $$\begin{aligned}
\left| \log \frac{\left( f^m \right)'x}{\left( f^m \right)'y} \right| &\leq \frac{C_2 \left| f^m x - f^m y \right|}{r_{n-m} - r_{n-m+1}} + \left| \frac{f'' \xi}{f' \xi} \right| \left| x - y \right| \\
&\leq \frac{C_2 \left| f^m x - f^m y \right|}{r_{n-m} - r_{n-m+1}} + \frac{\gamma \left( \gamma+1 \right) \xi^{\gamma-1}}{1 + \left( \gamma+1 \right) \xi^\gamma} \left| f^m x - f^m y \right| \\
&\leq \left( \frac{C_2}{r_{n-m} - r_{n-m+1}} + \frac{\gamma}{r_1} \right) \left| f^m x - f^m y \right| \\
&\leq \frac{C_2 + \frac{\gamma}{r_1}}{r_{n-m} - r_{n-m+1}} \left| f^m x - f^m y \right|.
\end{aligned}$$ If $m=1$, then as above we have $$\left| \log \frac{\left( f^m \right)'x}{\left( f^m \right)'y} \right| = \left| \log \frac{f'x}{f'y} \right| \leq \frac{\gamma}{r_1} \left| fx - fy \right| \leq \frac{C_2 + \frac{\gamma}{r_1}}{r_{n-1} - r_n} \left| fx - fy \right|.$$ The corollary follows by taking $C_3 := C_2 + \frac{\gamma}{r_1}$.
Define the *basic interval of generation $0$* to be $\left[ r_1, 1 \right]$ and write $G_0 := \left\{ \left[ r_1, 1 \right] \right\}$. For $n \in \mathbb{N}$, a closed interval is called a *basic interval of generation $n$* if it is the closure of a maximal open interval of monotonicity for $F^n$. We denote by $G_n$ the collection of all basic intervals of generation $n$. Thus, for example, $G_1 = \left\{ \left[ p_1,1 \right], \left[ p_2,p_1 \right], \ldots \right\}$.
\[def:Jsigma\] Given $k \in \mathbb{N}$ and positive integers $m_1, \dots, m_k$, define $J_{m_1, \dots, m_k} \in G_k$ as $$J_{m_1, \dots, m_k} := \overline{{\textstyle \bigcap_{i=1}^k F^{-\left( i - 1 \right)} \left( \left. \left[ p_{m_i}, p_{m_i - 1} \right. \right) \right)}}.$$ Equivalently, we may recursively define $J_1 := \left[ p_1, 1 \right]$, $J_2 := \left[ p_2, p_1 \right]$, etc., and then declare $J_{m_1, \dots, m_k} := J_{m_1 \dots m_{k-1}} \cap F^{-\left( k-1 \right)} \left( J_{m_k} \right)$. Thus $J_\sigma$ is the $m_k$-th branch of $F^k$ in $J_{m_1 \dots m_{k-1}}$, with branches numbered from right to left.
In the following proposition we use that fact that $F$ is uniformly expanding. Write $\lambda = \inf \left\{ F'x \colon x \in \left( r_1, 1 \right) \setminus \left\{ p_n \right\}_{n=1}^\infty \right\} > 1$.
\[prop:distF\] There exists a constant $C_4 > 1$ such that for all integers $1 \leq k \leq n$, for all $J_{m_1 \dots m_n} \in G_n$, and for all $x,y \in \left( J_{m_1 \dots m_n} \right)^\circ$, $$\left| \log \frac{\left( F^k \right)'x}{\left( F^k \right)'y} \right| \leq C_4.$$
Because $F^{i-1} x, F^{i-1} y \in \left( p_{m_i}, p_{m_i-1} \right)$ for $1 \leq i \leq n$, we have $$\left| \log \frac{\left( F^k \right)'x}{\left( F^k \right)'y} \right| \leq \sum_{i=1}^k \left| \log \frac{F' \left( F^{i-1} x \right)}{F' \left( F^{i-1} y \right)} \right| = \sum_{i=1}^k \left| \log \frac{\left( f^{m_i} \right)' \left( F^{i-1} x \right)}{\left( f^{m_i} \right)' \left( F^{i-1} y \right)} \right|.$$ Now we use Corollary \[cor:C3\] to obtain $$\begin{aligned}
\left| \log \frac{\left( F^k \right)'x}{\left( F^k \right)'y} \right| &\leq \frac{C_3}{r_0-r_1} \sum_{i=1}^k \left| f^{m_i} \left( F^{i-1} x \right) - f^{m_i} \left( F^{i-1} y \right) \right| \\
&= \frac{C_3}{r_0-r_1} \sum_{i=1}^k \left| F^i x - F^i y \right| \leq \frac{C_3}{r_0-r_1} \sum_{i=1}^k \lambda^{-\left( k - i \right)} \left| F^k x - F^k y \right| \\
&< C_3 \sum_{j=0}^\infty \lambda^{-j} =: C_4. \qedhere
\end{aligned}$$
\[prop:C5\] There exists a constant $C_5 > 1$ such that for all $n \in \mathbb{N}$, for all $J_\sigma \in G_n$, and for all $k \in \mathbb{N}$, $$\begin{gathered}
C_5^{-1} k^{-\frac{1}{\gamma}} \leq \frac{\left| \bigcup_{i=k}^\infty J_{\sigma i} \right|}{\left| J_\sigma \right|} \leq C_5 k^{-\frac{1}{\gamma}}, \\
C_5^{-1} k^{-1-\frac{1}{\gamma}} \leq \frac{\left| J_{\sigma k} \right|}{\left| J_\sigma \right|} \leq C_5 k^{-1-\frac{1}{\gamma}}.
\end{gathered}$$
In proving the first claimed estimate we may ignore the trivial case $k = 1$. Use the Mean Value Theorem to find $\xi_1, \xi_2 \in \left( J_\sigma \right)^\circ$ such that $$\frac{\left| \bigcup_{i=k}^\infty J_{\sigma i} \right|}{\left| J_\sigma \right|} = \frac{\left| \left[ r_1,p_{k-1} \right] \right| / \left( F^n \right)'\left( \xi_1 \right)}{\left| \left[ r_1,1 \right] \right| / \left( F^n \right)'\left( \xi_2 \right)}.$$ Now using Proposition \[prop:distF\] and the first estimate of Theorem \[thm:rnyoung\] yields $$\begin{aligned}
\frac{\left| \bigcup_{i=k}^\infty J_{\sigma i} \right|}{\left| J_\sigma \right|} &\leq \frac{\exp(C_4)}{1-r_1} \left| \left[ r_1,p_{k-1} \right] \right| \leq \frac{\exp\left( C_4 \right)}{1-r_1} \left| f \left( \left[ r_1,p_{k-1} \right] \right) \right| = \frac{\exp\left( C_4 \right)}{1-r_1} r_{k-1} \\
&\leq \frac{C_1 \exp\left( C_4 \right)}{1-r_1} \left( k-1 \right)^{-\frac{1}{\gamma}} \leq \frac{2^{\frac{1}{\gamma}} C_1 \exp\left( C_4 \right)}{1-r_1} k^{-\frac{1}{\gamma}}
\end{aligned}$$ Similarly we have $$\begin{aligned}
\frac{\left| \bigcup_{i=k}^\infty J_{\sigma i} \right|}{\left| J_\sigma \right|} &\geq \frac{\exp\left( -C_4 \right)}{1-r_1} \left| \left[ r_1,p_{k-1} \right] \right| \geq \frac{\exp\left( -C_4 \right)}{\left( 1-r_1 \right) \sup f'{\mathord{\upharpoonright}}_{\left( r_1,1 \right)}} \left| f \left( \left[ r_1,p_{k-1} \right] \right) \right| \\
&\geq \frac{C_1^{-1} \exp\left( -C_4 \right)}{\left( 1-r_1 \right) \sup f'{\mathord{\upharpoonright}}_{\left( r_1,1 \right)}} \left( k-1 \right)^{-\frac{1}{\gamma}} \geq \frac{C_1^{-1} \exp\left( -C_4 \right)}{\left( 1-r_1 \right) \sup f'{\mathord{\upharpoonright}}_{\left( r_1,1 \right)}} k^{-\frac{1}{\gamma}}.
\end{aligned}$$ In proving the second claimed estimate we include the case $k=1$. With nearly identical calculations to those above, but now using the second estimate of Theorem \[thm:rnyoung\], we see that $$\begin{aligned}
\frac{\left| J_{\sigma k} \right|}{\left| J_\sigma \right|} &\leq \frac{\exp\left( C_4 \right)}{1-r_1} \left| \left[ p_k,p_{k-1} \right] \right| \leq \frac{\exp\left( C_4 \right)}{1-r_1} \left| f \left( \left[ p_k,p_{k-1} \right] \right) \right| \\
&= \frac{\exp\left( C_4 \right)}{1-r_1} \left| \left[ r_k, r_{k-1} \right] \right| \leq \frac{C_1 \exp\left( C_4 \right)}{1-r_1} k^{-1-\frac{1}{\gamma}}
\end{aligned}$$ as well as $$\begin{aligned}
\frac{\left| J_{\sigma k} \right|}{\left| J_\sigma \right|} &\geq \frac{\exp\left( -C_4 \right)}{1-r_1} \left| \left[ p_k,p_{k-1} \right] \right| \geq \frac{\exp\left( -C_4 \right)}{\left( 1-r_1 \right) \sup f'{\mathord{\upharpoonright}}_{\left( r_1,1 \right)}} \left| f \left( \left[ p_k,p_{k-1} \right] \right) \right| \\
&= \frac{\exp\left( -C_4 \right)}{\left( 1-r_1 \right) \sup f'{\mathord{\upharpoonright}}_{\left( r_1,1 \right)}} \left| \left[ r_k,r_{k-1} \right] \right| \geq \frac{C_1^{-1} \exp\left( -C_4 \right)}{\left( 1-r_1 \right) \sup f'{\mathord{\upharpoonright}}_{\left( r_1,1 \right)}} k^{-1-\frac{1}{\gamma}}.
\end{aligned}$$ The proposition follows by taking $$C_5 := \max \left\{ \frac{2^{\frac{1}{\gamma}} C_1 \exp\left( C_4 \right)}{1-r_1}, \frac{\left( 1-r_1 \right) \sup f'{\mathord{\upharpoonright}}_{\left( r_1,1 \right)}}{C_1^{-1} \exp\left( -C_4 \right)} \right\}. \qedhere$$
\[cor:kfromzeta\] Fix $n \in \mathbb{N}$, $J_\sigma \in G_n$, and $\zeta \in \left( 0,1 \right)$. Find the unique $K \in \mathbb{N}$ such that $\partial^\ell J_\sigma + \zeta \left| J_\sigma \right| \in \left. \left[ J_{\sigma K} \right. \right)$. Then $$\left( C_5 \zeta \right)^{-\gamma} - 1 \leq K \leq \left( C_5^{-1} \zeta \right)^{-\gamma}.$$
Because $$\bigcup_{i=K+1}^\infty J_{\sigma i} \subset \left( \left. \partial^\ell J_\sigma, \partial^\ell J_\sigma + \zeta \left| J_\sigma \right| \right] \right. \subset \bigcup_{i=K}^\infty J_{\sigma i},$$ Proposition \[prop:C5\] allows us to estimate the diameters of the three sets above as follows: $$C_5^{-1} \left( K+1 \right)^{-\frac{1}{\gamma}} \left| J_\sigma \right| \leq \left| \bigcup_{i=K+1}^\infty J_{\sigma i} \right| \leq \zeta \left| J_\sigma \right| \leq \left| \bigcup_{i=K}^\infty J_{\sigma i} \right| \leq C_5 K^{-\frac{1}{\gamma}} \left| J_\sigma \right|.$$ Solving the inequalities $$C_5^{-1} \left( K+1 \right)^{-\frac{1}{\gamma}} \leq \zeta \quad \text{and} \quad \zeta \leq C_5 K^{-\frac{1}{\gamma}}$$ for $K$ completes the proof.
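As a numerical aside (no part of the proof depends on it), the bounds of Corollary \[cor:kfromzeta\] can be sanity-checked directly against the sandwich inequalities from which they are solved; the values of $C_5$, $\gamma$, and $\zeta$ below are illustrative choices, not tied to any particular map.

```python
def k_bounds(C5, gamma, zeta):
    """Bounds from the corollary: any K with
    C5**-1 * (K+1)**(-1/gamma) <= zeta <= C5 * K**(-1/gamma)
    must satisfy (C5*zeta)**(-gamma) - 1 <= K <= (zeta/C5)**(-gamma)."""
    return (C5 * zeta) ** (-gamma) - 1, (zeta / C5) ** (-gamma)

# Illustrative parameters: any C5 > 1, gamma > 0, zeta in (0, 1) will do.
C5, gamma, zeta = 2.0, 0.5, 0.01
lo, hi = k_bounds(C5, gamma, zeta)

# Every K consistent with the sandwich inequalities lies in [lo, hi].
admissible = [K for K in range(1, 100000)
              if (1 / C5) * (K + 1) ** (-1 / gamma) <= zeta <= C5 * K ** (-1 / gamma)]
assert admissible and all(lo <= K <= hi for K in admissible)
```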
Let $S$ be the set of points in $\left( 0,1 \right)$ with positive lower Lyapunov exponent for $f$. If $x \in \mathcal{E}_f$, find $\epsilon > 0$ such that the orbit of $x$ under $f$ avoids $\left. \left[ 0, \epsilon \right. \right)$. Note that $f' \left( f^k x \right)$ is well-defined for all $k \in \mathbb{N} \cup \{ 0 \}$ because $x$ is not a preimage of $0$. Since $f'$ is increasing, the lower Lyapunov exponent of $x$, $L(x)$, satisfies $$L(x) = \liminf_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \log \left| f' \left( f^k x \right) \right| \geq \liminf_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \log f' \left( \epsilon \right) = \log f' \left( \epsilon \right) > 0.$$ Hence $\mathcal{E}_f \subset S$ and the result follows.
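As a numerical illustration of the estimate above, one can watch the Birkhoff averages of $\log f'$ along an orbit that stays to the right of its minimum; the map below is an assumed Manneville–Pomeau-type stand-in, $f(x) = x + x^{1+\gamma} \bmod 1$, chosen only because $f'$ is increasing, which is all the argument uses.

```python
import math

# Illustrative stand-in map; the paper's f need only have f' increasing.
gamma = 0.5
f = lambda x: (x + x ** (1 + gamma)) % 1.0
fprime = lambda x: 1.0 + (1.0 + gamma) * x ** gamma  # increasing on (0, 1)

x0, n = 0.3, 10000
x, m, total = x0, x0, 0.0
for _ in range(n):
    total += math.log(fprime(x))
    m = min(m, x)   # the smallest orbit point seen so far
    x = f(x)

# Since f' is increasing and every summed point is >= m, each term
# log f'(f^k x0) is at least log f'(m); hence so is the Birkhoff average.
assert total / n >= math.log(fprime(m)) - 1e-12
```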
Commensurability {#sec:commensurability}
================
Following [@MR3021798], we make the next two definitions.
A point is called a *left endpoint of generation $n$* if it is the left endpoint of some basic interval of generation $n$.
If $B$ is a closed interval and $n \in \mathbb{N}$, say that *$B$ is commensurate with generation $n$ (c.w.g. $n$)* if $B$ contains some member of $G_n$ but no member of $G_{n-1}$.
We observe the following properties of basic intervals:
1. \[obs1\] For all $I \in G_n$ with $n > 0$, and all $0 \leq k \leq n-1$, there exists a unique member of $G_k$ properly containing $I$.
2. \[obs2\] Basic intervals of distinct generations are either nested or disjoint.
3. \[obs3\] Basic intervals of the same generation have disjoint interiors.
4. \[obs4\] Every basic interval $I \in G_n$ has a unique left-adjacent basic interval in $G_n$.
5. \[obs5\] Every basic interval $I_{\sigma k} \in G_n$, where $|\sigma| \geq 0$ and $k > 1$, has a unique right-adjacent basic interval in $G_n$.
6. \[obs6\] If $\ell$ is a left endpoint of generation $n$ and $\epsilon > 0$, then the interval $\left( \ell, \ell+\epsilon \right)$ contains infinitely many members of $G_{n+k}$ for all $k \geq 1$.
7. \[obs7\] For each $n \in \mathbb{N} \cup \{ 0 \}$, the union of the elements of $G_n$ is dense in $\left[ r_1,1 \right]$.
Every closed interval $B$ is commensurate with a unique generation.
The collection of all left endpoints of all generations is equal to the set $\left( \bigcup_{n=0}^\infty F^{-n} \left( r_1 \right) \right) \setminus \{ 1 \}$, and hence is dense in $\left[ r_1,1 \right]$. So $B^\circ$ contains a left endpoint of some generation $n \in \mathbb{N}$; hence $B$ contains some basic interval of generation $n+1$ by Observation \[obs6\]. Let $n_0$ be the least generation for which $B$ contains a member of $G_{n_0}$. Then $n_0 \geq 1$, and $B$ contains a member of $G_{n_0}$ but no member of $G_{n_0-1}$.
Suppose $B$ is c.w.g. $g_1$ and $g_2$, where $g_1 < g_2$. $B$ contains some $I \in G_{g_1}$; hence $\left. \left[ B \right. \right)$ contains $\partial^\ell I$. Thus $\left( \partial^\ell I, \partial^r B \right) \subset B$ contains an element of $G_{g_1+1}$ by Observation \[obs6\]. Repeating this argument shows that $B$ contains an element of $G_{g_2-1}$, contradicting that $B$ is c.w.g. $g_2$.
\[cor:atmost2\] If a closed interval $B$ is c.w.g. $n$, then $B$ intersects either one or two elements of $G_{n-1}$.
$B$ intersects at least one member of $G_{n-1}$ by Observation \[obs7\]. If $B$ intersects three elements of $G_{n-1}$, then $B$ intersects three adjacent elements of $G_{n-1}$. Call the leftmost one $I_1$, the middle one $I_2$, and the rightmost one $I_3$. Then $I_2 = \left[ \partial^r I_1, \partial^\ell I_3 \right] \subset B$, contradicting that $B$ is c.w.g. $n$.
\[lem:accum\] If a closed interval $B$ is c.w.g. $n$, then $B$ contains at most one left endpoint of generation at most $n-1$. Furthermore, if $B$ contains a left endpoint $\ell$ of generation $k < n-1$, then $\ell$ is the right endpoint of $B$.
Suppose $B$ contains two left endpoints $\ell_1 < \ell_2$ of generations $g_1, g_2$, respectively, and $g_1, g_2 \leq n-1$. First assume that $g_1 = g_2$. Then $B$ contains two adjacent left endpoints of generation $g_1$; hence $B$ contains a basic interval of generation $g_1 \leq n-1$, contradicting that $B$ is c.w.g. $n$.
Next assume $g_1 < g_2$. Then the interval $\left( \ell_1,\ell_2 \right)$ contains an element of $G_{g_1+1}$ by Observation \[obs6\]; hence $\left( \ell_1,\ell_2 \right)$ contains a left endpoint of generation $g_1+1$. Repeating this argument shows that $\left( \ell_1,\ell_2 \right) \subset B$ contains a left endpoint of generation $g_2$. Now we are in the situation of the previous case, giving a contradiction.
Finally, assume $g_1 > g_2$. For $i \in \{ 1,2 \}$ let $I_i$ be the basic interval of generation $g_i$ with left endpoint $\ell_i$. By Observation \[obs2\], either $I_1 \cap I_2 = \emptyset$, $I_1 \subset I_2$, or $I_2 \subset I_1$. Now $I_2 \subset I_1$ is impossible because $g_2 < g_1$, and $I_1 \subset I_2$ is impossible because $\partial^\ell I_1 \notin I_2$. So $I_1 \cap I_2 = \emptyset$ and thus $B$ contains $I_1$, a basic interval of generation at most $n-1$. This contradicts that $B$ is c.w.g. $n$.
For the second claim of the lemma, observe that if $\left. \left[ B \right. \right)$ contains a left endpoint $\ell$ of generation $k < n-1$, then the interval $(\ell, \partial^r B) \subset B$ contains a basic interval of generation $k+1 < n$ by Observation \[obs6\], contradicting that $B$ is c.w.g. $n$.
\[cor:gnminus2\] If a closed interval $B$ is c.w.g. $n \geq 2$, then there is a unique element of $G_{n-2}$ that properly contains $B$.
$\left. \left[ B \right. \right)$ intersects at least one member of $G_{n-2}$ by Observation \[obs7\]. If $B$ intersects two members of $G_{n-2}$, then $B$ intersects two adjacent members $I_1, I_2$ of $G_{n-2}$. Then $\partial^r I_1 = \partial^\ell I_2$, and $\partial^\ell I_2 \in B$ because $B$ meets both $I_1$ and $I_2$. By Lemma \[lem:accum\], $\partial^\ell I_2 = \partial^r B$. This shows that there is exactly one element of $G_{n-2}$ that intersects $\left. \left[ B \right. \right)$; hence this element must contain $B$ by Observation \[obs4\]. Proper containment follows because $B$ is c.w.g. $n$.
Proof that $\mathcal{E}_F$ is strong winning (Theorem \[thm:maintheoremF\]) {#sec:maintheoremF}
===========================================================================
Initial steps
-------------
Recall the constant $C_5 > 1$ defined in Proposition \[prop:C5\], in which bounds on the lengths of basic intervals are derived; $\gamma > 0$, which appears in the exponent in the definition of $f$, controls the degree of nonuniform hyperbolicity of the system. Define $\alpha = 2^{-2-\frac{1}{\gamma}} C_5^{-1}$ and let $\beta \in \left( 0,1 \right)$ be arbitrary. We now show that $\mathcal{E}_F$ is $\left( \alpha, \beta \right)$-strong winning.
Bob begins the game by choosing $B_1 \subset \left[ r_1, 1 \right]$. Alice chooses $A_1 \subset B_1$ so that $\{ r_1, 1 \} \cap A_1 = \emptyset$. Bob chooses $B_2 \subset A_1$. Thus $B_2$ is c.w.g. $g_1 > 0$.
Find $d_1'$ large enough that $$\left| B_2 \right| > \tfrac{1}{d_1'} \left| I \right| \text{ for all } I \in G_{g_1-1} \text{ that intersect } B_2.$$ Next, if $g_1 = 1$, define $d_2' := 1$. Otherwise find $d_2' > 1$ large enough so that $$B_2 \cap \left( \partial^\ell I, \partial^\ell I + \tfrac{1}{d_2'} \left| I \right| \right) = \emptyset \text{ for all } I \in \bigcup_{g=0}^{g_1-2} G_g.$$ Now fix constants $d_1$ and $d_2$ satisfying $$\begin{aligned}
d_1 &> \max \left\{ d_1', 2^{1+\frac{1}{\gamma}} C_5^2 \left( \alpha \beta \right)^{-1} \right\}, \\
d_2 &> \max \left\{ d_2', 2^{1+\frac{2}{\gamma}} C_5^4 \left( \alpha \beta \right)^{-1}, 2d_1 \left( 1-2\alpha \right)^{-1} \right\}.\end{aligned}$$
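The lower bounds on $d_1$ and $d_2$ are exactly the ones consumed later in the induction step. The following sketch checks the derived inequalities numerically for illustrative values of $C_5$, $\gamma$, $\beta$, with $d_1' = d_2' = 1$ (in the game these depend on Bob's early moves).

```python
# Illustrative parameters; any C5 > 1, gamma > 0, beta in (0, 1) work.
C5, gamma, beta = 2.0, 1.0, 0.5
alpha = 2 ** (-2 - 1 / gamma) / C5
d1p, d2p = 1.0, 1.0
d1 = 1.01 * max(d1p, 2 ** (1 + 1 / gamma) * C5 ** 2 / (alpha * beta))
d2 = 1.01 * max(d2p, 2 ** (1 + 2 / gamma) * C5 ** 4 / (alpha * beta),
                2 * d1 / (1 - 2 * alpha))

# Inequalities invoked in the induction step:
assert 1 / d1 < 2 ** (-1 - 1 / gamma) * alpha * beta / C5 ** 2   # Case 1, P2
assert 2 ** (1 / gamma) * C5 ** 2 / d2 < 2 ** (-1 - 1 / gamma) * alpha * beta / C5 ** 2
assert 0.5 - d1 / d2 > alpha     # Case 1, Subcase 1 estimate
assert beta / C5 > 1 / d1        # Case 2 size lemma
```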
Let $n_1 := 2$. During the course of the $\left( \alpha, \beta \right)$ game we will prove the following claim, which is the heart of our proof, by induction.
Regardless of how Bob plays the $\left( \alpha, \beta \right)$ game, Alice can play in such a way that: there exist integers $0 < n_1 < n_2 < \dots$ and $0 < g_1 < g_2 < \dots$ such that for all $j \in \mathbb{N}$,
1. ($P_1 \left( j \right)$) $B_{n_j}$ is c.w.g. $g_j$;
2. ($P_2 \left( j \right)$) $\left| B_{n_j} \right| > \tfrac{1}{d_1} \left| J \right|$ for all $J \in G_{g_j-1}$ that intersect $B_{n_j}$;
3. ($P_3 \left( j \right)$) $B_{n_j} \cap \big( \partial^\ell J, \partial^\ell J + \tfrac{1}{d_2} \left| J \right| \big) = \emptyset$ for all $J \in \bigcup_{g=0}^{g_j-2} G_g$.
Note that the case $j=1$ was handled above. Before proceeding to the induction step, we show how the claim implies the theorem.
Write $\{ \omega \} = \bigcap_{n=1}^\infty B_n$ and define $K := \left\lceil \left( C_5 d_2 \right)^\gamma \right\rceil \geq 1$. For any basic interval $J_\sigma$ of any generation we have $\big( \partial^\ell J_\sigma, \partial^\ell J_\sigma + \frac{1}{d_2} \left| J_\sigma \right| \big) \supset \bigcup_{i=K+1}^\infty \left. \left[ J_{\sigma i} \right. \right)$ by Corollary \[cor:kfromzeta\]. Also for any $n \in \mathbb{N} \cup \{ 0 \}$ we have $F^n \omega \in \left( r_1, p_K \right)$ if and only if $\omega \in \bigcup_{i=K+1}^\infty J_{\sigma i}$ for some $J_\sigma \in G_n$. The claim implies that the latter condition never holds; therefore the orbit of $\omega$ under $F$ stays outside $\left( r_1, p_K \right)$. We conclude that $$\widetilde{\mathcal{E}_F} := \left\{ x \in \left[ r_1,1 \right] \colon \left( r_1, r_1+\epsilon \right) \cap \left\{ F^n x \right\}_{n \geq 0} = \emptyset \text{ for some } \epsilon > 0 \right\}$$ is $\left( \alpha, \beta \right)$-strong winning. As $\beta$ was arbitrary, $\widetilde{\mathcal{E}_F}$ is $\alpha$-strong winning. Finally, the original set of interest, $\mathcal{E}_F$, is a cocountable subset of $\widetilde{\mathcal{E}_F}$ because $$\mathcal{E}_F = \left\{ x \in \left[ r_1,1 \right] \colon \left. \left[ r_1, r_1+\epsilon \right. \right) \cap \left\{ F^n x \right\}_{n \geq 0} = \emptyset \text{ for some } \epsilon > 0 \right\} = \widetilde{\mathcal{E}_F} \setminus \bigcup_{n=0}^\infty F^{-n} \left( r_1 \right).$$ Therefore $\mathcal{E}_F$ is $\alpha$-strong winning because a countable intersection of $\alpha$-strong winning sets is $\alpha$-strong winning (see the observation before Theorem 1.2 in [@MR2720230]), and because an $\alpha$-strong winning set with one point removed is $\alpha$-strong winning whenever $\alpha \leq \frac{1}{2}$ (because Alice can avoid the removed point within two turns).
Induction step of the claim
---------------------------
We will need the following result.
\[lem:1overbK\] Fix a basic interval $J_\sigma$ of any generation. Then $$\left[ \partial^\ell J_\sigma, \partial^\ell J_\sigma + \tfrac{1}{d_2} \left| J_\sigma \right| \right] \subset \left[ \partial^\ell J_\sigma, \partial^r J_{\sigma 3} \right).$$ Equivalently, $\big[ \partial^\ell J_\sigma, \partial^\ell J_\sigma + \frac{1}{d_2} \left| J_\sigma \right| \big] \cap \left( J_{\sigma 1} \cup J_{\sigma 2} \right) = \emptyset$.
Let $K$ be the unique integer such that $\partial^\ell J_\sigma + \frac{1}{d_2} \left| J_\sigma \right| \in \left. \left[ J_{\sigma K} \right. \right)$. Using Corollary \[cor:kfromzeta\] we find that $$K + 1 \geq \left( \frac{C_5}{d_2} \right)^{-\gamma} > \left( \frac{C_5}{2^{1+\frac{2}{\gamma}} C_5^4 \left( \alpha \beta \right)^{-1}} \right)^{-\gamma} > \left( \frac{C_5}{3^{\frac{1}{\gamma}} C_5} \right)^{-\gamma} = 3.$$ Hence $K \geq 3$.
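The display above reduces to a single inequality: the prescribed lower bound on $d_2$ forces $\left( C_5/d_2 \right)^{-\gamma} > 3$, hence $K \geq 3$. A brute-force check over a grid of illustrative parameters (the grids below are sample values, not exhaustive):

```python
# d2 > 2^(1+2/gamma) * C5^4 / (alpha*beta) forces (C5/d2)^(-gamma) > 3.
for gamma in (0.25, 0.5, 1.0, 2.0, 5.0):
    for C5 in (1.001, 2.0, 10.0):
        for ab in (0.001, 0.5, 0.999):  # ab stands in for alpha * beta < 1
            d2 = 2 ** (1 + 2 / gamma) * C5 ** 4 / ab  # infimum of admissible d2
            assert (C5 / d2) ** (-gamma) > 3
```
Indeed $\left( d_2/C_5 \right)^\gamma = 2^{\gamma+2} C_5^{3\gamma} \left( \alpha\beta \right)^{-\gamma} > 4$ for all admissible parameters, which the grid merely samples.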
Now we begin the induction. Assume that for some $j \in \mathbb{N}$ statements $P_1 \left( j \right)$, $P_2 \left( j \right)$, and $P_3 \left( j \right)$ hold. By Lemma \[lem:accum\], $B_{n_j}$ contains at most one left endpoint of generation at most $g_j-1$. Let $B_{n_j}^\text{mid}$ denote the midpoint of $B_{n_j}$. We consider two cases, according to whether the interval $\big. \big( B_{n_j}^\text{mid}, \partial^r B_{n_j} \big. \big]$ contains a left endpoint of generation at most $g_j-1$.
Case 1: The interval $\bm{\big. \big( B_{n_j}^\text{mid}, \partial^r B_{n_j} \big. \big]}$ does not contain a left endpoint of generation at most $\bm{g_j-1}$ {#case-1-the-interval-bmbig.-big-b_n_jtextmid-partialr-b_n_j-big.-big-does-not-contain-a-left-endpoint-of-generation-at-most-bmg_j-1 .unnumbered}
--------------------------------------------------------------------------------------------------------------------------------------------------------------
![One possibility for Case 1 of the induction step.[]{data-label="fig:case1"}](figure2.jpg){width="\textwidth"}
We refer the reader to Figure \[fig:case1\]. Because $B_{n_j}$ is c.w.g. $g_j$, $B_{n_j}$ contains some basic interval of generation $g_j$. Let $I_1$ be the rightmost basic interval of generation $g_j$ contained in $B_{n_j}$, and let $I$ denote the unique basic interval of generation $g_j-1$ containing $I_1$ by Observation \[obs1\]. Then $\partial^\ell I \leq B_{n_j}^\text{mid}$. Note that $\partial^\ell I$ could be inside or outside $B_{n_j}$.
Next, we claim that $\partial^r I > \partial^r B_{n_j}$. To see this, first note that $\partial^r I \geq \partial^\ell B_{n_j}$ because $I_1$, and hence $I$, intersects $B_{n_j}$. Next we have that $\partial^r I \geq \partial^r B_{n_j}$, for otherwise the interval $\big( \partial^r I, \partial^r B_{n_j} \big) \subset B_{n_j}$ would contain a member of $G_{g_j}$ to the right of $I_1$ by Observation \[obs6\]. Finally, if $\partial^r I = \partial^r B_{n_j}$, then $\partial^r B_{n_j} \leq \partial^r B_2 < 1$ and hence $\partial^r I \in \big. \big( B_{n_j}^\text{mid}, \partial^r B_{n_j} \big. \big]$ would be the left endpoint of some basic interval of generation at most $g_j-1$. This proves the claim.
Write $I = J_\sigma$ for some string $\sigma$ of length $g_j-1$. In order to specify Alice’s strategy in choosing $A_{n_j}$ we consider two subcases, according to whether $\partial^\ell J_{\sigma 1} \leq \partial^r B_{n_j}$.
![Case 1, Subcase 1 of the induction step.[]{data-label="fig:case1subcase1"}](figure3a.jpg){width="\textwidth"}
*Subcase 1: $\partial^\ell J_{\sigma 1} > \partial^r B_{n_j}$.* See Figure \[fig:case1subcase1\]. Alice chooses $$A_{n_j} = \left[ \partial^r B_{n_j} - \alpha \left| B_{n_j} \right|, \partial^r B_{n_j} \right] \subset B_{n_j}.$$ Using the induction hypothesis $P_2 \left( j \right)$ we find that $$\begin{aligned}
\partial^r B_{n_j} - \left( \partial^\ell I + \frac{1}{d_2} \left| I \right| \right) &\geq \frac{1}{2} \left| B_{n_j} \right| - \frac{1}{d_2} \left| I \right| > \frac{1}{2} \left| B_{n_j} \right| - \frac{d_1}{d_2} \left| B_{n_j} \right| \\
&> \left( \frac{1}{2} - \frac{d_1}{2d_1 \left( 1-2\alpha \right)^{-1}} \right) \left| B_{n_j} \right| = \alpha \left| B_{n_j} \right|.\end{aligned}$$ This shows that $A_{n_j}$ is disjoint from $\big( \partial^\ell I, \partial^\ell I + \frac{1}{d_2} \left| I \right| \big)$. Also $A_{n_j}$ is disjoint from $J_{\sigma 1}$ because $\partial^r A_{n_j} = \partial^r B_{n_j} < \partial^\ell J_{\sigma 1}$. Finally, because $\alpha < \frac{1}{2}$ we have $A_{n_j} \subset \big[ B_{n_j}^\text{mid}, \partial^r B_{n_j} \big] \subset I$ so that $A_{n_j}$ is disjoint from every element of $G_{g_j-1} \setminus \{I\}$.
![Case 1, Subcase 2 of the induction step.[]{data-label="fig:case1subcase2"}](figure3b.jpg){width="\textwidth"}
*Subcase 2: $\partial^\ell J_{\sigma 1} \leq \partial^r B_{n_j}$.* See Figure \[fig:case1subcase2\]. In this case $B_{n_j}$ must contain $J_{\sigma 2}$ since otherwise $B_{n_j}$ would not contain any member of $G_{g_j}$. Also $\big( \partial^\ell I, \partial^\ell I + \frac{1}{d_2} \left| I \right| \big)$ is disjoint from $J_{\sigma 2}$ by Lemma \[lem:1overbK\]. Furthermore, by Proposition \[prop:C5\], $$\begin{aligned}
\left| J_{\sigma 2} \right| &\geq 2^{-1-\frac{1}{\gamma}} C_5^{-1} \left| I \right| > 2^{-1-\frac{1}{\gamma}} C_5^{-1} \big| \big[ B_{n_j}^\text{mid}, \partial^r B_{n_j} \big] \big| \\
&= 2^{-2-\frac{1}{\gamma}} C_5^{-1} \left| B_{n_j} \right| = \alpha \left| B_{n_j} \right|.\end{aligned}$$ Thus, as in the previous subcase, Alice may choose $A_{n_j} \subset J_{\sigma 2} \subset I$ to be disjoint from $\big( \partial^\ell I, \partial^\ell I + \frac{1}{d_2} |I| \big)$, $J_{\sigma 1}$, and every element of $G_{g_j-1} \setminus \{ I \}$.
This takes care of the two subcases. Now Bob chooses $B_{n_j+1}$. If $B_{n_j+1}$ is c.w.g. $g_j$, Alice plays arbitrarily until Bob chooses an interval c.w.g. $g_{j+1} > g_j$. This will eventually happen because $A_{n_j}$ contains finitely many members of $G_{g_j}$ (since $\partial^\ell I \notin A_{n_j}$) and Alice can force $\left| B_n \right| \searrow 0$ by always choosing an interval $A_n$ of length $\alpha \left| B_n \right|$; hence $B_n$ will eventually be too small to contain a member of $G_{g_j}$.
Let $n_{j+1}$ be such that $B_{n_{j+1}-1}$ is c.w.g. $g_j$ and $B_{n_{j+1}}$ is c.w.g. $g_{j+1} > g_j$. Define $$\mathcal{J} := \left\{ J \in \bigcup_{g=g_j}^{g_{j+1}-1} G_g \colon J \cap B_{n_{j+1}} \neq \emptyset \right\}.$$ Observe that every $J \in \mathcal{J}$ is contained in $J_\sigma$ because $B_{n_{j+1}}$ is disjoint from every element of $G_{g_j-1} \setminus \left\{ J_\sigma \right\}$.
\[lem:case1lem\] $\left| B_{n_{j+1}} \right| \geq 2^{-1-\frac{1}{\gamma}} \alpha \beta C_5^{-2} \left| J \right| > 2^{\frac{1}{\gamma}} C_5^2 d_2^{-1} \left| J \right|$ for all $J \in \mathcal{J}$.
First observe that every $J \in \mathcal{J}$ is contained in some element of $G_{g_j} \cap \mathcal{J}$, and so it suffices to verify the lemma when $J \in G_{g_j} \cap \mathcal{J}$. Next, note that the function $n \mapsto \left| J_{\sigma n} \right|$ is strictly decreasing; this follows immediately from the fact that $f'$ is increasing. Finally, because $B_{n_{j+1}-1} \subset I$ is c.w.g. $g_j$ we may define $k_0 := \min \left\{ k \colon J_{\sigma k} \subset B_{n_{j+1}-1} \right\}$. Then $k_0 \geq 2$ by the choice of $A_{n_j}$ (or because $\partial^r I \notin B_{n_j}$ if $B_{n_{j+1}-1} = B_{n_j}$). By the definition of $k_0$ we have $k_0-1 = \min \left\{ k \colon J_{\sigma k} \cap B_{n_{j+1}-1} \neq \emptyset \right\} \leq \min \left\{ k \colon J_{\sigma k} \in \mathcal{J} \right\}$. Using Proposition \[prop:C5\] we have $$\begin{aligned}
\frac{ \left| B_{n_{j+1}} \right|}{\max \left\{ \left| J \right| \colon J \in G_{g_j} \cap \mathcal{J} \right\}} &\geq \alpha \beta \frac{\left| B_{n_{j+1}-1} \right|}{\left| J_{\sigma \left( k_0-1 \right)} \right|} \geq \alpha \beta \frac{\left| J_{\sigma k_0} \right|}{\left| J_{\sigma \left( k_0-1 \right)} \right|} \geq \alpha \beta \frac{C_5^{-1} k_0^{-1-\frac{1}{\gamma}}}{C_5 \left( k_0-1 \right)^{-1-\frac{1}{\gamma}}} \\
&\geq 2^{-1-\frac{1}{\gamma}} \alpha \beta C_5^{-2} = \frac{2^{\frac{1}{\gamma}} C_5^2}{2^{1+\frac{2}{\gamma}} C_5^4 \left( \alpha \beta \right)^{-1}} > \frac{2^{\frac{1}{\gamma}} C_5^2}{d_2}. \qedhere
\end{aligned}$$
\[cor:case1cor\] $B_{n_{j+1}}$ is disjoint from every interval $\big( \partial^\ell J, \partial^\ell J + \frac{1}{d_2} \left| J \right| \big)$, where $J \in \bigcup_{g=0}^{g_{j+1}-2} G_g$.
$P_3 \left( j \right)$ is true by the induction hypothesis; therefore it suffices to consider $J \in \bigcup_{g=g_j-1}^{g_{j+1}-2} G_g$. Also $B_{n_{j+1}} \subset A_{n_j}$, $A_{n_j}$ is disjoint from $\big( \partial^\ell I, \partial^\ell I + \frac{1}{d_2} \left| I \right| \big)$, and $I$ is the only element of $G_{g_j-1}$ that intersects $\left( A_{n_j} \right)^\circ$. So it suffices to consider $J \in \bigcup_{g=g_j}^{g_{j+1}-2} G_g$.
Fix such a $J = J_\tau \in G_{g'}$, where $g_j \leq g' \leq g_{j+1}-2$. Using Observation \[obs1\] and Corollary \[cor:gnminus2\], let $J'$ be the unique element of $G_{g'}$ containing $B_{n_{j+1}}$. If $J \neq J'$, then $B_{n_{j+1}}$ is disjoint from the interior of $J$ and we are done. So suppose $J = J'$.
Find the unique $K \in \mathbb{N}$ such that $\partial^\ell J + \frac{1}{d_2} \left| J \right| \in \left. \left[ J_{\tau K} \right. \right)$. By Lemma \[lem:1overbK\], $K-1 \geq 2$, and by Corollary \[cor:kfromzeta\], $K-1 \geq \big( \frac{C_5}{d_2} \big)^{-\gamma} - 2$. So by Proposition \[prop:C5\], $$\begin{aligned}
\frac{\left| \bigcup_{i=K-1}^\infty J_{\tau i} \right|}{\left| J \right|} &\leq C_5 \left( K-1 \right)^{-\frac{1}{\gamma}} \leq C_5 \bigg( \left( \frac{C_5}{d_2} \right)^{-\gamma}-2 \bigg)^{-\frac{1}{\gamma}} \\
&\leq C_5 \bigg( \frac{1}{2} \left( \frac{C_5}{d_2} \right)^{-\gamma} \bigg)^{-\frac{1}{\gamma}} = \frac{2^{\frac{1}{\gamma}} C_5^2}{d_2}.
\end{aligned}$$ Therefore, if the left endpoint of $B_{n_{j+1}}$ were contained in $\big. \big[ \partial^\ell J, \partial^\ell J + \frac{1}{d_2} \left| J \right| \big. \big)$, then $B_{n_{j+1}}$ would contain $J_{\tau \left( K-1 \right)} \in G_{g'+1}$ by Lemma \[lem:case1lem\]. But this is not possible because $B_{n_{j+1}}$ is c.w.g. $g_{j+1} > g'+1$.
In conclusion, $P_1 \left( j+1 \right)$ is true by construction, Lemma \[lem:case1lem\] implies $P_2 \left( j+1 \right)$ because $\frac{1}{d_1} < 2^{-1-\frac{1}{\gamma}} \alpha \beta C_5^{-2}$, and Corollary \[cor:case1cor\] is the statement $P_3 \left( j+1 \right)$. This completes the analysis of Case 1.
Case 2: The interval $\bm{\big. \big( B_{n_j}^\text{mid}, \partial^r B_{n_j} \big. \big]}$ contains a left endpoint of generation at most $\bm{g_j-1}$ {#case-2-the-interval-bmbig.-big-b_n_jtextmid-partialr-b_n_j-big.-big-contains-a-left-endpoint-of-generation-at-most-bmg_j-1 .unnumbered}
------------------------------------------------------------------------------------------------------------------------------------------------------
![Case 2 of the induction step.[]{data-label="fig:case2"}](figure4.jpg){width="\textwidth"}
We refer the reader to Figure \[fig:case2\]. Let $I_1$ be a basic interval of generation at most $g_j-1$ with left endpoint in $\big. \big( B_{n_j}^\text{mid}, \partial^r B_{n_j} \big. \big]$. Then there is some basic interval of generation at most $g_j-1$ with right endpoint $\partial^\ell I_1$ by Observation \[obs4\]; hence there is some $I = J_{\kappa} \in G_{g_j-1}$ having right endpoint $\partial^\ell I_1$. Note that $\partial^\ell I < \partial^\ell B_{n_j}$ since $\partial^r I \in B_{n_j}$ and $B_{n_j}$ is c.w.g. $g_j$. Alice chooses $A_{n_j} = \left[ \partial^r I - \alpha \left| B_{n_j} \right|, \partial^r I \right]$. Using Proposition \[prop:C5\] we have $$\begin{aligned}
\left| \left[ \partial^\ell J_{\kappa 1} + \tfrac{1}{d_2} \left| J_{\kappa 1} \right|,\partial^r I \right] \right| &\geq C_5^{-1} \left| I \right| \left( 1- \tfrac{1}{d_2} \right) \geq C_5^{-1} \big| \big[ \partial^\ell B_{n_j}, B_{n_j}^\text{mid} \big] \big| \left( 1 - \tfrac{1}{d_2} \right) \\
&> \tfrac{1}{4} C_5^{-1} \left| B_{n_j} \right| > \alpha \left| B_{n_j} \right|,\end{aligned}$$ which shows that $A_{n_j} \subset J_{\kappa 1}$ and moreover, that $A_{n_j}$ is disjoint from the interval $\big[ \partial^\ell J_{\kappa 1}, \partial^\ell J_{\kappa 1} + \frac{1}{d_2} \left| J_{\kappa 1} \right| \big]$. Thus $A_{n_j}$ is disjoint from all intervals $\big[ \partial^\ell J, \partial^\ell J + \frac{1}{d_2} \left| J \right| \big]$ where $J \in G_{g_j}$.
Let $A_{n_j}$ be c.w.g. $\tilde{g} > g_j$. Then by the choice of $A_{n_j}$, $J_{\kappa 1 \tilde{\kappa} 1} \subset A_{n_j} \subset J_{\kappa 1 \tilde{\kappa}}$, where $\tilde{\kappa}$ is a string of $\tilde{g}-g_j-1$ repeating ones. Now Bob chooses $B_{n_j+1}$. Define $n_{j+1} := n_j + 1$ and let $B_{n_{j+1}}$ be c.w.g. $g_{j+1} \geq \tilde{g}$.
\[lem:case2lemsize\] $\left| B_{n_{j+1}} \right| \geq \beta C_5^{-1} \left| J \right| > \tfrac{1}{d_1} \left| J \right|$ for all $J \in G_{g_{j+1}-1}$ that intersect $B_{n_{j+1}}$.
If $g_{j+1} = \tilde{g}$, then the only basic interval of generation $g_{j+1}-1$ intersecting $B_{n_{j+1}}$ is $J_{\kappa 1 \tilde{\kappa}}$, and by Proposition \[prop:C5\] we have $$\left| B_{n_{j+1}} \right| \geq \beta \left| A_{n_j} \right| \geq \beta \left| J_{\kappa 1 \tilde{\kappa} 1} \right| \geq \beta C_5^{-1} \left| J_{\kappa 1 \tilde{\kappa}} \right| > \tfrac{1}{d_1} \left| J_{\kappa 1 \tilde{\kappa}} \right|.$$ On the other hand, if $g_{j+1} > \tilde{g}$, then there are at most two basic intervals of generation $g_{j+1}-1$ intersecting $B_{n_{j+1}}$ by Corollaries \[cor:atmost2\] and \[cor:gnminus2\]. If there is one, call it $J_{\tau t}$; if there are two, call them $J_{\tau t}$ and $J_{\tau (t+1)}$. Both $J_{\tau t}$ and $J_{\tau (t+1)}$ are contained in $J_{\kappa 1 \tilde{\kappa}}$. Thus $\left| J_{\tau (t+1)} \right| < \left| J_{\tau t} \right| < \left| J_{\kappa 1 \tilde{\kappa}} \right|$ since $f'$ is increasing. Borrowing from the calculation above, $$\begin{aligned}
\left| B_{n_{j+1}} \right| &\geq \beta C_5^{-1} \left| J_{\kappa 1 \tilde{\kappa}} \right| > \beta C_5^{-1} \max \left\{ \left| J_{\tau (t+1)} \right|, \left| J_{\tau t} \right| \right\} \\
&> \tfrac{1}{d_1} \max \left\{ \left| J_{\tau (t+1)} \right|, \left| J_{\tau t} \right| \right\}. \qedhere
\end{aligned}$$
\[lem:case2lem\] $B_{n_{j+1}}$ is disjoint from every interval $\big( \partial^\ell J, \partial^\ell J + \frac{1}{d_2} \left| J \right| \big)$, where $J \in \bigcup_{g=0}^{g_{j+1}-2} G_g$.
We use the same notation as in the previous lemma. $P_3 \left( j \right)$ is true by the induction hypothesis; therefore it suffices to consider $J \in \bigcup_{g=g_j-1}^{g_{j+1}-2} G_g$. Also $B_{n_{j+1}} \subset A_{n_j} \subset J_{\kappa 1}$, $J_{\kappa 1}$ is disjoint from $\big( \partial^\ell I, \partial^\ell I + \frac{1}{d_2} \left| I \right| \big)$ by Lemma \[lem:1overbK\], and $I$ is the only element of $G_{g_j-1}$ that intersects $\left( A_{n_j} \right)^\circ$. So it suffices to consider $J \in \bigcup_{g=g_j}^{g_{j+1}-2} G_g$.
Fix such a $J \in G_g$, where $g_j \leq g \leq g_{j+1}-2$. Let $J'$ be the unique element of $G_g$ containing $J_{\tau (t+1)}$ and $J_{\tau t}$. If $J \neq J'$, then $B_{n_{j+1}}$ is disjoint from the interior of $J$ and we are done. So suppose $J = J'$. Thus $J = J_{\kappa 1 \kappa'}$ where $\kappa'$ is a string of $g-g_j$ repeating ones. We consider two cases, the first of which (Case A) is potentially vacuous.
*Case A: $g_j \leq g \leq \tilde{g}-2$.* Recall that $B_{n_{j+1}} \subset A_{n_j} \subset J_{\kappa 1 \tilde{\kappa}} \in G_{\tilde{g}-1}$ where $\tilde{\kappa}$ is a string of $\tilde{g}-g_j-1$ repeating ones. Also $J = J_{\kappa 1 \kappa'}$ where $\kappa'$ is a string of $g-g_j$ repeating ones; but $\left| \kappa' \right| = g-g_j \leq \tilde{g}-g_j-2 < \tilde{g}-g_j-1 = \left| \tilde{\kappa} \right|$, and $\big( \partial^\ell J, \partial^\ell J + \frac{1}{d_2} \left| J \right| \big) \subset \bigcup_{i=3}^\infty J_{\kappa 1 \kappa' i}$ by Lemma \[lem:1overbK\]. The result follows in this case.
*Case B: $\tilde{g}-1 \leq g \leq g_{j+1}-2$.* Find the unique $K$ such that $\partial^\ell J + \frac{1}{d_2} \left| J \right| \in \left. \left[ J_{\kappa 1 \kappa' K} \right. \right)$. By Lemma \[lem:1overbK\], $K-1 \geq 2$, and by Corollary \[cor:kfromzeta\], $K-1 \geq \big( \frac{C_5}{d_2} \big)^{-\gamma} - 2$. Thus, using Proposition \[prop:C5\], $$\begin{aligned}
\frac{\left| \bigcup_{i=K-1}^\infty J_{\kappa 1 \kappa' i} \right|}{\left| J \right|} &\leq C_5 \left( K-1 \right)^{-\frac{1}{\gamma}} \leq C_5 \bigg( \left( \frac{C_5}{d_2} \right)^{-\gamma}-2 \bigg)^{-\frac{1}{\gamma}} \\
&\leq C_5 \bigg( \frac{1}{2} \left( \frac{C_5}{d_2} \right)^{-\gamma} \bigg)^{-\frac{1}{\gamma}} = \frac{2^{\frac{1}{\gamma}} C_5^2}{d_2}.
\end{aligned}$$ Also $\left| \kappa' \right| = g - g_j \geq \tilde{g} - g_j - 1 = \left| \tilde{\kappa} \right|$ and so by Proposition \[prop:C5\], $$\frac{\left| B_{n_{j+1}} \right|}{\left| J \right|} \geq \beta \frac{\left| A_{n_j} \right|}{\left| J \right|} \geq \beta \frac{\left| J_{\kappa 1 \tilde{\kappa} 1} \right|}{ \left| J_{\kappa 1 \kappa'} \right|} \geq \beta \frac{\left| J_{\kappa 1 \tilde{\kappa} 1} \right|}{\left| J_{\kappa 1 \tilde{\kappa}} \right|} \geq \beta C_5^{-1} > \frac{2^{\frac{1}{\gamma}} C_5^2}{d_2}.$$ Therefore, if the left endpoint of $B_{n_{j+1}}$ were contained in $\big. \big[ \partial^\ell J, \partial^\ell J + \frac{1}{d_2} \left| J \right| \big. \big)$, then $B_{n_{j+1}}$ would contain $J_{\kappa 1 \kappa' \left( K-1 \right)} \in G_{g+1}$. But this is not possible because $B_{n_{j+1}}$ is c.w.g. $g_{j+1} > g+1$.
In conclusion, $P_1 \left( j+1 \right)$ is true by construction, Lemma \[lem:case2lemsize\] is the statement $P_2 \left( j+1 \right)$, and Lemma \[lem:case2lem\] is the statement $P_3 \left( j+1 \right)$. This completes the analysis of Case 2. The induction argument is complete, and with it, the proof of Theorem \[thm:maintheoremF\].
Proof that $\mathcal{E}_f$ is strong winning (Theorem \[thm:maintheoremf\]) {#sec:corollary}
===========================================================================
Let $\mathcal{E}_F$ be $\alpha_F$-strong winning (with $\alpha_F \leq \frac{1}{2}$) and define $\alpha_f := \exp \left( -C_2 \right) \alpha_F$ (the constant $C_2$ is defined in Theorem \[thm:youngdist\]). Let $\beta_f \in \left( 0, \exp \left(-C_2 \right) \right)$ be arbitrary and define $\beta_F := \exp \left( C_2 \right) \beta_f$. We claim that $\mathcal{E}_f$ is $\left( \alpha_f, \beta_f \right)$-strong winning. In order to prove this we set up two $\left( \alpha, \beta \right)$ games; Alice and Bob will play the primary $\left( \alpha_f, \beta_f \right)$ game on $\left( \left[ 0,1 \right], \mathcal{E}_f \right)$, and Alicia and Bobby will play an auxiliary $\left( \alpha_F, \beta_F \right)$ game on $\left( \left[ r_1,1 \right], \mathcal{E}_F \right)$.
The main game begins as Bob chooses $B_1 \subset \left[ 0,1 \right]$. Alice chooses $A_1$ such that $0 \notin A_1$. Bob chooses $B_2$. Alice plays arbitrarily until Bob chooses an interval that is contained in some $\left[ r_{n+1}, r_n \right]$. This will eventually happen for the following reason. There are finitely many intervals $\left[ r_{n+1}, r_n \right]$ that intersect $B_2$ (because $0 \notin B_2$), and Alice can force $\left| B_n \right| \searrow 0$ by always choosing an interval $A_n$ of length $\alpha_f \left| B_n \right|$. Furthermore $\alpha_f < \frac{1}{2}$ and so Alice may always choose $A_n$ so as to avoid any given point in $B_n$. After relabeling we may therefore assume without loss of generality that $B_1 \subset \left[ r_{n+1}, r_n \right]$ for some $n \in \mathbb{N}$.
The auxiliary game begins as Bobby chooses $B_1' = f^n \left( B_1 \right) \subset \left[ r_1,1 \right]$. Alicia, as part of her winning strategy, chooses $A_1' \subset B_1'$. Define $A_1 = f^{-n} \left( A_1' \right) \cap \left[ r_{n+1}, r_n \right] \subset B_1$. By the Mean Value Theorem there exist $\xi,\xi' \in B_1$ such that $$\frac{\left| A_1 \right|}{\left| B_1 \right|} = \frac{\left| A_1' \right|/ \left( f^n \right)'\left( \xi \right)}{\left| B_1' \right| / \left( f^n \right)'\left( \xi' \right)} \geq \exp \left( -\frac{C_2}{r_0-r_1} \left| f^n \xi - f^n \xi' \right| \right) \alpha_F \geq \alpha_f.$$ Thus $A_1$ is a permissible interval for Alice to choose; she does so.
Suppose the four players have chosen intervals $\left\{ A_i, B_i, A_i', B_i' \right\}_{i=1}^k$ for some $k \in \mathbb{N}$ in such a way that $f^n \left( B_k \right) = B_k'$ and $A_k = f^{-n} \left( A_k' \right) \cap \left[ r_{n+1}, r_n \right]$, and $A_k'$ is chosen as part of Alicia’s winning strategy. Bob chooses $B_{k+1} \subset A_k$. Define $B_{k+1}' = f^n \left( B_{k+1} \right) \subset A_k'$. By the Mean Value Theorem there exist $\eta, \eta' \in A_k$ such that $$\frac{\left| B_{k+1}' \right|}{\left| A_k' \right|} = \frac{\left| B_{k+1} \right|}{\left| A_k \right|} \frac{\left( f^n \right)'\left( \eta \right)}{\left( f^n \right)'\left( \eta' \right)} \geq \exp \left( -\frac{C_2}{r_0-r_1} \left| f^n \eta - f^n \eta' \right| \right) \beta_f \geq \beta_F.$$ Thus $B_{k+1}'$ is a permissible interval for Bobby to choose; he does so. Alicia, as part of her winning strategy, chooses $A_{k+1}' \subset B_{k+1}'$. Define $A_{k+1} = f^{-n} \left( A_{k+1}' \right) \cap \left[ r_{n+1}, r_n \right] \subset B_{k+1}$. By the Mean Value Theorem there exist $\upsilon,\upsilon' \in B_{k+1}$ such that $$\frac{\left| A_{k+1} \right|}{\left| B_{k+1} \right|} = \frac{\left| A_{k+1}' \right| / \left( f^n \right)'\left( \upsilon \right)}{\left| B_{k+1}' \right| / \left( f^n \right)' \left( \upsilon' \right)} \geq \exp \left( -\frac{C_2}{r_0-r_1} \left| f^n \upsilon - f^n \upsilon' \right| \right) \alpha_F \geq \alpha_f.$$ Thus $A_{k+1}$ is a permissible interval for Alice to choose; she does so.
This completes the induction. Define $\{ \omega \} = \bigcap_{k=1}^\infty B_k$ and $\{ \omega' \} = \bigcap_{k=1}^\infty B_k'$. By construction, Alicia wins; thus there exists $L \in \mathbb{N}$ such that the orbit of $\omega'$ under $F$ stays outside the interval $\left. \left[ r_1, p_L \right. \right)$. Define $M := 2 + \max \left\{ L,n \right\}$. We claim that the orbit of $\omega$ under $f$ stays outside the interval $\left. \left[ 0, r_M \right. \right)$.
Suppose otherwise. Write $\omega' \in J_{m_1 m_2 \dots}$ and let $$\tau := \min \left\{ t \in \mathbb{N} \cup \{ 0 \} \colon f^t \omega \in \left. \left[ 0, r_M \right. \right) \right\}.$$ Because $M > n+1$ and $\omega \in \left[ r_{n+1},r_n \right]$ we have $\tau > n$. Find $j \geq 0$ and $0 \leq s < m_{j+1}$ such that $$\tau = n + m_1 + \dots + m_j + s.$$ Because the orbit of $\omega'$ under $F$ avoids $\left. \left[ r_1, p_L \right. \right)$ we have that $m_i \leq L < M$ for all $i$. Therefore $$F^{j+1} \omega' = f^{m_1 + \dots + m_{j+1} + n} \omega = f^{m_{j+1}-s} \left( f^\tau \omega \right) \in \left. \left[ 0, r_{M-m_{j+1}+s} \right. \right) \subset \left. \left[ 0,r_1 \right. \right).$$ But $F^{j+1} \omega' \in J_{m_{j+2}} \subset \left[ r_1,1 \right]$, a contradiction.
This shows that $\mathcal{E}_f$ is $\left( \alpha_f, \beta_f \right)$-strong winning whenever $\beta_f \in \left( 0, \exp \left(-C_2 \right) \right)$. Clearly this implies that $\mathcal{E}_f$ is $\left( \alpha_f, \beta \right)$-strong winning for all $\beta \in \left( 0, 1 \right)$. Hence $\mathcal{E}_f$ is $\alpha_f$-strong winning.
Acknowledgments
===============
The author would like to thank his Ph.D. advisor, Vaughn Climenhaga, for his infinite patience and wisdom.
[^1]: The author is partially supported by NSF grant DMS-1554794
[^2]: Keywords: Schmidt’s game, Manneville–Pomeau maps, nondense orbit, nonuniform hyperbolicity, Hausdorff dimension
[^3]: Mathematics Subject Classification numbers: 37D25, 11K55
[^4]: See Theorem \[thm:schmidt\] for the precise statement and §\[sec:Schmidt\] for the relevant definitions.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'This work studies semiclassical methods in multi-dimensional quantum systems bounded by finite potentials. By replacing the Maslov index by a scattering phase, the modified transfer operator method gives rather accurate corrections to the quantum energies of circular and square potential wells of finite height. The result justifies the proposed scattering-phase correction, which paves the way for correcting other semiclassical methods based on Green functions, such as the Gutzwiller trace formula, dynamical zeta functions, and the Landauer-Büttiker formula.'
author:
- 'Wen-Min Huang$^1$, Cheng-Hung Chang$^{2,3}$, Chung-Yu Mou$^{1,2}$'
title: 'Semiclassical methods for multi-dimensional systems bounded by finite potentials'
---
Semiclassical approaches are techniques for studying quantum systems in the classical limit [@Brack]. In these approaches, physical quantities are expressed through the Green function of the system, which is replaced by a sum over the underlying classical trajectories under the stationary phase approximation. Well-known examples include the Gutzwiller trace formula, dynamical zeta functions, and transfer operators, which aim at determining the quantized energies of closed systems [@Brack]. For open systems, the most prominent example might be the Landauer-Büttiker formula for charge current transport through open quantum dots [@Jalabert]. Recently, semiclassical approaches have also been used to study spin current transport [@AM] and spin dynamics, clarifying the suppression of D’yakonov-Perel’ spin relaxation in mesoscopic quantum dots [@CHC]. All these approaches relate quantum problems to the ergodic properties of the corresponding classical dynamics and give a more transparent picture of complex phenomena, including the signatures of quantum chaos [@Brack].
Besides this conceptual contribution, some semiclassical methods provide efficient numerical techniques for less time-consuming calculations. Their results are especially accurate in mesoscopic systems, in which the de Broglie wavelength of the Fermi electrons is much shorter than the sample size. Devices on this scale are often fabricated by lithography or controlled by confining potentials. A convenient theoretical approach to such quantum systems is to assume that they are bounded by an infinite potential. This largely simplifies the formalism of the semiclassical methods, since the quantum particle then acquires the same phase change, for arbitrary particle energy, after being reflected by this potential. This phase change is carried by the well-known Maslov index. However, for real systems beyond this assumption, the accuracy of the conventional semiclassical methods using the Maslov index is no longer under control. The usual remedy is to go back and solve the Schrödinger equation with combined Dirichlet and Neumann boundary conditions. One may ask, however, whether the conventional semiclassical methods still work once a suitable effective correction is applied.
A natural candidate for this correction is an effective phase change of the wave function after it is bounced back by the potential. This phase can be related to the dwell time of the quantum particle penetrating into and staying inside the potential barrier. Indeed, for the one-dimensional (1D) potential well it has been shown that, after replacing the Maslov index in the Green function by a scattering phase, this function gives an exact quantization rule identical to that of the Wentzel-Kramers-Brillouin (WKB) method [@HHLin]. This relation motivates the extension of this result to multi-dimensional systems. The current paper demonstrates such an extension for Bogomolny’s transfer operator (BTO) method [@EB] and tests it on a circular billiard and a square billiard bounded by potential wells of finite height. The calculated energies are surprisingly accurate, which justifies the suggested scattering-phase correction for multi-dimensional systems. Although the result is presented for the BTO method, it should be valid for all semiclassical methods as long as they are based on the Green function.
The WKB method might be the simplest semiclassical method. For a particle of mass $m$ and energy $E$ bound by a 1D potential $V(x)$, the solution of its Schrödinger equation can be approximated by the WKB wave function [@Landau], $$\psi(x)=\frac{1}{\sqrt{p(x)}}{\rm exp}\left[ \pm
\frac{i}{\hbar}\int_{x_i}^{x}p(x')dx' \right],$$ with $p(x)=\sqrt{2m[E-V(x)]}$, provided the de Broglie wavelength $\lambda(x)=2\pi\hbar/p(x)$ varies slowly compared to the potential. This requirement is usually violated at a classical turning point $x_0$, where the momentum changes sign. At that point one has $E=V(x_0)$, which gives rise to a vanishing $p(x_0)$ and a singular function $\psi(x)$. If the potential varies slowly around $x_0$, the exponentially decreasing real wave function outside this point must be matched to the oscillating wave function inside it. This forces the incident wave function to acquire a $scattering$ $phase$ upon reflection. This phase equals $\pi/2$ in the semiclassical (short-wave) limit [@Landau]. If the particle is reflected back and forth between two turning points $x_1$ and $x_2$ on two potential barriers, it acquires two scattering phases $\phi_1=\phi_2=\pi/2$ during one period of motion. The total phase then equals $2n\pi$ with an integer $n$, which leads to the WKB quantization condition [@Landau] $$\label{wkb1}
\frac{1}{\hbar}\oint p(x)dx=\frac{2}{\hbar}\int^{x_2}_{x_1}
p(x)dx=2\pi\left(n+\frac{\mu}{4}\right),$$ where the Maslov index $\mu=2$ corresponds to the two reflections during one period of particle motion.
If the potential does not vary sufficiently slowly at the turning points, as for a step-function barrier, the phase change $\pi/2$ is no longer a good approximation. A general scattering phase should then be determined quantum-mechanically [@HF], $$\label{wkb2}
\frac{1}{\hbar}\oint p(x)dx=2n\pi+\phi_1(E) + \phi_2(E),$$ where the phase changes $\phi_1(E)$ and $\phi_2(E)$ at the turning points $x_1$ and $x_2$, respectively, become functions of $E$. As an example, let the particle move in a 1D finite square well with $V(x)=0$ for $0<x<L$ and $V_0$ otherwise, where $L$ is the well width and $V_0>0$ is the potential height. Solving the Schrödinger equation with continuity conditions at the boundaries, the scattering phase can be calculated as $$\label{sp1}
\phi_s(E)=\cos^{-1}\left[2\left(\frac{E}{V_0}\right)-1\right].$$ Substituting Eq. (\[sp1\]) for $\phi_1(E)$ and $\phi_2(E)$ in Eq. (\[wkb2\]), one obtains an exact quantization rule for the 1D finite square well.
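For a flat well Eq. (\[wkb2\]) reduces to $kL = n\pi + \phi_s(E)$ with $k=\sqrt{2mE}/\hbar$, which is easy to solve numerically. The following Python sketch (our illustration, not part of the original text; the function name, the default units $\hbar=m=1$, and the simple bisection are our own choices, the latter justified because the left-hand side minus the right-hand side is monotone in $E$) finds all bound states:

```python
import math

def finite_well_levels(V0, L=1.0, m=1.0, hbar=1.0):
    """Bound states of the 1D finite square well from the phase-corrected
    quantization rule k*L = n*pi + phi_s(E), phi_s = arccos(2E/V0 - 1)."""
    def residual(E, n):
        k = math.sqrt(2.0 * m * E) / hbar
        return k * L - n * math.pi - math.acos(2.0 * E / V0 - 1.0)

    levels, n = [], 0
    while True:
        lo, hi = 1e-12 * V0, (1.0 - 1e-12) * V0
        if residual(hi, n) < 0.0:      # no further bound states
            break
        for _ in range(200):           # bisection: residual is monotone in E
            mid = 0.5 * (lo + hi)
            if residual(mid, n) < 0.0:
                lo = mid
            else:
                hi = mid
        levels.append(0.5 * (lo + hi))
        n += 1
    return levels
```

Since the rule is exact for this potential, each root also satisfies the standard even/odd matching conditions, e.g. $k\tan(kL/2)=\kappa$ with $\kappa=\sqrt{2m(V_0-E)}/\hbar$ for the even states, which provides a convenient consistency check.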
For general $k$-dimensional systems with $k\geq 2$, many quantization rules have been derived to extend the WKB method, including Gutzwiller trace formula, dynamical zeta function, and transfer operators [@Brack]. All of them are based on the semiclassical Green function $$\begin{aligned}
G({\bf r},{\bf r'};E)=\frac{1}{(2\pi
i\hbar)^{k/2}}\sum_\gamma\sqrt{\left|\det\frac{\partial S_{\rm
cl}({\bf r},{\bf r'};E)}{\partial {\bf r}\partial {\bf
r}'}\right|} \nonumber \\
\hspace{1cm}\times\exp\left[\frac{i}{\hbar}S_{\rm cl}({\bf r},{\bf
r'};E)-i\mu\frac{\pi}{2}\right], \label{gf1}\end{aligned}$$ which is a sum over all classical trajectories $\gamma$ starting from ${\bf r'}$ and ending at ${\bf r}$ [@VV; @Brack]. The function $S_{\rm cl}({\bf r},{\bf r'};E)$ therein is the action of the particle from $\bf r'$ to $\bf r$ along $\gamma$. The Maslov index $\mu$ counts the total number of turning points between ${\bf r'}$ and ${\bf r}$ along $\gamma$. For the 1D finite square well, the quantization rule derived from Eq. (\[gf1\]), with $\mu\frac{\pi}{2}$ replaced by $\phi_s$ of Eq. (\[sp1\]), is identical to the quantization rule of Eq. (\[wkb2\]) with the same phase correction $\phi_s$ [@HHLin].
For multi-dimensional systems, the magnitude of the scattering phase should intuitively depend on the incident angle of the particle wave. A natural ansatz is that the phase $\phi_s$ in Eq. (\[sp1\]) is related only to the component of the incident wave perpendicular to the boundary. Figure \[SP\](a) shows the example of a 2D step potential $V_c(x,y)=V_0 \Theta(x)$ with potential height $V_0>0$ for $x\geq 0$ and $0$ for $x<0$, where $\Theta(x)$ is the Heaviside function. The momentum $p_x$ perpendicular to the boundary carries the energy $E_p=p_x^2/2m$, which replaces the total energy $E$ in Eq. (\[sp1\]) for multi-dimensional systems. This phase takes a value between $0$ and $\pi$, depending on the ratio $E_p/V_0$, as shown in Fig. \[SP\](b). If the potential barrier is high, i.e. $E_p/V_0\ll1$, the scattering phase approaches $\pi$, the same value as for an infinitely high potential barrier.
![(a) The incident state $|p_x,p_y \rangle$ and the reflected state $|-p_x,p_y\rangle\,e^{-i\phi_s}$ on the confining potential $V_0$. (b) The scattering phase $\phi_s$ of the reflected state as a function of $E_p/V_0$.[]{data-label="SP"}](fig1.eps){width="7.6cm"}
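The limiting behaviour in Fig. \[SP\](b) is easy to verify numerically. The snippet below is our own illustration, not part of the original paper; it simply evaluates $\phi_s$ of Eq. (\[sp1\]) at the perpendicular energy $E_p$:

```python
import math

def scattering_phase(Ep, V0):
    """phi_s = arccos(2*Ep/V0 - 1), evaluated at the perpendicular
    energy Ep of the incident wave (valid for 0 < Ep < V0)."""
    return math.acos(2.0 * Ep / V0 - 1.0)

# phi_s sweeps from pi (hard-wall limit, Ep << V0) down to 0 (Ep -> V0)
```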
Next, the scattering phase for the perpendicular wave component discussed above will be merged into the BTO method. Consider again a particle of mass $m$ and energy $E$ moving in a $k$-dimensional system. Select a Poincaré section (PS) $\Sigma$ in the configuration space of this system, such that almost all classical trajectories pass through it [@good1]. The original transfer operator ${\cal T}(E)$ is defined as the integral operator [@EB] $$\label{transfer1}
{\cal T}(E)\psi(q)=\int_{\Sigma}T(q,q';E)\psi(q')dq',$$ acting on some function $\psi(q')$ on $\Sigma$. The integral kernel, $$\begin{aligned}
\label{transfer2}
T(q,q';E)\hspace{-0.1cm}&=&\hspace{-0.2cm}\sum_\gamma\frac{1}{(2\pi
i\hbar)^{(k-1)/2}}\sqrt{\left|{\rm det}\frac{\partial
S_{\rm cl}(q,q';E)}{\partial q\partial q'}\right|}\nonumber \\
& &\times\hspace{0.05cm}{\rm exp}\left[\frac{i}{\hbar}S_{\rm
cl}(q,q';E) -i\nu\frac{\pi}{2}\right],\end{aligned}$$ is defined as a sum over all possible classical trajectories $\gamma$ from the initial point $q'\in \Sigma$ to the final point $q \in \Sigma$. The action $S_{\rm cl}(q,q';E)$ is the same as in Eq. (\[gf1\]). The Maslov index $\nu$ counts the number of crossings of $\gamma$ through $\Sigma$ from the same side of $\Sigma$. According to the BTO method, the zeros of the Fredholm determinant $|\det(1-{\cal T}(E))|$ of the transfer operator ${\cal T}(E)$ are the energy eigenvalues of the quantum system.
The kernel in Eq. (\[transfer2\]) is derived from the semiclassical Green function in Eq. (\[gf1\]), which can be divided into two parts, $G(q,q';E)=G^{\rm osc}(q,q';E)-G_0(q,q';E)$, where $G^{\rm osc}(q,q';E)$ is the contribution from long trajectories and $G_0(q,q';E)$ is the contribution from short trajectories. After the coordinate reduction from the $k$-dimensional space to the $(k-1)$-dimensional space on $\Sigma$ [@EB], the quantization condition $|\det(G(q,q';E))|=0$ can be expressed as $|\det(1-{\cal T}(E))|=0$, where the identity operator comes from $G_0$ and the operator ${\cal T}(E)$ originates from $G^{\rm osc}$. The entire derivation remains unchanged when the phase in $G(q,q';E)$ is modified. The transfer operator ${\cal T}_m(E)$, modified with the scattering phase for the perpendicular wave component, then has the kernel $$\begin{aligned}
\label{transfer3}
T_m(q,q';E)\hspace{-0.1cm}&=&\hspace{-0.2cm}\sum_\gamma\frac{1}{(2\pi
i\hbar)^{(k-1)/2}}\sqrt{\left|{\rm det}\frac{\partial
S_{\rm cl}(q,q';E)}{\partial q\partial q'}\right|}\nonumber \\
& &\times\hspace{0.05cm}{\rm exp}\left[\frac{i}{\hbar}S_{\rm
cl}(q,q';E) -i\phi_s(E_p)\right].\end{aligned}$$ The efficiency of this modified operator will be tested in the following two 2D integrable systems.
![The upper part shows the Fredholm determinant $|\det(1-{\cal T}_m(E))|$ of the modified transfer operator ${\cal T}_m(E)$ for the potential well in the inset for $V_0=20$, $50$, and $\infty$ with $R=2$. The points on the three bottom lines depict the exact quantum energies for $V_0=20$, $50$, and $\infty$ with $R=2$.[]{data-label="circleplot"}](fig2.eps){width="8.5cm"}
The first system is a 2D circular quantum dot bounded by a finite potential, as shown in the inset of Fig. \[circleplot\]. The potential well has radius $R$ and height $V_0$, that is, $V(r)=V_0$ for $r\geq R$ and $V(r)=0$ for $r< R$. Analytically, the energy eigenvalues of its Schrödinger equation can be calculated by matching the boundary conditions at radius $r=R$ [@math]. Setting the Planck constant $\hbar=1$, the mass $m=1$, and the well radius $R=2$, the exact energy eigenvalues for the potential heights $V_0=20$, $50$, and $\infty$ are denoted on the three bottom lines of Fig. \[circleplot\]. The dotted, dashed, and solid curves in the upper part of Fig. \[circleplot\] show the Fredholm determinant $|\det(1-{\cal T}_m(E))|$ of the modified transfer operator for $V_0=20$, $50$, and $\infty$. The zeros of these functions determine the semiclassical quantum energies of the systems.
![The upper part shows the Fredholm determinant $|\det(1-{\cal T}_m(E))|$ of the modified transfer operator ${\cal T}_m(E)$ for the square potential well in the inset with the potential configurations $(V_x,V_y)=(20,20)$, $(50,50)$, $(20,\infty)$, and $(\infty,\infty)$, where $V_{xy}=V_x+V_y$. The points on the four bottom lines depict the exact quantum energies for $(V_x,V_y)=(20,20)$, $(50,50)$, $(20,\infty)$, and $(\infty,\infty)$.[]{data-label="squareplot"}](fig3.eps){width="8.5cm"}
The second system is the 2D square quantum dot with the confining potential shown in the inset of Fig. \[squareplot\]. Therein the $xy$-plane is separated into nine regions with a constant potential height $V_x$, $V_y$, or $V_{xy}$ in each region, where $V_x>0$, $V_y>0$, and $V_{xy}=V_x+V_y$. This 2D problem can be reduced to two independent 1D finite wells with potential heights $V_x$ and $V_y$ and solved separately [@QM]. Combining the eigenvalues of these two separated systems, the total quantum energies of this 2D system for $(V_x,V_y)=(20,20)$, $(50,50)$, $(20,\infty)$, and $(\infty,\infty)$ are determined and depicted on the four bottom lines in Fig. \[squareplot\], where $\hbar=m=1$ as before and the well width $L$ is normalized by the condition $2\pi^2/L^2=1$. The upper part of this figure shows the Fredholm determinants $|\det(1-{\cal T}_m(E))|$ of the modified BTO. The dashed, dash-dotted, dotted, and solid curves represent these functions for the potential configurations $(V_x,V_y)=(20,20)$, $(50,50)$, $(\infty,\infty)$, and $(20,\infty)$.
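As a sketch of this combination step (our own illustration, not from the paper), consider the simplest configuration $(V_x,V_y)=(\infty,\infty)$, where the 1D levels are analytic: with $\hbar=m=1$ and the normalization $2\pi^2/L^2=1$ they reduce to $E_n=n^2/4$, $n=1,2,\dots$ The 2D spectrum is then just the sorted list of pairwise sums; for finite $(V_x,V_y)$ the same combination applies with the numerically obtained 1D levels.

```python
import itertools

def square_dot_levels_infinite(nmax=6):
    """2D energies E = E_i + E_j of the square dot for (Vx, Vy) = (inf, inf).
    With hbar = m = 1 and 2*pi^2/L^2 = 1, the 1D levels are E_n = n^2/4."""
    e1d = [n * n / 4.0 for n in range(1, nmax + 1)]
    return sorted(ex + ey for ex, ey in itertools.product(e1d, e1d))

levels = square_dot_levels_infinite()
# lowest levels: 0.5, 1.25, 1.25, 2.0, 2.5, ... (degeneracies of the square)
```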
Figures 2 and 3 show surprisingly accurate quantum energies after the scattering-phase correction. Taking Fig. 2 as an example, the exact $10$-th energy shifts remarkably to the left after the potential height is reduced from $V_0=\infty$ to $V_0=20$. This shows how large the error can be when a deep potential well, even with the large ratio $V_0/R=10$, is replaced by an infinitely high one. Without the scattering-phase correction, the semiclassically calculated energies are the zeros of the solid curve. The 10-th zero of this curve is close to the $10$-th point on the bottom line of $V_0=\infty$, but far from the $10$-th point for $V_0=20$. However, after the scattering phase is taken into account, the determinant (dotted curve) shifts considerably to the left and its zeros closely approach the exact energies for $V_0=20$.
Quantitatively, we can define measures to characterize these errors. Suppose $E_i$ is the $i$-th exact energy of a quantum system bounded by a potential well of height $V_0<\infty$ and $\tilde{E}_i$ is the corresponding semiclassical energy approximated by the modified BTO. Let $\delta_i=\tilde{E}_i-E_i$ be the difference between these two energies; the ratio $\Delta_i=\delta_i/E_i$ is then the fractional error of the $i$-th eigenvalue. For the special case $V_0=\infty$ the values defined above carry a superscript $\infty$, namely $E_i^\infty$, $\tilde{E}_i^\infty$, and $\delta^\infty_i=\tilde{E}^\infty_i-E^\infty_i$. The relative error $\Gamma_i$ of the $i$-th energy is then defined as the ratio $$\label{RE}
\Gamma_i = \frac{\left|\delta_i -
\delta^\infty_i\right|}{|E_i-E_i^\infty|}.$$ The denominator is the exact energy shift when the potential height is reduced from $\infty$ to $V_0$. The numerator is the error $\delta_i$ for $V_0<\infty$ minus the basic semiclassical error $\delta_i^\infty$ of the infinite potential.
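In code, these definitions amount to a few lines; the helper below is our own illustration (its name and argument order are our choices) and returns $(\delta_i, \Delta_i, \Gamma_i)$ for each level given the exact and semiclassical energies at finite and infinite potential height.

```python
def error_measures(E, Esc, Einf, Escinf):
    """Per-level (delta_i, Delta_i, Gamma_i): E/Esc are the exact and
    semiclassical energies at finite V0, Einf/Escinf the same at V0 = inf."""
    out = []
    for e, es, ei, esi in zip(E, Esc, Einf, Escinf):
        delta = es - e                 # semiclassical error at finite V0
        delta_inf = esi - ei           # basic error at infinite V0
        out.append((delta, delta / e, abs(delta - delta_inf) / abs(e - ei)))
    return out
```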
![The dotted and dashed curves in the main plot denote the relative errors $\Gamma_i$ of the 2D circular potential wells with potential heights $50$ and $20$, as shown in the inset of Fig. \[circleplot\]. The dash-dotted and solid curves represent $\Gamma_i$ for the 2D square potential wells of configurations $(V_x,V_y)=(50,50)$ and $(20,20)$, as shown in the inset of Fig. \[squareplot\]. The corresponding errors $\delta_i$ of these systems are shown in the inset.[]{data-label="error"}](fig4.eps){width="8.5cm"}
According to these definitions, the errors $\delta_i$ of the lowest ten energies from Figs. \[circleplot\] and \[squareplot\] are plotted in the inset of Fig. \[error\], and the corresponding relative errors $\Gamma_i$ in the main figure. All $\delta_i$ are bounded by $5\%$ and all $\Gamma_i$ by $20\%$. Roughly speaking, the scattering phase in the modified BTO corrects at least $80\%$ of the energy error due to the potential reduction from $\infty$ to $V_0$. This justifies the scattering-phase correction proposed above for finitely confined systems.
Notably, one cannot expect a $100\%$ correction from this first-order correction. Recall that there still exist other degrees of freedom in the system which are not included in the scattering phase. For instance, the modified transfer operator ${\cal T}_m(E)$ is the same for square potential wells with different $V_{xy}$, although they have different quantum energies. This difference in $V_{xy}$ can only be captured by higher-order corrections beyond the current scattering phase. Furthermore, if the particle energy $E$ is close to the potential height $V_0$, the particle penetrates rather deeply into the potential barrier and its wave character prevails over its particle character. In this regime, the deviation of the modified BTO increases. However, this is not because the first-order correction is wrong, but because higher-order corrections are required in the semiclassical approach.
Finally, almost all semiclassical methods are based on the Green function of the quantum system. The successful result for the modified BTO gives a clear direction for extending this quantum correction to other semiclassical methods, including the Gutzwiller trace formula, dynamical zeta functions, and the Landauer-Büttiker formula.
We thank Hsiu-Hau Lin for fruitful discussions. This work was supported by the National Science Council at Taiwan under Grant Nos. NSC 93-2112-M-007-009.
[999]{}
M. Brack and R.K. Bhaduri, [*Semiclassical Physics*]{} (Addison-Wesley, Reading, 1997); M.C. Gutzwiller, [*Chaos in Classical and Quantum Mechanics*]{} (Springer-Verlag, New York, 1990). R.A. Jalabert, H.U. Baranger, and A.D. Stone, Phys. Rev. Lett. [**65**]{}, 2442 (1990). R. Blümel and U. Smilansky, Phys. Rev. Lett. [**60**]{}, 477 (1988).
A.G. Mal’shukov, V.V. Shlyapin and K.A. Chao, Phys. Rev. B [**60**]{} R2161 (1999); C.H. Chang, A.G. Mal’shukov and K.A. Chao, Phys. Lett. A [**326**]{} 436-441 (2004).
C.H. Chang, A.G. Mal’shukov and K.A. Chao, to appear in Phys. Rev. B (2004).
L.D. Landau and E.M. Lifshitz, [*Quantum Mechanics*]{} (Pergamon, Oxford, 1965).
H. Friedrich and J. Trost, Phys. Rep. [**397**]{}, 359-449 (2004).
Wei Chen, Tzay-Ming Hong, and Hsiu-Hau Lin, Phys. Rev. B [**68**]{}, 205104 (2003).
J.H. Van Vleck, Proc. Natl. Acad. Sci. [**14**]{}, 178 (1928); M.C. Gutzwiller, J. Math. Phys. [**8**]{}, 1979 (1967).
E.B. Bogomolny, Nonlinearity [**5**]{}, 805 (1992).
N.C. Snaith and D.A. Goodings, Phys. Rev. E [**55**]{}, 5212 (1997); C.H. Chang, Phys. Rev. E [**66**]{}, 056202 (2002); C.H. Chang, Phys. Rev. E [**67**]{}, 046201 (2003).
G.B. Arfken and H.J. Weber, [*Mathematical Methods For Physicists*]{} (Fourth Edition, Academic Press, Inc.).
S. Gasiorowicz, [*Quantum Mechanics*]{} (Second Edition, John Wiley & Sons, 1996)
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: 'We investigate various phenomenological schemes for the rapid generation of 3D mock galaxy catalogues with a given power spectrum and bispectrum. We apply the fast bispectrum estimator [`MODAL-LSS`]{} to these mock galaxy catalogues and compare to $N$-body simulation data analysed with the halo-finder `ROCKSTAR` (our benchmark data). We propose an assembly bias model for populating parent halos with subhalos by using a joint lognormal-Gaussian probability distribution for the subhalo occupation number and the halo concentration. This prescription enabled us to recover the benchmark power spectrum from $N$-body simulations to within 1% and the bispectrum to within 4% across the entire range of scales of the simulation. A small further boost adding an extra galaxy to all parent halos above the mass threshold $M>2\times10^{14}\,h^{-1} M_\odot$ obtained a better than 1% fit to both power spectrum and bispectrum in the range $K/3<1.1\,h\,\text{Mpc}^{-1}$, where $K=k_1+k_2+k_3$. This statistical model should be applicable to fast dark matter codes, allowing rapid generation of mock catalogues which simultaneously reproduce the halo power spectrum and bispectrum obtained from $N$-body simulations. We also investigate alternative schemes using the Halo Occupation Distribution (HOD) which depend only on halo mass, but these yield results deficient in both the power spectrum (2%) and the bispectrum (>4%) at $k,K/3 \approx 0.2\,h\,\text{Mpc}^{-1}$, with poor scaling for the latter. Efforts to match the power spectrum by modifying the standard four-parameter HOD model result in overboosting the bispectrum (with a 10% excess). We also characterise the effect of changing the halo profile on the power spectrum and bispectrum.'
author:
- Johnathan Hung
- Marc Manera
- 'E.P.S. Shellard'
bibliography:
- 'main.bib'
title: 'Advancing the matter bispectrum estimation of large-scale structure: fast prescriptions for galaxy mock catalogues'
---
Introduction
============
One of the most active areas of cosmological research is to understand the collapse of matter and the evolution of large scale structure (LSS) in the Universe. This goal is facilitated by upcoming large data sets offered by galaxy surveys such as the Dark Energy Survey (DES) [@DES1; @DES2], the Large Synoptic Survey Telescope (LSST) [@LSST], the ESA Euclid Satellite [@Euclid] and the Dark Energy Spectroscopic Instrument (DESI) [@DESI]. In particular, the bispectrum has been shown to be a crucial diagnostic in the mildly non-linear regime and, combined with the galaxy power spectrum, can constrain parameters five times better than the power spectrum alone [@forecast], potentially offering much tighter constraints for local-type primordial non-Gaussianities (PNG) than current limits from Planck. The bispectrum also has a stronger dependence on cosmological parameters so can provide tighter constraints than the power spectrum for the same signal to noise and can help break degeneracies in parameter space, notably those between $\sigma_8$ and bias [@bias]. For this reason, our focus in this paper is on making the galaxy bispectrum a tractable diagnostic tool for analysing future galaxy surveys by deploying our efficient bispectrum estimator [`MODAL-LSS`]{} on mock galaxy catalogues. Previously we have used [`MODAL-LSS`]{} to compare the dark matter bispectrum from $N$-body and fast dark matter codes[@DM], but now we apply it to the halo (or galaxy) bispectrum. This work builds on earlier efforts to estimate the full three-dimensional bispectrum from simulations (see, for example, [@wagner; @regan; @9param; @glass; @Andrei]) and direct measurements of the galaxy bispectrum using existing galaxy survey data from the Baryon Oscillation Spectroscopic Survey (BOSS) [@BOSS1; @BOSS2; @sdssi; @sdssii; @sdssiii].
As we enter the age of precision cosmology we are ever more reliant on cosmological simulations to understand the dynamics of dark matter and baryons. Numerical simulations act as a buffer between theory and observation: we test cosmological models by matching simulation results to observational data, and hence obtain constraints on cosmological parameters. On the other hand since we only observe one universe we must turn to simulations to understand the statistical significance of our measurements. This is especially important with large galaxy data sets coming from current and near-future surveys such as DES, LSST, Euclid and DESI. While it would be ideal to use full $N$-body simulations to generate these so-called mock catalogues for statistical analysis, their huge demand for computational resources is prohibitive for generating the large number of simulations required for accurate estimates of covariances [@l-picola]. Alternatively, compression methods have also been developed to reduce the number of mocks required, see e.g. [@gualdi1; @gualdi2; @gualdi3; @heavens; @alsing].
Although dark matter simulations have given us a wealth of information about the clustering of matter in the universe, ultimately we need to map this information to the visible universe. Gravitational pull induces the formation of bound dark matter halos, and these virialised objects in turn create an environment in which baryons can collapse and form bound objects such as galaxies. The galaxies we observe in galaxy surveys, which live inside these halos, therefore act as biased tracers to the underlying dark matter distribution, as the spatial distribution of galaxies need not exactly mirror that of the dark matter [@kaiser]. To take advantage of high resolution galaxy data from future surveys we must therefore have a robust way to extract halo and galaxy distributions from $N$-body dark matter simulations. Many techniques for this process, known as halo finding, have been developed over the years (e.g. [@halo1; @halo2; @halo3; @halo4; @halo5; @halo6; @halo7; @ahf2; @halo9; @halo10; @halo11; @halo12; @halo13; @halo14; @halo15; @ahf; @halo17; @halo18; @rockstar; @halo20; @halo21; @halo22; @halo23; @halo24]), but it remains a computationally intensive task, especially with the sheer number of simulations required for covariance matrix estimation. Additionally, to put constraints on cosmological parameters halo properties must be understood to percent level in order for theoretical and statistical uncertainties to be at the same level [@rockstar; @percent1; @percent2]. In this paper we present fast phenomenological prescriptions for producing mock galaxy catalogues that reproduce the power spectrum and bispectrum of a reference catalogue to better than 1% accuracy. In order to do so we examine the effects of the spatial distribution of galaxies within their host halos, the halo occupation number through the Halo Occupation Distribution (HOD) model, as well as a more sophisticated assembly bias model that jointly models the occupation number and halo concentration. 
Previous work estimating the dark matter bispectrum has shown its power in helping benchmark fast dark matter codes [@DM], and here we likewise validate these methods with both the power spectrum and bispectrum.
The paper is outlined as follows: in we detail our benchmark galaxy mock catalogue and the phenomenological methods we use to reproduce the statistics of this catalogue. Then in we introduce the [`MODAL-LSS`]{} method for bispectrum estimation, as well as the phenomenological 3-shape model for the halo bispectrum. In , we then present the alternative prescriptions for generating mock catalogues as we investigate the effect of halo profiles and different HODs on the bispectrum, ultimately proposing a joint lognormal-Gaussian assembly bias model which is a key outcome of this paper. Finally, we summarise the main results and conclude the paper in .
Halo catalogues\[sec:methodology\]
==================================
There are many techniques that have been developed to identify collapsed objects in dark matter simulations, but two methods remain at the core of the halo-finding process. These are the Friends-of-Friends (FoF) algorithm [@FoF_Davis], originally proposed in 1985, and the Spherical Overdensity (SO) algorithm [@SO_PS], originally proposed in 1974. In its simplest form the FoF algorithm simply links together particles that are separated by a distance less than a given linking length $b$, resulting in distinct connected regions that are identified as collapsed halos. The SO algorithm, on the other hand, identifies peaks in the density field as candidate halo centres and then, assuming a spherical profile, grows each halo until a density threshold is reached. There are shortcomings associated with naive implementations of both of these methods: the FoF algorithm is susceptible to erroneously connecting two distinct halos via *linking bridges*, i.e. filaments of linked particles running between the two halos; whereas the spherical assumption in the SO method does not reflect the true shape of halos. A particular difficulty of these position-based finders, yet crucial for mapping the dark matter distribution to the galaxies we observe, is the classification of halos within halos, or *subhalos*, i.e. virialised objects that sit inside and orbit a larger host halo. Many authors have introduced refinements to extend the capabilities of FoF and SO, for example by changing the FoF linking length or the SO density threshold, as well as by better exploiting other information provided by cosmological simulations; see e.g. [@mad] for a comprehensive review.
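For concreteness, the FoF linking step can be sketched with a union-find. This is our own deliberately naive illustration ($O(N^2)$ pair search, absolute linking length, no periodic boundaries); production finders use trees or grids and express $b$ in units of the mean inter-particle separation.

```python
import math

def friends_of_friends(points, b):
    """Minimal Friends-of-Friends: link every pair of particles closer
    than the linking length b, then return the connected groups."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < b:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj          # union the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

For example, five collinear points split into two groups as soon as the gap between them exceeds $b$.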
A relatively recent and novel approach to this old problem is the incorporation of the velocity information of the particles, reducing the ambiguity in determining particle membership between overlapping halos. While this additional information is clearly useful for distinguishing subhalos from their host halos due to their relative motion, working in phase-space necessitates a metric that suitably weights the relative positions and velocities of the particles. The 6D phase-space halo finder we adopt for this paper is `ROCKSTAR` [@rockstar], which further utilises *temporal* data across simulation time steps to ensure consistency of halo properties. Furthermore, the authors claim it to be the first grid- and orientation-independent adaptive phase-space code, and that it possesses the unprecedented ability to probe substructure masses down to the very centres of host halos. Here we give a brief overview of the mechanics of the `ROCKSTAR` algorithm.
The simulation box is first partitioned with a fast implementation of position-based FoF using a large linking length of $b=0.28$ (in units of the mean inter-particle distance). As in the 3D case, an adaptive metric must be used if one is to find substructures at all levels. For each of these 3D FoF groups a hierarchy of 6D phase-space FoF subgroups is built up by adapting the phase-space linking length at every level so that only 70% of the particles are linked together in its subgroups, until the number of particles in the deepest level falls under a predefined threshold (here set to 10). The phase-space metric they adopt is weighted by the standard deviations in position, $\sigma_x$, and velocity, $\sigma_v$, of the particles within a (3D or 6D) FoF group, i.e. for two particles $p_1$ and $p_2$ the metric is: $$\begin{aligned}
\label{eq:rockstar_pp_metric}
d(p_1,p_2)=\left(\frac{\left|\textbf{x}_1-\textbf{x}_2\right|^2}{\sigma_x^2}
+\frac{\left|\textbf{v}_1-\textbf{v}_2\right|^2}{\sigma_v^2}\right)^{1/2}.\end{aligned}$$
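As a concrete illustration, the particle-particle metric above can be evaluated directly. This is a minimal Python sketch (not `ROCKSTAR`'s actual implementation), and it assumes the group dispersions $\sigma_x$ and $\sigma_v$ have already been computed for the FoF group:

```python
import numpy as np

def phase_space_distance(x1, v1, x2, v2, sigma_x, sigma_v):
    """Dimensionless phase-space distance between two particles,
    weighted by the position and velocity dispersions of their FoF group."""
    dx2 = np.sum((np.asarray(x1, float) - np.asarray(x2, float))**2)
    dv2 = np.sum((np.asarray(v1, float) - np.asarray(v2, float))**2)
    return np.sqrt(dx2 / sigma_x**2 + dv2 / sigma_v**2)
```

The same function applies to the halo-particle variant below after substituting $\sigma_x \to r_{vir}$ and using the halo's velocity dispersion.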
Once this phase-space hierarchy is built, the deepest levels in the hierarchy are identified as seed halos, and all particles in the base 3D FoF group are assigned to these seed halos from the bottom up. If a seed halo is the only child of its parent then all the particles of the parent will be assigned to that seed halo. Otherwise, if a parent has multiple subgroups, then particle membership is determined by proximity in phase-space. In this instance the metric () is modified to reflect halo rather than particle properties; for a halo $h$ and particle $p$ the metric is $$\begin{aligned}
\label{eq:rockstar_hp_metric}
d(h,p)=\left(\frac{\left|\textbf{x}_h-\textbf{x}_p\right|^2}{r_{vir}^2}
+\frac{\left|\textbf{v}_h-\textbf{v}_p\right|^2}{\sigma_v^2}\right)^{1/2},\end{aligned}$$ where $r_{vir}$ is the current virial radius of the halo and now $\sigma_v$ is the current velocity dispersion of the halo. This procedure is repeated recursively along the hierarchical ladder until particle assignment is complete. A significant advantage of this assignment scheme is the assurance that particles that belong to the host halo will not be mis-assigned to the subhalo, or vice versa, even if the subhalo sits close to the host halo centre. This is because host halo particles and subhalo particles should have different distributions in phase-space even if they are close in position-space.
Finally, host-subhalo relationships are determined based on phase-space distances before halo masses are calculated, to avoid ambiguity when multiple halos are involved. At each level the halos are first ordered by the number of assigned particles. Starting with the lowest one, each halo centre is treated as a particle, and its distance to each of the other halos is calculated with . The halo being examined is then assigned as a subhalo of the closest larger halo. These relationships are checked against the previous time-step, if available, for consistency across time-steps. After all assignments have been made, unbound particles are removed from the halos using a modified Barnes-Hut method, and halo properties are calculated.
Benchmark galaxy mock catalogue
-------------------------------
| Description | Symbol | Value |
|---|---|---|
| Hubble constant | $H_0$ | 67.74 $\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$ |
| Physical baryon density | $\Omega_b h^2$ | 0.02230 |
| Matter density | $\Omega_m$ | 0.3089 |
| Dark energy density | $\Omega_{\Lambda}$ | 0.6911 |
| Fluctuation amplitude at $8\,h^{-1}$ Mpc | $\sigma_8$ | 0.8196 |
| Scalar spectral index | $n_s$ | 0.9667 |
| Primordial amplitude | $10^9A_s$ | 2.142 |
| Physical neutrino density | $\Omega_{\nu} h^2$ | 0.000642 |
| Number of effective neutrino species | $N_{eff}$ | 3.046 |
| Curvature density | $\Omega_{k}$ | 0.0000 |
| Name | Description | Value |
|---|---|---|
| MaxRMSDisplacementFac | | 0.1 |
| ErrTolIntAccuracy | | 0.01 |
| MaxSizeTimestep | | 0.01 |
| ErrTolTheta | | 0.2 |
| ErrTolForceAcc | | 0.002 |
| Smoothing length | | $30\,h^{-1}$ kpc |
| Number of particles | | $2048^3$ |
| Mass of particles | | $2.1\times10^{10}\,h^{-1} M_\odot$ |
| PM grid size | | 2048 |
Our benchmark dark matter simulation is an $N$-body simulation run with the [`GADGET-3`]{} code. We have chosen a cubical box of size $1280\,h^{-1}$ Mpc and run with $2048^3$ particles, giving a particle mass of $M_p=2.1\times10^{10}\,h^{-1} M_\odot$. We have dark matter outputs at redshifts $z=0,0.5,1,2$. The Particle Mesh (PM) grid of the simulation is $2048^3$.
We have generated the Gaussian initial conditions from second-order Lagrangian Perturbation Theory (2LPT) displacements using [`L-PICOLA`]{} [@l-picola; @scoccimarro] at redshift $z_i=99$ to ensure the suppression of transients in the power spectra and bispectra estimated from our simulations [@transients]. Our input linear power spectrum at redshift $z=0$ was produced by `CAMB` [@CAMB] using a flat $\Lambda$CDM cosmology with extended Planck 2015 cosmological parameters (TT,TE,EE+lowP+lensing+ext, see ). For neutrinos we had one massive neutrino species and two massless neutrinos. The lack of radiation and neutrino evolution in [`L-PICOLA`]{} and [`GADGET-3`]{} has led us to define the matter power spectrum to consist only of cold dark matter and baryons, which allows us to recover the input power spectrum at $z=0$ to linear order. This explains the raised value of $\sigma_8$ relative to the Planck value of 0.8159. shows a number of [`GADGET-3`]{} parameter values we used to guarantee high numerical precision in our simulation. To obtain a benchmark galaxy mock catalogue we first ran `ROCKSTAR` on the [`GADGET-3`]{} output. Since small halos are unreliable, we impose a mass threshold of $M_{200b}>10^{12}\,h^{-1} M_\odot$ on the parent halos of the `ROCKSTAR` output, where $M_{200b}$ is the mass enclosed within the radius at which the halo corresponds to a spherical overdensity of 200 times the background density of the Universe. This cuts all parent halos with fewer than 50 particles, which is roughly the same criterion adopted in [@eisenstein1; @eisenstein2]. The benchmark halo mock catalogue then consists of all parent halos that pass this threshold alongside all subhalos they contain, if any. In this paper we use the halos as proxies for galaxies, such that every parent halo hosts a central galaxy at its core, and each of its subhalos hosts a satellite galaxy.
Our benchmark galaxy mock catalogue is therefore identical to the benchmark halo mock catalogue, and we will be using these terms interchangeably.
The purpose of this paper is to investigate phenomenological methods to reproduce the statistics of the benchmark galaxy mock catalogue without the detailed information given by the simulation. We restrict ourselves to the mass, position, and concentration of the parent halos, and build models that inform us of the number and positions of the satellite galaxies in each parent halo. We define the benchmark catalogue as above to examine these effects rather than to reproduce a realistic mock galaxy catalogue that matches observational data, as e.g. in [@eisenstein1]. We are also interested in first understanding these effects in configuration space, and as such will not include observational effects such as Redshift Space Distortions (RSD). This is because the RSD signal will dominate the bispectrum at small scales and swamp the contributions that we are interested in here. After we correctly model these effects in configuration space we shall tackle RSD effects in a future paper. Additionally, both the projected bispectrum [@projection] and the bispectrum monopole [@monopole] are rather insensitive to RSD effects, so our methods are well suited to the study of these observables. We note here that our previous investigation of the dark matter bispectrum using these simulations has uncovered problematic transient modes that persist to late times [@DM]. However, this should not interfere with our work in this paper, as these modes only distort the bispectrum signal at large scales, and their effects will cancel when we make comparisons between different phenomenological methods. When calculating statistics we follow the example of others, e.g. [@mock1; @mock2], and use the number density field, where each object is weighted by 1 instead of its mass in the Cloud in Cell (CIC) assignment scheme, computed on a $1024^3$ grid throughout the paper.
Halo profile\[subsec:profile\]
------------------------------
We tackle the distribution of galaxies within a halo by first examining the relevance of the halo shape. It is well known in the literature, particularly from dark matter simulations, that halos are triaxial objects [@triaxial1; @triaxial2; @triaxial3], and that their shapes are complicated functions of time, halo mass, and choice of halo radius. Halo shapes have also been predicted analytically within the ellipsoidal-collapse model [@triaxial4]. In principle one should take these effects into account when building a halo mock catalogue, but as we shall see in , halo triaxiality has only a small effect on the power spectrum and bispectrum compared to the choice of halo profile, and only at small scales. Consequently, in this paper we consider only radially symmetric profiles and randomise the solid-angle distribution within each halo. We leave the inclusion of halo triaxiality for future work.
There are a number of radially symmetric halo profiles in the literature that we can use to populate halos with satellite galaxies. One popular choice is the NFW profile proposed by Navarro, Frenk and White [@nfw], which was adopted in the generation of BOSS galaxy mock catalogues [@mock1]: $$\begin{aligned}
\label{eqn:nfw}
\rho(r|r_s,\rho_s)=\frac{4\rho_s}{\frac{r}{r_s}\left(1+\frac{r}{r_s}\right)^2}.\end{aligned}$$
The two parameters of the model are the scale radius $r_s$ and the density at that radius $\rho_s=\rho(r_s)$. An alternative parameterisation is with the concentration parameter $c=R_{vir}/r_s$, and the virial mass of the halo $M_{vir}$; in `ROCKSTAR` the virial radius $R_{vir}$ is defined such that the corresponding virial mass $M_{vir}$ is consistent with the virial threshold in [@virial]. Further imposing conservation of mass: $$\begin{aligned}
\label{eqn:nfw3}
M_{vir}=\int^{R_{vir}}_0\rho(r|r_s,\rho_s) 4\pi r^2\,dr,\end{aligned}$$ leads to $$\begin{aligned}
\label{eqn:nfw4}
\rho_s=\frac{M_{vir}}{16\pi R_{vir}^3}\frac{c^3}{\log(1+c)-\frac{c}{1+c}}.\end{aligned}$$ This allows us to write the radial density as $$\begin{aligned}
\label{eqn:nfw2}
\rho(r|M_{vir},c)=\frac{M_{vir}}{4\pi rc(R_{vir}+rc)^2}
\frac{c^3}{\log(1+c)-\frac{c}{1+c}}.\end{aligned}$$
To populate the halos with the NFW profile we assume the radial probability density function (PDF) of the mass distribution in a halo is proportional to $\rho(r|M_{vir},c)$, and then obtain the positions of the galaxies by inverse sampling. This first involves calculating the cumulative distribution function (CDF) from the PDF: $$\begin{aligned}
\label{eqn:cdf}
\text{CDF}_{\text{NFW}}(r|M_{vir},c)
&=\frac{\int^{r}_0\rho(r'|M_{vir},c) 4\pi r'^2\,dr'}
{\int^{R_{vir}}_0\rho(r'|M_{vir},c) 4\pi r'^2\,dr'},
\nonumber \\
&=\frac{\log(1+\frac{cr}{R_{vir}})-\frac{cr}{R_{vir}+cr}}
{\log(1+c)-\frac{c}{1+c}}.\end{aligned}$$ We then draw samples from the inverse of the CDF, $\text{CDF}_{\text{NFW}}^{-1}$, with a uniform distribution $u\sim U\in[0,1]$: $$\begin{aligned}
\label{eqn:inverse}
r=\text{CDF}^{-1}_{\text{NFW}}(u|M_{vir},c).\end{aligned}$$ Since the inversion of the CDF is numerically expensive, we instead calculate the desired $r$ by interpolating the tabulated CDF. Finally, we model the concentration $c$ with the analytical fit proposed in [@concentration]: $$\begin{aligned}
\label{eqn:conc}
\bar{c}(M,z)=\frac{9}{1+z}\left(\frac{M}{M_{NL}}\right)^{-0.13},\end{aligned}$$ where $M_{NL}=\frac{4\pi}{3}\bar{\rho}(z)(\frac{2\pi}{k_{NL}})^3$ is the non-linear mass scale, and $k_{NL}$ is defined by the linear power spectrum $P_L$ as $k^3_{NL}P_L(k_{NL},z)=2\pi^2$.
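The inverse-transform sampling of the NFW CDF described above can be sketched as follows. `sample_nfw_radii` is a hypothetical helper of our own (the function name and tabulation size are not from the paper's code), which tabulates the closed-form CDF and inverts it by interpolation:

```python
import numpy as np

def sample_nfw_radii(n, c, r_vir, n_tab=2048, rng=None):
    """Draw n radii from an NFW profile truncated at r_vir by
    inverse-transform sampling of the tabulated closed-form CDF."""
    rng = np.random.default_rng() if rng is None else rng
    r = np.linspace(1e-6, 1.0, n_tab) * r_vir        # tabulated radii
    x = c * r / r_vir
    cdf = (np.log1p(x) - x / (1 + x)) / (np.log1p(c) - c / (1 + c))
    u = rng.uniform(size=n)                          # u ~ U[0, 1]
    return np.interp(u, cdf, r)                      # interpolated inverse CDF
```

The concentration fed into this sampler would come either from the Klypin fit output by `ROCKSTAR` or from the analytical mean above.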
![Mean concentration of the benchmark `ROCKSTAR` halos as a function of their mass, calculated from both the scale radius and Klypin scale radius, as well as the analytical fit in [@concentration] (). []{data-label="fig:mean_conc"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc){width="\linewidth"}
To judge whether the NFW profile is a good choice for our purposes we first compared the benchmark mean concentration to the analytical fit in . `ROCKSTAR` fits an NFW profile by calculating both the scale radius $r_s$ and the Klypin scale radius $r_{s,K}$ [@Klypin], which is derived from $v_{max}$, the maximum circular velocity, and $M_{vir}$. We have plotted the mean concentration computed from $r_s$ and $r_{s,K}$ against the analytical fit in . While the Klypin concentration demonstrates better numerical stability overall, it is not clear that it is more robust for halos with fewer than 100 particles, as the authors of `ROCKSTAR` claim [@rockstar]. We shall be using the Klypin concentration in all our methods discussed below. We note that while the analytical fit qualitatively captures the correct power-law behaviour, its magnitude is too low by about 10-20%.
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
More importantly, while the NFW profile is used in the literature to populate halos with galaxies, it is ultimately a fit to the dark matter profile and may not reflect the subhalo density profile. Comparisons between the NFW profile and the number density profile of the `ROCKSTAR` benchmark catalogue in different mass bins are shown in . Throughout the paper we only populate subhalos out to the virial radius $R_{vir}$. In these plots, the NFW profile is calculated using the average Klypin concentration given by `ROCKSTAR` for the mean halo mass of the bin. Additionally, distances are scaled by the virial radius $R_{vir}$, since that is the distance `ROCKSTAR` uses when fitting the NFW profile.
We found that the NFW profile is clearly more concentrated near the centre of the halo than the density profile of the benchmark subhalos (as observed already in, for example, [@subhalo1; @subhalo2; @subhalo3; @subhalo4]). Consequently, for an NFW-profile-based galaxy catalogue we expect a stronger correlation than the benchmark at small scales. We also modified the NFW profile by keeping its functional form but changing the concentration, but this did not yield a good fit to the `ROCKSTAR` profile, as shown in . Following [@subgen], we then adopted a universal power law $\rho\propto r^{-\gamma}$ with $\gamma\sim1$ as our fiducial halo profile, such that $$\begin{aligned}
\label{eq:power_law}
\text{CDF}_{\text{pow}}(r|M_{vir},c)
=\left(\frac{r}{R_{vir}}\right)^{3-\gamma}. \end{aligned}$$ We have found that $\gamma\approx1$ is a satisfactory fit to the subhalo number distribution, as shown in .
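Unlike the NFW case, the power-law CDF above inverts in closed form, $r = R_{vir}\,u^{1/(3-\gamma)}$, so sampling needs no tabulation. A minimal sketch:

```python
import numpy as np

def sample_power_law_radii(n, r_vir, gamma=1.0, rng=None):
    """Radii for rho ~ r^(-gamma): invert CDF(r) = (r/r_vir)^(3-gamma)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    return r_vir * u**(1.0 / (3.0 - gamma))
```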
![Power law fit to the halo profile at different mass bins. []{data-label="fig:gamma"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_rvir_1level_gamma){width="\linewidth"}
Halo Occupation Distribution (HOD) \[subsec:HOD\]
-------------------------------------------------
Another important consideration in the population of parent halos is the halo occupation number, i.e. the number of galaxies per halo. A conventional way to phenomenologically model this is via a Halo Occupation Distribution (HOD) algorithm [@hod1; @hod2; @hod3] which gives the mean occupation number as a function of the mass of the halo. A functional form for this algorithm consisting of 5 parameters is commonly used in the literature [@zheng; @eisenstein1; @eisenstein2; @mock1]: $$\begin{aligned}
\label{eqn:5_param_hod}
\bar{N}_{\text{cent}}(M)
&=\frac{1}{2}\operatorname{erfc}\left[-\frac{\ln M/M_0}{\sqrt{2}\sigma}\right], \\
\bar{N}_{\text{sat}}(M)
&=\left(\frac{M-\kappa M_0}{M_1}\right)^\alpha,
\label{eq:simpleHODfit}\end{aligned}$$ where $\bar{N}_{\text{cent}}$ is the expected number of central galaxies and $\bar{N}_{\text{sat}}$ the expected number of satellite galaxies, such that $\bar{N}_g(M)=\bar{N}_{\text{cent}}(M)+\bar{N}_{\text{sat}}(M)$. Here $M_0$ denotes the typical minimum mass scale for a halo to have a central galaxy, and $\sigma$ is the parameter that controls the scatter around that mass. $\kappa M_0$ sets the cutoff scale for a halo to host a satellite, $M_1$ is the typical additional mass above $\kappa M_0$ for a halo to have one satellite galaxy, and $\alpha$ is the exponent that controls the tail of the HOD, and therefore has a strong influence on the occupation of high-mass halos.
Instead of using the error function we employ a Heaviside cut for $\bar{N}_{\text{cent}}$: $$\begin{aligned}
\label{eq:N_cent}
\bar{N}_{\text{cent}}(M)=\theta(M-M_0),\end{aligned}$$ reducing the number of parameters to 4. This is appropriate as we impose a mass cut on the parent halo when constructing the benchmark galaxy catalogue. These 4 parameters give us freedom to tweak the power spectrum and bispectrum of our galaxy mock catalogues to better reproduce those of the benchmark sample. The total number of galaxies is $$\begin{aligned}
\label{eq:n_g}
n_g = \int dM\,n(M)\left(\theta(M-M_0)+
\left(\frac{M-\kappa M_0}{M_1}\right)^{\alpha}\right),\end{aligned}$$ where $n(M)$ is the halo mass function that gives the number density of halos of a given mass $M$. If the variations in the parameters are small, we obtain the following perturbation to the number of galaxies at first order: $$\begin{aligned}
\label{eq:constraint}
&\Delta n_g \nonumber \\
={}& - \int dM\,n(M) \nonumber \\
&\times\Bigg(\frac{\Delta M_0}{M_0}M_0\left(\delta(M-M_0)+
\frac{\alpha\kappa}{M_1}
\left(\frac{M-\kappa M_0}{M_1}\right)^{\alpha-1}\right) \nonumber \\
&\qquad+\frac{\Delta \kappa}{\kappa}\kappa\frac{\alpha M_0}{M_1}
\left(\frac{M-\kappa M_0}{M_1}\right)^{\alpha-1} \nonumber \\
&\qquad+\frac{\Delta M_1}{M_1}M_1\frac{\alpha (M-\kappa M_0)}{M^2_1}
\left(\frac{M-\kappa M_0}{M_1}\right)^{\alpha-1} \nonumber \\
&\qquad-\frac{\Delta \alpha}{\alpha}\alpha\log\left(\frac{M-\kappa M_0}{M_1}\right)
\left(\frac{M-\kappa M_0}{M_1}\right)^\alpha\Bigg),\end{aligned}$$ and we enforce $\Delta n_g=0$ to conserve particle number when changing the parameters.
In we show the HOD $\bar{N}_g(M)$ from our benchmark `ROCKSTAR` catalogue (which we will refer to as the benchmark HOD model below), and the best fit for the 4-parameter HOD while keeping the total number of galaxies constant. As a comparison we also obtain an unconstrained fit to the benchmark HOD. The best-fit parameters for the constrained fit are $\log(M_0)=11.76$, $\kappa=0.89$, $\log(M_1)=13.35$ and $\alpha=1.04$, with only a $4\times10^{-4}\%$ deficiency in the number of galaxies.
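In code, the 4-parameter occupation with the constrained best-fit values quoted above reads as below. The Poisson draw for the integer satellite count is a common modelling assumption on our part, not something fixed by the fit:

```python
import numpy as np

# constrained best-fit values quoted in the text (masses in h^-1 M_sun)
LOG_M0, KAPPA, LOG_M1, ALPHA = 11.76, 0.89, 13.35, 1.04

def mean_occupation(m):
    """Mean central and satellite occupation for halo mass m."""
    m = np.asarray(m, dtype=float)
    m0, m1 = 10.0**LOG_M0, 10.0**LOG_M1
    n_cent = (m >= m0).astype(float)                         # Heaviside cut
    n_sat = (np.clip(m - KAPPA * m0, 0.0, None) / m1)**ALPHA  # power-law tail
    return n_cent, n_sat

def draw_satellite_counts(m, rng=None):
    """Sample integer satellite counts (assumed Poisson around the mean)."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(mean_occupation(m)[1])
```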
[0.49]{} {width="\linewidth"}
[0.49]{} {width="\linewidth"}
Halo polyspectra\[sec:halo-polyspectra\]
========================================
Power spectrum and Bispectrum
-----------------------------
The leading source of cosmological information, and hence the principal diagnostic of our methods, is the two-point correlator, or power spectrum $P(k)$ of an overdensity field $\delta(\mathbf{x})$: $$\begin{aligned}
\expval{\delta(\mathbf{k}) \delta(\mathbf{k}')}=(2\pi)^3
\delta_D(\mathbf{k}+\mathbf{k}') P(k),
\label{PS}\end{aligned}$$ where $\delta_D$ is the Dirac delta function. The power spectra of our benchmark dark matter and galaxy catalogues at redshifts $z=0,0.5,1$ are plotted in . Our galaxy catalogue consists of parent halos with mass in the range $1\times10^{12}$ to $3.2\times10^{15}\,h^{-1}\,M_{\odot}$ and all their subhalos, and has a number density of $0.0056\,h^{3}\,\text{Mpc}^{-3}$, similar to the number density of the LOWZ galaxy sample in BOSS at low redshift [@anderson]. It is well known in the literature that while the dark matter power spectrum grows with time, the growth of the halo power spectrum is slow [@halo_evo; @halo_evo1]. At large scales the linear bias relationship $b_1=\delta_g/\delta$ between dark matter and galaxies tends to a constant [@linear_bias], and since the dark matter power spectrum grows as $D^2_1(z)$ at these scales, where $D_1(z)$ is the linear growth factor, we expect $b_1(z)\propto1/D_1(z)$. This is shown clearly in , giving a value of $b_1\approx1.1$.
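The shell-averaged estimator behind such measurements can be sketched as follows; this is a minimal version (no CIC window deconvolution, shot-noise subtraction, or interlacing), not the analysis code used for the plots:

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=64):
    """Shell-averaged P(k) of a gridded overdensity field delta."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta) * (box_size / n)**3   # continuum FT convention
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = np.abs(k[: n // 2 + 1])                        # rfft frequencies
    kmag = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2
                   + kz[None, None, :]**2)
    power = np.abs(delta_k)**2 / box_size**3            # <|delta_k|^2> / V
    edges = np.linspace(0.0, kmag.max(), n_bins + 1)
    idx = np.digitize(kmag.ravel(), edges) - 1
    counts = np.bincount(idx, minlength=n_bins + 1)[:n_bins]
    p_k = np.bincount(idx, weights=power.ravel(), minlength=n_bins + 1)[:n_bins]
    return 0.5 * (edges[:-1] + edges[1:]), p_k / np.maximum(counts, 1)
```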
[0.49]{} {width="\textwidth"}
[0.49]{} {width="\textwidth"}
[0.49]{} {width="\textwidth"}
For mildly non-linear scales the primary diagnostic is the three point correlation function or bispectrum $B_\delta(k_1, k_2, k_3)$: $$\begin{aligned}
&\expval{\delta(\mathbf{k}_1) \delta(\mathbf{k}_2) \delta(\mathbf{k}_3)}
\nonumber \\
&\qquad=(2\pi)^3 \delta_D (\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)
B_\delta(k_1,k_2,k_3).
\label{bispectrum}\end{aligned}$$ Due to statistical isotropy and homogeneity, in configuration space the bispectrum only depends on the wavenumbers $k_i$ in the absence of redshift space distortions. Additionally the delta function, arising from momentum conservation, imposes the triangle condition on the wavevectors so the three $k_i$ when taken as lengths must be able to form a triangle. Together with a parameter $k_{max}$ which defines the resolution of the data, the bispectrum occupies a tetrapydal domain $\mathcal{V}_B$ in $k$-space, as shown in the left panel of . We have found it useful to split it in half to make apparent its internal morphology as illustrated in the right panel of . The bispectra plots in this paper are generated with `ParaView` [@paraview], an open source scientific visualisation tool.
[0.42]{} {width="\textwidth"}
[0.42]{} {width="\textwidth"}
Due to the large number of triangle configurations, numerical estimation of the full bispectrum is computationally expensive. In this paper we use the newly rewritten [`MODAL-LSS`]{} method for the efficient and accurate estimation of the bispectrum for any overdensity field $\delta$ [@DM]. The full bispectra of the benchmark catalogue at various redshifts thus obtained are shown in , along with the corresponding dark matter bispectra plotted for reference.
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[`MODAL-LSS`]{} bispectrum methodology\[sec:modallss-methodology\]
------------------------------------------------------------------
Here we give a brief summary of the [`MODAL-LSS`]{} algorithm. We first approximate the signal-to-noise weighted estimated bispectrum of a density field, $\sqrt{\frac{k_1k_2k_3}{P(k_1)P(k_2)P(k_3)}}\hat{B}_\delta(k_1,k_2,k_3)$, by expanding it in a general separable basis: $$\begin{aligned}
&\sqrt{\frac{k_1k_2k_3}{P(k_1)P(k_2)P(k_3)}}\hat{B}_\delta(k_1,k_2,k_3) \nonumber \\
&\quad\approx\sum_{mn}^{n_{max}} \gamma^{-1}_{nm}\beta^Q_m
Q^{{\texttt{MODAL-LSS}}{}}_n(k_1/k_{max},k_2/k_{max},k_3/k_{max}),
\label{expand}\end{aligned}$$ where $P(k)$ is the power spectrum of the density field. The information in the full bispectrum is compressed into these $\mathcal{O}(1000)$ $\beta^Q_m$ coefficients, and it has been shown to be superior to other bispectrum estimators in terms of data compression [@sussex]. The basis functions $Q^{{\texttt{MODAL-LSS}}{}}_n$ are symmetrised products over one dimensional functions $q_r$: $$\begin{aligned}
Q^{{\texttt{MODAL-LSS}}{}}_n (x,y,z) \equiv q_{\{r}(x)q_{s}(y)q_{t\}}(z),\end{aligned}$$ with $\{\dots\}$ representing symmetrisation over the indices $r,s,t$, and each $n$ corresponding to a combination of $r,s,t$. The relationship between $n$ and $r,s,t$ is ‘slice ordering’, which orders the triples by the sum $r+s+t$. $k_{max}$ is the resolution of the tetrapyd domain defined above. $\gamma_{nm}$ is the inner product between $Q^{{\texttt{MODAL-LSS}}{}}_n$ functions over the tetrapyd domain: $$\begin{aligned}
\gamma_{nm}\equiv\frac{V}{\pi}\int_{\mathcal{V}_B}dV_kQ_nQ_m,
\label{gamma}\end{aligned}$$ where $V=(2\pi)^3\delta_D(\mathbf{0})$ is the volume of the simulation box. There is a freedom in the choice of $q_r$, provided the $Q^{{\texttt{MODAL-LSS}}{}}_n$ basis is orthogonal, or can be made orthogonal. We employ shifted Legendre polynomials $\tilde{P}_l(x)=P_l(2x-1)$, such that $\tilde{P}_l(x)$ is orthogonal over the interval $\left[0,1\right]$ instead of the usual $\left[-1,1\right]$ for $P_l(x)$.
For $\hat{B}_\delta(k_1,k_2,k_3)=\frac{1}{V}\delta(\mathbf{k}_1)
\delta(\mathbf{k}_2)\delta(\mathbf{k}_3)$ we multiply both sides of by $Q^{{\texttt{MODAL-LSS}}{}}_m(k_1/k_{max},k_2/k_{max},k_3/k_{max})$ and integrate over $\mathcal{V}_B$ to find $$\begin{aligned}
\label{eq:betaQ}
\beta^Q_n
&=\frac{1}{\pi}\int_{\mathcal{V}_B}dV_k
\sqrt{\frac{k_1k_2k_3}{P(k_1)P(k_2)P(k_3)}}
\delta_{\mathbf{k}_1}\delta_{\mathbf{k}_2}
\delta_{\mathbf{k}_3}
\nonumber \\
&\qquad\qquad\qquad\times q_{\{r}(\frac{k_1}{k_{max}})
q_s(\frac{k_2}{k_{max}})q_{t\}}(\frac{k_3}{k_{max}})
\nonumber \\
&=\int_{\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3}(2\pi)^6
\delta_D(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)
\nonumber \\
&\qquad\qquad\qquad\times\frac{\delta_{\mathbf{k}_1}
\delta_{\mathbf{k}_2}\delta_{\mathbf{k}_3}}
{\sqrt{k_1P(k_1)k_2P(k_2)k_3P(k_3)}} \nonumber \\
&\qquad\qquad\qquad\times q_{\{r}(\frac{k_1}{k_{max}})
q_s(\frac{k_2}{k_{max}})q_{t\}}(\frac{k_3}{k_{max}})
\nonumber \\
&=(2\pi)^3\int d^3 x\int\frac{\prod_i d^3k_i}{(2\pi)^9}
e^{i(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)\cdot\mathbf{x}}
\nonumber \\
&\qquad\qquad\qquad\times\frac{\delta_{\mathbf{k}_1}
\delta_{\mathbf{k}_2}\delta_{\mathbf{k}_3}}
{\sqrt{k_1P(k_1)k_2P(k_2)k_3P(k_3)}} \nonumber \\
&\qquad\qquad\qquad\times q_{\{r}(\frac{k_1}{k_{max}})
q_s(\frac{k_2}{k_{max}})q_{t\}}(\frac{k_3}{k_{max}})
\nonumber \\
&=(2\pi)^3\int d^3 x\,
M_r(\mathbf{x})M_s(\mathbf{x})M_t(\mathbf{x}),\end{aligned}$$ where we define $$\begin{aligned}
M_r(\mathbf{x}) \equiv \int\frac{d^3k}{(2\pi)^3}\frac{\delta_{\mathbf{k}}q_r(k/k_{max})}
{\sqrt{kP(k)}}e^{i\mathbf{k}\cdot\mathbf{x}},
\label{Mfunc}\end{aligned}$$ which is an inverse Fourier transform, and $\int_{\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3}=\int\frac{d^3k_1}{(2\pi)^3}
\frac{d^3k_2}{(2\pi)^3}\frac{d^3k_3}{(2\pi)^3}$. In the second line we used the identity $$\begin{aligned}
&\int\frac{d^3 k_1}{(2\pi)^3}\frac{d^3 k_2}{(2\pi)^3}
\frac{d^3 k_3}{(2\pi)^3}(2\pi)^6\delta^2_D
\left(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3\right)F \nonumber \\
&\quad=\frac{V}{8\pi^4}\int_{\mathcal{V}_B}dk_1dk_2dk_3\,k_1k_2k_3F.
\label{integrals}\end{aligned}$$
In summary, we have reduced the 9-dimensional integrals involved in bispectrum estimation to a number of (inverse) Fourier transforms, which can be evaluated efficiently with the fast Fourier transform (FFT) algorithm, together with an integral over the spatial extent of the data set (), which can be highly parallelised. Additionally, we have compressed the full 3D bispectral information to $\mathcal{O}(1000)$ $\beta^Q_n$ coefficients, which are much easier to manipulate.
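The pipeline just described can be sketched as follows. This is a simplified, single-process illustration of the $\beta^Q_n$ integral, not the [`MODAL-LSS`]{} code itself; `q_funcs` (the 1D basis) and `p_of_k` (the measured power spectrum) are assumed to be supplied as callables, and the returned coefficients are the raw $\beta^Q$ before the $\gamma^{-1}$ rotation:

```python
import numpy as np
from itertools import combinations_with_replacement

def beta_q(delta_x, box_size, k_max, q_funcs, p_of_k):
    """Raw modal coefficients beta_{rst} from filtered inverse FFTs."""
    n = delta_x.shape[0]
    v_cell = (box_size / n)**3
    delta_k = np.fft.fftn(delta_x) * v_cell              # continuum FT convention
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kmag = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2
                   + k[None, None, :]**2)
    mask = (kmag > 0) & (kmag <= k_max)
    weight = np.zeros_like(kmag)
    weight[mask] = 1.0 / np.sqrt(kmag[mask] * p_of_k(kmag[mask]))
    # one filtered inverse FFT per 1D basis function q_r -> M_r(x)
    m_fields = [np.fft.ifftn(delta_k * weight
                             * q(np.clip(kmag / k_max, 0.0, 1.0))).real / v_cell
                for q in q_funcs]
    # beta_{rst} = (2 pi)^3 * integral d^3x  M_r M_s M_t
    return {(r, s, t): (2 * np.pi)**3 * v_cell
            * np.sum(m_fields[r] * m_fields[s] * m_fields[t])
            for r, s, t in combinations_with_replacement(range(len(q_funcs)), 3)}
```

In the production code each $M_r$ FFT and the final real-space sum would be distributed across processes.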
To make comparisons between bispectra $B_i$ and $B_j$ we first define inner products between them as $$\begin{aligned}
\left[B_i,B_j\right] \equiv \frac{V}{\pi} \int_{\mathcal{V}_B}dV_k\,
k_1k_2k_3 \frac{B_i(k_1,k_2,k_3)B_j(k_1,k_2,k_3)}{P(k_1)P(k_2)P(k_3)}.
\label{inner_product}\end{aligned}$$ We define two correlators between bispectra. The first is the total correlator $\mathcal{T}$: $$\begin{aligned}
\mathcal{T}(B_i,B_j)
&\equiv 1- \sqrt{\frac{
\left[B_j-B_i,B_j-B_i\right]}{\left[B_j,B_j\right]}},
\label{total}\end{aligned}$$ which is a stringent test of correlation between bispectra, but is susceptible to degradation by statistical noise. The other is the $f_{nl}$ correlator, named for its similarity to the optimal $\langle\hat{f}_{nl}\rangle$ estimator for the amplitude of a theoretical shape (see [@DM]), defined as: $$\begin{aligned}
f_{nl}(B_i,B_j)
\equiv \frac{\big[B_i,B_j\big]}{\big[B_j,B_j\big]}.
\label{fnl_corr}\end{aligned}$$ The $f_{nl}$ correlator can be thought of as the cosine between the two shapes weighted by the ratio of their magnitudes. This correlator is therefore well suited to compressing the 3D bispectral information into a one-dimensional function of $k_{max}$.
We further define a ‘sliced’ correlator between bispectra which integrates over transverse degrees of freedom $K\equiv k_1+k_2+k_3=\text{const.}$ on the tetrahedron: $$\begin{aligned}
\left[B_i,B_j\right]^{S}_{K} \equiv \frac{V}{\pi} \int_{\Delta\mathcal{V}_B}dV_k\,
k_1k_2k_3 \frac{B_i(k_1,k_2,k_3)B_j(k_1,k_2,k_3)}{P(k_1)P(k_2)P(k_3)}.
\label{inner_product_sliced}\end{aligned}$$ The new restricted integration region, $\Delta\mathcal{V}_B$, encompasses a range of these $K$ slices such that: $$\begin{aligned}
\label{eq:slice}
K<k_1+k_2+k_3<K+\Delta K.\end{aligned}$$ Similarly we define the sliced $f_{nl}$ correlator as $$\begin{aligned}
f^S_{nl}(B_i,B_j,K)
\equiv \frac{\big[B_i,B_j\big]^{S}_{K}}{\big[B_j,B_j\big]^{S}_{K}}.
\label{fnl_corr_sliced}\end{aligned}$$
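Once bispectra are compressed to coefficient vectors in an orthonormalised modal basis, in which the inner product $[B_i,B_j]$ reduces to a Euclidean dot product (the vector representation is our assumption here), both correlators become one-liners:

```python
import numpy as np

def total_correlator(a_i, a_j):
    """T(B_i, B_j): one minus the relative L2 distance of the coefficients."""
    a_i, a_j = np.asarray(a_i, float), np.asarray(a_j, float)
    diff = a_j - a_i
    return 1.0 - np.sqrt(diff @ diff / (a_j @ a_j))

def fnl_correlator(a_i, a_j):
    """f_nl(B_i, B_j) = [B_i, B_j] / [B_j, B_j]."""
    a_i, a_j = np.asarray(a_i, float), np.asarray(a_j, float)
    return (a_i @ a_j) / (a_j @ a_j)
```

For identical inputs both correlators return 1; doubling the amplitude of $B_i$ doubles $f_{nl}$ while driving $\mathcal{T}$ to 0, illustrating how much more stringent the total correlator is.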
Halo three-shape model
----------------------
![Best fit three-shape model to the bispectrum of the benchmark `ROCKSTAR` catalogue. []{data-label="fig:3-shape_bis"}]({theo_bis_6_1000_4_341_1280_3-shape_z0p000}.png){width="0.8\linewidth"}
![Sliced $f_{nl}$ correlation between the best fit three-shape model to the benchmark, and the benchmark. The feature observed at $K/3=1.1\,h\,\text{Mpc}^{-1}$ here is due to the transition from the tetrahedral region in the bottom to the pyramid at the top, causing a kink in the sliced correlator, and is not a real physical effect. []{data-label="fig:3-shape_bis_slice"}]({halo_bis_slice_fnl_3-shape}.jpeg){width="\linewidth"}
The three-shape model was proposed in [@Andrei; @Andrei2] as a phenomenological model to quantitatively describe the dark matter bispectrum $B_{\rm mmm}(k_1,k_2,k_3)$, consisting of a linear combination of the ‘constant’ one-halo model on small length scales, the tree-level gravitational bispectrum on the largest, and a local or ‘squeezed’ shape interpolating on intermediate scales. The combined three-shape model takes the following form: $$\begin{aligned}
&B_{\text{3-shape}}(k_1,k_2,k_3) \nonumber \\
&= \sum^3_{i=1}f_i(K)B^i(k_1,k_2,k_3)\nonumber\\
&= f_{1h}(K)B^{\text{const}}(k_1,k_2,k_3)
+f_{2h}(K)B^{\text{squeez}}(k_1,k_2,k_3)
\nonumber \\
&\quad
+f_{3h}(K)B^{\text{treeNL}}(k_1,k_2,k_3),
\label{eqn:3-shape}\end{aligned}$$ where the $f_i(K)$ are scale-dependent amplitudes and the constant, squeezed and tree-level shapes are respectively: $$\begin{aligned}
\label{eq:shapes}
&B^{\text{const}}(k_1,k_2,k_3) = 1, \\
&B^{\text{squeez}}(k_1,k_2,k_3) =
\frac{1}{3} \big[P_{\text{lin}}(k_1)P_{\text{lin}}(k_2)
\nonumber \\
&\qquad\qquad +2\,\text{perms.}\big], \\
&B^{\text{treeNL}}(k_1,k_2,k_3) =
2P_{\text{NL}}(k_1) P_{\text{NL}}(k_2)F^{(s),\Lambda}_2
(\mathbf{k}_1,\mathbf{k}_2)
\nonumber \\
&\qquad\qquad +2\,\text{perms.}.\end{aligned}$$ Here, $P_{\text{lin}}$ denotes the linear dark matter power spectrum, $P_{\text{NL}}$ is the non-linear power spectrum obtained from simulations, and the gravitational kernel $F^{(s),\Lambda}_2$ is $$\begin{aligned}
F^{(s),\Lambda}_2(\mathbf{k}_1,\mathbf{k}_2)
&=\frac{1}{2}(1+\epsilon)+
\frac{1}{2}\frac{\mathbf{k}_1\cdot\mathbf{k}_2}{k_1 k_2}
\left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right) \nonumber \\
&\qquad +\frac{1}{2}(1-\epsilon)
\frac{(\mathbf{k}_1\cdot\mathbf{k}_2)^2}{k_1^2 k_2^2},
\label{F2}\end{aligned}$$ where $\epsilon\approx-(3/7)\Omega_m^{-1/143}$ to account for non-zero vacuum energy $\Lambda$ [@bouchet]. A successful fit into highly nonlinear scales was possible using the following physically-motivated functional forms for the amplitudes: $$\begin{aligned}
\label{eq:amplitudes}
f_{1h}(K)&=\frac{A}{(1+BK^2)^2}, \\
f_{2h}(K)&=\frac{C}{(1+DK^{-1})^3}, \\
f_{3h}(K)&=F\exp(-K/E). \end{aligned}$$ The parameters $A-F$ at redshift $z=0$ across the range $0.1\,h\,\text{Mpc}^{-1}< K < 6\,h\,\text{Mpc}^{-1}$ take the values [@Andrei]: $$\begin{aligned}
\label{eq:orig3shape}
&A=2.45\times10^6, \quad &&B=0.054, \nonumber \\
&C=140, \quad &&D=1.9, \nonumber \\
&E=7.5\,k_{\text{NL}}, \quad &&F\equiv 1.0\,\end{aligned}$$ with $k_{\text{NL}}=0.25\,h\,\text{Mpc}^{-1}$. We note that this approximate fit applies across a much wider set of redshifts $z<10$ (at about 10% precision) and, here, $F$ has been fixed to unity to match the tree-level gravitational bispectrum as $K\rightarrow 0$ (i.e. with unit bias). Since the dark matter simulation we currently have is of much higher resolution and precision than previously, we update the best fit parameter values to the following: $$\begin{aligned}
\label{eq:dm3shape}
&A=2.64\times10^6, \quad &&B=0.057, \nonumber \\
&C=95, \quad &&D=2.0, \nonumber \\
&E=10.1\,k_{\text{NL}}, \quad &&F\equiv 1.0,\end{aligned}$$ This yields a high total correlation at $k_{\rm max} = 1.7\,h\,\text{Mpc}^{-1}$ of 98.4% with new simulation data, and 97.1% with the original three-shape model (). We note that there are some degeneracies between the three shapes, but we leave detailed error estimation of these dark matter parameters for a future publication. We also note that there are transient grid effects that temporarily increase the tree-level gravitational bispectrum for $N$-body simulations with 2LPT initial conditions (identified in previous papers [@glass; @Andrei]); even for the high redshift initial conditions used in this paper, this persists at late times leaving an offset in the dark matter bispectrum of a few percent for small $k$. This small systematic effect can be avoided with ‘glass’ initial conditions for the $N$-body simulations [@glass; @Andrei] or through quantitative analysis and subtraction (but this is not the focus of the present paper, see the discussion in [@DM]).
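The scale-dependent amplitudes with the updated dark matter parameters above can be evaluated directly. The following is a minimal sketch (function names are ours, not from the [`MODAL-LSS`]{} pipeline); it encodes the fitted forms for $f_{1h}$, $f_{2h}$ and $f_{3h}$ with $K=k_1+k_2+k_3$:

```python
import math

# Updated dark-matter three-shape amplitude fits (parameter values from the text);
# k_NL = 0.25 h/Mpc and K = k1 + k2 + k3.
K_NL = 0.25
A, B = 2.64e6, 0.057
C, D = 95.0, 2.0
E, F = 10.1 * K_NL, 1.0

def f_1h(K):
    """One-halo ('constant' shape) amplitude: A / (1 + B K^2)^2."""
    return A / (1.0 + B * K**2) ** 2

def f_2h(K):
    """Two-halo ('squeezed' shape) amplitude: C / (1 + D/K)^3, -> C as K -> inf."""
    return C / (1.0 + D / K) ** 3

def f_3h(K):
    """Three-halo (tree-level) amplitude: F exp(-K/E), -> F = 1 as K -> 0."""
    return F * math.exp(-K / E)
```

The limits make the role of each term transparent: the tree-level amplitude tends to unit bias at large scales, while the one-halo constant dominates deep in the nonlinear regime.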
We can consider using the same three shapes to fit to our benchmark halo bispectrum $B_{\rm hhh}(k_1,k_2,k_3)$, but in principle we might require more than three shapes to achieve an adequate correlation. For example, bias considerations bifurcate the tree-level gravitational bispectrum () into several apparently different shapes at leading order (LO) [@Desjacques]: $$\begin{aligned}
\label{eq:halo3shape}
&B^{\text{LO}}_\text{hhh}(k_1,k_2,k_3)
= b_1^3 B^{\text{treeNL}}(k_1,k_2,k_3) \nonumber \\
+{}& b_1^2 \left[ b_2 + b_{K^2} \left( (\hat{\bf k}_1\cdot\hat{\bf k}_2) ^2 +\textstyle{\frac{1}{3}} \right)\right ]\left(P(k_1) P(k_2) + 2 \,\text{perms}.\right)\nonumber\\
+{}& B ^{\text{stoch}}_{\cal E} + b_1^3 \left(P ^{\text{stoch}}_{\cal E}P(k_1) + 2\, \text{perms}.\right)\,,\end{aligned}$$ where $b_1,\, b_2$ are the first- and second-order bias parameters, $b_{K^2}$ is the ‘tidal’ bias parameter, and $P ^{\text{stoch}}_{\cal E}$, $B ^{\text{stoch}}_{\cal E}$ are the stochastic power spectrum and bispectrum respectively. Closer examination, however, reveals that the second-order bias shape can be incorporated with appropriate scalings in the squeezed two-halo shape $ B^{\text{squeez}}$ and the stochastic bispectrum $B ^{\text{stoch}}_{\cal E} $ in the constant shape $B^{\text{const}}$ (if not subtracted as per usual). This leaves only the modulated ‘tidal’ bias term, but this can be expected to be relatively small and would be straightforward to include as an additionally modulated version of the squeezed shape $ B^{\text{squeez}}$ (a ‘four-shape’ model).
For this reason, as a preliminary exercise we endeavour to fit the original three-shape model to the measured halo bispectrum, finding the best fit parameters as: $$\begin{aligned}
&A=1.55\times10^6, \quad &&B=0.042, \nonumber \\
&C=287, \quad &&D=3.7, \nonumber \\
&E=8.0\,k_{\text{NL}}, \quad &&F=0.97.\end{aligned}$$ Again we will leave error estimation of these parameters for future work. The three-shape bispectrum calculated with these values is shown in . It gives an overall total correlation of 97.4% with our benchmark bispectrum, and fits the sliced $f_{nl}$ correlator to within 4% across the entire range of the data, apart from the very tip of the tetrapyd where $K/3<0.2\,h\,\text{Mpc}^{-1}$ (). Note again there are degeneracies in the model parameters for the limited wavenumber range we have used; there are significant caveats on large length scales (discussed above), as well as on small length scales, because we do not probe deep enough into the nonlinear regime to specify the one-halo parameters. In principle, we could use this to specify the averaged bias parameter $b_1 \approx 0.99$ (assuming this to be the dominant contribution), or we could estimate $b_1, b_2$ jointly with the power spectrum, but we would have to investigate and calibrate transient grid effects at small $k$ much more carefully [@glass] and we leave this for a future publication. Nevertheless, this analysis gives an initial indication that an accurate phenomenological fit to the halo (or galaxy) bispectrum is likely to be possible with a few well-motivated bispectrum shapes and a limited number of parameters.
Phenomenological halo catalogues\[work\]
========================================
Having characterised the halo power spectrum and bispectrum from our benchmark `ROCKSTAR` catalogue (as a proxy for a galaxy catalogue), we investigate whether these polyspectra can be accurately reproduced using fast statistical prescriptions for populating halos with subhalos, that is, without using costly $N$-body simulations for individual mocks. We first consider minimal approaches by modifying the subhalo distribution using different halo profiles or altering the average occupation number as a function of halo mass. Next, we develop this further by exploiting halo concentrations, populating individual halos using typical correlations with the occupation number, that is, incorporating statistical information related to the assembly history of halos.
Halo profile
------------
[0.49]{} {width="\linewidth"}
[0.455]{} {width="\linewidth"}
[0.49]{} {width="\linewidth"}
[0.455]{} {width="\linewidth"}
Modifying the typical halo profile significantly impacts both the power spectrum and bispectrum, especially on small length scales. We can demonstrate this (see below) by keeping the number of subhalos fixed in each halo, while displacing their radial distribution according to a profile of our choosing (such as the popular NFW profile). First, however, we briefly study the importance of halo anisotropy. This was motivated by investigations of $N$-body simulations (such as that in ), which have revealed that the dark matter profiles of halos are not spherical, reflecting more complex internal substructure [@triaxial1; @triaxial2; @triaxial3]. The subhalos that live within those halos therefore also have a non-spherical distribution, as well as internal structure. We have quantified the importance of these effects by randomising the solid angular distribution of the subhalos within a halo, while keeping the radial distance to the parent halo seed unchanged. This effectively removes halo triaxiality, destroying the original internal structure of the halos. For the new ‘random angle’ halo catalogue, we have estimated both the power spectrum and the bispectrum (using the sliced $f_{nl}$ correlator () at a given $K=k_1+k_2+k_3$); the relative effect is shown by the blue lines in . There is a small diminution of power even at relatively high wavenumbers $k,K/3=1\,h\,\text{Mpc}^{-1}$, with less than a 1% and 4% decrease for the power spectrum and bispectrum respectively. Randomisation of the angles tends to reduce subhalo clustering, but this remains a subpercent effect on the bispectrum for $K/3\le 0.5\,h\,\text{Mpc}^{-1}$. The small effect that such a randomisation process has on the matter power spectrum has also been confirmed in [@pace]. This indicates that triaxial effects will predominantly arise from RSDs (see, for instance, [@triaxial]).
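The angular randomisation described above can be sketched as follows (a minimal NumPy sketch; the function name and array layout are illustrative and not taken from our pipeline). Each subhalo keeps its radial distance to the parent halo but is assigned an isotropically random direction:

```python
import numpy as np

rng = np.random.default_rng(42)

def randomise_angles(parent_pos, subhalo_pos):
    """Isotropically redistribute subhalos about their parent halo,
    preserving each subhalo's radial distance (destroys triaxiality
    and internal substructure correlations)."""
    offsets = subhalo_pos - parent_pos
    radii = np.linalg.norm(offsets, axis=1, keepdims=True)
    # Isotropic unit vectors: normalise 3D Gaussian draws.
    u = rng.normal(size=subhalo_pos.shape)
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return parent_pos + radii * u
```

Normalising Gaussian draws is a standard way to obtain uniformly distributed directions on the sphere, avoiding the polar bias of naive angle sampling.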
The radial halo profile can have a larger effect, notably if we populate subhalos using the NFW profile obtained from the halo dark matter distribution, as shown by the orange line in . In this case, by $k,K/3=1\,h\,\text{Mpc}^{-1}$ there are large deviations of 2% and 15% from the halo power spectrum and bispectrum respectively. This is not unexpected as we have previously seen that the dark matter NFW profile does not fit the measured subhalo profile from our benchmark catalogue (given the mass resolution of our $N$-body simulation). The discrepancies would in fact have been even larger had we used the measured concentration from `ROCKSTAR`, instead of the analytical fit for $\expval{c}$ in .
{width="\linewidth"}
{width="\linewidth"}
[0.48]{} {width="\linewidth"}
[0.48]{} {width="\linewidth"}
[0.48]{} {width="\linewidth"}
[0.445]{} {width="\linewidth"}
We turn now to the effects of modelling the halo profile with a power law. As we have already seen in , a power law with $0.8 < \gamma < 1.2$ will fit most of the subhalo distributions found in our benchmark simulation. Modelling the halos with the best fit power law inevitably removes some signal from the power spectrum and bispectrum, as the resulting halos have a uniform solid angular distribution, unlike subhalos in an $N$-body simulation. The lack of power can be seen in the $\gamma = 1$ profile, shown as the green line in . We can phenomenologically compensate for this effect by considering spherically symmetric halo profiles with an increased power law exponent. Coincidentally, for $\gamma = 1.5$ both the power spectrum and the bispectrum are very well fitted at all scales, with a difference of less than 0.5% up to $k,K/3 \le 1.6\,h\,\text{Mpc}^{-1}$. We can exploit this dual effect when populating the halos with a statistical halo occupation number rather than that measured from the $N$-body simulation.
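Radii for a spherically symmetric power law profile $\rho\propto r^{-\gamma}$ can be drawn by inverse-CDF sampling: for a number density $\propto r^{-\gamma}$ truncated at $r_{\rm max}$, the enclosed fraction is $P(<r)=(r/r_{\rm max})^{3-\gamma}$. A minimal sketch (function name ours, valid for $\gamma<3$):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_power_law_radii(n, r_max, gamma):
    """Draw n radii from a spherically symmetric number density ~ r^-gamma,
    truncated at r_max (requires gamma < 3).
    Inverse CDF: P(<r) = (r/r_max)^(3-gamma), so r = r_max * u^(1/(3-gamma))."""
    u = rng.random(n)
    return r_max * u ** (1.0 / (3.0 - gamma))
```

Combined with isotropic directions, this fully specifies the phenomenological halo profile used when replacing the measured subhalo positions.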
Halo occupation number
----------------------
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
We have also investigated the effect on the power spectrum and bispectrum of assigning subhalos using the Halo Occupation Distribution (HOD). First, we populated halos using the benchmark HOD model, i.e. we assigned to each halo the measured mean number of galaxies (subhalos) for a halo of that mass. This model is shown in along with our 4-parameter fit to it. As shown in , we have found that neither the benchmark HOD nor the 4-parameter HOD fit recovers the power spectrum or the bispectrum to better than 2% at large scales $k,K/3 < 0.1\,h\,\text{Mpc}^{-1}$. The 4-parameter fit to the benchmark HOD is 4% below the simulation power spectrum, and the difference gets rapidly worse at smaller length scales. The fit to the benchmark HOD is only accurate to 10%, indicating a better functional form should be adopted. The discrepancy in the bispectrum is considerably higher than the power spectrum, and also demonstrates much worse scaling in $k$.
To better understand the power deficiency observed in from using the HOD model, we first binned the parent halos by mass and then shuffled the halo occupation numbers among the halos in each mass bin. Since the halo profile plays only a marginal role on large length scales, for simplicity we collapsed all objects to the centre of the parent halo, and the power spectrum of the resulting sample is shown in . The fact that this shuffling method, which preserves the statistical distribution of the halo occupation number in every mass bin, produces the same effect as the benchmark HOD strongly implies that the number of subhalos in a halo depends on halo properties other than halo mass. The shuffling procedure is very similar to populating halos by using a subhalo dispersion around the mean HOD; initial experimentation indicated that including such a dispersion had no impact on resolving the key bispectrum deficit.
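The shuffling test above can be sketched as follows (a minimal NumPy sketch; the function name and binning choices are illustrative, not the exact settings used in our analysis). Occupation numbers are permuted among halos within each logarithmic mass bin, so the per-bin occupation distribution is preserved while any correlation with other halo properties is destroyed:

```python
import numpy as np

rng = np.random.default_rng(1)

def shuffle_occupation(masses, n_sub, n_bins=20):
    """Shuffle occupation numbers among halos within each logarithmic mass bin,
    preserving the per-bin occupation distribution but breaking any dependence
    on halo properties other than mass."""
    logm = np.log10(masses)
    edges = np.linspace(logm.min(), logm.max() + 1e-9, n_bins + 1)
    bins = np.clip(np.digitize(logm, edges) - 1, 0, n_bins - 1)
    shuffled = np.array(n_sub, copy=True)
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        shuffled[idx] = rng.permutation(shuffled[idx])
    return shuffled
```

If clustering statistics change after this shuffle, mass alone cannot determine the occupation number, which is exactly the signature of assembly bias discussed below.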
Finally, we explored whether phenomenologically changing the parameters in our 4-parameter HOD could yield a satisfactory fit to both the power spectrum and bispectrum. As discussed in we enforce conservation of galaxy number $\Delta n_g=0$ () when changing the values of the parameters, which entails compensating by changing at least 2 parameters simultaneously. By exploring all 6 different ways to pair up the parameters, it was found that the index $\alpha$ in (), i.e. the exponent of the power law, appears to make the most dramatic contribution to the power spectrum relative to the other parameters. As can be seen in panels (a)-(c) in , boosting $\alpha$ by 4.5% helps match the benchmark power spectrum up to $k \le 0.5\,h\,\text{Mpc}^{-1}$, regardless of the choice of the other compensating parameter. However, panel (d) in the same plot reveals that this boost in $\alpha$ grossly inflates the bispectrum, resulting in more than 5% difference between $0.2\,h\,\text{Mpc}^{-1} < K/3 < 1.3\,h\,\text{Mpc}^{-1}$. We conclude that populating halos using an HOD that depends only on mass will not simultaneously recover both the benchmark power spectrum and bispectrum (with correlation discrepancies in the latter exceeding 4%).
Assembly bias
-------------
Since using the benchmark HOD yields a suppression of power in the power spectrum and bispectrum, and tuning the 4-parameter HOD model fares no better in matching both the power spectrum and bispectrum, we considered alternative methods of modelling the halo occupation number that take into account the formation history of the halos, known as assembly bias (see, for example, [@assembly_bias; @Gao2005; @Sunayama2016; @Hearin2016; @Wechsler2006]). Amongst halos with the same mass those formed at higher redshifts in $N$-body simulations are known to typically have higher concentrations $c$ [@Zhao2003; @Zhao2009; @Villarreal2017; @Wechsler2002; @Wechsler2006] (although this relationship should not be over-simplified [@Wechsler2017]). For this reason, we investigate whether incorporating halo concentration into our HOD model can simultaneously reduce the measured mock catalogue deficit in both the power spectrum and bispectrum. The probability distribution of the occupation number $N_g$ becomes $P(N_g|c,M)$, which is a function of both mass and concentration.
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
To gain insight into how the concentration affects halo occupation we took inspiration from [@eisenstein2] with a simple model that first bins parent halos by mass and then divides these into two bins based on their concentration. The threshold for this split into concentration bins was the median concentration, such that the higher and the lower concentration samples at a given mass contain the same number of halos. For each mass bin, we calculated the mean occupation number in the high and low concentration bins (as well as the whole sample). shows that halos with lower concentration clearly have more subhalos than the average, amounting to a 20% difference in the mass range between $10^{13}h^{-1}\,M_{\odot}$ and $10^{14}h^{-1}\,M_{\odot}$. The significant anticorrelation of the concentration with the number of subhalos may or may not be reflected in actual galaxy distributions because of resolution limitations and absent dynamical effects in our DM-only $N$-body simulations. If halos with high concentration are indeed typically those that formed earlier, then the lower number of subhalos will be affected by merging of substructure which is, in turn, influenced by halo resolution (see, for example, [@merger]).
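The median-split diagnostic can be written in a few lines (a minimal NumPy sketch; the function name is ours). For the halos in one mass bin it returns the mean occupation of the low- and high-concentration halves:

```python
import numpy as np

def split_by_concentration(conc, n_sub):
    """Median split of the halos in a mass bin by concentration; returns the
    mean occupation number of the low- and high-concentration halves."""
    conc, n_sub = np.asarray(conc), np.asarray(n_sub)
    median = np.median(conc)
    return n_sub[conc <= median].mean(), n_sub[conc > median].mean()
```

An anticorrelation between concentration and subhalo number shows up directly as the low-concentration mean exceeding the high-concentration one.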
The positive impact of accounting for concentration with this simple split bin model is illustrated in for both the power spectrum and bispectrum. Here, we have populated halos with subhalos drawn from a lognormal distribution to model the total occupation number of the two concentration bins at each mass scale (see below). These results should be compared with the benchmark HOD model in where the bispectrum was very discrepant. In particular, this reduces the deficit in the bispectrum from around 6% to 3% at $K/3 = 0.2\,h\,\text{Mpc}^{-1}$, so assembly bias is clearly an important factor which should be taken into account when creating mock catalogues.
![Lognormal fits to the total occupation number and the high and low concentration bins. The vertical error bars indicate the shape parameter $\sigma$ of the fits.[]{data-label="fig:hod_distribution_lognormal2"}](Gadget3_2048_1280_z0p000_rockstar_halos_lognorm){width="0.9\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
[0.45]{} {width="\linewidth"}
In light of the impact of concentration on subhalo number, our goal is to develop a more sophisticated statistical model that allows us to populate individual halos of a given mass, with or without specifying the concentration from information given by the simulation. To achieve this, we require the joint probability distribution $P(N_g\cap c\,|M)$ as a function of subhalo number $N_g$ and concentration $c$, so that we can derive $P(N_g|c,M)$ from Bayes theorem [@bayes]: $$\begin{aligned}
\label{eq:bayes}
P(N_g|c,M)=\frac{P(N_g\cap c\,|M)}{P(c|M)}.\end{aligned}$$ To find an appropriate joint distribution we first investigate the marginalised distributions for $N_g$ and $c$. It was found that the standard lognormal distribution with 2 parameters, $\text{Lognormal}(\mu,\sigma^2)$ where $e^\mu$ is known as the scale parameter and $\sigma$ the shape parameter, provides a good fit to the marginalised halo occupation number. shows the lognormal fits to the total occupation number, and occupation number in the high and low concentration bins, for several mass bins. In we show the shape and scale parameters of these fits in 100 mass bins across the whole range of the benchmark catalogue. Note that we have adopted the total occupation number, i.e. including the central galaxy instead of just the satellites, because when the average number of satellites falls below unity the lognormal fit automatically fails.
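The lognormal fits to the marginalised occupation number can be obtained from simple moment estimators on the log sample (a minimal NumPy sketch; the function name is ours, and the fits reported in this work may use a different fitting procedure):

```python
import numpy as np

def fit_lognormal(n_g):
    """Maximum-likelihood fit of Lognormal(mu, sigma^2): returns the scale e^mu
    and shape sigma estimated from the logarithm of the (strictly positive)
    total occupation numbers in one mass bin."""
    logs = np.log(np.asarray(n_g, dtype=float))
    return np.exp(logs.mean()), logs.std()
```

This is why the total occupation number (including the central) is used: zero satellite counts would make the logarithm, and hence the fit, undefined.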
For the marginalised concentration distribution, we found that it could be more accurately modelled with a Gaussian distribution, particularly at low masses. The lognormal distribution provides a significantly worse fit, a comparison which is shown in , where we display the normalised counts in several mass bins along with the best fit values for both Gaussian and lognormal fits.
Either the Gaussian or lognormal distributions for $c$ can be easily combined with the lognormal distribution for $N_g$ to give a joint distribution. To do so we simply have to take the natural logarithm of $N_g$ and calculate the mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$ for this joint Gaussian distribution: $$\begin{aligned}
\label{eq:joint_gaussian}
\begin{pmatrix}
\ln(N_g) \\
X
\end{pmatrix}
\sim
\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma}),\end{aligned}$$ where $X=c$ or $\ln(c)$ depending on whether a Gaussian or lognormal distribution for $c$ is desired, and $$\begin{aligned}
\label{eq:joint_gaussian2}
\boldsymbol{\mu} =
\begin{pmatrix}
\expval{\ln(N_g)} \\
\expval{X}
\end{pmatrix}, \qquad
\boldsymbol{\Sigma} =
\begin{pmatrix}
\sigma^2_{\ln(N_g)} & \sigma_{\ln(N_g),X} \\
\sigma_{\ln(N_g),X} & \sigma^2_{X}
\end{pmatrix}.\end{aligned}$$ $\sigma^2_{\ln(N_g)}$ and $\sigma^2_{X}$ are the usual variances for $\ln(N_g)$ and $X$, and $\sigma_{\ln(N_g),X}=\expval{(\ln(N_g)-
\expval{\ln(N_g)})(X-\expval{X})}$ is the covariance between them.
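Sampling from this joint model amounts to drawing $(\ln(N_g), X)$ from the bivariate Gaussian and exponentiating the first component. A minimal NumPy sketch (function name ours; here parametrised by the correlation coefficient rather than the covariance):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_joint_lognormal_gaussian(mu_lnN, mu_c, sig_lnN, sig_c, r, n):
    """Draw (N_g, c) pairs from the joint lognormal-Gaussian model:
    (ln N_g, c) is bivariate Gaussian with correlation coefficient r,
    and ln N_g is exponentiated to give the occupation number."""
    cov = [[sig_lnN**2, r * sig_lnN * sig_c],
           [r * sig_lnN * sig_c, sig_c**2]]
    draws = rng.multivariate_normal([mu_lnN, mu_c], cov, size=n)
    return np.exp(draws[:, 0]), draws[:, 1]
```

For the joint lognormal variant one replaces $c$ by $\ln(c)$ and exponentiates both components of the draw.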
[0.39]{} ![Joint probability distribution for the subhalo number $N_g$ and concentration $c$ for halos in different mass bins of the benchmark `ROCKSTAR` catalogue.[]{data-label="fig:joint_distribution"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_bin4 "fig:"){width="\linewidth"}
[0.39]{} ![Joint probability distribution for the subhalo number $N_g$ and concentration $c$ for halos in different mass bins of the benchmark `ROCKSTAR` catalogue.[]{data-label="fig:joint_distribution"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_bin5 "fig:"){width="\linewidth"}
[0.39]{} ![Joint probability distribution for the subhalo number $N_g$ and concentration $c$ for halos in different mass bins of the benchmark `ROCKSTAR` catalogue.[]{data-label="fig:joint_distribution"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_bin6 "fig:"){width="\linewidth"}
[0.39]{} ![Joint probability distribution for the subhalo number $N_g$ and concentration $c$ for halos in different mass bins of the benchmark `ROCKSTAR` catalogue.[]{data-label="fig:joint_distribution"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_bin7 "fig:"){width="\linewidth"}
[0.39]{} ![Joint lognormal-Gaussian fit to the joint distribution in which should be compared with benchmark distribution shown in . []{data-label="fig:joint_distribution_lognormal_normal"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_fit_norm_conc_bin4 "fig:"){width="\linewidth"}
[0.39]{} ![Joint lognormal-Gaussian fit to the joint distribution in which should be compared with benchmark distribution shown in . []{data-label="fig:joint_distribution_lognormal_normal"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_fit_norm_conc_bin5 "fig:"){width="\linewidth"}
[0.39]{} ![Joint lognormal-Gaussian fit to the joint distribution in which should be compared with benchmark distribution shown in . []{data-label="fig:joint_distribution_lognormal_normal"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_fit_norm_conc_bin6 "fig:"){width="\linewidth"}
[0.39]{} ![Joint lognormal-Gaussian fit to the joint distribution in which should be compared with benchmark distribution shown in . []{data-label="fig:joint_distribution_lognormal_normal"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_fit_norm_conc_bin7 "fig:"){width="\linewidth"}
To draw from the joint distribution one then samples from the joint Gaussian distribution and exponentiates the result as required. The joint distribution obtained from the `ROCKSTAR` halo benchmark is shown for various mass bins in . For comparison, we show, for the same mass bins, the distributions calculated both from the joint lognormal model in and from the joint lognormal-Gaussian model in . The joint lognormal-Gaussian distribution appears to reproduce the benchmark distribution more accurately, though small discrepancies remain at high mass.
In order to obtain $P(N_g|M,c)$ we first shift the distribution for $\ln(N_g)$ from $\mathcal{N}(\expval{\ln(N_g)},\sigma^2_{\ln(N_g)})$ to $\mathcal{N}(\expval{\ln(N_g)}',\sigma^{\prime2}_{\ln(N_g)})$, where [@conditional_gaussian] $$\begin{aligned}
\label{eq:new_gaussian}
\expval{\ln(N_g)}'
&=\expval{\ln(N_g)}+\frac{\sigma_{\ln(N_g),X}}{\sigma^2_{X}}(X-\expval{X}) \\
\sigma^{\prime2}_{\ln(N_g)}
&=\sigma^2_{\ln(N_g)}-\frac{\sigma^2_{\ln(N_g),X}}{\sigma^2_{X}},\end{aligned}$$ then exponentiate draws from this shifted Gaussian distribution. This shift can be derived using the bivariate Gaussian distribution in , the Gaussian distribution for $X$ and Bayes theorem (). For the benchmark catalogue in we show the parameters of the lognormal and Gaussian fits to $c$, and the correlation coefficient $$\begin{aligned}
r=\frac{\sigma_{\ln(N_g),X}}{\sigma_{\ln(N_g)}\,\sigma_{X}}\end{aligned}$$ obtained for the joint Gaussian distribution in . It is worth noting that there are only minor differences in the correlation coefficient between the Gaussian and lognormal cases, with a robust value of around $r\approx-0.5$ found for the mass range $M=10^{13}-10^{14}h^{-1}\,M_{\odot}$.
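The bivariate-Gaussian conditioning step above (shifted mean, reduced variance) is short enough to write explicitly. A minimal sketch (function name ours; variances and covariance as defined in the text):

```python
def conditional_lnN(mu_lnN, sig2_lnN, mu_X, sig2_X, cov, X):
    """Conditional Gaussian parameters for ln(N_g) given X (= c or ln c):
    the mean is shifted by (cov / sig2_X) * (X - <X>) and the variance is
    reduced by cov^2 / sig2_X, per the bivariate-Gaussian conditioning formula."""
    mu = mu_lnN + cov / sig2_X * (X - mu_X)
    var = sig2_lnN - cov**2 / sig2_X
    return mu, var
```

With $r\approx-0.5$, conditioning on an above-average concentration lowers the expected $\ln(N_g)$ and shrinks its variance by a factor $1-r^2=0.75$, which is exactly the assembly bias effect being modelled.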
[0.39]{} ![Joint lognormal fit to the joint distribution in .[]{data-label="fig:joint_distribution_lognormal"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_fit_bin4 "fig:"){width="\linewidth"}
[0.39]{} ![Joint lognormal fit to the joint distribution in .[]{data-label="fig:joint_distribution_lognormal"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_fit_bin5 "fig:"){width="\linewidth"}
[0.39]{} ![Joint lognormal fit to the joint distribution in .[]{data-label="fig:joint_distribution_lognormal"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_fit_bin6 "fig:"){width="\linewidth"}
[0.39]{} ![Joint lognormal fit to the joint distribution in .[]{data-label="fig:joint_distribution_lognormal"}](Gadget3_2048_1280_z0p000_rockstar_halos_parents_halo_profile_conc_contour_fit_bin7 "fig:"){width="\linewidth"}
[0.49]{} {width="\linewidth"}
[0.455]{} {width="\linewidth"}
[0.49]{} {width="\linewidth"}
[0.455]{} {width="\linewidth"}
[0.49]{} {width="\linewidth"}
[0.455]{} {width="\linewidth"}
In summary, we can now implement our assembly bias model, obtaining the conditional distribution $P(N_g|M,c)$ from the joint probability distribution, via one of four possible methods:
- For an individual halo, use the joint lognormal distribution to draw a suitable value for $N_g$ by shifting the Gaussian distribution for $\ln(N_g)$ using the concentration $c$ given for that halo by `ROCKSTAR`;
- Follow the same procedure as in 1 but with the joint lognormal-Gaussian, shifting the Gaussian distribution for $\ln(N_g)$ using the individual halo concentration given by `ROCKSTAR`;
- Use the joint lognormal distribution for $N_g$ and $c$, but draw values at random for $c$ from the Gaussian distribution for $\ln(c)$, thus eliminating the need for the simulation to provide this information.
- Follow the same procedure as in 3 but with the joint lognormal-Gaussian distribution, drawing both $c$ and $N_g$ randomly, so the simulation again does not provide information about concentration. (For methods 3 and 4 we impose a lower bound of 2 for random draws of $c$, which is the lowest value of $c$ calculated by `ROCKSTAR`.)
[0.49]{} {width="\linewidth"}
[0.455]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
[0.37]{} {width="\linewidth"}
The resulting power spectra and bispectra from these prescriptions for creating mock catalogues are shown in , with a comparison to the two-bin concentration model described above. As explained in , the kink at $K/3=1.1\,h\,\text{Mpc}^{-1}$ is due to the geometry of the tetrapyd rather than a physical discontinuity. All these methods, endeavouring to incorporate assembly bias in some form, offer a very substantial improvement over the simplest HOD case shown in . Out of these four possibilities, the superior methods also exploit knowledge of individual halo concentrations given by the `ROCKSTAR` simulation (which to some extent also includes the simpler two-bin method described earlier). The well-motivated joint lognormal-Gaussian modelling of the occupation number and concentration, with a power law halo profile of $\gamma=1.2$, yields better than 1% accuracy in the power spectrum and 4% accuracy in the bispectrum for $ k,K/3<1.0\,h\,\text{Mpc}^{-1}$, which is significantly better than the methods previously investigated in this paper. Moreover, both its power spectrum and bispectrum are flatter than in the joint lognormal-lognormal case, which makes it the more suitable model. It is clear that some information about the assembly history of halos is certainly helpful when creating mock catalogues targeting an accurate halo bispectrum, as it can be used as a proxy for concentration. Information about the merger history of halos can be obtained by fast simulation methods without resorting to $N$-body simulations (see, for example, PINOCCHIO [@pinocchio]). A number of methods have been developed to correlate halo concentration with halo mass and redshift [@merger_history_1; @merger_history_2; @merger_history_3], and furthermore the authors of [@merger_history_4] have shown that these models, combined with an empirical model of environmental effects on halo formation times, give the correct mean concentration and scatter as a function of halo mass.
As can be seen from , there is still some room for improvement to obtain high precision mock power spectra and bispectra matching the benchmark results. In we explore changes to the halo profile for the joint lognormal-Gaussian model to curtail the excess power at small scales. It is clear that a value of $\gamma=1.2$, which is in the range of best fit values shown in , gives both a flat relative power spectrum and bispectrum. We also studied methods by which we might be able to generically boost the power spectrum and bispectrum across all scales, notably on large length scales. From our investigations of different mass halos, we found that the high mass halos dominate the power at large scales, due to their high occupation number. One way to boost the power is therefore to add an extra galaxy to every parent halo above a certain mass threshold. We tested this tweak using the joint lognormal-Gaussian model, the results of which are shown in . We found that $M=2\times10^{14}\,h^{-1} M_\odot$ seems to be the appropriate mass threshold, which, coupled with a radial profile of $r^{-1.2}$, allows us to obtain a fit to both the power spectrum and bispectrum to 1% accuracy between $0.04\,h\,\text{Mpc}^{-1}<k<1.1\,h\,\text{Mpc}^{-1}$. The average occupation number at this mass threshold is about 11, so this boost is at the 10% level in magnitude. A more natural, continuous transition, such as the $\operatorname{erfc}$ function used in the 5-parameter HOD model, can be adopted instead of a step function to obtain smoother behaviour. This may seem a rather contrived way to boost power, but it is presumably compensating for some missing physical correlation (such as triaxiality).
Another means by which to achieve a power boost is to raise the power law index $\alpha$ in the marginalised HOD (), as we did with the 4-parameter model. Instead of using the analytical form as we did previously, we change the occupation number drawn from the joint distribution by scaling the number of satellites by this factor: $$\begin{aligned}
\label{eq:log_alpha_boost}
\left(\frac{M-\kappa' M_0}{M_1}\right)^{\alpha'}/\left(\frac{M-\kappa M_0}{M_1}\right)^\alpha,\end{aligned}$$ as we boost both $\alpha$ and $\kappa$ to conserve particle number. We use the best fit parameters for $\alpha$, $\kappa$, $M_0$ and $M_1$, and the results are shown in . The power spectrum results are comparable to those of the extra galaxy method above, but this approach has the additional drawback of over-boosting the bispectrum, as we observed in the 4-parameter HOD case. Finally, we show the 3D bispectrum tetrapyds of these improved models in , which are qualitatively indistinguishable from the bispectrum obtained directly from the benchmark halo distribution.
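The satellite rescaling factor above is a simple ratio of the boosted and original power law terms. A minimal sketch (function name ours; $\kappa', \alpha'$ are the boosted parameter values, and the factor reduces to unity when the parameters are unchanged):

```python
def satellite_boost(M, M0, M1, kappa, alpha, kappa_new, alpha_new):
    """Factor rescaling the satellite count drawn from the joint distribution
    when boosting (kappa, alpha) -> (kappa_new, alpha_new); equal to one when
    the parameters are unchanged, so galaxy number is conserved by construction
    only if kappa_new compensates the alpha boost."""
    return (((M - kappa_new * M0) / M1) ** alpha_new
            / ((M - kappa * M0) / M1) ** alpha)
```

Because the base $(M-\kappa M_0)/M_1$ exceeds unity for high-mass halos, raising $\alpha$ inflates the occupation there most strongly, which is consistent with the observed over-boosting of the bispectrum.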
Conclusions \[sec:conclusions\]
===============================
In this paper we have applied the fast bispectrum estimator [`MODAL-LSS`]{} [@DM] to accurately measure the bispectrum from a large mock galaxy catalogue. This catalogue was generated from a [`GADGET-3`]{} $N$-body simulation using the `ROCKSTAR` halo-finder. We have provided a quantitative three-shape fit to the resulting halo bispectrum, comparing it with the corresponding bispectrum of the underlying dark matter, studied previously [@Andrei]. A key goal has been to determine phenomenological methods to create fast mock catalogues that can reproduce the benchmark halo bispectrum from `ROCKSTAR`. In doing so we have restricted ourselves to using only the mass, position and concentration information for parent halos, relying on statistical modelling of the halo profile and occupation number to recover the benchmark power spectrum and bispectrum. We modelled these effects in configuration space to obtain accurate mock power spectra and bispectra, and we aim to incorporate further observational effects such as RSDs in a future paper.
Halo profile
------------
An important ingredient in a phenomenological galaxy catalogue is the spatial distribution of the galaxies within a parent halo. The subhalo radial number density found for parent halos (separated into a number of mass bins) was not well matched by the average NFW dark matter profile found in the same halo mass range. On the other hand, as suggested, e.g., by [@subgen], a power law profile of the form $\rho\propto r^{-\gamma}$, with $\gamma\sim1$, works well as a universal profile across a wide range of halo masses spanning three orders of magnitude.
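Such a truncated power-law profile can be sampled by inverse-CDF sampling of the enclosed-mass distribution: for $\gamma<3$, $F(r)=(r/r_{\rm max})^{3-\gamma}$, so $r=r_{\rm max}\,u^{1/(3-\gamma)}$ with $u$ uniform. The sketch below (our illustration, not the paper's code) combines these radii with isotropic angles:

```python
import numpy as np

def sample_power_law_radii(n, r_max, gamma=1.2, rng=None):
    """Draw n radii from rho(r) ~ r^-gamma, truncated at r_max (requires gamma < 3).

    The enclosed-mass CDF is F(r) = (r / r_max)^(3 - gamma), so inverse-CDF
    sampling gives r = r_max * u^(1 / (3 - gamma)) for u ~ Uniform(0, 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    return r_max * u ** (1.0 / (3.0 - gamma))

def place_satellites(n, r_max, gamma=1.2, rng=None):
    """Combine power-law radii with isotropic angles for 3D halo-centric offsets."""
    rng = np.random.default_rng() if rng is None else rng
    r = sample_power_law_radii(n, r_max, gamma, rng)
    cos_t = rng.uniform(-1.0, 1.0, size=n)      # uniform on the sphere
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.column_stack([r * sin_t * np.cos(phi),
                            r * sin_t * np.sin(phi),
                            r * cos_t])
```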
By randomising the solid angular distribution of the benchmark halos we have also quantified the power loss in the power spectrum and bispectrum if halo substructure and triaxiality are not preserved, that is, by fixing the original subhalo number and then retaining radial distances while randomising angular positions. The effect of this internal redistribution was modest with deviations less than 1% and 4% at $k,K/3=1\,h\,\text{Mpc}^{-1}$ for the power spectrum and bispectrum correlator respectively. These lost correlations mean that the best fit power law profile near $\gamma\approx1$ is necessarily power deficient at small scales. However, we have found that phenomenological values around $\gamma \approx1.5$ apparently help to recover this power loss to less than 0.5% up to $k,K/3=2\,h\,\text{Mpc}^{-1}$ in both power spectrum and bispectrum. Note that these profile modifications are constrained by using the original occupation numbers for individual halos, which is information generally only available from costly nonlinear simulations.
Halo occupation distribution
----------------------------
To statistically model the number of galaxies within a parent halo, we have investigated the popular practice of using an HOD that only depends on halo mass, $\bar{N}_g(M)$. We observed that using the measured mean number of galaxies for a halo of a given mass $M$ to repopulate the parent halos leads to a power deficit of about 2% in the power spectrum at large scales where $k < 0.1\,h\,\text{Mpc}^{-1}$, and greater differences at smaller length scales. The loss of power in the bispectrum is more pronounced with much poorer scaling, yielding deviations exceeding 10% by $k = 0.5\,h\,\text{Mpc}^{-1}$. We found that the same effect can be reproduced if one shuffles the given halo occupation numbers within the mass bins (or by using a dispersion around the mean HOD value). Clearly this simple HOD prescription for populating halos destroys important correlations, so it suggests that other physical mechanisms are contributing to the number of galaxies per halo, rather than just the halo mass.
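The shuffling test described above can be sketched as follows (our illustration; the bin count is arbitrary): occupation numbers are permuted among halos within each log-mass bin, which preserves the mass-conditional HOD while destroying any residual correlations:

```python
import numpy as np

def shuffle_within_mass_bins(mass, n_gal, n_bins=20, rng=None):
    """Permute halo occupation numbers among halos in the same log-mass bin,
    preserving the mass-conditional HOD but destroying all other correlations."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.array(n_gal, copy=True)
    logm = np.log10(np.asarray(mass, dtype=float))
    # Quantile-based bin edges so every bin holds a similar number of halos.
    edges = np.quantile(logm, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(logm, edges[1:-1]), 0, n_bins - 1)
    for b in range(n_bins):
        m = np.flatnonzero(idx == b)
        out[m] = out[rng.permutation(m)]
    return out
```

The shuffle conserves both the total galaxy count and the per-bin multiset of occupation numbers, so any change in the measured power spectrum or bispectrum isolates the destroyed correlations.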
Nevertheless, we have attempted to recover this power loss by tuning the four-parameter HOD model given by (). The best fit parameters actually lead to further power loss at all scales, perhaps because the HOD fit is only accurate up to 10%, which suggests that a better functional form should be adopted to match HODs from simulations. After tweaking the parameters while keeping galaxy number constant we found that boosting the power law exponent $\alpha$ by 4.5% raised the power spectrum to the correct level up to $k = 0.5\,h\,\text{Mpc}^{-1}$ irrespective of the choice of the other parameters, but unfortunately this results in substantially over-boosting the bispectrum (overcompensating at around the 10% level). We infer that an HOD model which only depends on halo mass cannot accurately reproduce both the power spectrum and bispectrum of a benchmark mock catalogue.
Assembly bias
-------------
These investigations led us to incorporate further information in the HOD that takes into account the formation history of the halos to determine the halo occupation number. Motivated by other assembly bias studies in the literature such as [@assembly_bias; @Gao2005; @Sunayama2016; @Hearin2016; @Wechsler2006], we have developed a new prescription using a joint probability distribution to model correlations between the halo occupation number $N_g$ and concentration $c$ found in the benchmark catalogue. Even an extension which just separates halos of a given mass into two concentration bins [@eisenstein2] - representing above and below median values for $c$ - yields more accurate power spectra and bispectra with improved scaling.
We have found that the marginalised distribution for halo concentration is well described by a Gaussian distribution across the entire mass range of the benchmark, while taking care to impose an appropriate lower bound when drawing from the distribution. On the other hand, the marginalised halo occupation number is well fitted with a lognormal distribution. Our assembly bias model is therefore a joint lognormal-Gaussian bivariate distribution which depends on halo mass, $P(N_g\cap c\,|M)$. A non-zero covariance between the two variables implies that halo concentration is correlated with halo occupation number, and we find the correlation coefficient is $r\approx-0.5$ for a mass range of $M=10^{13}-10^{14}h^{-1}\,M_{\odot}$. In terms of the assembly history within an $N$-body simulation, we can interpret higher halo concentration as causing fewer subhalos because of earlier halo formation, that is, in this case there is more time for the merger of substructure (a factor which depends to some extent on our benchmark resolution). We were also able to obtain very similar results using a joint lognormal-lognormal distribution for the halo number and concentration.
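A minimal sketch of drawing correlated $(N_g, c)$ pairs from such a joint lognormal-Gaussian model is given below (our illustration; all parameter values in the test are hypothetical). Correlated standard normals are generated via a Cholesky factor, one is exponentiated to give the lognormal occupation number, and draws below the concentration lower bound are rejected:

```python
import numpy as np

def sample_ng_c(n, mu_ln, sig_ln, mu_c, sig_c, r=-0.5, c_min=0.0, rng=None):
    """Draw (N_g, c) pairs from a joint lognormal-Gaussian model.

    log N_g ~ Normal(mu_ln, sig_ln), c ~ Normal(mu_c, sig_c), with
    correlation r between the two underlying normals.  Draws with c below
    c_min are redrawn (simple rejection, so c_min must be comfortably
    below the bulk of the c distribution).
    """
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(np.array([[1.0, r], [r, 1.0]]))
    ng, c = np.empty(n), np.empty(n)
    filled = 0
    while filled < n:
        z = rng.standard_normal((2, n - filled))
        x = L @ z                                # correlated standard normals
        c_try = mu_c + sig_c * x[1]
        keep = c_try >= c_min                    # lower bound on concentration
        k = int(keep.sum())
        ng[filled:filled + k] = np.round(np.exp(mu_ln + sig_ln * x[0][keep]))
        c[filled:filled + k] = c_try[keep]
        filled += k
    return ng, c
```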
Prescriptions for fast mock catalogue polyspectra
-------------------------------------------------
One of the key results of our paper is that our assembly bias model for populating halos can recover the benchmark power spectrum to within 1% and the bispectrum to within 4% across the entire range of scales of the simulation. In its most accurate form this involves using a joint lognormal-Gaussian probability distribution for $N_g$ and $c$, coupled with a radial power law halo profile with $\gamma=1.2$, together with the concentrations found for individual halos. Without use of individual halo concentrations, we could assign both concentration and halo number statistically, obtaining good bispectrum scaling though with a 2% and 5% deficit emerging for the power spectrum and bispectrum respectively. These assembly bias prescriptions represent a considerable improvement over all the other methods we investigated in this paper and can be deployed with fast mock catalogue generators.
We also explored ways to phenomenologically reduce this small remaining power deficit. Modifying the index in the four-parameter HOD model, as before, encountered the problem of over-boosting the bispectrum. However, motivated by the dominant contributions of high mass halos, we considered instead adding an extra galaxy to all parent halos above a certain mass threshold $M>2\times10^{14}\,h^{-1} M_\odot$. We were able to obtain a 1% fit to both the benchmark power spectrum and bispectrum in the range $0.04\,h\,\text{Mpc}^{-1}<k<1.1\,h\,\text{Mpc}^{-1}$.
Finally we note a few caveats about the mock catalogue population methods we have proposed. Our assumption that galaxies can be identified with subhalos will have an important impact on both the spatial distribution and occupation number of the parent halos; clearly this approach can be developed further and made more realistic by increasing resolution and incorporating more physical mechanisms in the simulations. For example, our present mass resolution with a particle mass of $M_p=2.093\times10^{10}$ may be insufficient to ensure finer substructures are resolved and preserved during halo mergers; it would be prudent in future to expand these investigations by exploring the dependence on simulation resolution. We also note that our most accurate assembly bias model relies on concentration information for individual halos obtained from the mock catalogue simulation. This is not necessarily available from all fast simulation generators and halo finder codes, but algorithms such as PINOCCHIO can provide the merger history of dark matter halos, which in turn could be converted into halo concentrations. Nevertheless, by statistically sampling the Gaussian distribution for concentration we were still able to obtain a good power spectrum and bispectrum fit, and this model can be further fine-tuned with the galaxy boost.
In summary, motivated by assembly bias, we have developed a statistical prescription for populating parent halos with subhalos which can simultaneously reproduce both the halo power spectrum and bispectrum obtained from nonlinear $N$-body simulations. We anticipate that this robust approach can be adapted to match polyspectra obtained from more sophisticated $N$-body and hydrodynamic simulations. Combining this relatively simple methodology with fast estimators like [`MODAL-LSS`]{} [@DM] should enable the bispectrum to become a key diagnostic tool, both for breaking degeneracies in cosmological parameter estimation and for quantitatively analysing gravitational collapse and other physical effects on highly nonlinear length scales.
Acknowledgements {#sec:acknowledgements}
================
Many thanks to Tobias Baldauf and James Fergusson for many enlightening conversations, and to Oliver Friedrich and Cora Uhlemann for comments on the manuscript. Kacper Kornet provided invaluable technical support for which we are very grateful.
This work was undertaken on the COSMOS Shared Memory system at DAMTP, University of Cambridge operated on behalf of the STFC DiRAC HPC Facility. This equipment is funded by BIS National E-infrastructure capital grant ST/J005673/1 and STFC grants ST/H008586/1, ST/K00333X/1.
This work used the COSMA Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by a BIS National E-infrastructure capital grant ST/K00042X/1, DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure.
MM acknowledges support from the European Union’s Horizon 2020 research and innovation program under Marie Sklodowska-Curie grant agreement No 6655919.
---
abstract: 'We detail our ongoing work in Flint, Michigan to detect pipes made of lead and other hazardous metals. After elevated levels of lead were detected in residents’ drinking water, followed by an increase in blood lead levels in area children, the state and federal governments directed over \$125 million to replace water service lines, the pipes connecting each home to the water system. In the absence of accurate records, and with the high cost of determining buried pipe materials, we put forth a number of predictive and procedural tools to aid in the search and removal of lead infrastructure. Alongside these statistical and machine learning approaches, we describe our interactions with government officials in recommending homes for both inspection and replacement, with a focus on the statistical model that adapts to incoming information. Finally, in light of discussions about increased spending on infrastructure development by the federal government, we explore how our approach generalizes beyond Flint to other municipalities nationwide.'
author:
- Jacob Abernethy
- Alex Chojnacki
- Arya Farahi
- Eric Schwartz
- Jared Webb
bibliography:
- 'flintrefs.bib'
title: 'ActiveRemediation: The Search for Lead Pipes in Flint, Michigan'
---
Introduction
============
The story of the Flint Water Crisis is long and has many facets, involving government failures, public health challenges, and social and economic justice. As Flint struggled financially after the 2008 housing crisis, the state of Michigan installed emergency managers to implement several cost saving measures. One of these actions was to switch Flint’s drinking water source from the Detroit system to the local Flint river in April 2014. The new water had different chemical characteristics which were overlooked by water officials. Of course many water systems have lead pipes, but these pipes are typically coated with layers of deposits, and the water is treated appropriately in order to prevent corrosion and the leaching of heavy metals. City officials failed to follow such necessary procedures, the pipes began to corrode, Flint’s drinking water started to give off a different color and smell [@Mlive-toxicleadgetsintoFlintwater:url], and Flint residents were exposed to elevated levels of lead for nearly two years before the problems received proper attention. In August 2015 environmental engineers raised alarm bells about contaminated water[^1] [@torrice2016lead], not long after a pediatrician observed a jump in the number of Flint children with high blood lead levels[^2][@hanna2016elevated], and by January 2016 the Flint Water Crisis was international news.
As attention to the problem was growing, government officials at all levels got involved in managing the damage and pushing recovery efforts. In looking for the primary source of lead in Flint’s water distribution, attention turned to Flint’s *water service lines*, the pipes that connect homes to the city water system. These service lines are hypothesized to be the prime contributor to lead water contamination across the United States [@sandvig2008contribution]. Service lines, therefore, became a top priority for the City of Flint in February 2016. The Michigan state legislature eventually appropriated \$27M towards the expensive process of replacing these lines at large scale; later the U.S. Congress allocated another nearly \$100M towards the recovery effort. The group directed to execute the replacement program was called Flint Fast Action and Sustainability program (FAST Start), and their task was to remove as many hazardous service lines as possible up to funding levels.
The primary obstacle that the FAST Start team has faced throughout their work is uncertainty about the locations of lead or galvanized pipes. Although the U.S. Environmental Protection Agency requires cities to maintain an active inventory of lead service line locations, Flint failed to do so. Service line materials are in theory documented during original construction or renovation, but in practice these records are often incomplete or lost. Most importantly, because the information is buried underground, it is costly to determine the material composition of even a single pipe. Digging up an entire water service line pipe under a resident’s yard costs thousands of dollars. City officials were uncertain about the total number of hazardous service lines in the city, with estimates ranging from a few thousand to tens of thousands. Uncertainty about the service line material for individual homes has dramatic cost implications, as construction crews will end up excavating pipes that do not need to be replaced. These questions—how many pipes need to be replaced and which home’s pipes need remediation—are at the core of the work in this paper.
Beginning in 2016, our team began collaborating directly with Flint city officials, analyzing the available data to provide statistical and algorithmic support to guide decision making and data collection, focusing primarily on the work of the FAST Start pipe replacement efforts. By assembling a rich suite of datasets, including thousands of water samples, information on pipe materials, and city records, we have been able to accurately estimate the locations of homes needing service line replacement, as well as those with safe pipes, in order to target recovery resources more effectively. Specifically, we have combined statistical models with active learning methods that sequentially seek out homes with hazardous water infrastructure. Along the way we have developed web-based and mobile applications for coordination among government offices, contractors, and residents. Over time, the number of homes’ service lines inspected and replaced has increased, as seen in Figure \[fig:maps\_by\_time\].
![ Progress of the replacement program. By March of 2016, only 36 homes had undergone replacement (top left); by December 2016, a total of 762 homes had either been inspected or fully replaced (top right); as of September of 2017, this had grown to a total of 6,506 homes (bottom). Homes labeled green were selected for replacement but were deemed safe after copper lines were discovered by contractors.[]{data-label="fig:maps_by_time"}](images/map-dangerous-cumulative-rowe.png "fig:"){width="1.1in"} ![ Progress of the replacement program. By March of 2016, only 36 homes had undergone replacement (top left); by December 2016, a total of 762 homes had either been inspected or fully replaced (top right); as of September of 2017, this had grown to a total of 6,506 homes (bottom). Homes labeled green were selected for replacement but were deemed safe after copper lines were discovered by contractors.[]{data-label="fig:maps_by_time"}](images/map-dangerous-cumulative-rowe-dec-2016.png "fig:"){width="1.1in"} ![ Progress of the replacement program. By March of 2016, only 36 homes had undergone replacement (top left); by December 2016, a total of 762 homes had either been inspected or fully replaced (top right); as of September of 2017, this had grown to a total of 6,506 homes (bottom). Homes labeled green were selected for replacement but were deemed safe after copper lines were discovered by contractors.[]{data-label="fig:maps_by_time"}](images/map-dangerous-cumulative-rowe-sept-2017.png "fig:"){width="2.15in"}
In the present paper, we detail the challenges faced by decision-makers in Flint, and describe our nearly two years of work to support their efforts. With the understanding that many municipalities across the US and the world will need to undertake similar steps, we propose a generic framework which we call , that lays out a data driven approach to efficiently replace hazardous water infrastructure at large scale. We describe our implementation of in Flint, and report its empirical performance and potential for cost savings. To our knowledge, this is the first attempt to predict the pipe materials house-by-house throughout a water system using incomplete data, and also the first to propose a statistical method for adaptively selecting homes for inspection to replace hazardous materials in the most cost effective manner. This work illustrates a holistic, data-driven approach which can be replicated in other cities, enhancing water infrastructure renovation efforts nationwide.
*Key Results.* Among our main results, we emphasize that our predictive model is empirically accurate for estimating whether a Flint home’s pipes are safe/unsafe, with an AUROC score of nearly 0.92, and a true positive rate of 97%. Since our approach involves a sequential protocol that manages the selection of homes for inspection and replacement based on our statistical model, we are also able to compare the model’s total remediation cost to that of the existing protocol of officials. reduces the costly error rate (fraction of unnecessary replacements) to 2%, lowering the effective cost of each replacement by 10% and yielding about \$10M in potential savings.
*Methodology.* Let us now give a birds-eye view of our methodological template. manages the inspection and replacement of water service lines across a city, with the long-term objective of replacing the largest number of hazardous pipes in a city under a limited budget. The formal in-depth exposition of this framework will be given in Section \[sec:overall\_framework\].
Algorithm \[alg:overview\] (overview). Input: parcel data, available labeled homes. Repeat: predict hazardous/safe materials by querying the statistical model; generate inspections via the inspection decision rule; generate replacements via the replacement decision rule; feed the observed data back into the model.
Since the process of identifying and replacing these lines around a city is naturally sequential, the decisions and observations made earlier in the process ought to guide decisions made at future stages. With this in mind, our framework continuously maintains three subroutines that are updated as data arrives. Following the outline in Algorithm \[alg:overview\], the first of these is a statistical model that generates probabilistic estimates of the material type of both the public and private portion of each home’s service lines. The input of this model is property data, water test results, historical records, and observed service line materials. The second subroutine is a decision procedure that generates a (randomized) set of homes for inspection. This should be viewed as an *active learning* protocol, with the goal of “focused exploration.” The third routine makes decisions as to which homes should receive line replacements; for reasons we discuss below, we typically assume that this replacement rule is a *greedy* algorithm.
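The interplay of the three subroutines can be sketched with a toy example (entirely our illustration, not the authors' implementation): a single categorical covariate stands in for the feature set, a Laplace-smoothed per-category hazard rate for the statistical model, an uncertainty-weighted sampler for the inspection rule, and a greedy rule for replacement:

```python
import numpy as np

def active_remediation(features, budget, inspect_batch, replace_batch,
                       inspect_fn, rng=None):
    """Toy sketch of the sequential loop (not the authors' code).

    features   : one categorical covariate per home (e.g. decade built)
    inspect_fn : oracle returning a home's true label (1 = hazardous, 0 = safe)
    Each round: (1) refit a per-category hazard-rate model with a flat prior,
    (2) inspect a batch of unlabeled homes, weighted toward uncertain
    categories (focused exploration), and (3) greedily excavate the homes
    with the highest predicted hazard, replacing those found hazardous.
    """
    rng = np.random.default_rng() if rng is None else rng
    features = np.asarray(features)
    n = len(features)
    labels = np.full(n, -1)                      # -1 = unobserved
    replaced = np.zeros(n, dtype=bool)
    for _ in range(budget):
        # (1) statistical model: Laplace-smoothed hazard rate per category
        probs = np.empty(n)
        for cat in np.unique(features):
            m = features == cat
            seen = labels[m] >= 0
            probs[m] = (labels[m][seen].sum() + 1) / (seen.sum() + 2)
        # (2) inspection rule: sample unlabeled homes ~ predictive uncertainty
        cand = np.flatnonzero((labels < 0) & ~replaced)
        if len(cand) > 0:
            w = probs[cand] * (1 - probs[cand]) + 1e-6
            pick = rng.choice(cand, size=min(inspect_batch, len(cand)),
                              replace=False, p=w / w.sum())
            for i in pick:
                labels[i] = inspect_fn(i)
        # (3) replacement rule: greedy on predicted hazard probability
        todo = np.flatnonzero(~replaced & (labels != 0))
        for i in todo[np.argsort(-probs[todo])][:replace_batch]:
            labels[i] = inspect_fn(i)            # excavation reveals the truth
            replaced[i] = labels[i] == 1         # replace only hazardous lines
    return labels, replaced
```

On a toy population with one high-hazard and one low-hazard category, the loop quickly concentrates its excavations on the high-hazard homes.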
*Roadmap.* This paper is structured as follows. We begin in Section \[sec:data\_availability\] by laying out the datasets available to us, with the story given chronologically to describe the shifting narrative as information emerged. We then explain the framework in greater detail in Section \[sec:overall\_framework\], and sketch out the statistical model together with the prediction, inspection, and decision-making framework. In Section \[sec:empirical\_flint\] we employ on the data available in Flint, to show the empirical performance of our proposed methods in an actual environment, as well as in a simulated environment leveraged from Flint’s data. We finish by detailing the potential for significant cost savings using our approach.
Emerging Data Story of Flint’s Pipes {#sec:data_availability}
====================================
We now describe the various sources of data and the timeline during which these became available. This is summarized in Table \[tbl:data\_history\], and a more precise chronology is given throughout this section. More details will be available in the full version of this work.
Pre-crisis Information – Through mid-2015 {#sec:precrisisdata}
-----------------------------------------
In this section, we explain the relevant datasets that had been collected and maintained prior to the water crisis. This information, as we discovered later, was limited in both depth and quality.
### Parcel Data
The city of Flint generously provided us with a dataset describing each of the 55,893 parcels in the city. These data include a unique identifier for each parcel and a set of columns describing City-recorded attributes of each home, such as the property owner, address, value, and building characteristics. A complete list of the parcel features is discussed in our previous work [@chojnackietal2017kdd]. The distributions of the age of homes and their estimated values (Figure \[fig:parcel\_summaries\]) tell an important story about the kinds of properties in Flint.
![From city parcel data, distribution of home construction by year (left) and building value by dollar (right). The majority of the housing stock in Flint was built when it was a major automobile manufacturing hub, before current regulations about lead infrastructure were in place. Flint has experienced significant economic decline in recent years, leading to depressed real estate prices.[]{data-label="fig:parcel_summaries"}](images/histogram_yearbuilt.png "fig:"){width="1.6in"} ![From city parcel data, distribution of home construction by year (left) and building value by dollar (right). The majority of the housing stock in Flint was built when it was a major automobile manufacturing hub, before current regulations about lead infrastructure were in place. Flint has experienced significant economic decline in recent years, leading to depressed real estate prices.[]{data-label="fig:parcel_summaries"}](images/histogram_homevalue.png "fig:"){width="1.6in"}
### City Records of Service Lines {#sec:cityrecords}
Initially, Flint struggled to produce any record of the materials in the city’s service lines. Eventually, officials discovered a set of over 100,000 index cards in the basement of the water department[^3] (see top of Figure \[fig:cityrecords\]). As part of a pro bono collaboration, the handwritten records have been digitized by [Captricity.com](Captricity.com) and provided to the City of Flint.[^4] Around the same time, a set of hand-annotated maps were discovered that contained markings for each parcel that specified a record of each home’s service line (bottom of Figure \[fig:cityrecords\]). The map data was digitized by a group of students from the GIS Center at the University of Michigan-Flint led by the director Prof. Martin Kaufman [@Mlive-Flintdataonleadwater:url]. Many of the entries in the city’s records list *two* materials for a given record, such as “Copper/Lead,” but they do not specify the precise meaning of the multiple labels. However, our latest evidence suggests that, at least in the typical case, the double records were intended to specify that the second label (“Lead” in “Copper/Lead”) indicates the public service line material (water main to curb stop), and the first label describes the private service line (curb stop to home), while an entry that is simply given as “Copper” may refer to both sections or only one. Lastly, there are a number of entries in the records that say “Copper/?” for the service line material, indicating missing information for the service line on the original handwritten records. Many other records are simply blank, recorded as “Unknown/Other.”
![City officials located a set of over 100,000 handwritten index cards (top) with recorded work information dating back over 100 years, and annotated maps with data on home SLs (bottom). Red circles added to emphasize markings denoting material types.[]{data-label="fig:cityrecords"}](images/flint_city_records_3x5.png "fig:"){height="2.5in"} ![City officials located a set of over 100,000 handwritten index cards (top) with recorded work information dating back over 100 years, and annotated maps with data on home SLs (bottom). Red circles added to emphasize markings denoting material types.[]{data-label="fig:cityrecords"}](images/service_line_record_screenshot.png "fig:"){height="1.3in"}
Peak of Crisis & Replacement Pilot {#sec:deq-in-home-inspection}
----------------------------------
In the wake of the crisis, the State of Michigan began to discuss plans for lead abatement in Flint. It had become clear to lawmakers in Michigan that they would need to invest in a large-scale removal of lead pipes from the city. To begin, FAST Start initiated a pilot phase, with the goal of replacing the service lines of a small set of residences. Flint’s Mayor and the FAST Start team awarded a contract to Rowe Engineering to replace pipes at 36 homes around the city. They selected these homes based on risk factors including the presence of high water lead levels, pregnant women, and children younger than 6 years old. Nearly all of the homes, 33 of 36, had some hazardous material (lead or galvanized) in one or both portions of the service lines, while only 3 were safe. Therefore, the number of homes with physical verifications of both service line portions through September 2016 was only 36 out of over 55,000 homes. A map showing the progress of replacement in Flint can be found in Figure \[fig:maps\_by\_time\].
Meanwhile, in order to gather reliable information about the private part of the service lines, the Michigan Department of Environmental Quality (DEQ) directed a team of officials and volunteers from the local plumbers union to personally inspect a sample of the homes of Flint residents. The public portion of the service line runs entirely under the street and sidewalk, while the private portion runs directly into the basement of the residents’ home. Thus, the private portion can be inspected without any digging. The DEQ inspectors submitted their inspection results. As of June of 2016, the department had collected data from over 3,000 home inspections. We consider this data to be reliable, since it was curated by DEQ officials who provided it to our team. This dataset allowed us to partially evaluate the reliability of the city records discussed in Section \[sec:precrisisdata\]. It is important to note that the comparison is not “apples to apples,” as the DEQ inspections were private-portion only whereas the labels in the city records did not specify which portion of the line was indicated. We report the confusion matrix between DEQ inspection data and city records in Table \[tbl:sl-records-vs-truth-slr-hvi\]. The comparison suggests that, while the records were correlated with ground truth, the discrepancies were substantial.
Large-Scale Replacement, Mid-2016 to Now
----------------------------------------
Our group at the University of Michigan began engaging with the FAST Start team in the summer of 2016. One of the critical decisions the team needed to make was the selection of homes that would be recommended for service line replacement. According to the FAST Start payment agreements, contractors receive roughly half (\$2,500) of the cost of a full replacement (\$5,000) for excavated homes with copper on both public and private portions, covering concrete removal and refilling, machine use, and labor. The choice of homes was deemed critically important, as the excavation of a home’s service line that discovers a “safe” (e.g., copper) pipe is effectively wasted money, aside from the benefit of learning the pipe’s true material. Our work has focused on minimizing such unnecessary excavations, using the tools we describe below.
### Early Replacement Activity and Findings (Fall 2016) {#sub:replacement_phase_1_and_early_results}
By summer 2016, FAST Start had selected a set of 200 homes for replacement, scheduled to begin August $31^{\rm st}$. This selection is called Phase One. Like the Pilot Phase, their criteria included the presence of high water lead levels, pregnant women, children under six years old, as well as veterans and the elderly. In the present section, we describe how we helped facilitate data collection for Phase One, and how the results forced us to rethink our objectives and adjust our models.
By late September 2016, the early data from the service line replacement program began to arrive, and the rate of lead and other hazardous pipes discovered was alarming; 96% (165/171) of excavations revealed lead in the public portion of the line. These findings differed significantly from the city records, which had previously indicated that among those homes only 40% would contain lead in either portion. As data from Phase One arrived, it was becoming increasingly clear that *likely over 20,000 homes* have unsafe pipes serving their water – dramatically higher than earlier estimates. Critically, as these discoveries were being made, a debate was taking place in the U.S. Congress discussing the possibility of more than \$100M in funding for Flint’s recovery efforts.
With the debate in the Congress ongoing, our team decided to put out an informal report to raise the alarm about the extent of the lead issue, and several news outlets reported on our findings [e.g. @MRadio-Flintmighthave:url; @FarmoreFlinthomes:url]. This effort led to a formal report in November of 2016 that provided a more precise estimate of the number of lead replacements likely to be needed [@CityOfFlint:url], which was provided to the city’s mayor, the DEQ, and the U.S. Environmental Protection Agency. Our report, based on comparing the city records and the data gathered from contractors, suggested that the number of needed replacements would be between 20,600 and 37,100. The large range accounts for the inherent uncertainty in data collection and model assumptions, as well as the question of *occupancy*. One challenge that is specific to Flint is the fact that around one third of the city’s homes are not occupied, a rate that is the *highest in the country*[^5].
### Contractor Data Collection Application {#sec:collection}
With thousands of homes scheduled to have their water service lines excavated by multiple contractors, the collection and management of the data generated by this large-scale effort would prove to be a logistical challenge. While initially there was a plan in place to collect data via paper forms that would later get transferred to a spreadsheet, it was increasingly clear that digitally recording information, and storing it centrally, would be a more effective strategy and less prone to error.
![Mobile and web app, developed by the authors, to gather replacement data from contractors on-site.[]{data-label="fig:app_sl_collection"}](images/flintlines_app.png){width="2.5in"}
Our team volunteered to facilitate the data collection efforts. In the fall of 2016, we developed a web and mobile application with various access levels. The latest version of this app is a custom-built web application written in Python using the Flask web framework. The users, on-site contractors as well as DEQ and Fast Start officials, are asked to select homes and to fill in essential information about service line work accomplished at each site. This information includes the excavated pipe materials, lengths, dates, and data on the home’s residents. The output of the form appears in real-time in a live database with mapping capabilities. We adopted a tiered permissions structure with password-protected information to maintain the privacy of the data. The app continues to be used as of this writing for tracking progress for the public and for paying contractors for completed work.
### Hydrovac Digging: Inspection without Replacement {#sec:unbiased_data_and_the_hydrovac_pilot}
The foremost challenge of a large-scale service line replacement program is the uncertainty about which homes possess safe service lines and which homes have lines made of hazardous materials. As of the summer of 2016, the only concrete verified data on pipe materials across the city consisted of the 36 data points provided by Rowe engineering. By the end of Phase One, this number increased to about 250 homes. At this point, the excavation of pipes at a single home would cost anywhere from \$2,500-5,000, a prohibitively high cost for data collection. At the same time, the available replacement data consisted of *cherry-picked homes*: houses were selected for line replacement if they were presumed to have an overwhelming likelihood of lead. These addresses were highly concentrated in only three neighborhoods (see Figure \[fig:maps\_by\_time\]) and provided nothing close to a representative sample of the broader city. We therefore realized, and emphasized to members of FAST Start, that the effort required a cheaper, quicker, and more statistically sound method to gather data.
![Using a hydrovac truck for inspection, requires a large truck and crew (left) and exposes the pipe material underground (right). []{data-label="fig:hydrovac"}](images/hydrovac_process.jpg "fig:"){height="1.1in"} ![Using a hydrovac truck for inspection, requires a large truck and crew (left) and exposes the pipe material underground (right). []{data-label="fig:hydrovac"}](images/hydrovac-dirt.jpg "fig:"){height="1.1in"}
After a lengthy discussion with water infrastructure experts and contractors, a new alternative emerged: *hydrovac inspections*. A hydro-vacuum truck, or simply a hydrovac (see Figure \[fig:hydrovac\]), has two main components: a high-pressure jet of water used to loosen soil and a powerful vacuum hose that sucks the loosened material into a holding tank. The hydrovac technique allows workers to dig a small hole quickly and then inspect whatever is observed underground. It is ideal for determining service line materials, as it can dig at the location of the home’s curb box (which connects the home’s service line to the water main at the property line), and observe the pipe materials for both the public and private portions of the service line. The cost can be as low as \$250 per inspection and often does not require prior approval from residents, as the digging site is mostly confined to city property. One limitation is that the hydrovac can only dig through the soil, and not through driveway or sidewalk pavement. This limitation led to unsuccessful excavations 20%-25% of the time, according to the hydrovac engineers.
The selection of homes for hydrovac inspection was one of the primary contributions of our team to FAST Start’s efforts, and we were given wide discretion for “sampling” homes. This reflects the political and logistical challenges of service line replacement, as full excavation of service lines required a much longer process with oversight by the city council. We would emphasize that, in the following section where we describe our sequential decision protocols, our primary focus was on the model and inspection subroutines, and we assume the replacements are made using a simple greedy strategy.
Prediction & Decision Framework {#sec:overall_framework}
===============================
In this section, we formally define the sequential decision-making problem for a city, in our case the city of Flint, seeking to remove all of the lead service lines from its homes under the following conditions: (i) for almost all homes, the service line materials are unknown; (ii) there is a method of inspection to collect information; (iii) it is costly to excavate service lines that do not need to be replaced; and (iv) there is a fixed budget for replacement and inspection.
There are $N$ total homes in the city, and it is unknown which homes need new service lines. We let the unknown label for home $i$ be $y_i \in \{0,1\}$, taking on the value 1 if the home needs a replacement and 0 otherwise. Note that a home needs replacement if either the public *or* private portion of the service line is hazardous. We also have information about each home, denoted by a vector $x_i$, with $m$ features, that describe it (see Section \[sec:data\_availability\]). We want to learn the label $y_i$ given $x_i$, for each $i=1,\ldots,N$. We divide the procedure to find out these labels into two steps: first, a statistical model for prediction (Section \[sec:model\_for\_prediction\]); and second, an algorithm that decides which homes to observe next (Section \[sec:alg\_for\_selecting\]).
There is another decision rule that determines which pipes to replace next: a *greedy* algorithm, which recommends that the replacement crew go to the homes with the highest probabilities of having hazardous pipes. Given that, our inspection rule is focused on learning, and uses that learning to reduce costs.
Model for Prediction {#sec:model_for_prediction}
------------------------------------------------
In this section, we describe our model for prediction, which assigns to each service line a probability of containing hazardous materials. It is a novel combination of predictive modeling using machine learning and Bayesian data analysis. First, a machine learning prediction model gives a prediction for the public and private portion of each home’s service line using known features. These predictions then become the parameters of prior distributions in a hierarchical Bayesian model designed to correct some of the limitations of the machine learning model.
### Machine Learning Layer
The machine learning layer outputs a probability of having a hazardous service line material for each home for which the material is unknown. Specifically, this layer gives a prediction, $\hat y_{i,k} = f_\theta(X_{i,k})$, the probability that service line portion $k$ for home $i$ is hazardous, where $X_{i,k}$ is a vector of features, described in Section \[sec:precrisisdata\]. After examining several models empirically (see Section \[sec:empirical\_prediction\]) we chose the machine learning layer, $f_{\theta}()$, to be XGBoost, a boosted ensemble of classification trees [@chen2016xgboost].
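As a rough illustration of this layer, the sketch below trains a boosted-tree classifier on synthetic home features and outputs per-home hazard probabilities $\hat y_{i,k}$. Scikit-learn’s `GradientBoostingClassifier` stands in for `XGBoost`, and all feature names and data are hypothetical, not the Flint dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical feature matrix X[i] = (year built, assessed value);
# labels y[i] = 1 if portion k of home i's service line is hazardous.
n = 500
X = np.column_stack([
    rng.integers(1900, 2000, n),       # year built
    rng.uniform(10_000, 120_000, n),   # assessed value ($)
])
# Synthetic ground truth: older, cheaper homes are more likely hazardous.
logit = 0.08 * (1950 - X[:, 0]) + 2e-5 * (40_000 - X[:, 1])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# f_theta: a boosted ensemble of classification trees (XGBoost stand-in).
f_theta = GradientBoostingClassifier(n_estimators=100, max_depth=3,
                                     random_state=0)
f_theta.fit(X, y)

# \hat y_{i,k}: predicted probability that the line portion is hazardous.
y_hat = f_theta.predict_proba(X)[:, 1]
```

In the real pipeline these probabilities are not used directly; they feed the Bayesian layer described next.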
### Hierarchical Bayesian Spatial Model Layer {#sec:HBayesModel}
One limitation of classification algorithms is how they handle unobserved variables, which may be correlated with the outcome. We address this limitation with a hierarchical Bayesian spatial model. This accounts for unobserved heterogeneity related to geographic location and similarity of homes, as is done in hierarchical spatial models with conditional autoregressive structure [@gelman2014bayesian; @gelfand2003proper; @lee2011comparison; @lee2013carbayes]. Empirically, each geographic region across the city (e.g., voting precincts) has a different number of observed service lines. While a city-level (pooled) model ignores precinct differences and a separate (unpooled) model for each precinct is limited by small sample sizes or even no observations, our full hierarchical (partially pooled) model strikes a balance with shrinkage. Precincts with little information will have their parameters pulled towards the city-wide distribution. Details of the Bayesian model, and how these are combined with the machine learning layer, are explained further in the full version of the paper.
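The full hierarchical model with conditional autoregressive structure is beyond a short sketch, but the shrinkage behavior described above can be illustrated with a simple empirical-Bayes Beta-Binomial estimate per precinct (all counts below are hypothetical):

```python
import numpy as np

# Hypothetical per-precinct inspection counts: n[i] homes observed,
# k[i] found hazardous; the last precinct has no observations at all.
n = np.array([120, 40, 5, 0])
k = np.array([90, 10, 4, 0])

# City-wide Beta(a, b) prior matched to the pooled hazard rate;
# `strength` plays the role of prior pseudo-observations (a tuning choice).
pooled_rate = k.sum() / n.sum()
strength = 20.0
a, b = strength * pooled_rate, strength * (1 - pooled_rate)

# Unpooled (per-precinct) rates vs. partially pooled posterior means.
unpooled = np.divide(k, n, out=np.full(len(n), np.nan), where=n > 0)
shrunk = (k + a) / (n + a + b)

# Precincts with little data are pulled strongly toward the city-wide
# rate, and the precinct with no data falls back to the prior mean.
```

The pull toward the city-wide rate is strongest exactly where the data are thinnest, which is the partial-pooling balance the paragraph above describes.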
Selecting Homes for Inspection {#sec:alg_for_selecting}
--------------------------------------------------------
Now we describe our decision rule for inspection, which utilizes active learning [@balcan2013statistical; @balcan2010true; @liu2008active] to efficiently allocate scarce resources to find and replace hazardous service lines. In general, a decision-maker may choose any active learning algorithm for inspection. In this work, we implement a version of Importance Weighted Active Learning (IWAL).
Notation Explanation
--------------------------------------------- --------------------------------------------------------
$\mathcal{X}$ observable feature space for each parcel/home
$x_i, y_i$ observable features for home $i$, label for home $i$
${\mathbf{h}}_t/{\mathbf{r}}_t$ indicates “home $i$ inspected/replaced at $t$?”
$y_t^{\mathbf{h}}/ y_t^{\mathbf{r}}$         indicates “learned $i$’s label via inspect./replace?”
$Q_{it}$ indicates “learned $i$’s label at $t$?”
$q_{it}$ indicates “already know $i$’s label at $t$?”
$c^h, c^{{\mathbf{r}}+}, c^{{\mathbf{r}}-}$ cost of inspect., successful SLR, & failed SLR
$U_t, L_t$ set of labeled/unlabeled data at $t$
: Summary of notation []{data-label="tbl:notation"}
### Active Learning Setup: Inspection and Replacement
We begin by describing the problem of efficiently locating and replacing hazardous pipes in a pool-based active learning framework (see Algorithm \[alg:al\_mab\]). Consider a budget of $B$ total queries and a pool ${\mathcal{P}}= \left \{ x_1,\ldots,x_n \right \}$ of unlabeled homes. Then at each time period $t$ the algorithm will produce a probability vector $\phi_t = (\phi_{1,t},\ldots,\phi_{n,t})$ that gives the probability that any home $i$ is chosen at $t$.
Contractors can determine the material of a service line via either hydrovac inspection or service line replacement. When home $i$ is chosen for hydrovac inspection at time $t$, we denote ${\mathbf{h}}_t = i$. When the service line for home $i$ is replaced at time $t$, we denote ${\mathbf{r}}_t = i$. Once inspected or replaced, $y_i$ is known for all subsequent rounds $t, t+1, \ldots $ and $p_{i,k}$ becomes 1 or 0, and we define $q_{i,t}=1$ if home $i$ has been observed through round $t$. $n_t^{{\mathbf{h}}}$ and $n_t^{{\mathbf{r}}}$ are the number of hydrovac and replacement visits, respectively. The number of successful replacements is denoted as $n_t^{{\mathbf{r}}+}$ (true positives) and the number of unnecessary replacements as $n_t^{{\mathbf{r}}-}$ (false positives).
We initially set $U_0 = {\mathcal{P}}$, and let $U_t = \left\{ x_i | q_{i,t} = 0 \right\}$ be the set of homes whose service line material is unknown at time $t$, and $L_t$ be the set of homes with known service line materials. Finally, the budget also allows for a fixed number of inspections $d$ for each period. The problem is how to select these $d$ homes with unknown labels at each period $t$ to maximize information gained.
### Simple Active Learning Heuristics: Uniform and Greedy
We first propose several benchmark strategies for selecting homes for inspection. This family of algorithms randomly alternates between *random exploration* of the unobserved data and *greedy inspection* of the highest-predicted hazardous homes. As we see in Table \[tbl:decision\_rules\], these decision rules differ in the costs they incur.
- **HVI uniform** *(egreedy(1.0))*: Select homes uniformly at random from the pool of those with unknown service lines.
- **HVI greedy** *(egreedy(0.0))*: Select the homes most likely to have hazardous service lines, based on current model estimates.
- **HVI $\varepsilon$-greedy** *(egreedy($\varepsilon$))*: For a $1-\varepsilon$ fraction of the inspections, select *greedily*, that is select homes for HVI based on the highest predicted likelihood of danger. For the remaining $\varepsilon$ fraction, select homes uniformly at random for HVI. We experiment with values $\varepsilon = \{0.1,0.3,0.5\}$. Also, we note that **HVI uniform** and **HVI greedy** are special cases, with $\varepsilon$ set to $1.0$ and $0.0$, respectively.
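The $\varepsilon$-greedy family above can be sketched as follows; the function name and the data are illustrative, not from the deployed system:

```python
import numpy as np

def egreedy_select(p_hat, unknown_idx, d, eps, rng):
    """Choose d homes from the unknown pool for hydrovac inspection:
    an eps fraction uniformly at random (explore), the rest by highest
    predicted hazard probability (exploit)."""
    unknown_idx = np.asarray(unknown_idx)
    n_explore = int(round(eps * d))
    explore = rng.choice(unknown_idx, size=n_explore, replace=False)
    remaining = np.setdiff1d(unknown_idx, explore)
    order = np.argsort(p_hat[remaining])[::-1]   # descending hazard
    greedy = remaining[order[: d - n_explore]]
    return np.concatenate([explore, greedy])

rng = np.random.default_rng(0)
p_hat = rng.uniform(size=100)                    # model's current predictions
chosen = egreedy_select(p_hat, np.arange(100), d=10, eps=0.3, rng=rng)
```

Setting `eps` to 1.0 or 0.0 recovers **HVI uniform** and **HVI greedy**, respectively.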
### Importance Weighted Active Learning
We propose an algorithm that takes in the current beliefs about whether each home has hazardous pipe material, and outputs a decision of which homes should be inspected next period. This proposal is a variant of the Importance Weighted Active Learning (IWAL) algorithm [@beygelzimer2009importance]. The key idea behind IWAL is to sample unlabelled data from a *biased* distribution, with more weight placed on examples with greater uncertainty, and then, after obtaining the desired labels, to incorporate the new data in the next iteration of model training. Our implementation of this approach forms the core of Algorithm \[alg:al\_mab\]. A full explanation of our <span style="font-variant:small-caps;">IWAL</span> implementation will be available in the full version of the paper.
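IWAL proper derives its query probabilities from disagreement within a hypothesis class [@beygelzimer2009importance]; the sketch below is a simplified stand-in that captures only the two ingredients mentioned above: sampling biased toward uncertain homes, and importance weights $1/q_i$ for the subsequent refit. The function name and the uncertainty heuristic are illustrative assumptions:

```python
import numpy as np

def iwal_style_sample(p_hat, d, p_min, rng):
    """Sample homes with probability increasing in model uncertainty.

    Returns chosen indices and importance weights 1/q used to reweight
    the newly labeled examples when the model is next retrained."""
    uncertainty = 1.0 - 2.0 * np.abs(p_hat - 0.5)   # peaks at p_hat = 0.5
    q = p_min + (1.0 - p_min) * uncertainty          # floor avoids zero prob.
    q = np.clip(q / q.sum() * d, 0.0, 1.0)           # ~d draws expected
    chosen = np.flatnonzero(rng.uniform(size=q.size) < q)
    return chosen, 1.0 / q[chosen]

rng = np.random.default_rng(1)
p_hat = rng.uniform(size=1000)
chosen, weights = iwal_style_sample(p_hat, d=50, p_min=0.1, rng=rng)
```

The probability floor `p_min` keeps every home queryable, so the importance weights stay bounded and the reweighted training objective remains an unbiased estimate.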
### Analyzing Costs
There are two categories of costs incurred in Algorithm \[alg:al\_mab\]: hydrovac inspections and replacement visits. Hydrovac inspections always cost the same amount and are denoted $c^{\mathbf{h}}$. Service line replacement costs, however, depend on what is actually in the ground. If contractors excavate a service line that does not need to be replaced, we still incur a cost $c^{{\mathbf{r}}-}$ for labor and equipment, even though no replacement occurred. On the other hand, if contractors uncover a line that needs to be replaced then the direct cost of replacement is $c^{{\mathbf{r}}+}$.
But the *effective cost per successful replacement* is greater than its direct cost, and we formally define it as $\text{TotalCosts} / n^{{\mathbf{r}}+}$, where $$\text{TotalCosts} = c^{{\mathbf{h}}} n^{{\mathbf{h}}} + c^{{\mathbf{r}}+} n^{{\mathbf{r}}+} + c^{{\mathbf{r}}-} n^{{\mathbf{r}}-}$$ (See Algorithm \[alg:al\_mab\]). In Flint, hydrovac inspection costs are summarized in Table \[tbl:decision\_rules\]. We note that the effective cost of a successful replacement is driven by two factors: the model accuracy ($\text{HitRate}^{{\mathbf{r}}}$) and the ratio of their costs, $c^{{\mathbf{r}}-}/c^{{\mathbf{h}}}$. Since unnecessary replacement visits can be avoided by prior inspection with a hydrovac, these two metrics, which naturally vary by city, will be critical guides to applying this approach to other cities.
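With the cost figures used in Flint ($c^{\mathbf{h}}=\$250$, $c^{{\mathbf{r}}-}=\$2{,}500$, $c^{{\mathbf{r}}+}=\$5{,}000$), the effective-cost calculation can be sketched as follows; the scenario counts are made up for illustration:

```python
def effective_cost(n_h, n_r_pos, n_r_neg,
                   c_h=250, c_r_pos=5000, c_r_neg=2500):
    """Effective cost per successful replacement: TotalCosts / n^{r+},
    with TotalCosts = c^h n^h + c^{r+} n^{r+} + c^{r-} n^{r-}."""
    total = c_h * n_h + c_r_pos * n_r_pos + c_r_neg * n_r_neg
    return total / n_r_pos

# Hypothetical round: 4 hydrovac inspections, then 10 replacement visits
# of which 9 uncover hazardous lines (90% SLR hit rate) ...
cost_90 = effective_cost(n_h=4, n_r_pos=9, n_r_neg=1)
# ... versus perfect targeting (100% hit rate, no wasted excavations).
cost_100 = effective_cost(n_h=4, n_r_pos=10, n_r_neg=0)
```

Even a modest miss rate pushes the effective cost well above the \$5,000 direct cost, which is why the hit rate and the ratio $c^{{\mathbf{r}}-}/c^{{\mathbf{h}}}$ drive the economics.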
An Empirical Analysis in Flint {#sec:empirical_flint}
==============================
In our empirical analysis, we use the confirmed service line materials from the 6,505 homes identified and replaced by Flint FAST Start as of September 30, 2017, collected via our data collection app. This data is combined with our supplementary datasets describing homes (Section \[sec:data\_availability\]). We train a suite of classification models to predict the presence of hazardous service line materials for a given home, and the predictive power of each model is measured on hold-out sets of homes (Section \[sec:empirical\_prediction\]). After selecting a strong empirical model, we utilize the model predictions in our decision-making algorithms, which recommend the homes that will be most informative for inspection, as well as those most likely to contain hazardous service line materials for replacement (Section \[sec:empirical\_optimization\]).
We emphasize that our methods and models were utilized by FAST Start officials for the management of the hydrovac process, and during the early days of the efforts we were given discretion over which homes would receive inspections. We used this freedom to select statistically representative samples, as well as targeted inspections on homes of interest. In practice, our modeling efforts had less impact on the choice of replacement homes, as these decisions carried greater political and logistical challenges.
Classification Algorithm Performance {#sec:empirical_prediction}
------------------------------------
Selecting a robust, precise, unbiased, and properly calibrated classification algorithm is key for our proposed active learning framework. Ultimately, the selected decision-making algorithm requires both accurate and well-calibrated probability estimates when selecting the next round of homes to investigate. To select such a classification model, we employ several machine learning models and compare them across various performance metrics. These metrics include the Area Under Receiver Operating Characteristic curve (AUROC), learning curves, and confusion matrices (including accuracy and precision). Using these scores, we find that tree-based methods are the most successful and robust category of models for this data. In particular, the gradient boosted trees model implemented in the package `XGBoost` exhibits the strongest performance with the fewest data points.
### ROC and Learning Curves
The overall accuracy of the best performing XGBoost model, based on a holdout set of 1,606 homes (25% of available data), is 91.6%, with a false-positive rate of 3% and a false-negative rate of 27%. The homes falling in the top 81% of predicted probabilities are classified as having hazardous service lines. The ROC curves and AUROC scores show XGBoost’s superior performance, with an AUROC score of 0.939 on average, in a range of \[0.925, 0.951\] (Figures \[fig:roc\_curve\_one\_run\] and \[fig:roc\_scores\_classifiers\]). While the ROC curves show a single run of each model, the AUROC scores are shown as distributions over 100 bootstrapped samples obtained using a stratified cross-validation strategy with 75%/25% of the data randomly selected for training/validation. We further examine AUROC scores using learning curves (Figure \[fig:learn\_curve\]), which use random subsets of the data to illustrate the diminishing returns of additional data on model performance. We also introduce *temporal learning curves*. These temporal learning curves reflect the exact order of data collection in 2016-17, and they show the AUROC as we re-estimate the model every two-week period to predict the danger for all remaining not-yet-visited homes. We finally ensure that the model’s predicted probabilities, which we use to quantify our prediction uncertainty, are indeed well-calibrated probabilities. [^6]
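The stratified-bootstrap evaluation protocol above can be sketched as follows; synthetic data and a plain logistic regression stand in for the Flint dataset and the classifiers compared in the figures:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 3))                         # stand-in features
y = (X @ np.array([1.5, -1.0, 0.5]) + rng.normal(size=800) > 0).astype(int)

# 100 stratified 75%/25% train/validation resamples, AUROC on each.
splitter = StratifiedShuffleSplit(n_splits=100, test_size=0.25,
                                  random_state=0)
scores = np.array([
    roc_auc_score(y[val],
                  LogisticRegression(max_iter=1000)
                  .fit(X[tr], y[tr])
                  .predict_proba(X[val])[:, 1])
    for tr, val in splitter.split(X, y)
])
# The distribution of `scores` corresponds to one violin per classifier
# in the AUROC-comparison figure.
```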
![ROC curves measuring predictions of XGBoost, RandomForest, and lasso logistic regression on a random holdout set of all available data.[]{data-label="fig:roc_curve_one_run"}](images/roc_curves_classifiers_one_run.png){width="0.7\linewidth"}
![Empirical distributions of AUROC scores of classifiers over several runs on random holdout sets. Both XGBoost and RandomForest show marked performance improvement over lasso logistic regression, and XGBoost gives marginal improvement on RandomForest.[]{data-label="fig:roc_scores_classifiers"}](images/auroc_dist_compare.png){width="0.7\linewidth"}
![Temporal Learning curves for classification of hazardous service line materials. XGBoost consistently outperforms the other classifiers, especially at the beginning of the timeline when there is less data available.[]{data-label="fig:learn_curve"}](images/kdd_temporal_learning_curves.png){width="0.8\linewidth"}
### Risk factors
Now that we have a robust predictive model, we can look at which features of a home and its surrounding neighborhood are the most predictive in identifying homes with hazardous service lines. We are cautious, however, not to make any causal claims from this analysis. We obtain the feature importance values[^7] produced by each model by training with 20 bootstrapped samples of the data and report the average feature importance values. The most informative home features relate to its *age*, *value*, and *location*, suggesting that the context (place and time) in which the home was built is, as expected, strongly correlated with service line material. For instance, homes built during and before World War II and those that are lower in value are more likely to contain lead in their public service line. Two additional features were the *city records* and the *DEQ private SL inspection reports*. Each was shown to be a noisy but useful predictor, as indicated earlier in Table \[tbl:sl-records-vs-truth-slr-hvi\].
Decision Rule Evaluation {#sec:empirical_optimization}
------------
We now discuss our implementation of the framework applied to the particular case of Flint’s large-scale pipe replacement program. With over \$100M in investment, Flint is a perfect testbed to compare the performance of our proposed methods (developed in Section \[sec:alg\_for\_selecting\]) with the actual empirical performance of the work of FAST Start thus far. Our goal is to show a high potential for savings by minimizing the number of unnecessary replacement visits, thus replacing more hazardous lines under the same budget.
### Experimental Testbed and Potential Biases
Any experimental framework needs a quality dataset, with known labels for a large sample on which we can evaluate our procedure. Fortunately, in the City of Flint, where contractors have been working for over 18 months, we have a total of [6,506]{} observations of service line materials. A natural choice for an experimental environment, which we call [<span style="font-variant:small-caps;">ActualFlint</span>]{}, is to use the set of observed homes in Flint as a template for the overall city, i.e. a municipality with precisely [6,506]{} homes whose service line material we can query as needed.
A major challenge of relying solely on observed data is that the actual home selection process was biased, in both the hydrovac inspections and the line replacements. While a certain fraction of the home selection was random, it was often somewhat arbitrary due to political and logistical constraints. For instance, many of the homes selected for service line replacement were chosen to maximize lead discovery. To assess the effect of sample bias, we developed an experimental environment, [<span style="font-variant:small-caps;">SimulatedFlint</span>]{}, in which we suppose Flint contains only those properties *not in* the observed dataset. For this dataset, labels are assigned based on the labeled hold-out data. Using the observed data as training, we fit a K-Nearest-Neighbors (KNN) classifier to estimate a probability for each unknown home, and then sampled a Bernoulli random variable – “safe”/“unsafe” – to assign labels. This randomized dataset alleviates potential selection-bias concerns. In the reported results below, we focus on [<span style="font-variant:small-caps;">ActualFlint</span>]{}, but we note that results from [<span style="font-variant:small-caps;">SimulatedFlint</span>]{}were nearly equivalent.
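The label-simulation step can be sketched on synthetic stand-in data as follows; the real version used the observed Flint homes and their features:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins: labeled (observed) homes and unlabeled homes.
X_obs = rng.normal(size=(300, 2))            # features of observed homes
y_obs = (X_obs[:, 0] + rng.normal(scale=0.5, size=300) > 0).astype(int)
X_unobs = rng.normal(size=(200, 2))          # the rest of the city

# KNN estimates a hazard probability for each unknown home ...
knn = KNeighborsClassifier(n_neighbors=15).fit(X_obs, y_obs)
p_unobs = knn.predict_proba(X_unobs)[:, 1]

# ... then a Bernoulli draw assigns each a "safe"/"unsafe" label.
y_sim = (rng.uniform(size=len(p_unobs)) < p_unobs).astype(int)
```

Because the labels are drawn at random from smoothed local estimates rather than chosen by contractors, the simulated city avoids the cherry-picking bias of the observed sample.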
### Backtesting Simulation on [<span style="font-variant:small-caps;">ActualFlint</span>]{}
We quantify the cost savings from implementing our algorithm by comparing the sequential selection of homes from the proposed decision rules to what the Flint FAST Start initiative actually did in 2016-17. The goal is to stretch the allocated funds to remove hazardous pipes from as many homes as possible. One source of inefficiency in spending is unnecessary service line replacement (SLR) visits (the false-positive error rate). Therefore, our key performance metric is the SLR hit rate, i.e. the percentage of homes visited for replacement that required replacement.
*The proposed approach greatly improves the hit rate.* Our key finding from the simulation shows that we predict a reduced rate of costly unnecessary replacement visits from 18.8% (actual) to 2.0% (proposed). Figure \[fig:backtesting-100-even-ffs-vs-iwal\] illustrates the direct comparison of hit rates for our proposed approach, <span style="font-variant:small-caps;">IWAL</span>(0.7), based on our [<span style="font-variant:small-caps;">ActualFlint</span>]{}simulation, compared to Flint FAST Start.
*Second, the cost savings are substantial.* The proposed algorithm, with a higher hit rate, increases the number of homes that receive service line replacements for the same number of visits. This, in turn, reduces the *effective cost* of a successful service line replacement. The effective cost includes both the direct costs of a successful replacement visit and the average costs incurred by exploring homes via hydrovac inspections or unnecessary replacement visits. Having access to the exact same set of 6,505 homes actually observed, we find that the algorithm on average saves an additional 10.7% in funds per successful replacement (see Table \[tbl:costs\]). Across 18,000 total planned service line replacements, this would extend to an expected savings of about \$11M out of current spending. In terms of the overall removal of lead pipes, this is approximately equivalent to 2,100 additional homes in the city that would receive safe water lines. These estimates are made using the current costs in Flint, where hydrovac inspection costs $c^h=\$250$, unnecessary replacement costs $c^{{\mathbf{r}}-}=\$2,500$, and successful replacement costs $c^{{\mathbf{r}}+}=\$5,000$.
![ Tracking hit rates over time, the proposed IWAL algorithm (blue; mean = 98.0%) outperform actual (green; mean = 81.2%; thick line is smoothed plot) []{data-label="fig:backtesting-100-even-ffs-vs-iwal"}](images/backtesting-100-even-hr-bytime-ffs-vs-iwal-slr.png){width=".7\linewidth"}
*The proposed approach outperforms a competitive set of natural benchmark strategies.* Instead of only comparing our proposed method to what actually occurred, we also consider a range of alternative methods. In particular, greedy (egreedy with 0% exploration) achieves the highest rate of hazardous homes among those inspected (HVI hit rate 91%), and uniform (egreedy with 100% exploration) the lowest (63%). But IWAL does better with a more principled approach, selecting homes that are likely to be most informative, with risk probabilities near 70%. Figure \[fig:backtesting-100-even-hr-bytime-greedy-iwal-unif\] shows how IWAL and two greedy heuristics differ. A higher HVI hit rate is not necessarily better; instead, it is the choice of which homes to explore with inspection that matters. The uncertainty in performance of each algorithm comes from sampling variation across 25 independent simulated experiments. We prefer IWAL to the alternatives because it yields greater savings and is less sensitive to tuning parameters.
![HVI Hit Rates. egreedy(0) tends to over-inspect whereas egreedy(1) is too conservative. IWAL more effectively optimizes HVI hit rate.[]{data-label="fig:backtesting-100-even-hr-bytime-greedy-iwal-unif"}](images/backtesting-100-even-hr-bytime-greedy-iwal-unif-hvi.png){width=".7\linewidth"}
*We acknowledge some assumptions in our simulations.* First, we only consider the cost of each job and not the time required for crews to move between homes, where there may be logistical issues with redirecting teams around the city. Second, in this analysis we have treated [<span style="font-variant:small-caps;">ActualFlint</span>]{}as having only 6,506 homes, all of which are visited. This creates an arbitrary finite end point, as the algorithm runs out of homes with unsafe service lines. To avoid this effect, the above calculations, figures, and tables are based on the first 4,500 replacement visits and 2,250 hydrovac inspections. Of course, to validate this, we would need access to a larger set, and thus we turn to our larger simulation using the full size of Flint. Finally, the results are robust to the resource allocation schedule and batch size. We recognize that we used a schedule of SLR and HVI activities different from Flint FAST Start’s. To disentangle the confound between our choice of algorithms and the schedule, we ran an additional version of the [<span style="font-variant:small-caps;">ActualFlint</span>]{}backtest, with the schedule aligned as closely as possible with Flint FAST Start in 2016-17. Across the alternative scenarios tested, the results differed only slightly.
### Results from [<span style="font-variant:small-caps;">SimulatedFlint</span>]{}
In our second simulation, we demonstrate the potential value of deploying the algorithm at scale and characterize the long-term performance of the algorithms. Via [<span style="font-variant:small-caps;">SimulatedFlint</span>]{}we find that the proposed algorithms, with the aim of replacing hazardous lines from 18,000 homes out of a simulated city of 48,000 homes, can achieve 11.8% savings relative to the current rate of spending. The best algorithm using IWAL yields an average effective cost of \$5,133 per successful replacement, better than \$5,818 observed in Flint (Table \[tbl:costs\]). As a final note, the proposed algorithms’ SLR hit rates are all above 98.0%.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank the FAST Start team for their phenomenal work and openness to collaboration. This includes Brigadier General (Ret.) Michael McDaniel, Ryan Doyle, Major Nicholas Anderson, and Kyle Baisden. Professors Lutgarde Raskin and Terese Olson, environmental engineering faculty at U-M, provided invaluable scientific support throughout. We are incredibly grateful to the work of, and communication with, Professor Martin Kaufman and Troy Rosencrantz at U-M Flint’s GIS Center. We would like to thank Captricity, especially their machine learning team, Michael Zamora, David Shewfelt, and Kayla Pak for making the data accessible, and Kuang Chen for the generous support. We had major support from Mark Allison and his team of U-M Flint students. Rebecca Pettengill was enormously generous with her time and ability to help in the Flint community. We thank U-M Professors Marc Zimmerman and Rebecca Cunningham for their encouraging and helpful discussions. Among the many students involved in this work, we would like to recognize the roles of Jonathan Stroud and Chengyu Dai. And this work would not have happened without the expertise and enthusiasm of the students in the Michigan Data Science Team (MDST, <http://midas.umich.edu/mdst/>, [@farahi2018mdst]). The authors appreciate the many seminar and conference participants at U-M and elsewhere for their feedback on the academic work. The authors gratefully acknowledge the financial support of the Michigan Institute for Data Science (MIDAS), U-M’s Ross School of Business, Google.org, and National Science Foundation CAREER grant IIS 1453304.
[^1]: Prior work by the authors involved estimation of water lead contamination [@abernethy2016flint].
[^2]: For further analysis of blood lead levels, see [@potash2015predictive]
[^3]: http://www.npr.org/2016/02/01/465150617/flint-begins-the-long-process-of-fixing-its-water-problem
[^4]: We would like to thank Captricity, especially their machine learning team, Michael Zamora, David Shewfelt, and Kayla Pak for making the data accessible.
[^5]: <https://www.reuters.com/article/us-flint-vacancies-idUSKCN0VK08L>
[^6]: While not shown here, we also considered ExtraTrees, AdaBoost (with decision tree classifier), and Ridge Regression (regularized with L2 loss), but performance was lower than the three presented. Full details on hyperparameter optimization will be available in the full version.
[^7]: We calculate feature importance by weight, which is the normalized frequency with which a feature appears in a tree amongst the ensemble.
---
author:
- |
G. Ghirlanda [^1], O. S. Salafia, A. Pescalli, G. Ghisellini, R. Salvaterra, E. Chassande–Mottin,\
M. Colpi, F. Nappo, P. D’Avanzo, A. Melandri, M. G. Bernardini, M. Branchesi\
S. Campana, R. Ciolfi, S. Covino, D. Götz, S. D. Vergani, M. Zennaro, G. Tagliaferri
bibliography:
- 'journals.bib'
- 'ghirlanda.bib'
title: Short GRBs at the dawn of the gravitational wave era
---
Introduction
============
The population of short Gamma Ray Bursts (SGRBs) is still poorly understood due to the relatively few events with measured redshift. Available information is rather sparse, but the low density of the close circumburst medium [@2013ApJ...776...18F; @Fong:2015fp], the variety of galaxy morphologies [e.g. @2015JHEAp...7...73D], the lack of any associated supernova in the nearby SGRBs and the possible recent detection of a “kilonova” [@1989Natur.340..126E; @1998ApJ...507L..59L; @Yang:2015lr; @Yang:2015hl; @Jin:2016rr; @Jin:2015cr] signature [@2013ApJ...774L..23B; @2013Natur.500..547T] all hint at an origin from the merger of two compact objects (e.g. double neutron stars) rather than from a single massive star collapse.
However, the prompt $\gamma$–ray emission properties of SGRBs, the sustained long lasting X–ray emission (although not ubiquitous in short GRBs; @Sakamoto:2009lr) and the flaring activity suggest that the central engine and radiation mechanisms are similar to those of long GRBs. Although still based only on a couple of breaks in optical light curves, it seems that SGRBs also have a jet: current measures of $\theta_{\rm jet}$ lie between 3$^\circ$ and 15$^\circ$, while lower limits seem to suggest a wider distribution. Recently, it has been argued that the customary dividing line at $T_{90}=2\,\rm{s}$ between short and long GRBs provides a correct classification for *Fermi* and *CGRO* GRBs, but it is somewhat long for *Swift* bursts [@2013ApJ...764..179B].
A renewed interest in the population of SGRBs is following the recent opening of the gravitational wave (GW) “window” by the LIGO–Virgo discovery of GW150914 [@Abbott:2016lr] and by the most recent announcement of another event, GW151226, detected within the first data acquisition run [@2016arXiv160604856T; @Abbott:2016lr]. Although no electromagnetic (EM) counterpart was identified within the large localisation regions of these events, there are encouraging prospects for forthcoming GW discoveries to have an EM–GW association, thanks to the aLIGO–Virgo synergy and worldwide efforts for ground and space based follow up observations.
If the progenitors are compact object binary (NS–NS or NS–BH - @Giacomazzo:2013fk) mergers, SGRBs are one of the most promising electromagnetic counterparts of GW events detectable by the advanced interferometers. Other EM counterparts are expected in the optical [@Metzger:2012fj], X-ray [@Siegel:2016qy; @Siegel:2016lq] and radio bands [@Hotokezaka:2016uq]. The rate of association of GW events with SGRBs is mainly determined by the rate of SGRBs within the relatively small horizon set by the sensitivity of the updated interferometers aLIGO and Advanced Virgo [@2016LRR....19....1A]. However, current estimates of local SGRB rates range from 0.1–0.6 Gpc$^{-3}$ yr$^{-1}$ (e.g. Guetta & Piran 2005; 2006) to 1–10 Gpc$^{-3}$ yr$^{-1}$, up to even larger values of 40–240 Gpc$^{-3}$ yr$^{-1}$[^2].
Such rate estimates mainly depend on the luminosity function $\phi(L)$ and redshift distribution $\Psi(z)$ of SGRBs. These functions are usually derived by fitting the peak flux distribution of SGRBs detected by BATSE. Due to the degeneracy in the parameter space (when both $\phi(L)$ and $\Psi(z)$ are parametric functions), the redshift distribution was compared with that of the few SGRBs with measured $z$. The luminosity function has been typically modelled as a single or broken power law, and in most cases it was found to be similar to that of long GRBs or even steeper [$L^{-2}$ and $L^{-3}$ - @2015MNRAS.448.3026W WP15 hereafter]. Aside from the mainstream, [@2015MNRAS.451..126S] modelled all the distributions with lognormal functions.
The redshift distribution (the number of SGRBs per comoving unit volume and time at redshift $z$) has been always assumed to follow the cosmic star formation rate with a delay which is due to the time necessary for the progenitor binary system to merge. With this assumption, various authors derived the delay time $\tau$ distribution, which could be a single power law $P(\tau)\propto\tau^{-\delta}$ (e.g. with $\delta=1-2$, , ; D14; WP15) with a minimum delay time $\tau_{\rm min}=10-20$ Myr, or a peaked (lognormal) distribution with a considerably large delay (e.g. 2–4 Gyr, @2005AAS...20715803N [[email protected]]; WP15). Alternatively, the population could be described by a combination of prompt mergers (small delays) and large delays [@2011ApJ...727..109V] or to the combination of two progenitor channels, i.e. binaries formed in the field or dynamically within globular clusters [e.g. @2008MNRAS.388L...6S].
Many past works, until the most recent, feature a common approach: parametric forms are assumed for the compact binary merger delay time distribution and for the SGRB luminosity function; free parameters of such functions are then constrained through (1) the small sample of SGRBs with measured redshifts and luminosities and (2) the distribution of the $\gamma$–ray peak fluxes of SGRBs detected by past and/or present GRB detectors. A number of other observer frame properties, though, are available: fluence distribution, duration distribution, observer frame peak energy. The latter have been considered in [@2015MNRAS.451..126S] which, however, lacks a comparison with rest–frame properties of SGRBs as done in this article. Another issue was the comparison of the model predictions with small and incomplete samples of SGRBs with measured $z$. Indeed, only recently D14 worked with a flux–limited complete sample of SGRBs detected by *Swift*.
The aim of this paper is to determine the redshift distribution and the luminosity function of the population of SGRBs, using all the available observational constraints of the large population of bursts detected by the *Fermi* Gamma Burst Monitor (GBM) instrument. These constraints are: (1) the peak flux, (2) the fluence, (3) the observer frame duration and (4) the observer frame peak energy distributions. In addition we also consider as constraints (5) the redshift distribution, (6) the isotropic energy and (7) the isotropic luminosity of a complete sample of SGRBs detected by *Swift* (D14). This is the first work aimed at deriving $\phi(L)$ and $\Psi(z)$ of SGRBs which considers constraints 2–4 and 6–7. Moreover, we do not assume any delay time distribution for SGRBs but derive directly, for the first time, their redshift distribution by assuming a parametric form.
In §2 we describe our sample of SGRBs without measured redshifts detected by *Fermi*/GBM, which provides observer–frame constraints 1–4, and the (smaller) complete sample of SGRBs of D14, which provides rest–frame constraints 5–7. One of the main results of this paper is that the $\phi(L)$ of SGRBs is flatter than claimed before in the literature: by extending standard analytic tools present in the literature, we show (§3) that a steep $\phi(L)$ is excluded when all the available constraints (1–7) are considered. We then employ a Monte Carlo code (§4) to derive the parameters describing the $\phi(L)$ and $\Psi(z)$ of SGRBs. In §5 and §6 the results on the $\phi(L)$ and $\Psi(z)$ of SGRBs are presented and discussed, respectively, and in §7 we compute the local rate of SGRBs, discussing our results in the context of the dawning GW era. We assume standard flat $\Lambda$CDM cosmology with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\rm{m}} = 0.3$ throughout the paper.
Sample selection {#sec:sample_selection}
================
As stated in the preceding section, the luminosity function and redshift distribution of SGRBs have been derived by many authors, by taking into account the following two constraints:
1. the peak flux distribution of large samples of SGRBs detected by *CGRO*/BATSE or *Fermi*/GBM;
2. the redshift distribution of the SGRBs with measured $z$.
However, a considerable amount of additional information on the prompt $\gamma$–ray emission of SGRBs can be extracted from the BATSE and GBM samples. In particular, we can learn more about these sources by considering the distributions of:
1. the peak energy of the observed $\nu F_{\nu}$ spectrum;
2. the fluence $F$;
3. the duration $T_{90}$.
Moreover, for the handful of events with known redshift $z$, we also have access to the[^3]
1. isotropic luminosity ;
2. isotropic energy .
Observer-frame constraints: *Fermi*/GBM sample
----------------------------------------------
For the distributions of the observer frame prompt emission properties (constraints 1, 3, 4, 5) we consider the sample of 1767 GRBs detected by *Fermi*/GBM (from 080714 to 160118) as reported in the on–line spectral catalogue[^4]. It contains most of the GRBs published in the second (first 4 years) spectral catalogue of *Fermi*/GBM bursts [@2014ApJS..211...12G], plus events detected by the satellite in the last two years. 295 bursts in the sample are SGRBs (i.e. with $T_{90}\le$ 2 s). According to [@2013ApJ...764..179B], for both the *Fermi* and *CGRO* GRB populations, this duration threshold should limit the contamination from collapsar-GRBs to less than 10% (see also WP15).
We select only bursts with a peak flux (computed on the 64 ms timescale in the 10-1000 keV energy range) larger than 5 ph cm$^{-2}$ s$^{-1}$ in order to work with a well defined sample, less affected by the possible incompleteness close to the minimum detector flux. With this selection, our sample reduces to 211 SGRBs, detected by *Fermi*/GBM in 7.5 years within its field of view of $\sim$70% of the sky.\
We consider the following prompt emission properties of the bursts in the sample, to be used as constraints of our population synthesis model:
- the distribution of the 64ms peak flux $P_{64}$ (integrated in the 10-1000 keV energy range). This is shown by black symbols in the top left panel of Fig. \[fg1\];
- the distribution of the observed peak energy of the prompt emission spectrum (black symbols, bottom left panel in Fig. \[fg1\]);
- the distribution of the fluence $F$ (integrated in the 10–1000 keV energy range) (black symbols, bottom middle panel in Fig. \[fg1\]);
- the distribution of the duration $T_{90}$ of the prompt emission (black symbols, bottom right panel in Fig. \[fg1\]);
Short GRB spectra have a typical observer frame peak energy distribution centred at relatively large values ($\sim 0.5-1$ MeV), as also shown by the distribution in the bottom left panel of Fig. \[fg1\]. For this reason, we adopt here the peak flux $P_{64}$ and fluence $F$ computed in the wide 10–1000 keV energy range as provided in the spectral catalogue of bursts rather than the typically adopted 50–300 keV peak flux (e.g. from the archive) which would sample only a portion of the full spectral curvature.
The distributions of the peak flux, fluence, peak energy and duration are shown in Fig. \[fg1\] with black symbols. Error bars are computed by resampling each measurement ($P$, $F$, and $T_{90}$) within its error with a normal distribution. For each bin, the vertical error bars represent the standard deviation of the bin heights of the resampled distributions.
Rest-frame constraints: *Swift* SBAT4 sample
--------------------------------------------
For the redshift distribution and the rest frame properties of SGRBs (constraints 2, 6 and 7) we consider the sample published in D14. It consists of bursts detected by *Swift*, selected with criteria similar to those adopted for the long GRBs in [@2012ApJ...749...68S], with a peak flux (integrated in the 15–150 keV energy range and computed on a 64 ms timescale) $P_{64}\ge 3.5$ photons cm$^{-2}$ s$^{-1}$. This corresponds to a flux which is approximately 4 times larger than the *Swift*–BAT minimum detectable flux on this timescale. We call this sample SBAT4 (Short BAT 4) hereafter. The redshift distribution of the SBAT4 sample is shown in the top right panel of Fig. \[fg1\] (solid black line). Within the SBAT4 sample we consider the 11 GRBs with known $z$ and determined $E_{\rm iso}$ and $L_{\rm iso}$ (the distributions of these quantities are shown in the inset of Fig.\[fg1\], top–right panel, with black and gray lines respectively). The gray shaded region is spanned by the distribution when the five SGRBs in the sample with unknown $z$ are all assigned the minimum or the maximum redshift of the sample.
The $\phi(L)$ and $\Psi(z)$ of SGRBs {#sec:LF_and_Psi}
===========================
Given the incompleteness of the available SGRB samples, particularly with measured $z$, no direct method can be applied to derive the shape of the SGRB luminosity function and redshift distribution $\Psi(z)$ from the observations. The typical approach in this case consists in assuming some simple analytical shape for both functions, with free parameters to be determined by comparison of model predictions with observations.
For the luminosity function, a power law $$\phi(L)\propto L^{-\alpha}$$ or a broken power law $$\phi(L)\propto\left\lbrace\begin{array}{cc}
\left({L}/{L_{\rm b}}\right)^{-\alpha_1} & L<L_{\rm b}\\
\left({L}/{L_{\rm b}}\right)^{-\alpha_2} & L\geq L_{\rm b}
\end{array}\right.$$ normalised to its integral is usually assumed.
If SGRBs are produced by the merger of compact objects, their redshift distribution should follow a retarded star formation: $$\Psi(z) = \int_{z}^{\infty} \psi(z')P[t(z)-t(z')]\frac{dt}{dz'}dz'
\label{eq:retarded}$$ where $\psi(z)$ represents the formation rate of SGRB progenitors in Gpc$^{-3}$ yr$^{-1}$, and $P(\tau)$ is the delay time distribution, i.e. the probability density function of the delay $\tau$ between the formation of the progenitors and their merger (which produces the SGRB). Adopting the point of view that SGRBs are produced by the coalescence of a neutron star binary (or a black hole – neutron star binary), one can assume a delay time distribution and convolve it with a $\psi(z)$ of choice to obtain the corresponding SGRB formation rate $\Psi(z)$. Theoretical considerations and population synthesis suggest that compact binary coalescences should typically follow a delay time distribution $P(\tau) \propto \tau^{-1}$ with $\tau\gtrsim 10$ Myr. Eq. \[eq:retarded\] is actually a simplification, in that it implicitly assumes that the fraction of newly formed stars that will end up as members of a NS–NS binary is fixed. The actual fraction very likely depends on metallicity and on the initial mass function, and thus on redshift in a statistical sense.
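To make the retarded-rate convolution above concrete, here is a minimal numerical sketch assuming the flat $\Lambda$CDM cosmology adopted in this paper ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m} = 0.3$), a toy Cole-type star formation history and a $P(\tau)\propto\tau^{-1}$ delay time distribution with a 10 Myr minimum delay. The SFR parameters are illustrative placeholders, not fitted values.

```python
# Numerical sketch of the retarded-rate convolution: Psi(z) follows from a
# star formation history psi(z') and a delay time distribution P(tau).
# The SFR parameters below are illustrative placeholders, not fitted values.
import numpy as np
from scipy.integrate import quad

H0 = 70.0 / 3.086e19   # Hubble constant in s^-1 (70 km/s/Mpc)
OM, OL = 0.3, 0.7
GYR = 3.156e16         # seconds per Gyr

def age(z):
    """Cosmic age at redshift z in Gyr (flat LambdaCDM)."""
    f = lambda zp: 1.0 / ((1 + zp) * np.sqrt(OM * (1 + zp)**3 + OL))
    return quad(f, z, np.inf)[0] / H0 / GYR

def psi_sfr(z):
    """Toy cosmic star formation history (arbitrary normalisation)."""
    return (1 + 3.4 * z) / (1 + (z / 3.4)**5.3)

def p_tau(tau, tau_min=0.01):
    """Delay distribution proportional to 1/tau above tau_min = 10 Myr (Gyr)."""
    tau = np.asarray(tau, dtype=float)
    out = np.zeros_like(tau)
    m = tau > tau_min
    out[m] = 1.0 / tau[m]
    return out

def Psi(z, zmax=10.0, n=300):
    """SGRB formation rate at z: integral of psi(z') P(t(z)-t(z')) dt/dz' dz'."""
    zp = np.linspace(z, zmax, n)
    t_zp = np.array([age(x) for x in zp])
    tau = age(z) - t_zp                  # delay in Gyr (>= 0 for z' > z)
    # t_zp decreases with z', so integrating against t_zp needs a sign flip
    return -np.trapz(p_tau(tau) * psi_sfr(zp), t_zp)
```

Evaluating `Psi` on a redshift grid gives the (unnormalised) merger rate shape implied by the chosen SFR and delay distribution.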
Among the most recent studies of the and of SGRBs we consider the work of D14 and WP15 in the following for comparison in more detail. D14 assume a power law shape for both the and the delay time distribution $P(\tau)$, and they adopt the parametric function of [@2001MNRAS.326..255C] for the cosmic star formation history, with parameter values from [@2006ApJ...651..142H]. They assume that SGRBs follow the correlation $E_{\rm peak}=337 {\rm keV} (L_{\rm iso}/2\times 10^{52} {\rm erg s^{-1}})^{0.49}$ and that their spectrum is a Band function [@1993ApJ...413..281B] with low and high energy photon spectral indices -0.6 and -2.3, respectively. They constrain the free parameters by fitting the peak flux distribution and the redshift distribution of bright short bursts with measured $z$. They find $\phi(L)\propto L^{-2.17}$ between $10^{49}$ erg s$^{-1}$ and $10^{55}$ erg s$^{-1}$, and $P(\tau)\propto \tau^{-1.5}$ with a minimum delay of $20$ Myr. The dashed blue lines in Fig. \[fg1\] are obtained through Eq. 4 and Eq. 5 using the same parameters as D14: their model (limited to $P_{\rm lim}\ge 5$ in order to be compared with the sample selected in this work) reproduces correctly the peak flux distribution (top left panel of Fig. \[fg1\]) of SGRBs and the redshift distribution of the bright SGRBs detected by (top right panel in Fig. \[fg1\]).
The preferred model for $\phi(L)$ in WP15 is a broken power law, with a break at $2\times 10^{52}$ erg s$^{-1}$ and pre- and post-break slopes of $-1.9$ and $-3.0$ respectively. Their preferred models are either a power law delay time distribution $P(\tau)\propto \tau^{-0.81}$ with a minimum delay of $20$ Myr or a lognormal delay time distribution with central value $2.9$ Gyr and sigma $\leq 0.2$. Differently from D14, rather than assuming the $E_{\rm p}$–$L_{\rm iso}$ correlation, they assign to all SGRBs a fixed rest frame $E_{\rm p,rest}=800$ keV. The dot–dashed cyan lines in Fig. \[fg1\] show the model of WP15 (we show the lognormal $P(\tau)$ case).
In the following we show how the results of WP15 and D14, both representative of a relatively steep luminosity function, compare with the other additional constraints (bottom panels of Fig. \[fg1\]) that we consider in this work.
From population properties to observables {#sec:analytical_methods}
-----------------------------------------
Given the two functions and $\Psi(z)$, the peak flux distribution can be derived as follows: $$N(P_{1}<P<P_{2})=\frac{\Delta\Omega}{4 \pi}\int_{0}^{\infty}dz \frac{dV(z)}{dz}\frac{\Psi(z)}{1+z}\int_{L(P_{1},z)}^{L(P_{2},z)}\phi(L)dL
\label{eq:pfluxdistribution}$$ where $\Delta\Omega/4\pi$ is the fraction of sky covered by the instrument/detector (which provides the real GRB population with which the model is to be compared) and $dV(z)/dz$ is the differential comoving volume. The flux $P$ corresponding to the luminosity $L$ at redshift $z$ is[^5]: $$P(L,z,E_{\rm peak},\alpha)=\frac{L}{4\pi d_{L}(z)^2}\, \frac{ \int_{\epsilon_{1}(1+z)}^{\epsilon_{2}(1+z)} N(E|E_{\rm peak},\alpha)dE}{\int_{0}^{\infty}EN(E|E_{\rm peak},\alpha)dE}$$ where $d_{L}(z)$ is the luminosity distance at redshift $z$ and $N(E|E_{\rm peak},\alpha)$ is the rest frame photon spectrum of the GRB. The photon flux $P$ is computed in the rest frame energy range $[(1+z)\epsilon_{1},(1+z)\epsilon_{2}]$ which corresponds to the observer frame $[\epsilon_{1},\epsilon_{2}]$ band.
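As an illustration of the flux–luminosity mapping above, the sketch below evaluates the photon peak flux in the observer-frame 10–1000 keV band for a cutoff power-law photon spectrum, with the cosmology stated in §1. The burst parameters in the example call are assumptions for demonstration only, not values from the fits.

```python
# Illustrative evaluation of the photon peak flux P(L, z, Ep, alpha) for a
# cutoff power-law photon spectrum N(E) ~ E^-alpha exp(-E(2-alpha)/Ep).
import numpy as np
from scipy.integrate import quad

H0 = 70.0        # km/s/Mpc
OM, OL = 0.3, 0.7
C_KM = 2.998e5   # speed of light, km/s
MPC_CM = 3.086e24
KEV_ERG = 1.602e-9

def lum_dist_cm(z):
    """Luminosity distance in cm, flat LambdaCDM."""
    dc = quad(lambda zp: 1.0 / np.sqrt(OM * (1 + zp)**3 + OL), 0.0, z)[0]
    return (1 + z) * (C_KM / H0) * dc * MPC_CM

def n_photon(E, Ep, alpha=0.6):
    """Unnormalised cutoff power-law photon spectrum (E, Ep in keV)."""
    return E**(-alpha) * np.exp(-E * (2 - alpha) / Ep)

def peak_flux(L, z, Ep, alpha=0.6, band=(10.0, 1000.0)):
    """Photon flux (ph cm^-2 s^-1) in the observer-frame `band` (keV)."""
    dl = lum_dist_cm(z)
    # photon number in the rest-frame band corresponding to the observed one
    num = quad(n_photon, band[0] * (1 + z), band[1] * (1 + z), args=(Ep, alpha))[0]
    # bolometric energy normalisation (keV integral converted to erg)
    den = quad(lambda E: E * n_photon(E, Ep, alpha), 1e-2, 1e5)[0] * KEV_ERG
    return L / (4 * np.pi * dl**2) * num / den

# e.g. an L = 1e51 erg/s burst at z = 0.5 with rest-frame Ep = 800 keV
p = peak_flux(1e51, 0.5, 800.0)
```

Inverting this mapping (flux threshold to limiting luminosity at each $z$) gives the integration bounds used in the number counts above.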
The SGRB spectrum is often assumed to be a cut-off power law, i.e. $N(E|E_{\rm peak},\alpha)\propto E^{-\alpha}\exp(-E(2-\alpha)/E_{\rm peak})$, or a Band function [@1993ApJ...413..281B]. Typical parameter values are $\alpha\sim0.6$ and, for the Band function, $\beta\sim2.3-2.5$. The peak energy is either assumed fixed (e.g. $800$ keV in WP15) or derived assuming that SGRBs follow an $E_{\rm p}$–$L_{\rm iso}$ correlation in analogy to long bursts [e.g. D14; @2011ApJ...727..109V]. Recent evidence supports the existence of such a correlation among SGRBs, with similar parameters as that present in the population of long GRBs [@2004ApJ...609..935Y].
In order to compare the model peak flux distribution obtained from Eq. \[eq:pfluxdistribution\] with the real population of GRBs, only events with peak flux above a certain threshold $P_{\rm lim}$ are considered. The integral in Eq. \[eq:pfluxdistribution\] is thus performed over the $(L,z)$ range where the corresponding flux is larger than $P_{\rm lim}$.
In D14 the assumption of the $E_{\rm p}$–$L_{\rm iso}$ correlation between the isotropic luminosity and the rest frame peak energy allows us to derive, from Eq. \[eq:pfluxdistribution\], also the expected distribution of the observer frame peak energy $E_{\rm p,o}$: $$N(E_{1,{\rm p,o}}<E<E_{2, {\rm p,o}})=\int_{0}^{\infty}dz\,C(z)\int_{L(E_{1{\rm p,o}},z)}^{L(E_{2,{\rm p,o}},z)}\phi(L)dL
\label{eq:epdistribution}$$ where $E_{\rm p,o}$ is the peak energy of the observed $\nu\,F(\nu)$ spectrum, and we let $C(z) = [\Delta\Omega/4\pi][\Psi(z)/(1+z)][dV(z)/dz]$. The limits of the luminosity integral are computed by using the rest frame correlation $E_{\rm p}=Y\,L^{m_y}$, namely $$L(E_{{\rm p,o}},z) = \left(\frac{E_{\rm p}}{Y}\right)^{1/m_y} = \left(\frac{(1+z)E_{\rm p,o}}{Y}\right)^{1/m_y}$$ In order to compare the distribution of $E_{\rm p,o}$ with real data, the integral in Eq. \[eq:epdistribution\], similarly to Eq. \[eq:pfluxdistribution\], is performed over values of $L(E_{\rm p, o},z)$ corresponding to fluxes above the limiting flux adopted to extract the real GRB sample (e.g. 5 ph cm$^{-2}$ s$^{-1}$ for SGRBs selected from the *Fermi*/GBM sample).
Similarly, by assuming an $E_{\rm p}$–$E_{\rm iso}$ correlation to hold in SGRBs [see D14; @Tsutsui:2013lr; @2006MNRAS.372..233A; @2015MNRAS.448..403C], i.e. $E_{\rm p}=A\,E^{m_a}$, we can derive a relation between luminosity and energy ($L$–$E_{\rm iso}$), which reads $$L(E) = \left(\frac{A}{Y}\right)^{1/m_y} E^{m_a/m_y}$$ This is then used to compute the fluence distribution, where the fluence is related to the isotropic energy as $F=E(1+z)/4\pi\,d_{L}(z)^{2}$: $$N(F_{1}<F<F_{2})=\int_{0}^{\infty}dz\,C(z)\int_{L(E_{1})}^{L(E_{2})}\phi(L)dL$$ again by limiting the integral to luminosities which correspond to fluxes above the given limiting flux.
Finally, considering the spiky light curves of SGRBs, we can assume a triangular shape and thus let $2E/L\sim T$ in the rest frame of the source. Therefore, it is possible to combine the and correlations to derive the model predictions for the distribution of the duration to be compared with the observed one: $$N(T_{1,\rm o}<T<T_{2,\rm o})=\int_{0}^{\infty}dz\,C(z)\int_{L(T_{1,{\rm o}},z)}^{L(T_{2,{\rm o}},z)}\phi(L)dL$$ where $$L(T_{\rm o},z) = \left[\left( \frac{Y}{A} \right)^{1/m_a}\frac{2(1+z)}{T_o}\right]^{1/(1-m_y/m_a)}$$
Excluding a steep luminosity function
-------------------------------------
The bottom panels of Fig. \[fg1\] show the distributions of peak energy (left), fluence $F$ (middle) and duration $T_{90}$ (right) of the sample of short GRBs described in §2 (black symbols). Predictions using the same parameters as in D14 are shown by dashed blue lines in Fig. \[fg1\]: while the $P$ and $z$ distributions are correctly reproduced (top panels of Fig. \[fg1\]), the model is inconsistent with the distributions of peak energy, fluence $F$ and duration (bottom panels of Fig. \[fg1\]). For the D14 model we have assumed the $E_{\rm p}$–$E_{\rm iso}$ correlation reported in that paper to derive the fluence and (in combination with the $E_{\rm p}$–$L_{\rm iso}$ correlation) the duration distribution. Since WP15 assume a unique value of the peak energy, it is not possible to derive the fluence and duration of their model, unless independent functions for these parameters are assumed. Therefore, the model of WP15 (dot–dashed cyan line in Fig. \[fg1\]) is compared only in the peak flux, redshift (top panels) and observed peak energy (bottom left panel of Fig. \[fg1\]) distributions.
In conclusion, a steep $\phi(L)$ combined with either a power law distribution of delay times favouring short delays (as in D14) or a nearly unique long delay time (as in the log–normal model of WP15) correctly reproduces the observer frame peak flux distribution of GRBs[^6] and the redshift distribution of bright short bursts. However, it does not reproduce the peak energy, fluence and duration distributions of the same population of SGRBs.
Motivated by these results, we implemented a Monte Carlo (MC) code aimed at deriving the $\phi(L)$ and $\Psi(z)$ of SGRBs which satisfy all the constraints (1–7) described above. The reason to choose a MC method is that it allows us to easily implement the dispersion of the correlations (e.g. $E_{\rm p}$–$L_{\rm iso}$ and $E_{\rm p}$–$E_{\rm iso}$) and of any assumed distribution (which are less trivial to account for in an analytic approach such as that shown above).
Monte Carlo simulation of the SGRB population
=============================================
![\[fig:MCscheme\]Scheme of the procedure followed in the MC to generate the observables of each synthetic GRB.](MC_scheme.pdf "fig:"){width="0.9\columnwidth"}
In this section we describe the Monte Carlo (MC) code adopted to generate the model population. Such population is then compared with the real SGRB samples described above in order to constrain the model parameters (§4). Our approach is based on the following choices:
1. Customarily, Eq. \[eq:retarded\] has been used to compute the redshift distribution $\Psi(z)$ of SGRBs from an assumed star formation history $\psi(z)$ and a delay time distribution $P(\tau)$. As stated in §\[sec:LF\_and\_Psi\], this approach implies simplifications that we would like to avoid. To make our analysis as general as possible, we here adopt a generic parametric form for the redshift distribution $\Psi(z)$ of SGRBs. *A posteriori*, if one believes the progenitors to be compact binaries, the delay time distribution can be recovered by direct comparison of our result with the star formation history of choice. We parametrise $\Psi(z)$ following [@2001MNRAS.326..255C], namely: $$\Psi(z) = \frac{1+p_{1}z}{1+\left(z/z_{\rm p}\right)^{p_{2}}}
\label{eq:psi_cole}$$ which has a rising and decaying part (for $p_{1}>0$, $p_{2}>1$) and a characteristic peak roughly[^7] corresponding to $z_{p}$;
2. In order to have a proper set of simulated GRB parameters, it is convenient to extract $E_{\rm p}$ from an assumed probability distribution. We consider a broken power law shape for the distribution: $$\phi(E_{\rm p}) \propto
\begin{cases}
\left({E_{p}}/{E_{\rm p,b}}\right)^{-a_1} & \text{ $E_{p} \leq E_{\rm p,b}$} \\
\left({E_{p}}/{E_{\rm p,b}}\right)^{-a_2} & \text{ $E_{p}> E_{\rm p,b}$}
\end{cases}
\label{lf}$$ Through the and correlations, accounting also for their scatter, we can then associate to $E_{\rm p}$ a luminosity and an energy . The luminosity function of the population is then constructed as a result of this procedure;
3. We assume the and correlations to exist and we write them respectively as $$\log_{10}(E_{\rm p}/670\,{\rm keV}) = q_{\rm{Y}} + m_{\rm{Y}}\log_{10}(L/10^{52}\rm{erg\,s^{-1}})
\label{eq:yone}$$ and $$\log_{10}(E_{\rm p}/670\,{\rm keV}) = q_{\rm{A}} + m_{\rm{A}}\log_{10}(E_{\rm iso}/10^{51}\rm{erg})
\label{eq:ama}$$ After sampling $E_{\rm p}$ from its probability distribution (Eq. \[lf\]), we associate to it a luminosity (resp. energy) sampled from a lognormal distribution whose central value is given by Eq. 14 (resp. 15) and $\sigma=0.2$. The SGRBs with measured redshift are still too few to measure the scatter of the corresponding correlations. We assume the same scatter as measured for the correlations holding for the population of long GRBs [@Nava:2012lr];
4. For each GRB, a typical Band function prompt emission spectrum is assumed, with low and high photon spectral index $-0.6$ and $-2.5$ respectively. We keep these two parameters fixed after checking that our results are unaffected by sampling them from distributions centred around these values[^8].
For each synthetic GRB, the scheme in Fig. \[fig:MCscheme\] is followed: a redshift $z$ is sampled from $\Psi(z)$ and a rest frame peak energy is sampled from $\phi(E_{\rm p})$; through the () correlation a luminosity (energy ) is assigned, with lognormal scatter; using redshift and luminosity (energy), via the assumed spectral shape, the peak flux $P$ (fluence $F$) in the observer frame energy range 10–1000 keV is derived. The observer frame duration $T$ is obtained as $2(1+z)E/L$, i.e. the light curve is approximated with a triangle[^9]. Let us refer to this scheme as “case (a)”.
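The case (a) scheme can be sketched as a single Monte Carlo draw. The parameter values below are placeholders (loosely the means of Table \[tab:mcmc\_results\].a), and `sample_pdf` is a generic inverse-CDF helper introduced here purely for illustration.

```python
# Minimal sketch of one Monte Carlo draw following the case (a) scheme.
# All numerical parameter values are placeholders, not the best-fit ones.
import numpy as np

rng = np.random.default_rng(0)

def sample_pdf(pdf, lo, hi, n=1):
    """Inverse-CDF sampling of a 1D pdf tabulated on a grid."""
    x = np.linspace(lo, hi, 2000)
    cdf = np.cumsum(pdf(x))
    cdf = cdf / cdf[-1]
    return np.interp(rng.random(n), cdf, x)

def psi_z(z, p1=2.8, zp=2.3, p2=3.5):            # redshift distribution
    return (1 + p1 * z) / (1 + (z / zp)**p2)

def phi_ep(ep, a1=0.53, a2=4.0, epb=1600.0):     # broken power law, ep in keV
    return np.where(ep <= epb, (ep / epb)**(-a1), (ep / epb)**(-a2))

def draw_grb():
    z = sample_pdf(psi_z, 0.01, 10.0)[0]
    # sample Ep on a log grid (pdf in log Ep carries an extra factor Ep)
    ep = 10**sample_pdf(lambda x: phi_ep(10**x) * 10**x, -1.0, 5.0)[0]
    # invert the Ep-L and Ep-Eiso correlations, with 0.2 dex lognormal scatter
    logL = 52 + (np.log10(ep / 670.0) - 0.034) / 0.84 + rng.normal(0, 0.2)
    logE = 51 + (np.log10(ep / 670.0) - 0.042) / 1.1 + rng.normal(0, 0.2)
    T = 2 * (1 + z) * 10**logE / 10**logL        # triangular light curve
    return dict(z=z, Ep=ep, L=10**logL, Eiso=10**logE, T=T)
```

Repeating `draw_grb` and applying the flux cut then yields the synthetic observer-frame distributions to be compared with the data.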
The minimum and maximum values of $E_{\rm p}$ admitted are $E_{\rm p,min} = 0.1\,\rm{keV}$ and $E_{\rm p,max} = 10^5\,\rm{keV}$. These limiting values correspond to a minimum luminosity $L_{\rm min}$ and a maximum luminosity $L_{\rm max}$ which depend on the $E_{\rm p}$–$L_{\rm iso}$ correlation. While the maximum luminosity is inessential (in all our solutions the high luminosity slope $\alpha_2 \gtrsim 2$), the existence of a minimum luminosity might affect the observed distributions. We thus implemented an alternative scheme (“case (b)”) where the minimum luminosity $L_{\rm min}$ is a parameter, and values of $E_{\rm p}$ which correspond to smaller luminosities are rejected.
In order to investigate the dependence of our results on the assumption of the and correlations, we also implemented a third MC scheme (“case (c)”) where independent (from the peak energy and between themselves) probability distributions are assumed for the luminosity and duration. A broken power law $$P(L) \propto \left\lbrace\begin{array}{lr}
(L/L_{\rm b})^{-\alpha_1} & L\leq L_{\rm b}\\
(L/L_{\rm b})^{-\alpha_2} & L>L_{\rm b}\\
\end{array}\right.\label{eq:lf}$$ is assumed for the luminosity distribution, and a lognormal shape $$P(T_{\rm{r}}) \propto \exp\left[-\frac{1}{2}\left(\frac{(\log(T_{\rm{r}}) - \log(T_c)}{\sigma_{Tc}}\right)^2\right]
\label{eq:tdist}$$ is assumed for the rest frame duration $T_{\rm{r}} = T/(1+z)$ probability distribution. Again, the energy of each GRB is computed as $E = L T_{\rm{r}}/2$, i.e. the light curve is approximated with a triangle.
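The independent draws of case (c) can be sketched analogously; the parameter values below are placeholders taken loosely from Table \[tab:mcmc\_results\].c, and the lognormal is written in base-10 logarithms as an assumption.

```python
# Sketch of the case (c) draws: L from a broken power law and the rest-frame
# duration from a lognormal; parameter values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(4)

def sample_grid(pdf, x):
    """Inverse-CDF sampling of a pdf tabulated on grid x."""
    cdf = np.cumsum(pdf)
    cdf = cdf / cdf[-1]
    return np.interp(rng.random(), cdf, x)

def draw_case_c(z, alpha1=-0.15, alpha2=2.0, Lb=0.63e52, Tc=0.11, sT=0.91):
    logL = np.linspace(47, 55, 2000)          # log10 L grid, erg/s
    L = 10**logL
    # extra factor L converts the pdf in L into a pdf in log10(L)
    pdf = np.where(L <= Lb, (L / Lb)**(-alpha1), (L / Lb)**(-alpha2)) * L
    Ldraw = 10**sample_grid(pdf, logL)
    Tr = 10**rng.normal(np.log10(Tc), sT)     # rest-frame duration in s
    E = Ldraw * Tr / 2.0                      # triangular light curve
    return Ldraw, (1 + z) * Tr, E             # L, observed T, Eiso
```

The observed duration and fluence then follow exactly as in cases (a) and (b).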
Finding the best fit parameters
===============================
In case (a) there are 10 free parameters: three $(p_{1},z_{\rm p},p_{2})$ define the redshift distribution (Eq. \[eq:psi\_cole\]), three $(a_1,a_2,E_{\rm p,b})$ define the peak energy distribution (Eq. \[lf\]) and four $(q_Y,m_Y,q_A,m_A)$ define the and correlations (Eqs. \[eq:yone\]&\[eq:ama\]). Our constraints are the seven distributions shown in Fig. \[fg1\] (including the top right panel insets).
In order to find the best fit values and confidence intervals of our parameters, we employed a Monte Carlo Markov Chain (MCMC) approach based on the Metropolis-Hastings algorithm [@hastings1970monte]. At each step of the MCMC:
- we displace each parameter[^10] $p_i$ from the last accepted value. The displacement is sampled from a uniform distribution whose maximum width is carefully tuned in order to prevent the random walk from remaining stuck in local maxima;
- we compute the Kolmogorov-Smirnov (KS) probability $P_{\rm KS,j}$ of each observed distribution to be drawn from the corresponding model distribution;
- we define the goodness of fit $\mathcal{G}$ of the model as the sum of the logarithms of these KS probabilities[^11], i.e. $\mathcal{G} = \sum_{j} \ln P_{\rm KS,j}$;
- we compare $g=\exp({\mathcal{G}})$ with a random number $r$ sampled from a uniform distribution within $0$ and $1$: if $g>r$ the set of parameters is “accepted”, otherwise it is “rejected”.
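The loop above can be sketched as follows, with `simulate(params)` standing in for the full population synthesis step. The acceptance rule is written here in the standard Metropolis form, accepting with probability $\min(1, e^{\Delta\mathcal{G}})$, as a common implementation of the accept/reject comparison described above.

```python
# Sketch of the MCMC parameter search: the model is scored by the sum of
# log KS probabilities over the observed distributions. `simulate` is a
# stand-in for the population synthesis code.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def goodness(params, observed, simulate):
    """G = sum_j ln P_KS,j over the observed distributions."""
    model = simulate(params)                       # dict of synthetic samples
    return sum(np.log(ks_2samp(observed[k], model[k]).pvalue + 1e-300)
               for k in observed)

def mcmc(observed, simulate, p0, widths, nsteps=500):
    chain = []
    p = np.asarray(p0, dtype=float)
    g = goodness(p, observed, simulate)
    for _ in range(nsteps):
        trial = p + rng.uniform(-widths, widths)   # uniform proposal window
        g_try = goodness(trial, observed, simulate)
        if np.log(rng.random()) < g_try - g:       # Metropolis accept/reject
            p, g = trial, g_try
        chain.append(p.copy())
    return np.array(chain)
```

After discarding the burn-in, the chain samples can be passed to a posterior-density tool such as `getDist` as described in the text.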
We performed tests of the MCMC with different initial parameters, to verify that a unique global maximum of $\mathcal{G}$ could be found. Once properly set up, 200,000 steps of the MCMC were run. After removing the initial burn in, the autocorrelation length of each parameter in the chain was computed, and the posterior density distribution of each parameter (and the joint distribution of each couple of parameters) was extracted with the `getDist` python package[^12]. The resulting 1D and 2D marginalized distributions are shown in Fig. \[fig:triangolo\], where black dashed (black dot-dashed) lines indicate the position of the mean (mode) of the marginalized density of each parameter. The filled contours represent the 68% (darker red) and 95% (lighter red) probability areas of the joint density distributions. The means, modes and $68\%$ probability intervals of the 1D marginalized distributions are summarised in Table \[tab:mcmc\_results\].a, where the corresponding luminosity function parameters are also reported.
**(a) case with correlations and no minimum luminosity**
Parameter Mean Mode $68\%$ C.I.
------------------ --------- --------- -----------------
$a_1$ $0.53$ $0.8$ $(0.2,1)$
$a_2$ $4$ $2.6$ $(1.9,4.4)$
$E_{\rm peak,b}$ $1600$ $1400$ $(880,2000)$
$m_Y$ $0.84$ $0.69$ $(0.58,0.88)$
$m_A$ $1.1$ $0.91$ $(0.76,1.2)$
$q_Y$ $0.034$ $0.068$ $(-0.069,0.18)$
$q_A$ $0.042$ $0.033$ $(-0.061,0.13)$
$p_1$ $2.8$ $1.8$ $(0.59,3.7)$
$z_p$ $2.3$ $2.7$ $(1.7,3.2)$
$p_2$ $3.5$ $1.7$ $(0.94,4)$
$\alpha_1$ $0.53$ $0.88$ $(0.39,1.0)$
$\alpha_2$ $3.4$ $2.2$ $(1.7,3.7)$
$L_b$ $2.8$ $2.1$ $(0.91,3.4)$
: Summary of Monte Carlo Markov Chain results. C.I. = confidence interval. $E_{\rm peak,b}$, $L_b$ and $T_c$ are in units of keV, $10^{52}$ erg s$^{-1}$ and s, respectively. \[tab:mcmc\_results\]
**(b) case with correlations and minimum luminosity**
Parameter Mean Mode $68\%$ C.I.
------------------ --------- --------- -----------------
$a_1$ $0.39$ $0.24$ $(-0.15,0.8)$
$a_2$ $3.5$ $2.5$ $(1.9,3.7)$
$E_{\rm peak,b}$ $1400$ $1100$ $(730,1700)$
$m_Y$ $0.88$ $0.76$ $(0.61,0.97)$
$m_A$ $1.1$ $0.95$ $(0.77,1.2)$
$q_Y$ $0.045$ $0.077$ $(-0.039,0.17)$
$q_A$ $0.043$ $0.053$ $(-0.037,0.14)$
$p_1$ $3.1$ $2.4$ $(1,4.2)$
$z_p$ $2.5$ $3$ $(1.9,3.3)$
$p_2$ $3$ $1.3$ $(0.9,3.1)$
$\alpha_1$ $0.38$ $0.47$ $(0.034,0.98)$
$\alpha_2$ $3$ $2.1$ $(1.7,3.2)$
$L_b$ $2.3$ $1.5$ $(0.71,2.8)$
: Summary of Monte Carlo Markov Chain results. C.I. = confidence interval. $E_{\rm peak,b}$, $L_b$ and $T_c$ are in units of keV, $10^{52}$ erg s$^{-1}$ and s, respectively. \[tab:mcmc\_results\]
**(c) case with no correlations**
Parameter Mean Mode $68\%$ C.I.
-------------------- --------- --------- -----------------
$a_1$ $-0.61$ $-0.55$ $(-0.73,-0.41)$
$a_2$ $2.8$ $2.5$ $(2.1,2.9)$
$E_{\rm peak,b}$ $2200$ $2100$ $(1900,2500)$
$\alpha_1$ $-0.15$ $-0.32$ $(-1.5,0.81)$
$\alpha_2$ $2.0$ $1.8$ $(1.2,2.8)$
$L_b$ $0.63$ $0.79$ $(0.32,1.6)$
$T_c$ $0.11$ $0.11$ $(0.084,0.13)$
$\sigma_{\rm{Tc}}$ $0.91$ $0.90$ $(0.79,1.0)$
$p_1$ $3.1$ $2.0$ $(0.51,4.1)$
$z_p$ $2.5$ $2.8$ $(2.0,3.3)$
$p_2$ $3.6$ $2.0$ $(1.1,3.7)$
: Summary of Monte Carlo Markov Chain results. C.I. = confidence interval. $E_{\rm peak,b}$, $L_b$ and $T_c$ are in units of keV, $10^{52}$ erg s$^{-1}$ and s, respectively. \[tab:mcmc\_results\]
For the solution represented by the mean values in Table \[tab:mcmc\_results\].a, the minimum luminosity is $L_{\rm min}\sim 10^{47}$ erg s$^{-1}$. For comparison, we tested case (b) fixing $L_{\rm min}=10^{50}$ erg s$^{-1}$. This is the highest minimum luminosity one might assume, since the lowest SGRB measured luminosity in the sample considered is $L=1.2\times 10^{50}$ erg s$^{-1}$ (D14). Table \[tab:mcmc\_results\].b summarises the results of the analysis after 200,000 MCMC steps. The two cases are consistent within one sigma. The best fit luminosity function in case (b) is slightly shallower at low luminosities (i.e. there is a slight decrease in $\alpha_1$) than in case (a), and it remains much shallower than in D14 and WP15.
Finally, we tested case (c) performing 200,000 MCMC steps. In this case, the free parameters are eleven: three $(p_{1},z_{\rm p},p_{2})$ for $\Psi(z)$ and three $(a_1,a_2,E_{\rm p,b})$ for $\phi(E_{\rm p})$ as before, plus three $(\alpha_1,\alpha_2,L_{\rm b})$ for the luminosity function (Eq. \[eq:lf\]) and two $(T_c,\sigma_{\rm Tc})$ for the intrinsic duration distribution (Eq. \[eq:tdist\]). Consistently with case (a) and case (b) we assumed broken power laws for both $\phi(E_{\rm p})$ and $\phi(L)$. Results are listed in Table \[tab:mcmc\_results\].c. We find that if no correlations are present between the peak energy and the luminosity (energy), the luminosity function and the peak energy distributions become peaked around characteristic values. This result is reminiscent of the findings of [@2015MNRAS.451..126S] who assumed lognormal distributions for these quantities.
Discussion of the results
=========================
Luminosity function
-------------------
In case (a) we find that the luminosity function is shallow ($\alpha_1 = 0.53^{+0.47}_{-0.14}$, i.e. flatter than 1.0 within the 68% confidence interval) below a break luminosity $\sim 3 \times 10^{52}$ erg s$^{-1}$ and steeper ($\alpha_2=3.4^{+0.3}_{-1.7}$) above this characteristic luminosity. The minimum luminosity $\sim 5\times 10^{47}$ erg s$^{-1}$ is set by the minimum $E_{\rm p}$ coupled with the correlation parameters (see §\[sec:caseb\]). Similar parameters for the luminosity function are obtained in case (b), where a minimum luminosity was introduced, showing that this result does not depend strongly on the choice of the minimum luminosity.
If we leave out the correlations (case (c)), we find that the distributions of the peak energy and luminosity are peaked. However, the 68% confidence intervals of some parameters, common to case (a) and (b), are larger in case (c). In particular, the slope $\alpha_1$ of the luminosity function below the break is poorly constrained, although it cannot be steeper than 0.81 (at the 68% confidence level). We believe that the larger uncertainty on the best fit parameters in case (c) is due to the higher freedom allowed by the uncorrelated luminosity function, peak energy distribution and duration distribution.
Redshift distribution
---------------------
![Comparison between various predicted SGRB redshift distributions. The grey dashed line represents the convolution of the MD14 cosmic SFH with a delay time distribution $P(\tau)\propto \tau^{-1}$ with $\tau>20 \rm{Myr}$ (the normalization is arbitrary). The pink solid line (pink dotted line) represents the redshift distribution of NS–NS binary mergers predicted by [@2013ApJ...779...72D] in their *high end* (*low end*) metallicity evolution scenario (standard binary evolution model). The blue dashed line and cyan dot–dashed line are the SGRB redshift distributions according to D14 and to WP15 respectively. The red solid line is our result in case (a), while the orange triple dot dashed line is our result in case (c). In both cases we used the mean parameter values as listed in Table \[tab:mcmc\_results\].\[fig:sfh\_comparison\]](sfh_comparison.pdf){width="0.9\columnwidth"}
Figure \[fig:sfh\_comparison\] shows a comparison of our predicted redshift distributions (case (a): red solid line; case (c): orange triple dot-dashed line; mean values adopted) with the following other redshift distributions:
- the convolution of the MD14 cosmic star formation history (SFH) with the delay time distribution $P(\tau)\propto \tau^{-1}$ with $\tau> 20\rm{Myr}$, grey dashed line (the normalisation is arbitrary);
- the redshift distribution of NS–NS mergers as predicted by [@2013ApJ...779...72D] (we refer to the standard binary evolution case in the paper) based on sophisticated binary population synthesis, assuming two different metallicity evolution scenarios: *high-end* (pink solid line) and *low-end* (pink dotted line);
- the SGRB redshift distribution found by D14, which is obtained convolving the SFH by [@2006ApJ...651..142H] with a delay time distribution $P(\tau)\propto \tau^{-1.5}$ with $\tau> 20\rm{Myr}$, blue dashed line;
- the SGRB redshift distribution found by WP15, which is obtained convolving an SFH based on Planck results with a lognormal delay time distribution $P(\tau)\propto \exp\left[-\left(\ln \tau - \ln \tau_0\right)^2/\left(2\sigma^2\right) \right]$ with $\tau_0 = 2.9\rm{Gyr}$ and $\sigma < 0.2$ (we used $\sigma = 0.1$), cyan dot–dashed line.
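The first of these curves (retarding a SFH with a $P(\tau)\propto \tau^{-1}$ delay time distribution) can be sketched numerically as follows. The flat-$\Lambda$CDM parameters and the normalisation below are assumptions for the sketch, not values from this work.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

OM, OL, TH = 0.3, 0.7, 13.97  # assumed flat LCDM; TH = Hubble time in Gyr (H0 = 70)

def t_lb(z):
    """Lookback time in Gyr."""
    f = lambda x: 1.0 / ((1 + x) * np.sqrt(OM * (1 + x) ** 3 + OL))
    return TH * quad(f, 0.0, z)[0]

def z_of_tlb(t):
    """Invert the lookback time (valid for 0 <= t <= t_lb(20))."""
    return brentq(lambda z: t_lb(z) - t, 0.0, 20.0)

def psi_md14(z):
    """Madau & Dickinson (2014) SFH, Msun yr^-1 Mpc^-3."""
    return 0.015 * (1 + z) ** 2.7 / (1 + ((1 + z) / 2.9) ** 5.6)

def merger_rate(z, tau_min=0.02):
    """SFH retarded by P(tau) ∝ 1/tau, tau > 20 Myr (arbitrary normalisation)."""
    t0 = t_lb(z)
    tau_max = t_lb(19.0) - t0  # integrate delays back to z ~ 19
    g = lambda tau: psi_md14(z_of_tlb(t0 + tau)) / tau
    return quad(g, tau_min, tau_max, limit=100)[0]
```

Because $P(\tau)\propto\tau^{-1}$ weights short delays most heavily, the retarded rate peaks only slightly below the SFH peak, consistent with the grey dashed curve in Fig. \[fig:sfh\_comparison\].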
The redshift distribution by D14 peaks between $z\sim 2$ and $z\sim 2.5$, i.e. at a higher redshift than the MD14 SFH (which peaks at $z\sim 1.9$). This is due to the short delay implied by the delay time distribution assumed in D14, together with the fact that the [@2006ApJ...651..142H] SFH peaks at higher redshift than the MD14 SFH. On the other hand, the redshift distribution by WP15 peaks at very low redshift ($\sim 0.8$) and predicts essentially no SGRBs with redshift $z\gtrsim 2$, because of the extremely large delay implied by their delay time distribution.
Assuming the MD14 SFH (which is the most up-to-date SFH available) to be representative, our result in case (a) seems to be compatible with the $P(\tau)\propto \tau^{-1}$ delay time distribution (grey dashed line), theoretically favoured for compact binary mergers. In case (c), on the other hand, the redshift distribution we find seems to be indicative of a slightly smaller average delay with respect to case (a). Since the cosmic SFH is still subject to some uncertainty, and since the errors on our parameters $(p_1,z_p,p_2)$ are rather large, no strong conclusion about the details of the delay time distribution can be drawn.
$E_{\rm p}$–$L_{\rm iso}$ and $E_{\rm p}$–$E_{\rm iso}$ correlations
--------------------------------------------------------------------
Our approach allowed us, in cases (a) and (b), to derive the slope and normalization of the intrinsic $E_{\rm p}$–$L_{\rm iso}$ and $E_{\rm p}$–$E_{\rm iso}$ correlations of SGRBs. [@Tsutsui:2013lr] finds, for the $E_{\rm p}$–$L_{\rm iso}$ and $E_{\rm p}$–$E_{\rm iso}$ correlations of SGRBs, slope values $0.63\pm0.05$ and $0.63\pm0.12$, respectively. Although our mean values for $m_{Y}$ and $m_{A}$ (Tab. 1) are slightly steeper, the 68% confidence intervals reported in Tab. 1 are consistent with those reported by [@Tsutsui:2013lr]. In order to limit the free parameter space we assumed a fixed scatter for the correlations and a fixed normalisation center for both (see Eq. 14 and Eq. 15). This latter choice, for instance, introduces the small residual correlation between the slope and normalisation parameters (as shown in Fig. \[fig:triangolo\]).
Inspection of Fig. \[fig:triangolo\] reveals another correlation in the MCMC chain between the normalizations $q_{\rm{Y}}$ and $q_{\rm{A}}$ of the and correlations: this is expected, because the ratio of the two normalizations is linked to the duration of the burst. Indeed, from Eqs. \[eq:ama\] & \[eq:yone\] one has $$q_{\rm{Y}} - q_{\rm{A}} = \log\left(\frac{E^{m_{\rm{A}}}}{L^{m_{\rm{Y}}}}\right) + 52 m_{\rm{Y}} - 51 m_{\rm{A}}$$ Since $m_{\rm{A}}$ and $m_{\rm{Y}}$ are close, the argument of the logarithm is $\sim E/L \propto T$, and since there is a typical duration, this induces an approximately linear correlation between $q_{\rm{A}}$ and $q_{\rm{Y}}$, which is what we find.
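A quick numerical check of this identity, assuming (as the constants 52 and 51 in the displayed equation imply) that the correlations are written as $\log E_{\rm p}=m_{\rm Y}(\log L-52)+q_{\rm Y}$ and $\log E_{\rm p}=m_{\rm A}(\log E-51)+q_{\rm A}$ with $L$ and $E$ in cgs units:

```python
import numpy as np

def q_diff(Ep, L, E, mY, mA):
    """q_Y - q_A from the two assumed correlation forms; Ep cancels,
    so the difference depends only on L, E and the slopes."""
    qY = np.log10(Ep) - mY * (np.log10(L) - 52.0)
    qA = np.log10(Ep) - mA * (np.log10(E) - 51.0)
    return qY - qA

def q_diff_closed(L, E, mY, mA):
    """Right-hand side of the displayed identity."""
    return np.log10(E ** mA / L ** mY) + 52.0 * mY - 51.0 * mA
```

The fact that $E_{\rm p}$ drops out makes explicit why a typical duration (i.e. a typical $E/L$ ratio) ties $q_{\rm Y}$ and $q_{\rm A}$ together.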
Local SGRB rate
===============
The local rate of SGRBs is particularly important for the possible connection with gravitational wave events to be detected by the advanced interferometers (Advanced LIGO - @LIGO-Scientific-Collaboration:2015fr [@Abbott:2016fk]; Advanced Virgo - @Acernese:2015zr).
The first such detection, named GW150914, has been interpreted according to General Relativity as the space–time perturbation produced by the merger of two black holes (with masses $M_1\sim 29$ M$_{\odot}$ and $M_2\sim 36$ M$_{\odot}$) at a distance of $\sim$410 Mpc ($z = 0.09$). The full analysis of the aLIGO first run cycle revealed a second binary black hole merger event, GW151226 [@2016arXiv160604856T]. In this case the involved masses are smaller ($M_1\sim 14.2$ M$_{\odot}$ and $M_2\sim 7.5$ M$_{\odot}$) and the associated distance is only slightly larger ($\sim$440 Mpc)[^13].
GW150914 represents a challenge for the theory of formation and evolution of stellar origin BHs [@GW150914_astrophysical_implications; @Belczynski:2016uq; @Spera2016] being the most massive stellar-mass black hole observed so far. The masses of GW151226 are close to the ones observed in galactic X-ray binaries [@Ozel:2010db]. Both sources are an exquisite direct probe of General Relativity in the strong field dynamical sector [@GW150914_test_GR].
Considering the detections resulting from the analysis of the “O1” aLIGO interferometers, the rate of BH-BH merger is 9–240 Gpc$^{-3}$ yr$^{-1}$, assuming different BH mass distributions [@2016arXiv160604856T]. For the sake of comparison, in Fig. \[fig:rate\] we show this range of rates (vertical green bar) in yr$^{-1}$ computed at the distance of GW150914.
However, the best is yet to come in the field of GW. Indeed, while no electromagnetic counterpart has been associated either to GW150914 [@2016MNRAS.tmpL..45E; @Troja:2016kx; @2016arXiv160204156S; @2016ApJ...820L..36S; @2016arXiv160204198S; @Annis:2016ys; @Kasliwal:2016fr; @Morokuma:2016zr; @Ackermann:2016nx but see @Connaughton:2016uq [@2016ApJ...821L..18P; @2016arXiv160205050Y; @2016arXiv160204542Z; @2016arXiv160205529M; @2016arXiv160207352L]] and to GW151226 [@2016arXiv160604538C; @2016arXiv160604795S; @Adriani:2016qf; @Evans:2016ul; @Copperwheat:2016ve; @Racusin:2016gf], possible future detections of GW produced by compact binary mergers could lead to the first association of an electromagnetic with a gravitational signal [@Branchesi:2011vn; @Metzger:2012fj]. In the case of NS–NS and NS–BH mergers, SGRBs are candidates to search for among other possible counterparts in the optical [@Metzger:2012fj], X-ray [@Siegel:2016qy; @Siegel:2016lq], and radio bands [@Hotokezaka:2016uq].
  -------------------------------------------------------------------------------------------
                 R                            D                            H
  ------------- ---------------------------- ---------------------------- -------------------------
  **NS–NS**     $\le$200 Mpc                 $\le$300 Mpc                 $\le$450 Mpc
  Model (a)     $0.007_{-0.003}^{+0.001}$    $0.024_{-0.007}^{+0.004}$    $0.077_{-0.028}^{+0.014}$
  Model (c)     $0.028_{-0.010}^{+0.005}$    $0.095_{-0.034}^{+0.017}$    $0.299_{-0.108}^{+0.054}$
  **NS–BH**     $\le$410 Mpc                 $\le$615 Mpc                 $\le$927 Mpc
  Model (a)     $0.060_{-0.022}^{+0.011}$    $0.20_{-0.07}^{+0.035}$      $0.572_{-0.206}^{+0.103}$
  Model (c)     $0.232_{-0.083}^{+0.042}$    $0.605_{-0.218}^{+0.109}$    $1.158_{-0.417}^{+0.208}$
  -------------------------------------------------------------------------------------------
: Short GRB rates in $\rm{yr^{-1}}$ (68% errors) within the volume corresponding to different distances: R = “limiting distance for binary inspiral detection by aLIGO, averaged over sky location and binary inclination”, D = “limiting distance for a face–on binary, averaged on sky location”, H = “limiting distance (*horizon*) for a face–on binary”. Limiting distances are obtained considering the aLIGO design sensitivity to NS–NS or NS–BH inspirals (top and bottom portions of the table, respectively). \[rate\]
There is a considerable number of predictions for the rate of SGRBs within the horizon of GW detectors in the literature. The rather wide range of predictions, extending from 0.1 Gpc$^{-3}$ yr$^{-1}$ to $>200$ Gpc$^{-3}$ yr$^{-1}$, can be tested and further constrained by forthcoming GW-SGRB associations [@Coward:2014yq; @Branchesi:2012rt]. If SGRBs have a jet, one must account for the collimation factor, i.e. multiply the rate by $f_b = \langle(1-\cos\theta_{\rm jet})^{-1}\rangle$, in order to compare such predictions with the compact binary merger rate. Once the luminosity function and rate of SGRBs are determined, the fraction of SGRBs above a limiting flux $P_{\rm min}$ within a given redshift $z$ is: $$N(<z)=\int_{0}^{z}dz'\,C(z')\int_{L \ge L(P_{\rm min},z')}\phi(L)\,dL$$ where $L(P_{\rm min},z')$ represents, at each redshift $z'$, the minimum luminosity corresponding to the flux limit $P_{\rm min}$ (e.g. of a particular GRB detector).
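The double integral above is straightforward to evaluate numerically. The sketch below uses the case (a) mean values for $\phi(L)$ (with arbitrary normalisation), while the comoving rate `C` and the limiting luminosity `Lmin` passed in the toy example are hypothetical stand-ins, not quantities from this work.

```python
import numpy as np
from scipy.integrate import quad

def phi(L, a1=0.53, a2=3.4, Lb=2.8):
    """Broken power-law luminosity function, L in units of 10^52 erg/s
    (case (a) mean values; normalisation arbitrary)."""
    return (L / Lb) ** (-a1) if L < Lb else (L / Lb) ** (-a2)

def n_below(z, C, Lmin, Lmax=1e4):
    """N(<z): SGRBs above the flux limit within redshift z, for a
    user-supplied comoving rate C(z') and limiting luminosity Lmin(z')."""
    inner = lambda zz: quad(phi, Lmin(zz), Lmax, limit=200)[0]
    return quad(lambda zz: C(zz) * inner(zz), 0.0, z)[0]

# toy example: constant comoving rate, flux limit translating into a
# limiting luminosity that grows with redshift (both purely illustrative)
toy = n_below(0.1, C=lambda z: 1.0, Lmin=lambda z: 1e-4 * (1 + z) ** 2)
```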
Fig. \[fig:rate\] shows the rate of SGRBs within a given redshift $z$ (zoomed up to $z<0.1$). The different curves are obtained using the formation rate and luminosity function by D14 and WP15 (shown by the dashed blue and dot-dashed cyan lines respectively) and the results of our case (a) (red solid line) and case (c) (triple dot–dashed orange line).
These curves represent the population of SGRBs detectable in $\gamma$–rays by current flying instruments. At redshifts as low as those shown in Fig. \[fig:rate\], even bursts populating the lowest end of the luminosity function can be observed above the flux limits of available GRB detectors (e.g. the *Fermi*/GBM). The redshift distribution that we derive (see Fig. \[fig:sfh\_comparison\]) rises, below the peak, in a way similar to those adopted in the literature (e.g. D14 and WP15). The lower rates predicted by our models with respect to those of D14 and WP15 are thus mainly due to our flatter luminosity function.
The distance within which aLIGO should have been able to detect NS–NS mergers during “O1” was estimated to be $60$–$80\,\rm{Mpc}$, which corresponds to redshift $z\sim$0.014–0.0185 (dark grey shaded region in Fig. \[fig:rate\]) [@The-LIGO-Scientific-Collaboration:2016ys]. We use this distance to pose an upper limit on the NS–NS merger rate (star symbol and arrow in Fig. \[fig:rate\]), given the non detection of any such events in the 48.6 days of “O1” data [@The-LIGO-Scientific-Collaboration:2016ys].
If SGRBs have a jet, and if the jet is preferentially launched in the same direction as the orbital angular momentum, the inspiral of the progenitor binary could be detected up to a larger distance [up to a factor $2.26$ larger, see @horizon_to_range_factor], because the binary is more likely to be face–on. Let us define the following three typical distances:
- we indicate by R (*range*) the limiting distance for the detection of a compact binary inspiral, averaged over all sky locations and over all binary inclinations with respect to the line of sight;
- we indicate by D (*distance to face–on*) the limiting distance for the detection of a *face–on* compact binary inspiral, averaged over all sky locations;
- we indicate by H (*horizon*) the maximum limiting distance for the detection of a *face–on* compact binary inspiral, i.e. the limiting distance at the best sky location.
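These three distances are related by simple geometric factors of the interferometer antenna pattern: the entries of Table \[rate\] are consistent with $D \approx 1.5\,R$ (the factor quoted from @schutz2011) and $H \approx 2.26\,R$. A small helper encoding these approximate relations:

```python
def detection_distances(R_mpc, face_on=1.5, horizon=2.26):
    """Approximate D and H from the sky/inclination-averaged range R,
    using the standard geometric factors D ~ 1.5 R and H ~ 2.26 R."""
    return {"R": R_mpc, "D": face_on * R_mpc, "H": horizon * R_mpc}
```

For example, `detection_distances(200)` reproduces (to rounding) the NS–NS row of Table \[rate\], and `detection_distances(410)` the NS–BH row.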
Table \[rate\] shows R, D and H for both NS–NS binaries and BH–NS binaries, corresponding to the design sensitivity of Advanced LIGO, together with the expected rates of SGRBs (according to our models (a) and (c)) within the corresponding volumes. The local rate of SGRBs predicted by our model (a) is $\rho_{0,a}=0.20^{+0.04}_{-0.07}$ yr$^{-1}$ Gpc$^{-3}$ and for model (c) $\rho_{0,c}=0.8^{+0.3}_{-0.15}$ yr$^{-1}$ Gpc$^{-3}$. The distance R for NS–NS binary inspiral at design aLIGO sensitivity, which corresponds to 200 Mpc ($z\approx 0.045$), is shown by the vertical light gray shaded region in Fig. \[fig:rate\].
Fig. \[fig:rate\] also shows the predictions of population synthesis models for double NS merger [@2015ApJ...806..263D] or the estimates based on the Galactic population of NS [@2015MNRAS.448..928K] which bracket the pink dashed region in Fig. \[fig:rate\].
By comparing the SGRB models in Fig. \[fig:rate\] with these putative progenitor curves, assuming that all NS–NS binary mergers yield a SGRB, we estimate the average jet opening angle of SGRBs as $\langle\theta_{\rm jet}\rangle\sim3^\circ -6^\circ$ in case (a) (solid red line in Fig. \[fig:rate\]). The local rates by D14 and WP15 instead lead to an average angle $\langle\theta_{\rm jet}\rangle\sim7^\circ -14^\circ$. These estimates represent minimum values of the average jet opening angle, because they assume that all NS–NS binary mergers lead to a SGRB. We note that our range is consistent with the very few SGRBs with an estimated jet opening angle: GRB 051221A ($\theta_{\rm jet}=7^\circ$, @2006ApJ...650..261S), GRB 090426 ($\theta_{\rm jet}=5^\circ$), GRB 111020A ($\theta_{\rm jet}=3^\circ-8^\circ$, @2012ApJ...756..189F), GRB 130603B ($\theta_{\rm jet}=4^\circ-8^\circ$, @2013ApJ...776...18F) and GRB 140903A [@Troja:2016kx]. Similarly to the population of long GRBs [@2012MNRAS.420..483G], the distribution of $\theta_{\rm jet}$ of SGRBs could be asymmetric with a tail extending towards large angles, i.e. consistent with the lower limits implied by the absence of jet breaks in some SGRBs.
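The beaming argument reduces to inverting the collimation factor $f_b = \langle(1-\cos\theta_{\rm jet})^{-1}\rangle$. The merger rate of 80 Gpc$^{-3}$ yr$^{-1}$ in the example below is an illustrative value within the population-synthesis range, not a result of this work.

```python
import numpy as np

def avg_jet_angle_deg(rate_sgrb, rate_merger):
    """Average jet half-opening angle (degrees), assuming every merger
    produces a SGRB: 1 - cos(theta) = rate_sgrb / rate_merger."""
    return float(np.degrees(np.arccos(1.0 - rate_sgrb / rate_merger)))
```

With an observed SGRB rate of 0.2 Gpc$^{-3}$ yr$^{-1}$ (the case (a) local rate) against a merger rate of 80 Gpc$^{-3}$ yr$^{-1}$, this gives an angle of roughly $4^\circ$, within the $3^\circ$–$6^\circ$ range quoted above.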
Conclusions
===========
We derived the luminosity function, redshift distribution and local rate of SGRBs. Similarly to previous works present in the literature, we fitted the properties of a synthetic SGRB population, described by a parametric luminosity function $\phi(L)$ and redshift distribution $\Psi(z)$, to a set of observational constraints derived from the populations of SGRBs detected by *Fermi* and *Swift*. Any acceptable model of the SGRB population must reproduce their prompt emission properties and their redshift distributions. Our approach features a series of improvements with respect to previous works present in the literature:
- (observer frame) constraints: we extend the classical set of observational constraints (peak flux and - for few events - redshift distribution) requiring that our model should reproduce the peak flux $P$, fluence $F$, peak energy $E_{\rm p,o}$ and duration $T$ distributions of 211 SGRBs with $P_{64}\geq 5\,\rm{ph\,s^{-1}\,cm^{-2}}$ as detected by the GBM instrument on board the *Fermi* satellite. The uniform response of the GBM over a wide energy range (10 keV – few MeV) ensures a good characterisation of the prompt emission spectral properties of the GRB population and, therefore, of the derived quantities, i.e. the peak flux and the fluence;
- (rest frame) constraints: we also require that our model reproduces the distributions of redshift, luminosity and energy of a small sample (11 events) of SGRBs with $P_{64}\geq 3.5\,\rm{ph\,s^{-1}\,cm^{-2}}$ (selected by D14). This sample is 70% complete in redshift and therefore it ensures a less pronounced impact of redshift–selection biases in the results;
- method: we parametrize $\Psi(z)$ as in Eq. 12 and derive the redshift distribution of SGRBs independently from their progenitor nature and their cosmic star formation history. Instead, the classical approach depends (i) on the assumption of a specific cosmic star formation history $\psi(z)$ and (ii) on the assumption of a delay time distribution $P(\tau)$;
- method: we derive our results assuming the existence of intrinsic $E_{\rm p}$–$L_{\rm iso}$ and $E_{\rm p}$–$E_{\rm iso}$ correlations in SGRBs (“case (a)”), similarly to what has been observed in the population of long GRBs. However, since evidence of the existence of such correlations in the population of SGRBs is still based on a limited number of bursts, we also explore the case of uncorrelated peak energy, luminosity and energy (“case (c)”).
Our main results are:
1. the luminosity function of SGRBs (case (a)), which we model with a broken power law, has a slope $\alpha_1 = 0.53^{+0.47}_{-0.14}$ (68% confidence interval) below the break luminosity of $L_{\rm b} = 2.8^{+0.6}_{-1.89}\times 10^{52}$ erg s$^{-1}$ and falls steeply above the break with $\alpha_2 = 3.4^{+0.3}_{-1.7}$. This solution is almost independent of the specific assumption on the minimum luminosity of the luminosity function (case (b)). Moreover, it implies an average isotropic equivalent luminosity $\left\langle L \right\rangle \approx 1.5\times 10^{52}\,\rm{erg\,s^{-1}}$ (or $3\times 10^{52}\,\rm{erg\,s^{-1}}$ in case (c)), which is much larger than e.g. $\left\langle L \right\rangle \approx 3\times 10^{50}\,\rm{erg\,s^{-1}}$ from D14 or $\left\langle L \right\rangle \approx 4.5\times 10^{50}\,\rm{erg\,s^{-1}}$ from WP15;
2. the redshift distribution of SGRBs peaks at $z\sim1.5$ and falls rapidly above the peak. This result is intermediate between those reported in the literature which assume either a constant large delay or a power law distribution favouring small delays. We find that our redshift distribution is consistent with the MD14 SFH retarded with a power law delay time distribution $\propto \tau^{-1}$;
3. as a by-product we find that, if SGRBs feature intrinsic $E_{\rm p}$–$L_{\rm iso}$ and $E_{\rm p}$–$E_{\rm iso}$ correlations, they could be slightly steeper than those derived with the current small sample of short bursts with redshift, e.g. [@Tsutsui:2013lr], but still consistent within their 68% confidence intervals;
4. if we assume that there are no correlations between $E_{\rm p}$ and $L$ ($E$) (case (c)), we similarly find that the luminosity function is flat at low luminosities and that the formation rate peaks at slightly larger redshift ($z\sim 2$);
5. we estimate the rate of SGRBs as a function of $z$ within the explorable volume of advanced LIGO and Virgo for the detection of double NS mergers or NS–BH mergers. Assuming the design aLIGO sensitivity averaged over sky location and over binary orbital plane orientation with respect to the line of sight, NS–NS mergers can be detected up to 200 Mpc (410 Mpc for NS–BH mergers). This is usually referred to as the detection *range* for these binaries. The rate of SGRBs within the corresponding volume is $\sim$7$\times10^{-3}$ yr$^{-1}$ (0.028 yr$^{-1}$ for the NS–BH merger distance), assuming the existence of the $E_{\rm p}$–$L_{\rm iso}$ and $E_{\rm p}$–$E_{\rm iso}$ correlations for the population of short bursts (model (a)). Rates larger by a factor $\sim 4$ are obtained if no correlation is assumed (model (c)). If binaries producing observable SGRBs are preferentially face–on (which is the case if the GRB jet is preferentially aligned with the orbital angular momentum), then the actual explorable volume extends to a somewhat larger distance [a factor of $\sim 1.5$ larger, see @schutz2011], increasing the rate of coincident SGRB–GW detections by about a factor of $3.4$ [@schutz2011];
6. we compare our SGRB rates with the rates of NS mergers derived from population synthesis models or from the statistics of Galactic binaries. This enables us to infer an average opening angle of the population of SGRBs of 3$^\circ$–6$^\circ$ (assuming that all SGRBs are produced by the NS–NS mergers) which is consistent with the few bursts with $\theta_{\rm jet}$ measured from the break of their afterglow light curve.
Our SGRB rate estimates might seem to compromise the prospects of a joint GW–SGRB observation in the near future. We note, though, that these rates refer to the prompt emission of SGRBs whose jets point towards the Earth. SGRBs not pointing at us can still be seen as “orphan” afterglows (i.e. afterglows without an associated prompt emission; see e.g. @Ghirlanda:2015fk [@Rhoads1997] for the population of long GRBs), especially if the afterglow emission is poorly collimated or even isotropic [e.g. @Ciolfi:2015kx]. The luminosity of the afterglow correlates with the jet kinetic energy, which is thought to be proportional to the prompt luminosity. Point 1 above shows that the average luminosity in the prompt emission, as implied by our result, is higher by nearly two orders of magnitude than previous findings. This enhances the chance of observing an orphan afterglow in association with a GW event (e.g. @Metzger:2015fk). Efforts should go in the direction of finding and identifying such orphan afterglows as counterparts of GW events.
Acknowledgments {#acknowledgments .unnumbered}
===============
We acknowledge the financial support of the UnivEarthS Labex program at Sorbonne Paris Cité (ANR-10-LABX-0023 and ANR-11-IDEX-0005-02) and of the “programme PTV de l’Observatoire de Paris”, and thank GEPI for financial support and kind hospitality during the implementation of part of this work. R.C. is supported by MIUR FIR Grant No. RBFR13QJYF. We acknowledge ASI grant I/004/11/1. We thank the referee for useful comments.
[^1]: E–mail:[email protected]
[^2]: All these rates are not corrected for the collimation angle, i.e. they represent the fraction of bursts whose jets are pointed towards the Earth, which can be detected as $\gamma$–ray prompt GRBs.
[^3]: For the sake of neatness, throughout this work we will sometimes drop the “$\rm{iso}$” subscript, so that $L_{\rm{iso}}$ and $E_{\rm{iso}}$ will be equivalently written as $L$ and $E$ respectively. For the same reason, the peak energy $E_{\rm peak,obs}$ ($E_{\rm peak,rest}$) of the $\nu F(\nu)$ spectrum in the observer frame (in the local cosmological rest frame) will be sometimes written as $E_{\rm p,o}$ ($E_{\rm p}$).
[^4]: <https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html>
[^5]: The assumption of a spectrum is required to convert the bolometric flux into a characteristic energy range for comparison with real bursts.
[^6]: Here we consider as a constraint the population of *Fermi*/GBM GRBs. [@2011MNRAS.415.3153N] showed that the *Fermi* SGRB population has prompt emission properties (peak flux, fluence and duration distributions) similar to those of BATSE SGRBs.
[^7]: The exact peak is not analytical, but a good approximation is $z_{\rm{peak}} \approx z_{\rm{p}}\left\lbrace p_2\left[1 + 1/\left(p_1\,z_{\rm{p}}\right)\right]-1\right\rbrace^{-1/p_2}$.
[^8]: We also tested that our results are not sensitive to a slightly different choice of the spectral parameters, i.e. low and high energy spectral index $-1.0$ and $-3.0$ respectively.
[^9]: This might seem a rough assumption, since SGRBs sometimes show multi peaked light curves. Statistical studies, however, show that the majority of SGRB lightcurves are composed of few peaks, with separation much smaller than the average duration (e.g. [@McBreen:2001fk]), which justifies the use of this assumption in a statistical sense.
[^10]: For parameters corresponding to slopes, like $m_{\rm{Y}}$ and $m_{\rm{A}}$, we actually displace the corresponding angle $\phi=\arctan(m)$, otherwise a uniform sampling of the displacement would introduce a bias towards high (i.e. steep) slopes.
[^11]: This is clearly only an approximate likelihood, since it implies an assumption of independence of each distribution from the others, but we tested that its maximisation gives consistent results.
[^12]: `getDist` is a python package written by Antony Lewis of the University of Sussex. It is a set of tools to analyse MCMC chains and to extract posterior density distributions using Kernel Density Estimation (KDE) techniques. Details can be found at <http://cosmologist.info/notes/GetDist.pdf>.
[^13]: A third event, LVT151012, was reported in [@2016arXiv160604856T] but with a small associated significance implying a probability of being of astrophysical origin of $\sim$87%.
---
abstract: 'We obtain necessary and sufficient conditions for the existence of strictly stationary solutions of multivariate ARMA equations with independent and identically distributed noise. For general ARMA$(p,q)$ equations these conditions are expressed in terms of the characteristic polynomials of the defining equations and moments of the driving noise sequence, while for $p=1$ an additional characterization is obtained in terms of the Jordan canonical decomposition of the autoregressive matrix, the moving average coefficient matrices and the noise sequence. No a priori assumptions are made on either the driving noise sequence or the coefficient matrices.'
author:
- 'Peter J. Brockwell[^1]'
- 'Alexander Lindner[^2]'
- '[Bernd Vollenbröker]{}[^3]'
title: 'Strictly stationary solutions of multivariate ARMA equations with i.i.d. noise'
---
Introduction
============
Let $m,d\in {\mathbb{N}}=\{1,2,\ldots\}$, $p,q\in {\mathbb{N}}_0 = {\mathbb{N}}\cup \{0\}$, $(Z_t)_{t\in {\mathbb{Z}}}$ be a $d$-variate noise sequence of random vectors defined on some probability space $(\Omega,{\mathcal{F}},\mathbb{P})$ and $\Psi_1,\ldots, \Psi_p \in \mathbb{C}^{m\times m}$ and $\Theta_0,\ldots, \Theta_q \in \mathbb{C}^{m\times d}$ be deterministic complex-valued matrices. Then any $m$-variate stochastic process $(Y_t)_{t\in {\mathbb{Z}}}$ defined on the same probability space $(\Omega,{\mathcal{F}},\mathbb{P})$ which satisfies almost surely $$\label{eqpq} Y_t-\Psi_1 Y_{t-1}-\ldots-\Psi_p Y_{t-p}=\Theta_0Z_t+\ldots+\Theta_qZ_{t-q},\quad
t\in\mathds{Z},$$ is called a solution of the ARMA$(p,q)$ equation (autoregressive moving average equation of autoregressive order $p$ and moving average order $q$). Such a solution is often called a VARMA (vector ARMA) process to distinguish it from the scalar case, but we shall simply use the term ARMA throughout. Denoting the identity matrix in ${\mathbb{C}}^{m\times m}$ by ${\rm Id}_m$, the [*characteristic polynomials*]{} $P(z)$ and $Q(z)$ of the ARMA$(p,q)$ equation are defined as $$\label{eq-char}
P(z) := \mbox{\rm Id}_m - \sum_{k=1}^p \Psi_k z^k \quad \mbox{and}
\quad Q(z) := \sum_{k=0}^q \Theta_k z^k\quad \mbox{for}\quad z\in
{\mathbb{C}}.$$ With the aid of the backwards shift operator $B$, equation can be written more compactly in the form $$\label{ARMA}
P(B) Y_t = Q(B) Z_t, \quad t\in {\mathbb{Z}}.$$
There is evidence to show that, although VARMA($p,q$) models with $q>0$ are more difficult to estimate than VARMA$(p,0)$ (vector autoregressive) models, significant improvement in forecasting performance can be achieved by allowing the moving average order $q$ to be greater than zero. See, for example, Athanasopoulos and Vahid [@AV], where such improvement is demonstrated for a variety of macroeconomic time series.
Much attention has been paid to [*weak ARMA processes*]{}, i.e. weakly stationary solutions to (\[eqpq\]) if $(Z_t)_{t\in {\mathbb{Z}}}$ is a weak white noise sequence. Recall that a ${\mathbb{C}}^r$-valued process $(X_t)_{t\in {\mathbb{Z}}}$ is [*weakly stationary*]{} if each $X_t$ has finite second moment, and if $\mathbb{E} X_t$ and ${{\rm Cov\,}}(X_t,
X_{t+h})$ do not depend on $t\in {\mathbb{Z}}$ for each $h\in {\mathbb{Z}}$. If additionally every component of $X_t$ is uncorrelated with every component of $X_{t'}$ for $t\neq t'$, then $(X_t)_{t\in {\mathbb{Z}}}$ is called [*weak white noise*]{}. In the case when $m=d=1$ and $Z_t$ is weak white noise having non-zero variance, it can easily be shown using spectral analysis, see e.g. Brockwell and Davis [@BD], Problem 4.28, that a weak ARMA process exists if and only if the rational function $z\mapsto Q(z) / P(z)$ has only removable singularities on the unit circle in ${\mathbb{C}}$. For higher dimensions, it is well known that a sufficient condition for weak ARMA processes to exist is that the polynomial $z \mapsto \det P(z)$ has no zeroes on the unit circle (this follows as in Theorem 11.3.1 of Brockwell and Davis [@BD], by developing $P^{-1}(z) = (\det P(z))^{-1}
\mbox{Adj} (P(z))$, where $\mbox{Adj}( P(z))$ denotes the adjugate matrix of $P(z)$, into a Laurent series which is convergent in a neighborhood of the unit circle). However, to the best of our knowledge necessary and sufficient conditions have not been given in the literature so far. We shall obtain such a condition in terms of the matrix rational function $z\mapsto P^{-1}(z) Q(z)$ in Theorem \[thm-4\], the proof being an easy extension of the corresponding one-dimensional result.
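Such Laurent expansions are easy to probe numerically: sampling $P^{-1}(z)$ on the unit circle and taking a discrete Fourier transform recovers the coefficients. The following sketch (a made-up $2\times 2$ example, not taken from the paper) uses $P(z) = {\rm Id}_2 - \Psi_1 z$ with spectral radius of $\Psi_1$ below one, so the coefficients must come out as $\Psi_1^j$ for $j\geq 0$ and vanish for $j<0$:

```python
import numpy as np

# Illustrative 2x2 example: P(z) = Id - Psi1 * z with spectral radius of
# Psi1 equal to 0.6 < 1, so det P(z) != 0 on the unit circle and
# P^{-1}(z) = sum_{j>=0} Psi1^j z^j in a neighbourhood of the circle.
Psi1 = np.array([[0.5, 0.2],
                 [0.1, 0.4]])
N = 256
z = np.exp(2j * np.pi * np.arange(N) / N)

# Sample P^{-1}(z) at N equispaced points on the unit circle.
samples = np.array([np.linalg.inv(np.eye(2) - Psi1 * zk) for zk in z])

# M_j = (1/2pi) \int P^{-1}(e^{it}) e^{-ijt} dt, approximated by a DFT;
# numpy's forward FFT uses exactly the e^{-2 pi i jk/N} convention needed.
M = np.fft.fft(samples, axis=0) / N

# Coefficients for j >= 0 are Psi1^j; "negative" (wrapped) indices vanish.
for j in range(5):
    assert np.allclose(M[j], np.linalg.matrix_power(Psi1, j), atol=1e-10)
assert np.allclose(M[N - 1], 0, atol=1e-10)
```

With $N = 256$ sample points the aliasing error is of order $\rho(\Psi_1)^N$ and hence negligible here.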
Weak ARMA processes, by definition, are restricted to have finite second moments. However, financial time series often exhibit apparent heavy-tailed behaviour with asymmetric marginal distributions, so that second-order properties are inadequate to account for the data. To deal with such phenomena we focus in this paper on [*strict ARMA processes*]{}, by which we mean strictly stationary solutions of (\[eqpq\]) when $(Z_t)_{t\in {\mathbb{Z}}}$ is supposed to be an independent and identically distributed (i.i.d.) sequence of random vectors, not necessarily with finite variance. A sequence $(X_t)_{t\in {\mathbb{Z}}}$ is [*strictly stationary*]{} if all its finite dimensional distributions are shift invariant. Much less is known about strict ARMA processes, and it was shown only recently for $m=d=1$ in Brockwell and Lindner [@BL2] that for i.i.d. non-deterministic noise $(Z_t)_{t\in {\mathbb{Z}}}$, a strictly stationary solution to (\[eqpq\]) exists if and only if $Q(z)/P(z)$ has only removable singularities on the unit circle and $Z_0$ has finite log moment, or if $Q(z)/P(z)$ is a polynomial. For higher dimensions, while it is known that finite log moment of $Z_0$ together with $\det P(z) \neq 0$ for $|z|=1$ is [*sufficient*]{} for a strictly stationary solution to exist, by the same arguments used for weakly stationary solutions, necessary and sufficient conditions have not been available so far, and we shall obtain a complete solution to this question in Theorem \[thm-5\], thus generalizing the results of [@BL2] to higher dimensions.
A related question was considered by Bougerol and Picard [@BP] who, using their powerful results on random recurrence equations, showed in Theorem 4.1 of [@BP] that if $\mathbb{E} \log^+\|Z_0\| < \infty$ and the characteristic polynomials are left-coprime, meaning that the only common left-divisors of $P(z)$ and $Q(z)$ are unimodular (see Section \[S7\] for the precise definitions), then a non-anticipative strictly stationary solution to (\[eqpq\]) exists if and only if $\det P(z)
\neq 0$ for $|z|\leq 1$. Observe that for the characterization of the existence of strict (not necessarily non-anticipative) ARMA processes obtained in the present paper, we shall not make any a priori assumptions on log moments of the noise sequence or on left-coprimeness of the characteristic polynomials, but rather obtain related conditions as parts of our characterization. As an application of our main results, we shall then obtain a slight extension of Theorem 4.1 of Bougerol and Picard [@BP] in Theorem \[cor-BP\], by characterizing all non-anticipative strictly stationary solutions to (\[eqpq\]) without any moment assumptions, however still assuming left-coprimeness of the characteristic polynomials.
The paper is organized as follows. In Section \[S2\] we state the main results of the paper. Theorem \[thm-main\] gives necessary and sufficient conditions for the multivariate ARMA$(1,q)$ model $$\label{eq1q}
Y_t -
\Psi_1 Y_{t-1} = \sum_{k=0}^q \Theta_k Z_{t-k}, \quad t\in {\mathbb{Z}},$$ where $(Z_t)_{t\in {\mathbb{Z}}}$ is an i.i.d. sequence, to have a strictly stationary solution. Elementary considerations will show that the question of strictly stationary solutions may be reduced to the corresponding question when $\Psi_1$ is assumed to be in Jordan block form, and Theorem \[thm-main\] gives a characterization of the existence of strictly stationary ARMA$(1,q)$ processes in terms of the Jordan canonical decomposition of $\Psi_1$ and properties of $Z_0$ and the coefficients $\Theta_k$. An explicit solution of (\[eq1q\]), assuming its existence, is also derived and the question of uniqueness of this solution is addressed.
Strict ARMA$(p,q)$ processes are addressed in Theorem \[thm-5\]. Since every $m$-variate ARMA$(p,q)$ process can be expressed in terms of a corresponding $mp$-variate ARMA$(1,q)$ process, questions of existence and uniqueness can, in principle, be resolved by Theorem \[thm-main\]. However, since the Jordan canonical form of the corresponding $mp\times mp$-matrix $\underline{\Psi}_1$ in the corresponding higher-dimensional ARMA$(1,q)$ representation is in general difficult to handle, another more compact characterization is derived in Theorem \[thm-5\]. This characterization is given in terms of properties of the matrix rational function $P^{-1}(z) Q(z)$ and finite log moments of certain linear combinations of the components of $Z_0$, extending the corresponding condition obtained in [@BL2] for $m=d=1$ in a natural way. Although in the statement of Theorem \[thm-5\] no transformation to Jordan canonical forms is needed, its proof makes fundamental use of Theorem \[thm-main\].
Theorem \[thm-4\] deals with the corresponding question for weak ARMA$(p,q)$ processes. The proofs of Theorems \[thm-main\], \[thm-4\] and \[thm-5\] are given in Sections \[S5\], \[S4\] and \[S6\], respectively. The proof of Theorem \[thm-5\] makes crucial use of Theorems \[thm-main\] and \[thm-4\].
The main results are further discussed in Section \[S7\] and, as an application, the aforementioned characterization of non-anticipative strictly stationary solutions is obtained in Theorem \[cor-BP\], generalizing slightly the result of Bougerol and Picard [@BP].
Throughout the paper, vectors will be understood as column vectors and $e_i$ will denote the $i^{th}$ unit vector in ${\mathbb{C}}^m$. The zero matrix in ${\mathbb{C}}^{m\times r}$ is denoted by $0_{m,r}$ or simply $0$, the zero vector in ${\mathbb{C}}^r$ by $0_r$ or simply $0$. The transpose of a matrix $A$ is denoted by $A^T$, and its complex conjugate transpose matrix by $A^* = \overline{A}^T$. By $\| \cdot \|$ we denote an unspecific, but fixed vector norm on ${\mathbb{C}}^s$ for $s\in {\mathbb{N}}$, as well as the corresponding matrix norm $\|A\| = \sup_{x\in {\mathbb{C}}^s, \|x\|=1}
\|Ax\|$. We write $\log^+ (x) := \log \max \{1,x\}$ for $x\in {\mathbb{R}}$, and denote by $\mathbb{P}-\lim$ limits in probability.
Main results {#S2}
============
Theorems \[thm-main\] and \[thm-5\] give necessary and sufficient conditions for the ARMA$(1,q)$ equation (\[eq1q\]) and the ARMA$(p,q)$ equation (\[eqpq\]), respectively, to have a strictly stationary solution. In Theorem \[thm-main\], these conditions are expressed in terms of the i.i.d. noise sequence $(Z_t)_{t\in
\mathbb{Z}}$, the coefficient matrices $\Theta_0, \ldots, \Theta_q$ and the Jordan canonical decomposition of $\Psi_1$, while in Theorem \[thm-5\] they are given in terms of the noise sequence and the characteristic polynomials $P(z)$ and $Q(z)$ as defined in (\[eq-char\]).
As background for Theorem \[thm-main\], suppose that $\Psi_1\in
{\mathbb{C}}^{m\times m}$ and choose a (necessarily non-singular) matrix $S\in {\mathbb{C}}^{m\times m}$ such that $S^{-1}\Psi_1 S$ is in Jordan canonical form. Suppose also that $S^{-1}\Psi_1 S$ has $H\in {\mathbb{N}}$ Jordan blocks, $\Phi_1,\ldots,\Phi_H$, the $h^{th}$ block beginning in row $r_h$, where $r_1:=1<r_2<\cdots<r_{H}< m+1=:r_{H+1}.$ A Jordan block with associated eigenvalue $\lambda$ will always be understood to be of the form $$\begin{pmatrix} \lambda & & & 0 \\
1 & \lambda & & \\
& \ddots & \ddots & \\
0 & & 1 & \lambda
\end{pmatrix} \label{eq-Jordanblock2}$$ i.e. the entries 1 are below the main diagonal.
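For concrete matrices the decomposition $S^{-1}\Psi_1 S$ can be computed symbolically. The sketch below (the matrix is chosen purely for illustration) uses sympy, whose `jordan_form` places the 1's *above* the diagonal, i.e. the transpose of the convention fixed above; the two forms are conjugate via the reversal permutation within each block, which does not affect any of the existence arguments.

```python
import sympy as sp

# Made-up example: characteristic polynomial (lambda - 1)^2 with a single
# 2x2 Jordan block for the eigenvalue 1 (so H = 1, r_1 = 1 here).
Psi1 = sp.Matrix([[2, 1],
                  [-1, 0]])
S, J = Psi1.jordan_form()            # Psi1 = S * J * S^{-1}

# sympy's block has the 1 on the superdiagonal.
assert J == sp.Matrix([[1, 1], [0, 1]])
assert sp.simplify(S * J * S.inv() - Psi1) == sp.zeros(2, 2)
```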
Observe that (\[eq1q\]) has a strictly stationary solution $(Y_t)_{t\in {\mathbb{Z}}}$ if and only if the corresponding equation for $X_t:=S^{-1}Y_t$ namely $$\label{diag1q}X_t-S^{-1}\Psi_1 SX_{t-1}=\sum_{j=0}^q S^{-1}\Theta_jZ_{t-j},\quad t\in {\mathbb{Z}},$$ has a strictly stationary solution. This will be the case only if the equation for the $h^{th}$ block, $$X_t^{(h)}:=I_h X_t, \quad t\in {\mathbb{Z}}, \label{eq-Xt}$$ where $I_h$ is the $(r_{h+1}-r_h)\times m$ matrix with $(i,j)$ components, $$\label{def-Is}
I_h(i,j)=\begin{cases}1,&{\rm if}~j=i+r_h-1,\cr
0,&{\rm otherwise},\cr
\end{cases}$$ has a strictly stationary solution for each $h=1,\ldots,H.$ But these equations are simply $$\label{block1q}X_t^{(h)}-\Phi_h X_{t-1}^{(h)}=\sum_{j=0}^q I_h S^{-1}\Theta_jZ_{t-j}, \quad t\in {\mathbb{Z}}, \quad
h=1,\ldots,H,$$ where $\Phi_h$ is the $h^{th}$ Jordan block of $S^{-1}\Psi_1 S$.
Conversely if (\[block1q\]) has a strictly stationary solution ${X'}^{(h)}$ for each $h\in\{1,\ldots,H\}$, then we shall see from the proof of Theorem \[thm-main\] that there exist (possibly different if $|\lambda_h|=1$) strictly stationary solutions $X^{(h)}$ of (\[block1q\]) for each $h\in \{1,\ldots, H\}$, such that $$Y_t:=S(X_t^{(1)T}, \ldots, X_t^{(H)T})^T, \quad t\in {\mathbb{Z}}, \label{eq-solution}$$ is a strictly stationary solution of (\[eq1q\]).
Existence and uniqueness of a strictly stationary solution of (\[eq1q\]) is therefore equivalent to the existence and uniqueness of a strictly stationary solution of the equations (\[block1q\]) for each $h\in\{1,\ldots,H\}$. The necessary and sufficient condition for each one will depend on the value of the eigenvalue $\lambda_h$ associated with $\Phi_h$ and in particular on whether (a) $|\lambda_h|\in(0,1)$, (b) $|\lambda_h|>1$, (c) $|\lambda_h|=1$ and $\lambda_h\ne 1$, (d) $\lambda_h=1$ and (e) $\lambda_h=0$. These cases will be addressed separately in the proof of Theorem \[thm-main\], which is given in Section \[S5\]. The aforementioned characterization in terms of the Jordan decomposition of $\Psi_1$ now reads as follows.
[\[Strict ARMA$(1,q)$ processes\]]{} \[thm-main\]\
Let $m,d\in {\mathbb{N}}$, $q\in {\mathbb{N}}_0$, and let $(Z_t)_{t\in {\mathbb{Z}}}$ be an i.i.d. sequence of ${\mathbb{C}}^d$-valued random vectors. Let $\Psi_1 \in {\mathbb{C}}^{m\times m}$ and $\Theta_0, \ldots,
\Theta_q \in {\mathbb{C}}^{m\times d}$ be complex-valued matrices. Let $S\in
{\mathbb{C}}^{m\times m}$ be an invertible matrix such that $S^{-1} \Psi_1 S$ is in Jordan block form as above, with $H$ Jordan blocks $\Phi_h$, $h\in \{1,\ldots, H\}$, and associated eigenvalues $\lambda_h$, $h\in \{1,\ldots, H\}$. Let $r_1, \ldots, r_{H+1}$ be given as above and $I_h$ as defined by (\[def-Is\]). Then the ARMA$(1,q)$ equation (\[eq1q\]) has a strictly stationary solution $Y$ if and only if the following statements (i)–(iii) hold:
1. For every $h\in \{1,\ldots, H\}$ such that $|\lambda_h|
\neq 0,1$, $$\mathbb{E} \log^+ \left\| \left( \sum_{k=0}^q \Phi_{h}^{q-k}
I_{h}S^{-1}\Theta_k\right)Z_0 \right\| < \infty.\label{bed1a}$$
2. For every $h\in \{1,\ldots, H\}$ such that $|\lambda_h|=1$, but $\lambda_h \neq 1$, there exists a constant $\alpha_h \in {\mathbb{C}}^{r_{h+1}-r_h}$ such that $$\label{bed2a}
\left(\sum_{k=0}^q \Phi_{h}^{q-k} I_{h} S^{-1} \Theta_k\right)Z_0 =
{\alpha}_h \; \; \mbox{\rm a.s.}$$
3. For every $h\in \{1,\ldots, H\}$ such that $\lambda_h=1$, there exists a constant $\alpha_h = (\alpha_{h,1},
\ldots,$ $\alpha_{h,r_{h+1}-r_h})^T \in {\mathbb{C}}^{r_{h+1}-r_h}$ such that $\alpha_{h,1} = 0$ and (\[bed2a\]) holds.
If these conditions are satisfied, then a strictly stationary solution to (\[eq1q\]) is given by (\[eq-solution\]) with $$\label{eq-solution1}
X_t^{(h)} := \begin{cases} \sum_{j=0}^{\infty} \Phi_{h}^{j-q} \left(
\sum_{k=0}^{j\wedge q} \Phi_{h}^{q-k} I_{h} S^{-1}
\Theta_k\right) Z_{t-j} , & |\lambda_h| \in (0,1), \\
-\sum_{j=1-q}^{\infty} \Phi_{h}^{-j-q} \left( \sum_{k=(1-j)\vee
0}^{q} \Phi_{h}^{q-k} I_{h} S^{-1}
\Theta_k\right) Z_{t+j} , & |\lambda_h| > 1\\
\sum_{j=0}^{m+q-1} \left( \sum_{k=0}^{j\wedge q} \Phi_h^{j-k} I_h
S^{-1}\Theta_k \right)Z_{t-j}, & \lambda_h = 0,\\
f_h + \sum_{j=0}^{q-1} \left
(\sum_{k=0}^j\Phi_{h}^{j-k}I_{h}S^{-1}\Theta_k\right) Z_{t-j}, &
|\lambda_h| =1,\end{cases}$$ where $f_h \in
{\mathbb{C}}^{r_{h+1}-r_h}$ is a solution to $$\label{eq-solution6}
(\mbox{\rm Id}_{h} - \Phi_h)f_h = \alpha_h,$$ which exists for $\lambda_h=1$ by (iii) and, for $|\lambda_h|=1,\lambda_h\ne 1$, by the invertibility of $(\mbox{\rm Id}_{h} - \Phi_h)$. The series in (\[eq-solution1\]) converge a.s. absolutely.\
If the necessary and sufficient conditions stated above are satisfied, then, provided the underlying probability space is rich enough to support a random variable which is uniformly distributed on $[0,1)$ and independent of $(Z_t)_{t\in {\mathbb{Z}}}$, the solution given by (\[eq-solution\]) and (\[eq-solution1\]) is the unique strictly stationary solution of (\[eq1q\]) if and only if $|\lambda_h| \neq 1$ for all $h\in \{1,\ldots,H\}$.
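To make the $|\lambda_h| \in (0,1)$ branch of the stated solution formula concrete, here is a scalar numerical check (all parameter values are illustrative; $m=d=1$ and $S={\rm Id}$, so the single "Jordan block" is just the number $\Phi$): the truncated series must satisfy the ARMA$(1,1)$ recursion $Y_t - \Phi Y_{t-1} = \Theta_0 Z_t + \Theta_1 Z_{t-1}$.

```python
import numpy as np

# Scalar sketch of the |lambda| in (0,1) case: Phi = 0.7, q = 1,
# Theta_0 = 1, Theta_1 = 0.5 (made-up values), i.i.d. standard normal noise.
rng = np.random.default_rng(0)
Phi, Theta, q, J = 0.7, [1.0, 0.5], 1, 200    # J = truncation level
Z = rng.standard_normal(300 + J)

def Y(t):
    # Y_t = sum_{j>=0} Phi^(j-q) * ( sum_{k=0}^{min(j,q)} Phi^(q-k) Theta_k ) * Z_{t-j}
    return sum(Phi ** (j - q)
               * sum(Phi ** (q - k) * Theta[k] for k in range(min(j, q) + 1))
               * Z[t - j]
               for j in range(J))

# The series solution satisfies the recursion up to a Phi^J truncation error.
for t in range(J + q, J + q + 50):
    lhs = Y(t) - Phi * Y(t - 1)
    rhs = Theta[0] * Z[t] + Theta[1] * Z[t - 1]
    assert abs(lhs - rhs) < 1e-9
```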
Special cases of Theorem \[thm-main\] will be treated in Corollaries \[cor-1\], \[cor-2\] and Remark \[rem-2\].
It is well known that every ARMA$(p,q)$ process can be embedded into a higher dimensional ARMA$(1,q)$ process as specified in Proposition \[thm-2\] of Section \[S6\]. Hence, in principle, the questions of existence and uniqueness of strictly stationary ARMA$(p,q)$ processes can be reduced to Theorem \[thm-main\]. However, it is generally difficult to obtain the Jordan canonical decomposition of the $(mp\times mp)$-dimensional matrix $\underline{\Phi}$ defined in Proposition \[thm-2\], which is needed to apply Theorem \[thm-main\]. Hence, a more natural approach is to express the conditions in terms of the characteristic polynomials $P(z)$ and $Q(z)$ of the ARMA$(p,q)$ equation (\[eqpq\]). Observe that $z\mapsto \det P(z)$ is a polynomial in $z\in {\mathbb{C}}$, not identical to the zero polynomial. Hence $P(z)$ is invertible except for a finite number of $z$. Also, denoting the adjugate matrix of $P(z)$ by $\mbox{Adj} (P(z))$, it follows from Cramér’s inversion rule that the inverse $P^{-1}(z)$ of $P(z)$ may be written as $$P^{-1}(z) = (\det P(z))^{-1} \mbox{Adj} (P(z))$$ which is a ${\mathbb{C}}^{m\times m}$-valued rational function, i.e. all its entries are rational functions. For a general matrix-valued rational function $z\mapsto M(z)$ of the form $M(z) = P^{-1}(z) \widetilde{Q}(z)$ with some matrix polynomial $\widetilde{Q}(z)$, the [*singularities*]{} of $M(z)$ are the zeroes of $\det P(z)$, and such a singularity, $z_0$ say, is [*removable*]{} if all entries of $M(z)$ have removable singularities at $z_0$. Further observe that if $M(z)$ has only removable singularities on the unit circle in ${\mathbb{C}}$, then $M(z)$ can be expanded in a Laurent series $M(z) =
\sum_{j=-\infty}^\infty M_j z^j$, convergent in a neighborhood of the unit circle. The characterization for the existence of strictly stationary ARMA$(p,q)$ processes now reads as follows.
[\[Strict ARMA$(p,q)$ processes\]]{} \[thm-5\]\
Let $m,d, p\in {\mathbb{N}}$, $q\in {\mathbb{N}}_0$, and let $(Z_t)_{t\in {\mathbb{Z}}}$ be an i.i.d. sequence of ${\mathbb{C}}^d$-valued random vectors. Let $\Psi_1,
\ldots, \Psi_p \in {\mathbb{C}}^{m\times m}$ and $\Theta_0, \ldots, \Theta_q
\in {\mathbb{C}}^{m\times d}$ be complex-valued matrices, and define the characteristic polynomials as in (\[eq-char\]). Define the linear subspace $$K := \{ a \in {\mathbb{C}}^d : \mbox{the distribution of}\; a^* Z_0 \; \mbox{is degenerate to a Dirac
measure}\}$$ of ${\mathbb{C}}^d$, denote by $K^\perp$ its orthogonal complement in ${\mathbb{C}}^d$, and let $s:= \dim K^\perp$ be the vector space dimension of $K^\perp$. Let $U\in {\mathbb{C}}^{d\times d}$ be unitary such that $U \, K^\perp = {\mathbb{C}}^s \times \{0_{d-s}\}$ and $U\, K =
\{0_s\}\times {\mathbb{C}}^{d-s}$, and define the ${\mathbb{C}}^{m\times d}$-valued rational function $M(z)$ by $$z \mapsto M(z) := P^{-1}(z) Q(z) U^* \left(
\begin{array}{ll} \mbox{\rm Id}_s & 0_{s,d-s}
\\ 0_{d-s,s} & 0_{d-s,d-s} \end{array} \right) .
\label{meromorphic}$$ Then there is a constant $u\in {\mathbb{C}}^{d-s}$ and a ${\mathbb{C}}^{s}$-valued i.i.d. sequence $(w_t)_{t\in {\mathbb{Z}}}$ such that $$\label{eq-uw}
U Z_t = \left( \begin{array}{c} w_t \\ u \end{array} \right)\quad
\mbox{a.s.} \quad \forall\; t\in {\mathbb{Z}},$$ and the distribution of $b^* w_0$ is not degenerate to a Dirac measure for any $b\in {\mathbb{C}}^s\setminus \{0\}$. Further, a strictly stationary solution to the ARMA$(p,q)$ equation (\[eqpq\]) exists if and only if the following statements (i)–(iii) hold:
1. All singularities on the unit circle of the meromorphic function $M(z)$ are removable.
2. If $M(z) = \sum_{j=-\infty}^\infty M_j z^j$ denotes the Laurent expansion of $M$ in a neighbourhood of the unit circle, then $$\label{eq-logfinite}
{\mathbb{E}}\log^+ \| M_j UZ_0\| < \infty \quad \forall\; j \in \{ mp + q -
p +1 , \ldots, mp+q\} \cup \{-p,\ldots, -1\}.$$
3. There exist $v\in {\mathbb{C}}^s$ and $g\in {\mathbb{C}}^m$ such that $g$ is a solution to the linear equation $$\label{eq-g}
P(1) g = Q(1) U^* (v^T, u^T)^T.$$
Further, if (i) above holds, then condition (ii) can be replaced by
1. If $M(z) = \sum_{j=-\infty}^\infty M_j z^j$ denotes the Laurent expansion of $M$ in a neighbourhood of the unit circle, then $\sum_{j=-\infty}^\infty M_j U Z_{t-j}$ converges almost surely absolutely for every $t\in {\mathbb{Z}}$,
and condition (iii) can be replaced by
1. For all $v\in {\mathbb{C}}^s$ there exists a solution $g=g(v)$ to the linear equation (\[eq-g\]).
If the conditions (i)–(iii) given above are satisfied, then a strictly stationary solution $Y$ of the ARMA$(p,q)$ equation (\[eqpq\]) is given by $$\label{eq-Y}
Y_t = g + \sum_{j=-\infty}^\infty M_j (UZ_{t-j} - (v^T, u^T)^T),
\quad t\in {\mathbb{Z}},$$ the series converging almost surely absolutely. Further, provided that the underlying probability space is rich enough to support a random variable which is uniformly distributed on $[0,1)$ and independent of $(Z_t)_{t\in {\mathbb{Z}}}$, the solution given by (\[eq-Y\]) is the unique strictly stationary solution of (\[eqpq\]) if and only if $\det P(z) \neq 0$ for all $z$ on the unit circle.
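A minimal scalar illustration of condition (i), with numbers chosen purely for illustration: for $P(z) = (1-z)(1-0.5z)$ and $Q(z) = 1-z$, $\det P$ vanishes at $z=1$ on the unit circle, yet the singularity of $M(z) = Q(z)/P(z) = 1/(1-0.5z)$ there is removable, with Laurent coefficients $M_j = 0.5^j$ for $j\geq 0$:

```python
import numpy as np

# Scalar case m = d = 1 with non-degenerate noise, so U = 1 and M = Q/P.
P = np.polynomial.Polynomial([1.0, -1.5, 0.5])   # (1 - z)(1 - 0.5 z)
Q = np.polynomial.Polynomial([1.0, -1.0])        # 1 - z

assert abs(P(1.0)) < 1e-12                       # zero of det P on |z| = 1

# Power-series coefficients of Q/P from Q = P * M, i.e.
# M_j = ( q_j - sum_{k=1}^{min(j,2)} p_k M_{j-k} ) / p_0.
M = np.zeros(10)
pc, qc = P.coef, np.append(Q.coef, np.zeros(8))
for j in range(10):
    M[j] = (qc[j] - sum(pc[k] * M[j - k] for k in range(1, min(j, 2) + 1))) / pc[0]

# The factor (1 - z) cancels: M_j = 0.5^j, a convergent causal expansion.
assert np.allclose(M, 0.5 ** np.arange(10))
```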
Special cases of Theorem \[thm-5\] are treated in Remarks \[rem-2a\], \[rem-2c\] and Corollary \[cor-3\]. Observe that for $m=1$, Theorem \[thm-5\] reduces to the corresponding result in Brockwell and Lindner [@BL2]. Also observe that condition (iii) of Theorem \[thm-5\] is not implied by condition (i), which can be seen e.g. by allowing a deterministic noise sequence $(Z_t)_{t\in
{\mathbb{Z}}}$, in which case $M(z) \equiv 0$. The proof of Theorem \[thm-5\] will be given in Section \[S6\] and will make use of both Theorem \[thm-main\] and Theorem \[thm-4\] given below. The latter is the corresponding characterization for the existence of weakly stationary solutions of ARMA$(p,q)$ equations, expressed in terms of the characteristic polynomials $P(z)$ and $Q(z)$. That $\det P(z) \neq 0$ for all $z$ on the unit circle together with $\mathbb{E} (Z_0) = 0$ is sufficient for the existence of weakly stationary solutions is well known, but that the conditions given below are necessary and sufficient in higher dimensions seems not to have appeared in the literature so far. The proof of Theorem \[thm-4\], which is similar to the proof in the one-dimensional case, will be given in Section \[S4\].
[\[Weak ARMA$(p,q)$ processes\]]{} \[thm-4\]\
Let $m,d,p\in {\mathbb{N}}$, $q\in {\mathbb{N}}_0$, and let $(Z_t)_{t\in \mathbb{Z}}$ be a weak white noise sequence in ${\mathbb{C}}^d$ with expectation ${\mathbb{E}}Z_0$ and covariance matrix $\Sigma$. Let $\Psi_1, \ldots, \Psi_p \in
{\mathbb{C}}^{m\times m}$ and $\Theta_0, \ldots, \Theta_q \in {\mathbb{C}}^{m\times
d}$, and define the matrix polynomials $P(z)$ and $Q(z)$ by (\[eq-char\]). Let $U\in {\mathbb{C}}^{d\times d}$ be unitary such that $U \Sigma U^* = \begin{pmatrix} D & 0_{s,d-s} \\
0_{d-s,s} & 0_{d-s,d-s} \end{pmatrix}$, where $D$ is a real $(s\times s)$-diagonal matrix with the strictly positive eigenvalues of $\Sigma$ on its diagonal for some $s\in \{0,\ldots, d\}$. (The matrix $U$ exists since $\Sigma$ is positive semidefinite). Then the ARMA$(p,q)$ equation (\[eqpq\]) admits a weakly stationary solution $(Y_t)_{t\in {\mathbb{Z}}}$ if and only if the ${\mathbb{C}}^{m\times
d}$-valued rational function $$z \mapsto M(z) := P^{-1} (z) Q(z) U^* \left( \begin{array}{ll}
\mbox{\rm Id}_s & 0_{s,d-s} \\ 0_{d-s,s} & 0_{d-s,d-s} \end{array}
\right)$$ has only removable singularities on the unit circle and if there is some $g\in {\mathbb{C}}^m$ such that $$\label{eq-g2}
P(1)\, g = Q(1) \, {\mathbb{E}}Z_0.$$ In that case, a weakly stationary solution of (\[eqpq\]) is given by $$\label{eq-weakly2} Y_t = g +
\sum_{j=-\infty}^\infty M_j \, U(Z_{t-j}- {\mathbb{E}}Z_0),\quad t\in {\mathbb{Z}},$$ where $M(z) = \sum_{j=-\infty}^\infty M_j z^j$ is the Laurent expansion of $M(z)$ in a neighbourhood of the unit circle, which converges absolutely there.
It is easy to see that if $\Sigma$ in the theorem above is invertible, then the condition that all singularities of $M(z)$ on the unit circle are removable is equivalent to the condition that all singularities of $P^{-1}(z) Q(z)$ on the unit circle are removable.
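The solvability condition $P(1)\,g = Q(1)\,{\mathbb{E}}Z_0$ is only a genuine restriction when $P(1)$ is singular; a solution $g$ then exists precisely when $Q(1)\,{\mathbb{E}}Z_0$ lies in the column space of $P(1)$. A toy numerical check (matrices invented for illustration):

```python
import numpy as np

# Singular P(1): its column space is spanned by (1, 0)^T, so the linear
# system P(1) g = b is solvable for b_ok but not for b_bad.
P1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
b_ok = np.array([1.0, 0.0])      # in the column space
b_bad = np.array([0.0, 1.0])     # not in the column space

def solvable(A, b, tol=1e-10):
    # Least squares gives the best approximation; zero residual <=> solvable.
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ g - b) < tol

assert solvable(P1, b_ok) and not solvable(P1, b_bad)
```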
Proof of Theorem \[thm-main\] {#S5}
=============================
In this section we give the proof of Theorem \[thm-main\]. In Section \[S5a\] we show that the conditions (i)–(iii) are necessary. The sufficiency of the conditions is proven in Section \[S-sufficient\], while the uniqueness assertion is established in Section \[S-uniqueness\].
The necessity of the conditions {#S5a}
-------------------------------
Assume that $(Y_t)_{t\in \mathbb{Z}}$ is a strictly stationary solution of equation (\[eq1q\]). As observed before Theorem \[thm-main\], this implies that each of the equations (\[block1q\]) admits a strictly stationary solution, where $X_t^{(h)}$ is defined as in (\[eq-Xt\]). Equation (\[block1q\]) is itself an ARMA$(1,q)$ equation with i.i.d. noise, so that for proving (i)–(iii) we may assume that $H=1$, that $S = {\rm
Id}_m$ and that $\Phi := \Psi_1$ is an $m\times m$ Jordan block corresponding to an eigenvalue $\lambda$. Hence we assume throughout Section \[S5a\] that $$\label{eq1q2}
Y_t - \Phi Y_{t-1} = \sum_{k=0}^q \Theta_k Z_{t-k}, \quad t\in {\mathbb{Z}},$$ has a strictly stationary solution with $\Phi\in {\mathbb{C}}^{m\times m}$ of the form (\[eq-Jordanblock2\]), and we have to show that this implies (i) if $|\lambda| \neq 0, 1$, (ii) if $|\lambda|=1$ but $\lambda\neq 1$, and (iii) if $\lambda=1$. Before we do this in the next subsections, we observe that iterating the ARMA$(1,q)$ equation (\[eq1q2\]) gives for $n\ge q$ $$\begin{aligned}
Y_t
&=& \sum_{j=0}^{q-1}\Phi^j\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}+\sum_{j=q}^{n-1}\Phi^j\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}\notag\\
&& +
\sum_{j=0}^{q-1}\Phi^{n+j}\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}+\Phi^nY_{t-n}.\label{it1}\end{aligned}$$
### The case $|\lambda| \in (0,1)$. {#S-3-1-1}
Suppose that $|\lambda| \in (0,1)$ and let $\varepsilon \in
(0,|\lambda|)$. Then there are constants $C, C'\geq 1$ such that $$\begin{aligned}
\left\Vert\Phi^{-j}\right\Vert \le C \cdot|\lambda|^{-j}\cdot j^m
\leq C' (|\lambda|-\varepsilon)^{-j} \quad\mbox{for all
}j\in\mathds{N},\end{aligned}$$ as a consequence of Theorem 11.1.1 in [@GolubVanLoan]. Hence, we have for all $j\in {\mathbb{N}}_0$ and $t\in {\mathbb{Z}}$ $$\begin{aligned}
\left\Vert \left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j} \right\Vert
&\le& {C'}(|\lambda|-\varepsilon)^{-j}\left\Vert
\Phi^{j}\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}\right\Vert.\label{golub}\end{aligned}$$ Now, since $\lim_{n\to\infty}\Phi^n=0$ and since $(Y_t)_{t\in{\mathbb{Z}}}$ and $(Z_t)_{t\in{\mathbb{Z}}}$ are strictly stationary, an application of Slutsky’s lemma to equation (\[it1\]) shows that $$\begin{aligned}
Y_t &=&
\sum_{j=0}^{q-1}\Phi^{j}\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}
+{\mathds{P}}\mbox{-}\lim_{n\to\infty}\sum_{j=q}^{n-1}\Phi^{j}\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}.
\label{eq-uniqueness1}\end{aligned}$$ Hence the limit on the right hand side exists and, as a sum with independent summands, it converges almost surely. Thus it follows from equation (\[golub\]) and the Borel-Cantelli lemma that $$\begin{aligned}
\lefteqn{\sum_{j=q}^{\infty}{\mathds{P}}\left(\left\Vert\sum_{k=0}^q\Phi^{-k}\Theta_kZ_{0}
\right\Vert> {C'}(|\lambda|-\varepsilon)^{-j} \right)}\\
&\le&
\sum_{j=q}^{\infty}{\mathds{P}}\left(\left\Vert\Phi^j\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{-j}\right\Vert>1\right)<\infty,\end{aligned}$$ and hence $\mathds{E}\left( \log^+
\left\Vert\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{0}\right\Vert\right)<\infty.$ Obviously, this is equivalent to condition (i).
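The log-moment condition obtained here is far weaker than finite variance. For instance (an illustration not taken from the paper), standard Cauchy noise has no mean, yet $\mathbb{E}\log^+ |Z_0| = 2G/\pi \approx 0.583$, where $G$ is Catalan's constant, so condition (i) may hold for infinite-variance noise. A Monte Carlo estimate confirms the value:

```python
import numpy as np

# E log^+ |Z| for standard Cauchy Z equals (2/pi) * int_1^inf log(x)/(1+x^2) dx
# = 2G/pi with G = 0.9159655... (Catalan's constant); log^+ |Z| has an
# exponentially light tail, so plain Monte Carlo converges quickly.
rng = np.random.default_rng(1)
Z = rng.standard_cauchy(2_000_000)
est = np.mean(np.log(np.maximum(1.0, np.abs(Z))))
assert abs(est - 2 * 0.9159655941 / np.pi) < 0.01
```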
### The case $|\lambda|>1$. {#S-3-1-2}
Suppose that $|\lambda| > 1$. Multiplying equation (\[it1\]) by $\Phi^{-n}$ gives for $n\ge q$ $$\begin{aligned}
\Phi^{-n}Y_t
&=& \sum_{j=0}^{q-1}\Phi^{-(n-j)}\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}+\sum_{j=1}^{n-q}\Phi^{-j}\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-n+j}\notag\\
&& +
\sum_{j=0}^{q-1}\Phi^{j}\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}+Y_{t-n}.\end{aligned}$$ Defining $\tilde{\Phi}:=\Phi^{-1}$, and substituting $u=t-n$ yields $$\begin{aligned}
Y_u
&=& - \sum_{j=0}^{q-1}\tilde{\Phi}^{-j}\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{u-j}-\sum_{j=1}^{n-q}\tilde{\Phi}^{j}\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{u+j}\notag\\
&&
-\sum_{j=0}^{q-1}\tilde{\Phi}^{n-j}\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{u+n-j}+\tilde{\Phi}^{n}Y_{u+n}
.\label{itminus}\end{aligned}$$ Letting $n\to\infty$ then gives condition (i) with the same arguments as in the case $|\lambda| \in (0,1)$.
### The case $|\lambda|=1$ and symmetric noise $(Z_t)$. {#S-3-1-3}
Suppose that $Z_0$ is symmetric and that $|\lambda|=1$. Denoting $$J_1 := \Phi - \lambda \, {\rm Id}_m \quad \mbox{and} \quad J_l := J_1^l
\quad \mbox{for}\quad l\in {\mathbb{N}}_0,$$ we have $$\begin{aligned}
\Phi^{j} &=& \sum_{l=0}^{m-1}\binom{j}{l}\lambda^{j-l}J_l,\quad
j\in\mathds{N}_0,\end{aligned}$$ since $J_l=0$ for $l\geq m$ and $\binom{j}{l} = 0$ for $l>j$. Further, since for $l\in \{0,\ldots, m-1\}$ we have $$\begin{aligned}
J_l=\left(e_{l+1},e_{l+2},...,e_{m},0_{m},...,0_{m}\right)\in\mathbb{C}^{m\times
m},\end{aligned}$$ with unit vectors $e_{l+1},...,e_m$ in $\mathbb{C}^{m}$, it is easy to see that for $i=1,...,m$ the $i^{th}$ row of the matrix $\Phi^j$ is given by $$e_i^T\Phi^{j} = \sum_{l=0}^{m-1}\binom{j}{l}\lambda^{j-l}e_i^TJ_l =
\sum_{l=0}^{i-1}\binom{j}{l}\lambda^{j-l}e_{i-l}^T, \quad j\in
{\mathbb{N}}_0.\label{phij}$$ It follows from equations (\[it1\]) and (\[phij\]) that for $n\ge q$ and $t\in {\mathbb{Z}}$, $$\begin{aligned}
e_i^TY_t
&=& \sum_{j=0}^{q-1}\left(\sum_{l=0}^{i-1}\binom{j}{l}\lambda^{j-l}e_{i-l}^T\right)\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}\notag\\
&&+\sum_{j=q}^{n-1}\left(\sum_{l=0}^{i-1}\binom{j}{l}\lambda^{j-l}e_{i-l}^T\right)\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}\notag\\
&& + \sum_{j=0}^{q-1}\left(\sum_{l=0}^{i-1}\binom{n+j}{l}\lambda^{n+j-l}e_{i-l}^T\right)\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}\notag\\
&&+
\sum_{l=0}^{i-1}\binom{n}{l}\lambda^{n-l}e_{i-l}^TY_{t-n}.\label{it2}\end{aligned}$$ We claim that $$\label{bed2c}
e_i^T \sum_{k=0}^q \Phi^{-k} \Theta_k Z_t = 0 \;\; \mbox{a.s.}\quad
\forall\; i \in \{1,\ldots, m\} \quad \forall\; t\in {\mathbb{Z}},$$ which clearly gives conditions (ii) and (iii), respectively, with $\alpha=\alpha_1=0_m$. Equation will be proved by induction on $i=1,\ldots, m$. We start with $i=1$. From equation (\[it2\]) we know that for $n\ge q$ $$\begin{aligned}
& &{e_1^TY_t-\lambda^{n}e_{1}^TY_{t-n}-\sum_{j=0}^{q-1}\lambda^{j}e_{1}^T\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}-\sum_{j=0}^{q-1}\lambda^{n+j}e_{1}^T\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}}\notag\\
& = &
\sum_{j=q}^{n-1}\lambda^{j}e_{1}^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}.\label{IAc}\end{aligned}$$ Due to the stationarity of $(Y_t)_{t\in\mathds{Z}}$ and $(Z_t)_{t\in\mathds{Z}}$, there exists a constant $K_1>0$ such that $$\begin{aligned}
&&{\mathds{P}}\left(\left|
e_1^TY_t-\lambda^{n}e_{1}^TY_{t-n}-\sum_{j=0}^{q-1}\lambda^{j}e_{1}^T\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}
\right.\right.\notag\\&&\quad\quad\quad\quad\quad\quad\quad\quad\left.\left.-\sum_{j=0}^{q-1}\lambda^{n+j}e_{1}^T\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}\right|<K_1\right)\ge\frac{1}{2}\quad\forall
n\geq q.\end{aligned}$$ By (\[IAc\]) this implies $$\begin{aligned}
&&{\mathds{P}}\left(\left|\sum_{j=q}^{n-1}\lambda^{j}e_{1}^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}\right|<K_1\right)\ge\frac{1}{2}\quad\forall
n\geq q. \label{eq-symmetrise1}\end{aligned}$$ Therefore $\left|\sum_{j=q}^{n-1}\lambda^{j}e_{1}^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}\right|$ does not converge in probability to $+\infty$ as $n\to\infty$. Since this is a sum of independent and symmetric terms, this implies that it converges almost surely (see Kallenberg [@Kallenberg], Theorem 4.17), and the Borel-Cantelli lemma then shows that $$\begin{aligned}
e_1^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t}=0,\quad t\in
{\mathbb{Z}},\end{aligned}$$ which is (\[bed2c\]) for $i=1$. With this condition, equation (\[IAc\]) simplifies for $t=0$ and $n\geq q$ to $$\begin{aligned}
e_1^TY_0-\lambda^{n}e_{1}^TY_{-n}=\sum_{j=0}^{q-1}\lambda^{j}e_{1}^T\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{-j}+\sum_{j=0}^{q-1}\lambda^{n+j}e_{1}^T\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{-(n+j)}.\end{aligned}$$ Now setting $t:=-n$ in the above equation, multiplying it with $\lambda^t=\lambda^{-n}$ and recalling that $e_1^T \Phi^j =
\lambda^j e_1^T$ by (\[phij\]) yields for $t\le-q$ $$\begin{aligned}
e_1^TY_t=-\sum_{j=0}^{q-1}e_{1}^T \Phi^j
\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-j}+\lambda^te_1^T\left(Y_0-\sum_{j=0}^{q-1}\Phi^{j}\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{-j}\right).\end{aligned}$$ For the induction step let $i\in \{2,\ldots, m\}$ and assume that $$\begin{aligned}
e_r^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t}=0\;\;
\mbox{a.s.},\quad r\in\{1,...,i-1\},\; \; t\in {\mathbb{Z}},\label{IVc1}\end{aligned}$$ together with $$\begin{aligned}
e_r^TY_t =
-e_{r}^T\sum_{j=0}^{q-1}\Phi^j\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-j}+\begin{cases}
\displaystyle 0, & r\in \{1,\ldots, i-2\}, \; t \leq -rq,\\
\displaystyle \lambda^te_r^TV_r, & r=i-1,\; t \leq -rq,
\end{cases}\label{IVc2}\end{aligned}$$ where $$V_r:=\lambda^{(r-1)q} \left( Y_{-(r-1)q}
-\sum_{j=0}^{q-1}\Phi^j\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{-j-(r-1)q}\right),
\quad r \in \{1,\ldots, m\}.$$ We are going to show that this implies $$\begin{aligned}
e_i^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t}=0\;
\;\mbox{a.s.},\quad t \in {\mathbb{Z}},\label{ISc}\end{aligned}$$ and $$\begin{aligned}
e_i^TY_t &=&
-e_{i}^T\sum_{j=0}^{q-1}\Phi^j\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-j}+\lambda^te_i^TV_i
\; \; \mbox{a.s.}, \quad t \leq -iq,\label{IScBeh2}\end{aligned}$$ together with $$\begin{aligned}
e_{i-1}^T V_{i-1}=0.\label{IScBeh3}\end{aligned}$$ This will then imply (\[bed2c\]). For doing that, in a first step we are going to prove the following:
\[lemma\] Let $i\in \{2,\ldots, m\}$ and assume (\[IVc1\]) and (\[IVc2\]). Then it holds for $t\leq -(i-1)q$ and $n\geq q$, $$\begin{aligned}
e_i^TY_t -\lambda^ne_i^TY_{t-n}
&=& \sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}+\sum_{j=q}^{n-1}\lambda^je_i^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}\notag\\
&&+\lambda^n\sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}+n\lambda^{t-1}e_{i-1}^TV_{i-1},\label{ISc7}\end{aligned}$$
Let $t\leq -(i-1)q$ and $n\geq q$. Using and , the last summand of can be written as $$\begin{aligned}
\lefteqn{\sum_{l=0}^{i-1}\binom{n}{l}\lambda^{n-l}e_{i-l}^TY_{t-n}}
\notag\\
&=&
\lambda^ne_i^TY_{t-n}+\sum_{r=1}^{i-1}\binom{n}{i-r}\lambda^{n-(i-r)}e_{r}^TY_{t-n},
\notag \\
&=& \lambda^ne_i^TY_{t-n}-\sum_{j=0}^{q-1}\left(\sum_{r=1}^{i-1}\sum_{l=0}^{r-1}\binom{j}{l}\binom{n}{i-r}\lambda^{n-(i-r)}\lambda^{j-l}e_{r-l}^T\right)\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}\notag\\
&&+n\lambda^{t-1}e_{i-1}^T V_{i-1}\\
&=& \lambda^ne_i^TY_{t-n}-\sum_{j=0}^{q-1}\left( \sum_{s=1}^{i-1}\binom{n+j}{s}\lambda^{n+j-s}e_{i-s}^T\right)\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}\notag\\
&&+\lambda^n\sum_{j=0}^{q-1}\left(\sum_{s=1}^{i-1}\binom{j}{s}\lambda^{j-s}e^T_{i-s}\right)\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}+n\lambda^{t-1}e_{i-1}^TV_{i-1},\end{aligned}$$ where we substituted $s:= i-r+l$ and $p:= s-l$ and used Vandermonde’s identity $\sum_{p=1}^s \binom{j}{s-p} \binom{n}{p} =
\binom{n+j}{s} - \binom{j}{s}$ in the last equation. Inserting this back into equation (\[it2\]) and using , we get for $t\leq -(i-1)q$ and $n\geq q$ $$\begin{aligned}
\lefteqn{e_i^TY_t -\lambda^ne_i^TY_{t-n}}\\
&=&
\sum_{j=0}^{q-1}\left(\sum_{l=0}^{i-1}\binom{j}{l}\lambda^{j-l}e_{i-l}^T\right)\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}
\\
& &
+\sum_{j=q}^{n-1}\lambda^je_i^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}
+\sum_{j=0}^{q-1} \lambda^{n+j}e_{i}^T \left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}\notag\\
&&+\lambda^n\sum_{j=0}^{q-1}\left(\sum_{s=1}^{i-1}\binom{j}{s}\lambda^{j-s}e^T_{i-s}\right)\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}\notag\\
&&+n\lambda^{t-1}e_{i-1}^TV_{i-1}.\label{ISc4}\end{aligned}$$ An application of (\[phij\]) then shows (\[ISc7\]), completing the proof of the lemma.
To continue with the induction step, we first show that (\[IScBeh3\]) holds true. Dividing (\[ISc7\]) by $n$ and letting $n\to\infty$, the strict stationarity of $(Y_t)_{t\in {\mathbb{Z}}}$ and $(Z_t)_{t\in {\mathbb{Z}}}$ imply that for $t\leq -(i-1)q$, $$n^{-1}\sum_{j=q}^{n-1}\lambda^je_i^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}$$ converges in probability to $-\lambda^{t-1} e_{i-1}^T V_{i-1}$. On the other hand, this limit in probability must clearly be measurable with respect to the tail-$\sigma$-algebra $\cap_{k\in\mathds{N}}\sigma(\cup_{l\ge k}\sigma(Z_{t-l}))$, which by Kolmogorov’s zero-one law is ${\mathds{P}}$-trivial. Hence this probability limit must be constant, and because of the assumed symmetry of $Z_0$ it must be symmetric, hence equal to 0, i.e. $$\begin{aligned}
e_{i-1}^TV_{i-1}=0 \; \;\mbox{a.s.},\end{aligned}$$ which is (\[IScBeh3\]). Using this, we get from Lemma \[lemma\] that $$\begin{aligned}
& & {e_i^TY_t
-\lambda^ne_i^TY_{t-n}-\sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}
-\lambda^n\sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}}\notag\\
&&=\sum_{j=q}^{n-1}\lambda^je_i^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j},
\quad t\leq -(i-1)q.\label{ISc9}\end{aligned}$$ Again due to the stationarity of $ (Y_t)_{t\in\mathds{Z}}$ and $
(Z_t)_{t\in\mathds{Z}}$ there exists a constant $K_2>0$ such that $$\begin{aligned}
&&{\mathds{P}}\left(\left|e_i^TY_t -\lambda^ne_i^TY_{t-n}-\sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{t-j}\right.\right.\\
&&\left.\left.\quad\quad-\lambda^n\sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-(n+j)}\right|<K_2\right)\ge\frac{1}{2}\quad\forall\;
n\geq q,\end{aligned}$$ so that $$\begin{aligned}
&&{\mathds{P}}\left(\left|
\sum_{j=q}^{n-1}\lambda^je_i^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}\right|<K_2\right)\ge\frac{1}{2}\quad\forall\;
n\geq q, \; \; \; t\leq -(i-1)q.\end{aligned}$$ Therefore $\left|\sum_{j=q}^{n-1}\lambda^je_i^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t-j}\right|$ does not converge in probability to $+\infty$ as $n\to\infty$. Since this is a sum of independent and symmetric terms, this implies that it converges almost surely (see Kallenberg [@Kallenberg], Theorem 4.17), and the Borel-Cantelli lemma then shows that $
e_i^T\left(\sum_{k=0}^q\Phi^{-k}\Theta_k\right)Z_{t}=0$ a.s. for $t\leq -(i-1)q$ and hence for all $t\in {\mathbb{Z}}$, which is (\[ISc\]). Equation (\[ISc9\]) now simplifies for $t=-(i-1)q$ and $n\geq q$ to $$\begin{aligned}
\lefteqn{e_i^TY_{-(i-1)q} -\lambda^n e_i^T Y_{-(i-1)q-n}} \\
& =
&\sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{-(i-1)q-j}+
\lambda^n\sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{-(i-1)q-n-j}.\end{aligned}$$ Multiplying this equation by $\lambda^{-n}$ and denoting $t:=-(i-1)q-n$, it follows that for $t\le-iq$ it holds $$\begin{aligned}
e_i^TY_t &=&
-\sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-j}
\\ & & +\lambda^{t+(i-1)q}
e_i^T\left(Y_{-(i-1)q}-\sum_{j=0}^{q-1}\Phi^j\left(\sum_{k=0}^j\Phi^{-k}\Theta_k\right)Z_{-j-(i-1)q}\right)\\
&=&
-\sum_{j=0}^{q-1}e_{i}^T\Phi^j\left(\sum_{k=j+1}^q\Phi^{-k}\Theta_k\right)Z_{t-j}+\lambda^te_i^TV_i,\end{aligned}$$ which is equation (\[IScBeh2\]). This completes the proof of the induction step and hence of . It follows that conditions (ii) and (iii), respectively, hold with $\alpha_1=0$ if $|\lambda|=1$ and $Z_0$ is symmetric.
### The case $|\lambda|=1$ and not necessarily symmetric noise $(Z_t)$. {#S-3-1-4}
As in Section \[S-3-1-3\], assume that $|\lambda|=1$, but not necessarily that $Z_0$ is symmetric. Let $(Y_t', Z_t')_{t\in {\mathbb{Z}}}$ be an independent copy of $(Y_t,Z_t)_{t\in
{\mathbb{Z}}}$ and denote $\widetilde{Y}_t := Y_t - Y_t'$ and $\widetilde{Z}_t := Z_t - Z_t'$. Then $(\widetilde{Y}_t)_{t\in {\mathbb{Z}}}$ is a strictly stationary solution of $\widetilde{Y}_t - \Phi
\widetilde{Y}_{t-1} = \sum_{k=0}^q \Theta_k \widetilde{Z}_{t-k}$, and $(\widetilde{Z}_t)_{t\in {\mathbb{Z}}}$ is i.i.d. with $\widetilde{Z}_0$ being symmetric. It hence follows from Section \[S-3-1-3\] that $$\left( \sum_{k=0}^q \Phi^{q-k} \Theta_k \right) Z_0 - \left(
\sum_{k=0}^q \Phi^{q-k} \Theta_k \right) Z_0' = \left( \sum_{k=0}^q
\Phi^{q-k} \Theta_k\right) \widetilde{Z}_0 = 0.$$ Since $Z_0$ and $Z_0'$ are independent, this implies that there is a constant $\alpha\in {\mathbb{C}}^m$ such that $\sum_{k=0}^q \Phi^{q-k} \Theta_k Z_0 =
\alpha$ a.s., which is , hence condition (ii) if $\lambda\neq 1$. To show condition (iii) in the case $\lambda=1$, recall that the derivation of in Section \[S-3-1-3\] did not need the symmetry assumption on $Z_0$. Hence by there is some constant $K_1$ such that $ {\mathds{P}}(|\sum_{j=q}^{n-1} 1^{j}e_{1}^T \alpha |<K_1)\geq 1/2$ for all $n\geq q$, which clearly implies $e_1^T \alpha = 0$ and hence condition (iii).
The sufficiency of the conditions {#S-sufficient}
---------------------------------
Suppose that conditions (i) — (iii) are satisfied, and let $X_t^{(h)}$, $t\in {\mathbb{Z}}$, $h\in \{1,\ldots, H\}$, be defined by . The fact that $X_{t}^{(h)}$ as defined in converges a.s. for $|\lambda_h| \in (0,1)$ is in complete analogy to the proof in the one-dimensional case treated in Brockwell and Lindner [@BL2], but we give the short argument for completeness: observe that there are constants $a, b
> 0$ such that $\|\Phi_{h}^j \| \leq a e^{-bj}$ for $j\in
{\mathbb{N}}_0$. Hence for $b' \in (0,b)$ we can estimate $$\begin{aligned}
\lefteqn{\sum_{j=q}^\infty \mathbb{P} \left( \left\| \Phi_{h}^{j-q}
\sum_{k=0}^q \Phi_{h}^{q-k} I_{h} S^{-1}
\Theta_k Z_{t-j} \right\| > e^{-b' (j-q)} \right)}\\
& \leq & \sum_{j=q}^\infty \mathbb{P} \left( \log^+ \left(a
\left\| \sum_{k=0}^q \Phi_{h}^{q-k} I_{h} S^{-1} \Theta_k Z_{t-j}
\right\|\right)
> (b-b') (j-q) \right)
< \infty,\end{aligned}$$ the last inequality being due to the fact that $\left\| \sum_{k=0}^q
\Phi_{h}^{q-k} I_{h} S^{-1} \Theta_k Z_{t-j} \right\|$ has the same distribution as $\left\| \sum_{k=0}^q \Phi_{h}^{q-k} I_{h} S^{-1}
\Theta_k Z_{0} \right\|$ and the latter has finite log-moment by . The Borel–Cantelli lemma then shows that the event $\{\| \Phi_{h}^{j-q} \sum_{k=0}^q \Phi_{h}^{q-k} I_{h} S^{-1}
\Theta_k Z_{t-j} \| > e^{-b' (j-q)} \; \mbox{for infinitely many
$j$}\}$ has probability zero, giving the almost sure absolute convergence of the series in . The almost sure absolute convergence of if $|\lambda_h|>1$ is established similarly.
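The estimate $\|\Phi_{h}^j\| \leq a e^{-bj}$ invoked at the start of this argument is simply the geometric decay of powers of a Jordan block with eigenvalue of modulus less than one. A minimal numerical illustration (the eigenvalue $0.8$ and block size $3$ are arbitrary choices, not taken from the text):

```python
import numpy as np

lam, size = 0.8, 3                       # arbitrary |lambda| < 1 and block size
Phi = lam * np.eye(size) + np.diag(np.ones(size - 1), 1)   # Jordan block

norms = [np.linalg.norm(np.linalg.matrix_power(Phi, j), 2) for j in range(51)]

# entries of Phi^j are binom(j, k) * lam^(j-k); the polynomial factor is
# eventually beaten by lam^j, so for any b' in (0, -log(lam)) the weighted
# norms ||Phi^j|| * exp(b' j) stay bounded, giving ||Phi^j|| <= a * exp(-b' j)
bp = 0.1                                 # 0 < 0.1 < -log(0.8) ~ 0.223
weighted = [nrm * np.exp(bp * j) for j, nrm in enumerate(norms)]

assert norms[50] < 0.1                   # the powers have visibly decayed
assert weighted[50] < weighted[20]       # exponentially weighted norms decrease
```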
It is obvious that $((X_t^{(1)T}, \ldots, X_{t}^{(H)T})^T)_{t\in
{\mathbb{Z}}}$ as defined in and hence $(Y_t)_{t\in
{\mathbb{Z}}}$ defined by is strictly stationary, so it only remains to show that $(X_t^{(h)})_{t\in {\mathbb{Z}}}$ solves for each $h\in \{1,\ldots, H\}$. For $|\lambda_h|\neq 0,1$, this is an immediate consequence of . For $|\lambda_h|=1$, we have by and the definition of $f_h$ that $$\begin{aligned}
X_{t}^{(h)} - \Phi_{h} X_{t-1}^{(h)} & = & \alpha_h +
\sum_{j=0}^{q-1} \sum_{k=0}^j \Phi_{h}^{j-k} I_{h} S^{-1} \Theta_k
Z_{t-j} - \sum_{j=1}^{q} \sum_{k=0}^{j-1} \Phi_{h}^{j-k} I_{h}
S^{-1} \Theta_k Z_{t-j}\\
& = & \alpha_h + \sum_{j=0}^{q-1} I_{h} S^{-1} \Theta_j Z_{t-j} -
\sum_{k=0}^{q-1} \Phi_{h}^{q-k} I_{h} S^{-1} \Theta_k
Z_{t-q} \\
&= & I_{h} S^{-1} \sum_{j=0}^q \Theta_j Z_{t-j},
\end{aligned}$$ where the last equality follows from . Finally, if $\lambda_h=0$, then $\Phi_h^j=0$ for $j\geq m$, implying that $X_t^{(h)}$ defined by solves also in this case.
The uniqueness of the solution {#S-uniqueness}
------------------------------
Suppose that $|\lambda_h|\neq 1$ for all $h\in \{1,\ldots, H\}$ and let $(Y_t)_{t\in{\mathbb{Z}}}$ be a strictly stationary solution of . Then $(X_t^{(h)})_{t\in {\mathbb{Z}}}$, as defined by , is a strictly stationary solution of for each $h\in \{1,\ldots, H\}$. It then follows as in Section \[S-3-1-1\] that by the equation corresponding to , $X_t^{(h)}$ is uniquely determined if $|\lambda_h|\in (0,1)$. Similarly, $X_t^{(h)}$ is uniquely determined if $|\lambda_h|>1$. The uniqueness of $X_{t}^{(h)}$ if $\lambda_h=0$ follows from the equation corresponding to with $n\geq m$, since then $\Phi_h^j = 0$ for $j\geq m$. We conclude that $((X_t^{(1)T},\ldots, X_t^{(H)T})^T)_{t\in {\mathbb{Z}}}$ is unique and hence so is $(Y_t)_{t\in {\mathbb{Z}}}$.
Now suppose that there is $h\in \{1,\ldots, H\}$ such that $|\lambda_h|=1$. Let $U$ be a random variable which is uniformly distributed on $[0,1)$ and independent of $(Z_t)_{t\in {\mathbb{Z}}}$. Then $(R_t)_{t\in {\mathbb{Z}}}$, defined by $R_t := \lambda_h^t (0,\ldots, 0,
e^{2\pi i U})^T \in {\mathbb{C}}^{r_{h+1}-r_h}$, is strictly stationary and independent of $(Z_t)_{t\in {\mathbb{Z}}}$ and satisfies $R_t - \Phi_h
R_{t-1} = 0$. Hence, if $(Y_t)_{t\in {\mathbb{Z}}}$ is the strictly stationary solution of specified by and , then $$Y_t + S
(0_{r_2-r_1}^T, \ldots, 0_{r_h-r_{h-1}}^T, R_t^T,
0_{r_{h+2}-r_{h+1}}^T, \ldots, 0_{r_{H+1}-r_H}^T)^T, \quad t\in
{\mathbb{Z}},$$ is another strictly stationary solution of , violating uniqueness.
Proof of Theorem \[thm-4\] {#S4}
==========================
In this section we shall prove Theorem \[thm-4\]. Denote $$R := U^* \begin{pmatrix} D^{1/2} & 0_{s,d-s} \\
0_{d-s,s} & 0_{d-s,d-s} \end{pmatrix} \quad \mbox{and} \quad W_t :=
\begin{pmatrix} D^{-1/2} & 0_{s,d-s}
\\ 0_{d-s,s} & 0_{d-s,d-s} \end{pmatrix} U (Z_t- {\mathbb{E}}Z_0) , \quad t \in {\mathbb{Z}},$$ where $D^{1/2}$ is the unique diagonal matrix with strictly positive eigenvalues such that $(D^{1/2})^2 = D$. Then $(W_t)_{t\in
{\mathbb{Z}}}$ is a white noise sequence in ${\mathbb{C}}^d$ with expectation 0 and covariance matrix $\begin{pmatrix} \mbox{\rm Id}_s & 0_{s,d-s} \\
0_{d-s,s} & 0_{d-s,d-s} \end{pmatrix}$. It is further clear that all singularities of $M(z)$ on the unit circle are removable if and only if all singularities of $M'(z):= P^{-1}(z) Q(z) R$ on the unit circle are removable, and in that case, the Laurent expansions of both $M(z)$ and $M'(z)$ converge absolutely in a neighbourhood of the unit circle.
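The definitions of $R$ and $W_t$ amount to a whitening of the noise via the spectral decomposition of its covariance matrix $\Sigma$. As a numerical sanity check (real-valued, with illustrative dimensions $d=4$, $s=2$ and a randomly generated rank-$s$ covariance, all of which are assumptions of the sketch and not taken from the text), one can verify that $W_t$ has covariance $\operatorname{diag}(\mbox{Id}_s, 0)$ and that $R\,W_t$ recovers the centred noise:

```python
import numpy as np

rng = np.random.default_rng(0)
d, s = 4, 2                                  # illustrative dimensions
A = rng.standard_normal((d, s))
Sigma = A @ A.T                              # a rank-s covariance matrix

# real eigendecomposition Sigma = U^T diag(w) U, eigenvalues sorted descending,
# so the top-left s x s block of diag(w) plays the role of D in the text
w, V = np.linalg.eigh(Sigma)                 # ascending order
idx = np.argsort(w)[::-1]
w, V = w[idx], V[:, idx]
U = V.T

def blk(M):
    # embed an s x s matrix into the top-left corner of a d x d zero matrix
    out = np.zeros((d, d))
    out[:s, :s] = M
    return out

R = U.T @ blk(np.diag(np.sqrt(w[:s])))       # R = U^* (D^{1/2} 0; 0 0)
Dinvhalf = blk(np.diag(1.0 / np.sqrt(w[:s])))

Z = A @ rng.standard_normal((s, 100_000))    # centred samples of the noise
W = Dinvhalf @ U @ Z                         # W_t = (D^{-1/2} 0; 0 0) U (Z_t - E Z_0)
C = W @ W.T / Z.shape[1]                     # sample covariance of W

assert np.allclose(C[:s, :s], np.eye(s), atol=0.05)   # cov(W) ~ diag(Id_s, 0)
assert np.allclose(C[s:, s:], 0.0)                    # last d-s components vanish
assert np.allclose(R @ W, Z)                          # R W_t recovers Z_t - E Z_0
```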
To see the sufficiency of the condition, suppose that has a solution $g$ and that $M(z)$ and hence $M'(z)$ have only removable singularities on the unit circle. Define $Y=(Y_t)_{t\in
{\mathbb{Z}}}$ by , i.e. $$Y_t = g + \sum_{j=-\infty}^\infty M_j \left( \begin{array}{ll}
D^{1/2} & 0_{s,d-s} \\ 0_{d-s,s} & 0_{d-s,d-s} \end{array} \right)
W_{t-j} = g + M'(B) W_t, \quad t\in {\mathbb{Z}}.$$ The series converges almost surely absolutely due to the exponential decrease of the entries of $M_j$ as $|j|\to\infty$. Further, $Y$ is clearly weakly stationary, and since the last $(d-s)$ components of $U (Z_t - {\mathbb{E}}Z_0)$ vanish, having expectation zero and variance zero, it follows that $$R W_t = U^* \begin{pmatrix} \mbox{\rm Id}_s & 0_{s,d-s} \\ 0_{d-s,s} &
0_{d-s,d-s} \end{pmatrix} U(Z_t-{\mathbb{E}}Z_0) = U^* U (Z_t-{\mathbb{E}}Z_0) =
Z_t- {\mathbb{E}}Z_0, \quad t\in {\mathbb{Z}}.$$ We conclude that $$P(B) (Y_t-g) = P(B) M'(B) W_t = P(B) P^{-1}(B) Q(B) R W_t = Q(B) (Z_t-{\mathbb{E}}Z_0), \quad t\in {\mathbb{Z}}.$$ Since $P(1) g = Q(1) {\mathbb{E}}Z_0$, this shows that $(Y_t)_{t\in {\mathbb{Z}}}$ is a weakly stationary solution of .
Conversely, suppose that $Y=(Y_t)_{t\in {\mathbb{Z}}}$ is a weakly stationary solution of . Taking expectations in yields $P(1) \, {\mathbb{E}}Y_0 = Q(1) \, {\mathbb{E}}Z_0$, so that has a solution. The ${\mathbb{C}}^{m\times m}$-valued spectral measure $\mu_Y$ of $Y$ satisfies $$P(e^{-i \omega}) \, d\mu_Y(\omega) \, P(e^{-i\omega})^* =
\frac{1}{2\pi} Q(e^{-i\omega}) \Sigma Q(e^{-i\omega})^* \, d\omega,
\quad \omega \in (-\pi, \pi].$$ It follows that, with the finite set $N:= \{ \omega \in (-\pi,\pi]: P(e^{-i\omega}) = 0\}$, $$d\mu_Y(\omega) = \frac{1}{2\pi} P^{-1}( e^{-i\omega}) Q(e^{-i\omega}) \Sigma
Q(e^{-i \omega})^* P^{-1} (e^{-i\omega})^* \, d\omega \quad
\mbox{on}\quad (-\pi,\pi] \setminus N.$$ Observing that $R R^*=
\Sigma$, it follows that the function $\omega \mapsto
M'(e^{-i\omega}) M'(e^{-i \omega})^*$ must be integrable on $(-\pi,\pi] \setminus N$. Now assume that the matrix rational function $M'$ has a non-removable singularity at $z_0$ with $|z_0| =
1$ in at least one matrix element. This must then be a pole of order $r\geq 1$. Denoting the spectral norm by $\|\cdot\|_2$ it follows that there are $\varepsilon
> 0$ and $K>0$ such that $$\|M'(z)^*\|_2 \geq K |z-z_0|^{-1} \quad \forall\; z\in {\mathbb{C}}:
|z|=1, z\neq z_0, |z-z_0| \leq \varepsilon;$$ this may be seen by considering first the row sum norm of $M'(z)^*$ and then using the equivalence of norms. Since the matrix $M'(z) M'(z)^*$ is Hermitian, we conclude that $$\| M'(z) M'(z)^*\|_2 = \sup_{v\in {\mathbb{C}}^m: |v|=1} |v^*M'(z) M'(z)^* v| =
\sup_{v\in {\mathbb{C}}^m: |v|=1} |M'(z)^* v|^2 \geq K^2 |z-z_0|^{-2}$$ for all $z\neq z_0$ on the unit circle such that $|z-z_0| \leq \varepsilon$. But this implies that $\omega \mapsto M'(e^{-i\omega}) M'(e^{-i
\omega})^*$ cannot be integrable on $(-\pi,\pi] \setminus N$, giving the desired contradiction. This finishes the proof of Theorem \[thm-4\].
Proof of Theorem \[thm-5\] {#S6}
==========================
In this section we shall prove Theorem \[thm-5\]. For that, we first observe that ARMA$(p,q)$ equations can be embedded into higher dimensional ARMA$(1,q)$ processes, as stated in the following proposition. This is well known and its proof is immediate, hence omitted.
\[thm-2\] Let $m,d, p\in {\mathbb{N}}$, $q\in {\mathbb{N}}_0$, and let $(Z_t)_{t\in {\mathbb{Z}}}$ be an i.i.d. sequence of ${\mathbb{C}}^d$-valued random vectors. Let $\Psi_1,
\ldots, \Psi_p \in {\mathbb{C}}^{m\times m}$ and $\Theta_0, \ldots, \Theta_q
\in {\mathbb{C}}^{m\times d}$ be complex-valued matrices. Define the matrices $\underline{\Phi} \in {\mathbb{C}}^{mp\times mp}$ and $\underline{\Theta}_k
\in {\mathbb{C}}^{mp \times d}$, $k\in \{0,\ldots, q\}$, by $$\underline{\Phi} := \begin{pmatrix} \Psi_1 & \Psi_2 & \cdots &
\Psi_{p-1} & \Psi_p \\ \mbox{\rm Id}_m & 0_{m,m} & \cdots & 0_{m,m}
& 0_{m,m} \\
0_{m,m} & \ddots & \ddots & \vdots & \vdots\\
\vdots & \ddots & \ddots & 0_{m,m} & \vdots \\
0_{m,m} & \cdots & 0_{m,m} & \mbox{\rm Id}_m & 0_{m,m}
\end{pmatrix} \quad \mbox{and}\quad
\underline{\Theta}_k = \begin{pmatrix} \Theta_k \\
0_{m,d} \\
\vdots \\
0_{m,d} \end{pmatrix}. \label{eq-def-Phi}$$ Then the ARMA$(p,q)$ equation admits a strictly stationary solution $(Y_t)_{t\in {\mathbb{Z}}}$ of $m$-dimensional random vectors $Y_t$ if and only if the ARMA$(1,q)$ equation $$\label{eq1q-gross}
\underline{Y}_t - \underline{\Phi} \, \underline{Y}_{t-1} =
\underline{\Theta}_0 {Z}_t + \underline{\Theta}_1 {Z}_{t-1} + \ldots
+ \underline{\Theta}_q {Z}_{t-q}, \quad t\in {\mathbb{Z}},$$ admits a strictly stationary solution $(\underline{Y}_t)_{t\in {\mathbb{Z}}}$ of $mp$-dimensional random vectors $\underline{Y}_t$. More precisely, if $(Y_t)_{t\in {\mathbb{Z}}}$ is a strictly stationary solution of , then $$\label{eq-gross-Y}
(\underline{Y}_t)_{t\in {\mathbb{Z}}} := ((Y_t^T , Y_{t-1}^T , \ldots,
Y_{t-(p-1)}^T)^T)_{t\in {\mathbb{Z}}}$$ is a strictly stationary solution of , and conversely, if $(\underline{Y}_t)_{t\in {\mathbb{Z}}} = (({Y_t^{(1)T}},
\ldots, {Y_{t}^{(p) T}})^T)_{t\in {\mathbb{Z}}}$ with random components $Y_t^{(i)} \in {\mathbb{C}}^m$ is a strictly stationary solution of , then $(Y_t)_{t\in {\mathbb{Z}}} := (Y_{t}^{(1)})_{t\in
{\mathbb{Z}}}$ is a strictly stationary solution of .
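The embedding of the proposition is a purely algebraic identity and can be checked numerically. The following sketch uses small, arbitrarily chosen dimensions and random coefficient matrices (illustrative assumptions only; stationarity plays no role in the identity):

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, p, q = 2, 2, 3, 1
Psi = [0.1 * rng.standard_normal((m, m)) for _ in range(p)]   # illustrative AR coefficients
Theta = [rng.standard_normal((m, d)) for _ in range(q + 1)]   # illustrative MA coefficients

# companion matrix Phi_ (mp x mp) and stacked Theta_k (mp x d) as in the proposition
Phi_ = np.zeros((m * p, m * p))
Phi_[:m, :] = np.hstack(Psi)
Phi_[m:, :-m] = np.eye(m * (p - 1))
Theta_ = [np.vstack([Tk] + [np.zeros((m, d))] * (p - 1)) for Tk in Theta]

n = 60
Z = rng.standard_normal((n, d))
Y = np.zeros((n, m))
for t in range(p + q, n):  # run the ARMA(p, q) recursion from zero initial values
    Y[t] = sum(Psi[k] @ Y[t - 1 - k] for k in range(p)) \
         + sum(Theta[k] @ Z[t - k] for k in range(q + 1))

# the stacked vectors (Y_t^T, ..., Y_{t-p+1}^T)^T satisfy the ARMA(1, q) equation
for t in range(p + q, n):
    Yt  = np.concatenate([Y[t - i] for i in range(p)])
    Ym1 = np.concatenate([Y[t - 1 - i] for i in range(p)])
    rhs = sum(Theta_[k] @ Z[t - k] for k in range(q + 1))
    assert np.allclose(Yt - Phi_ @ Ym1, rhs)
```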
For the proof of Theorem \[thm-5\] we need some notation: define $\underline{\Phi}$ and $\underline{\Theta}_k$ as in . Choose an invertible ${\mathbb{C}}^{mp\times mp}$ matrix $\underline{S}$ such that $\underline{S}^{-1} \underline{\Phi}
\underline{S}$ is in Jordan canonical form, with ${H}$ Jordan blocks $\underline{\Phi}_1, \ldots, \underline{\Phi}_{H}$, say, the $h^{th}$ Jordan block $\underline{\Phi}_h$ starting in row $\underline{r}_{h}$, with $\underline{r}_1 := 1 < \underline{r}_2 <
\cdots < \underline{r}_H < mp+1 =: \underline{r}_{H+1}$. Let $\underline{\lambda}_h$ be the eigenvalue associated with $\underline{\Phi}_h$, and, similarly to , denote by $\underline{I}_h$ the $(\underline{r}_{h+1} - \underline{r}_h)\times
mp$-matrix with components $\underline{I}_h (i,j) = 1$ if $j=i +
\underline{r}_h -1$ and $\underline{I}_h (i,j) = 0$ otherwise. For $h\in \{1,\ldots, H\}$ and $j\in {\mathbb{Z}}$ let $$N_{j,h} := \begin{cases} \mathbf{1}_{j\geq 0}
\underline{\Phi}_{h}^{j-q}
\sum_{k=0}^{j\wedge q} \underline{\Phi}_{h}^{q-k} \underline{I}_{h}
\underline{S}^{-1} \underline{\Theta}_k, & |\underline{\lambda}_h|
\in (0,1),
\\
-\mathbf{1}_{j\leq q-1} \underline{\Phi}_{h}^{j-q}
\sum_{k=(1+j)\vee 0}^{q} \underline{\Phi}_{h}^{q-k}
\underline{I}_{h} \underline{S}^{-1}
\underline{\Theta}_k , & |\underline{\lambda}_h| > 1,\\
\mathbf{1}_{j\in \{0,\ldots, mp+q-1\}} \sum_{k=0}^{j\wedge q}
\underline{\Phi}_h^{j-k} \underline{I}_h
\underline{S}^{-1}\underline{\Theta}_k, & \underline{\lambda}_h = 0,\\
\mathbf{1}_{j\in \{0,\ldots, q-1\}}
\sum_{k=0}^j\underline{\Phi}_{h}^{j-k}\underline{I}_{h}
\underline{S}^{-1}\underline{\Theta}_k , & |\underline{\lambda}_h| =
1,
\end{cases}$$ and $$\label{def-Nj}
\underline{N}_j := \underline{S} (N_{j,1}^T, \ldots,
N_{j,H}^T)^T \in {\mathbb{C}}^{mp \times d}.$$ Further, let $U$ and $K$ be defined as in the statement of the theorem, and denote $$W_t := U Z_t, \quad t\in{\mathbb{Z}}.$$ Then $(W_t)_{t\in {\mathbb{Z}}}$ is an i.i.d. sequence. Equation is then an easy consequence of the fact that for $a\in {\mathbb{C}}^d$ the distribution of $a^*W_0 = (U^* a)^* Z_0$ is degenerate to a Dirac measure if and only if $U^* a \in K$, i.e. if $a \in UK = \{0_s\} \times
{\mathbb{C}}^{d-s}$: taking for $a$ the $i^{th}$ unit vector in ${\mathbb{C}}^d$ for $i\in \{s+1,\ldots, d\}$, we see that $W_t$ must be of the form $(w_t^T,u^T)^T$ for some $u\in {\mathbb{C}}^{d-s}$, and taking $a= (b^T,
0_{d-s}^T)^T$ for $b\in {\mathbb{C}}^{s}$ we see that $b^* w_0$ is not degenerate to a Dirac measure for $b\neq 0_{s}$. The remaining proof of the necessity of the conditions, the sufficiency of the conditions and the stated uniqueness will be given in the next subsections.
The necessity of the conditions {#S-5-1}
-------------------------------
Suppose that $(Y_t)_{t\in {\mathbb{Z}}}$ is a strictly stationary solution of . Define $\underline{Y}_t$ by . Then $(\underline{Y}_t)_{t\in {\mathbb{Z}}}$ is a strictly stationary solution of by Proposition \[thm-2\]. Hence, by Theorem \[thm-main\], there is $\underline{f}' \in {\mathbb{C}}^{mp}$, such that $(\underline{Y}_t')_{t\in {\mathbb{Z}}}$, defined by $$\label{eq-Yt-prime}
\underline{Y}_t' = \underline{f}' + \sum_{j=-\infty}^\infty
\underline{N}_j Z_{t-j}, \quad t \in {\mathbb{Z}},$$ is (possibly another) strictly stationary solution of $$\underline{Y}_t' - \underline{\Phi}\, \underline{Y}_{t-1}' =
\sum_{k=0}^q \underline{\Theta}_k Z_{t-k} = \sum_{k=0}^q
\widetilde{\underline{\Theta}}_k W_{t-k}, \quad t \in {\mathbb{Z}},$$ where $\widetilde{\underline{\Theta}}_k := \underline{\Theta}_k U^*$. The sum in converges almost surely absolutely. Now define $A_h \in {\mathbb{C}}^{(\underline{r}_{h+1} - \underline{r}_h) \times
s}$ and $C_h \in {\mathbb{C}}^{(\underline{r}_{h+1} - \underline{r}_h) \times
(d-s)}$ for $h\in \{1,\ldots, H\}$ such that $|\underline{\lambda}_h|=1$ by $$\label{eq-AB}
(A_h , C_h) := \sum_{k=0}^q \underline{\Phi}_h^{q-k} \underline{I}_h
\underline{S}^{-1} \widetilde{\underline{\Theta}}_k .$$ By conditions (ii) and (iii) of Theorem \[thm-main\], for every such $h$ with $|\underline{\lambda}_h|=1$ there exists a vector $\underline{\alpha}_h = (\alpha_{h,1}, \ldots,
\alpha_{h,\underline{r}_{h+1}-\underline{r}_h})^T \in
{\mathbb{C}}^{\underline{r}_{h+1} - \underline{r}_h}$ such that $$(A_h, C_h) W_0 = \underline{\alpha}_h \quad
\mbox{a.s.}$$ with $\alpha_{h,1} = 0$ if $\underline{\lambda}_h=1$. Since $W_0 = (w_0^T , u^T)^T$, this implies $A_h w_0 =
\underline{\alpha}_h - C_h u$, but since $b^* w_0$ is not degenerate to a Dirac measure for any $b\in {\mathbb{C}}^s \setminus \{ 0_s\}$, this gives $A_h=0$ and hence $C_h u = \underline{\alpha}_h$ for $h\in
\{1,\ldots, H\}$ such that $|\underline{\lambda}_h|=1$. Now let $v\in {\mathbb{C}}^s$ and $(W_t'')_{t\in {\mathbb{Z}}}$ be an i.i.d. $N( \left(
\begin{array}{c} v \\ u \end{array} \right) , \left( \begin{array}{ll}
\mbox{\rm Id}_s & 0_{s,d-s} \\ 0_{d-s,s} & 0_{d-s,d-s} \end{array}
\right))$-distributed sequence, and let $Z_t'' := U^* W_t''$. Then $$(A_h, C_h) W_0'' = C_h u = \underline{\alpha}_h \quad \mbox{a.s.} \quad \mbox{for}\quad h\in \{1,\ldots, H\}:
|\underline{\lambda}_h|=1$$ and $${\mathbb{E}}\log^+ \left\| \sum_{k=0}^q
\underline{\Phi}_h^{q-k} \underline{I}_h \underline{S}^{-1}
\widetilde{\underline{\Theta}}_k W_0''\right\|< \infty \quad
\mbox{for}\quad h\in \{1,\ldots, H\}: |\underline{\lambda}_h|\neq
0,1.$$ It then follows from Theorem \[thm-main\] that there is a strictly stationary solution $\underline{Y}_t''$ of the ARMA$(1,q)$ equation $\underline{Y}_t'' -
\underline{\Phi} \, \underline{Y}_{t-1}'' = \sum_{k=0}^q
\widetilde{\underline{\Theta}}_k W_{t-k}''=\sum_{k=0}^q
\underline{\Theta}_k Z_{t-k}''$, which can be written in the form $\underline{Y}_t'' = \underline{f}''+ \sum_{j=-\infty}^\infty
\underline{N}_j Z_{t-j}''$ for some $\underline{f}'' \in {\mathbb{C}}^{mp}$. In particular, $(\underline{Y}_t'')_{t\in {\mathbb{Z}}}$ is a Gaussian process. Again from Proposition \[thm-2\] it follows that there is a Gaussian process $(Y_t'')_{t\in {\mathbb{Z}}}$ which is a strictly stationary solution of $$Y_t'' - \sum_{k=1}^p \Psi_k Y_{t-k}'' = \sum_{k=0}^q
{\widetilde{\Theta}}_k W_{t-k}'' = \sum_{k=0}^q {\Theta}_k Z_{t-k}''
, \quad t \in {\mathbb{Z}}.$$ In particular, this solution is also weakly stationary. Hence it follows from Theorem \[thm-4\] that $z\mapsto M(z)$ has only removable singularities on the unit circle and that has a solution $g\in {\mathbb{C}}^{m}$, since ${\mathbb{E}}Z_0'' = U^* (v^T, u^T)^T$. Hence we have established that (i) and (iii’), and hence (iii), of Theorem \[thm-5\] are necessary conditions for a strictly stationary solution to exist.
To see the necessity of conditions (ii) and (ii’), we need the following lemma, which is interesting in itself since it expresses the Laurent coefficients of $M(z)$ in terms of the Jordan canonical decomposition of $\underline{\Phi}$.
\[lem-3\] With the notations of Theorem \[thm-5\] and those introduced after Proposition \[thm-2\], suppose that condition (i) of Theorem \[thm-5\] holds, i.e. that $M(z)$ has only removable singularities on the unit circle. Denote by $M(z) =
\sum_{j=-\infty}^\infty M_j z^j$ the Laurent expansion of $M(z)$ in a neighborhood of the unit circle. Then $$\label{def-Mj}
\underline{M}_j := ( M_j^T ,M_{j-1}^T , \ldots, M_{j-p+1}^T)^T =
\underline{N}_j U^* \left( \begin{array}{ll} \mbox{\rm Id}_s &
0_{s,d-s} \\ 0_{d-s,s} & 0_{d-s,d-s} \end{array} \right) \quad
\forall\; j \in {\mathbb{Z}}.$$ In particular, $$\label{eq-Nt2}
\underline{M}_j U Z_{t-j} = \underline{N}_j Z_{t-j} -
\underline{N}_j U^* (0_s^T, u^T)^T \quad \forall\; j,t \in {\mathbb{Z}}.$$
Define $\Lambda := \left( \begin{array}{ll} \mbox{\rm Id}_s & 0_{s,d-s} \\ 0_{d-s,s} &
0_{d-s,d-s} \end{array} \right)$ and let $(Z_t')_{t\in {\mathbb{Z}}}$ be an i.i.d. $N(0_d, U^* \Lambda U)$-distributed noise sequence and define $Y_t' := \sum_{j=-\infty}^\infty M_j U Z_{t-j}'$. Then $(Y_t')_{t\in
{\mathbb{Z}}}$ is a weakly and strictly stationary solution of $P(B) Y_t' =
Q(B) Z_t'$ by Theorem \[thm-4\], and the entries of $M_j$ decrease geometrically as $|j|\to \infty$. By Proposition \[thm-2\], the process $(\underline{Y}_t')_{t\in {\mathbb{Z}}}$ defined by $\underline{Y}_t'
= ({Y_t'}^T, {Y_{t-1}'}^T, \ldots, {Y_{t-p+1}'}^T)^T =
\sum_{j=-\infty}^\infty \underline{M}_j U Z_{t-j}'$ is a strictly stationary solution of $$\label{eq-gross-Q} \underline{Y}_t' -
\underline{\Phi} \, \underline{Y}_{t-1}' = \sum_{j=0}^q
\underline{\Theta}_j Z_{t-j}', \quad t \in {\mathbb{Z}}.$$ Denoting $\underline{\Theta}_j = 0_{mp,d}$ for $j\in {\mathbb{Z}}\setminus
\{0,\ldots, q\}$, it follows that $\sum_{k=-\infty}^\infty
(\underline{M}_k - \underline{\Phi} \,\underline{M}_{k-1}) U
Z_{t-k}' = \sum_{k=-\infty}^\infty \underline{\Theta}_k Z_{t-k}'$, and multiplying this equation from the right by ${Z'}_{t-j}^T$, taking expectations and observing that $M(z)\Lambda=M(z)$ we conclude that $$\label{eq-L1}(\underline{M}_j - \underline{\Phi}\,
\underline{M}_{j-1}) U = (\underline{M}_j - \underline{\Phi}\,
\underline{M}_{j-1}) \Lambda U= \underline{\Theta}_j U^* \Lambda U
\quad \forall\;j\in {\mathbb{Z}}.$$
Next observe that since $(\underline{Y}_t')_{t\in {\mathbb{Z}}}$ is a strictly stationary solution of , it follows from Theorem \[thm-main\] that $(\underline{Y}_t'')_{t\in {\mathbb{Z}}}$, defined by $\underline{Y}_t'' = \sum_{j=-\infty}^\infty
\underline{N}_j Z_{t-j}'$, is also a strictly stationary solution of . With precisely the same argument as above it follows that $$\label{eq-L2} (\underline{N}_j -
\underline{\Phi} \,\underline{N}_{j-1}) U^* \Lambda U =
\underline{\Theta}_j U^* \Lambda U \quad \forall\; j\in {\mathbb{Z}}.$$ Now let $L_j := \underline{M}_j - \underline{N}_j U^*
\Lambda$, $j\in {\mathbb{Z}}$. Then $L_j - \underline{\Phi} L_{j-1} =
0_{mp,d}$ from and , and the entries of $L_j$ decrease exponentially as $|j|\to \infty$ since so do the entries of $\underline{M}_j$ and $\underline{N}_j$. It follows that for $h\in \{1,\ldots, H\}$ and $j\in {\mathbb{Z}}$ we have $$\label{eq-Q-gross-2} \underline{I}_{h}
\underline{S}^{-1} L_j - \underline{\Phi}_h \underline{I}_{h}
\underline{S}^{-1} L_{j-1} = \underline{I}_{h} \left(
\underline{S}^{-1} L_j - \begin{pmatrix} \underline{\Phi}_1 & &\\ & \ddots & \\
& & \underline{\Phi}_H \end{pmatrix} \underline{S}^{-1} L_{j-1}
\right)= 0_{\underline{r}_{h+1}-\underline{r}_h,d}.$$ Since $\underline{\Phi}_h$ is invertible for $h\in \{1,\ldots, H\}$ such that $\underline{\lambda}_h\neq 0$, this gives $\underline{I}_{h} \underline{S}^{-1} L_0 =
\underline{\Phi}_{h}^{-j} \underline{I}_{h} \underline{S}^{-1} L_j$ for all $j\in {\mathbb{Z}}$ and $\underline{\lambda}_h\neq 0$. Since for $|\underline{\lambda}_h|\geq 1$, $\|\underline{\Phi}_{h}^{-j}\| \leq \kappa j^{mp}$ for all $j\in
{\mathbb{N}}_0$ for some constant $\kappa$, it follows that $\|\underline{I}_{h} \underline{S}^{-1} L_0 \| \leq \kappa j^{mp}
\|\underline{I}_h \underline{S}^{-1} L_j\|$, which converges to 0 as $j\to\infty$ by the geometric decrease of the coefficients of $L_j$ as $j\to\infty$, so that $\underline{I}_{h} \underline{S}^{-1} L_k =
0$ for $|\underline{\lambda}_h|\geq 1$ and $k=0$ and hence for all $k\in {\mathbb{Z}}$. Similarly, letting $j\to-\infty$, it follows that $\underline{I}_h \underline{S}^{-1} L_k = 0$ for $|\underline{\lambda}_h|\in (0,1)$ and $k=0$ and hence for all $k\in
{\mathbb{Z}}$. Finally, for $h\in \{1,\ldots, H\}$ such that $\underline{\lambda}_h=0$ observe that $\underline{I}_{h}
\underline{S}^{-1} L_k = \underline{\Phi}_{h}^{mp} \underline{I}_{h}
\underline{S}^{-1} L_{k-mp}$ for $k\in {\mathbb{Z}}$ by , and since $\underline{\Phi}_{h}^{mp} = 0$, this shows that $\underline{I}_{h} \underline{S}^{-1} L_k = 0$ for $k\in {\mathbb{Z}}$. Summing up, we have $\underline{S}^{-1} L_k =0$ and hence $\underline{M}_k = \underline{N}_k U^* \Lambda$ for $k\in {\mathbb{Z}}$, which is . Equation then follows from , since $$\underline{M}_j U Z_{t-j} = \underline{M}_j \left(
\begin{array}{c}
w_{t-j} \\ u \end{array} \right) = \underline{N}_j U^* \left(
\begin{array}{c} w_{t-j} \\ 0_{d-s} \end{array} \right) =
\underline{N}_j U^* \left(U Z_{t-j} - \left( \begin{array}{c} 0 \\
u
\end{array} \right) \right).$$
Returning to the proof of the necessity of conditions (ii) and (ii’) for a strictly stationary solution to exist, observe that $\sum_{j=-\infty}^\infty
\underline{N}_j Z_{t-j}$ converges almost surely absolutely by , and since the entries of $\underline{N}_j$ decrease geometrically as $|j|\to\infty$, this together with implies that $\sum_{j=-\infty}^\infty \underline{M}_j
U Z_{t-j}$ converges almost surely absolutely, which shows that (ii’) must hold. To see (ii), observe that for $j\geq mp +q$ we have $$\label{N3} N_{j,h} =
\begin{cases} \underline{\Phi}_{h}^{j-q} \sum_{k=0}^{q}
\underline{\Phi}_{h}^{q-k} \underline{I}_{h} \underline{S}^{-1}
\underline{\Theta}_k, & |\underline{\lambda}_h| \in (0,1),\\
0, & |\underline{\lambda}_h| \not\in (0,1),
\end{cases}$$ while $$\label{eq-N4}
N_{-1,h} = \begin{cases} \underline{\Phi}_{h}^{-1-q} \sum_{k= 0}^{q}
\underline{\Phi}_{h}^{q-k} \underline{I}_{h} \underline{S}^{-1}
\underline{\Theta}_k , & |\underline{\lambda}_h| > 1,\\
0, & |\underline{\lambda}_h| \leq 1.\end{cases}$$ Since a strictly stationary solution of exists, it follows from Theorem \[thm-main\] that ${\mathbb{E}}\log^+
\|\underline{N}_{j} Z_0\| < \infty$ for $j\geq mp+q$ and ${\mathbb{E}}\log^+
\| \underline{N}_{-1} Z_0\| < \infty$. Together with this shows that condition (ii) of Theorem \[thm-5\] is necessary.
The sufficiency of the conditions and uniqueness of the solution
----------------------------------------------------------------
In this subsection we shall show that (i), (ii), (iii) as well as (i), (ii’), (iii) of Theorem \[thm-5\] are sufficient conditions for a strictly stationary solution of to exist, and prove the uniqueness assertion.
\(a) Assume that conditions (i), (ii) and (iii) hold for some $v\in
{\mathbb{C}}^s$ and $g\in {\mathbb{C}}^m$. Then ${\mathbb{E}}\log^+ \| \underline{N}_{-1}
Z_0\| < \infty$ and ${\mathbb{E}}\log^+ \| \underline{N}_{mp+q} Z_0\| <
\infty$ by (ii) and . In particular, since $\underline{S}$ is invertible, ${\mathbb{E}}\log^+ \| N_{-1,h} Z_0\| <
\infty$ for $|\underline{\lambda}_h|> 1$ and ${\mathbb{E}}\log^+ \|
N_{mp+q,h} Z_0\| < \infty$ for $|\underline{\lambda}_h| \in (0,1)$. The invertibility of $\underline{\Phi}_h$ for $\underline{\lambda}_h
\neq 0$ then shows that $$\label{eq-cond-ii-new}
{\mathbb{E}}\log^+ \left\| \sum_{k=0}^q \underline{\Phi}_h^{q-k}
\underline{I}_h \underline{S}^{-1} \underline{\Theta}_k Z_0 \right\|
< \infty \quad \forall\; h\in \{1,\ldots, H\}:
|\underline{\lambda}_h| \in (0,1) \cup (1,\infty).$$ Now let $(W_t''')_{t\in {\mathbb{Z}}}$ be an i.i.d. $N( \left(
\begin{array}{c} v \\ u \end{array} \right) , \left(
\begin{array}{ll} \mbox{\rm Id}_s & 0_{s,d-s} \\ 0_{d-s,s} &
0_{d-s,d-s} \end{array} \right))$ distributed sequence and define $Z_t''' := U^* W_t'''$. Then ${\mathbb{E}}Z_t''' = U^* (v^T, u^T)^T$. By conditions (i) and (iii) and Theorem \[thm-4\], $(Y_t''')_{t\in
{\mathbb{Z}}}$, defined by $Y_t''' := P(1)^{-1} Q(1) {\mathbb{E}}Z_0''' +$ $\sum_{j=-\infty}^\infty M_j (W_{t-j}'''- (v^T,u^T)^T)$, is a weakly stationary solution of $Y_t''' - \sum_{k=1}^p \Psi_k
Y_{t-k}''' = \sum_{k=0}^q \Theta_k Z_{t-k}'''$, and obviously, it is also strictly stationary. It now follows in complete analogy to the necessity proof presented in Section \[S-5-1\] that $A_h=0$ and $C_h u = (\alpha_{h,1}, \ldots, \alpha_{h,\underline{r}_{h+1} -
\underline{r}_h})^T$ for $|\underline{\lambda}_h|=1$, where $(A_h,C_h)$ is defined as in and $\alpha_{h,1} = 0$ if $\lambda_h=1$. Hence $\sum_{k=0}^q \underline{\Phi}_h^{q-k}
\underline{I}_h \underline{S}^{-1} \underline{\widetilde{\Theta}}_k
W_0 = (\alpha_{h,1}, \ldots, \alpha_{h,\underline{r}_{h+1} -
\underline{r}_h})^T$ for $|\underline{\lambda}_h|=1$. By Theorem \[thm-main\], this together with implies the existence of a strictly stationary solution of , so that a strictly stationary solution $(Y_t)_{t\in {\mathbb{Z}}}$ of exists by Proposition \[thm-2\].
\(b) Now assume that conditions (i), (ii’) and (iii) hold for some $v\in {\mathbb{C}}^s$ and $g\in {\mathbb{C}}^m$ and define $Y=(Y_t)_{t\in {\mathbb{Z}}}$ by . Then $Y$ is clearly strictly stationary. Since $U Z_t
= (w_t^T, u^T)$, we further have, using (iii), that $$\begin{aligned}
P(B) Y_t & = & P(1) g - P(1)M(1) \left( \begin{array}{c} v \\ u
\end{array} \right) + Q(B) U^* \left(
\begin{array}{ll} \mbox{\rm Id}_s & 0_{s,d-s} \\ 0_{d-s,s} & 0_{d-s,d-s}
\end{array} \right) \left( \begin{array}{c} w_t \\ u \end{array}
\right) \\
& = & Q(1) U^* \left(
\begin{array}{c} v \\ u \end{array} \right)
- Q(1) U^* \left(
\begin{array}{c} v \\ 0_{d-s} \end{array} \right)
+ Q(B) U^* \left(
\begin{array}{c} w_t \\ 0_{d-s} \end{array} \right)
\\
& = & Q(B) U^* \left( \begin{array}{c} w_t \\ u \end{array} \right)
= Q(B) Z_t\end{aligned}$$ for $t\in {\mathbb{Z}}$, so that $(Y_t)_{t\in {\mathbb{Z}}}$ is a solution of .
\(c) Finally, the uniqueness assertion follows from the fact that by Proposition \[thm-2\], has a unique strictly stationary solution if and only if has a unique strictly stationary solution. By Theorem \[thm-main\], the latter is equivalent to the fact that $\underline{\Phi}$ does not have an eigenvalue on the unit circle, which in turn is equivalent to $\det
P(z) \neq 0$ for $z$ on the unit circle, since $\det P(z) = \det
(\mbox{\rm Id}_{mp} - \underline{\Phi} z)$ (e.g. Gohberg et al. [@GLR], p. 14). This finishes the proof of Theorem \[thm-5\].
Discussion and consequences of main results {#S7}
===========================================
In this section we shall discuss the main results and consider special cases. Some consequences of the results are also listed. We start with some comments on Theorem \[thm-main\]. If $\Psi_1$ has only eigenvalues of absolute value in $(0,1)\cup (1,\infty)$, then a much simpler condition for stationarity of can be given:
\[cor-1\]
Let the assumptions of Theorem \[thm-main\] be satisfied and suppose that $\Psi_1$ has only eigenvalues of absolute value in $(0,1) \cup
(1,\infty)$. Then a strictly stationary solution of exists if and only if $$\label{bed3}
\mathbb{E} \log^+ \left\| \left( \sum_{k=0}^q \Psi_1^{q-k}
\Theta_k\right)Z_0 \right\| < \infty.$$
It follows from Theorem \[thm-main\] that there exists a strictly stationary solution if and only if holds for every $h\in \{1,\ldots, H\}$. But this is equivalent to $$\mathbb{E}
\log^+ \| ( \sum_{k=0}^q(S^{-1} \Psi_1 S)^{q-k} \mbox{Id}_m S^{-1}
\Theta_k)Z_0 \| < \infty,$$ which in turn is equivalent to , since $S$ is invertible and hence for a random vector $R \in {\mathbb{C}}^m$ we have $\mathbb{E} \log^+ \|S R\|<\infty$ if and only if $\mathbb{E} \log^+ \| R\| < \infty$.
\[rem-2\] Suppose that $\Psi_1$ has only eigenvalues of absolute value in $(0,1) \cup (1,\infty)$. Then $\mathbb{E} \log^+ \|Z_0\| < \infty$ is a sufficient condition for to have a strictly stationary solution, since it implies . But it is not necessary. For example, let $q=1$, $m=d=2$ and $$\Psi_1 = \begin{pmatrix} 2 & 0
\\ 0 & 3 \end{pmatrix}, \quad \Theta_0 = \mbox{\rm Id}_2, \quad \Theta_1
= \begin{pmatrix} -1 & -1 \\ 1 & -4 \end{pmatrix}, \quad \mbox{so
that} \quad \sum_{k=0}^1 \Psi_1^{q-k} \Theta_k =
\begin{pmatrix} 1 & -1
\\ 1 & -1
\end{pmatrix}.$$ By , a strictly stationary solution exists for example if the i.i.d. noise $(Z_t)_{t\in {\mathbb{Z}}}$ satisfies $Z_0 = (R_0, R_0 +
R_0')^T$, where $R_0'$ is a random variable with finite log moment and $R_0$ a random variable with infinite log moment. In particular, $\mathbb{E} \log^+ \|Z_0\| = \infty$ is possible.
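The cancellation behind this example is easy to verify numerically. The following Python sketch (NumPy assumed) reproduces the matrix sum of the remark; the concrete values of $R_0$ and $R_0'$ are illustrative stand-ins for the heavy-tailed and light-tailed components, not part of the original example.

```python
import numpy as np

# Numerical check of the example above (q = 1, m = d = 2); R0 and R0'
# below are illustrative stand-ins for the heavy-tailed and light-tailed
# components.
Psi1 = np.array([[2.0, 0.0], [0.0, 3.0]])
Theta0 = np.eye(2)
Theta1 = np.array([[-1.0, -1.0], [1.0, -4.0]])

# sum_{k=0}^{1} Psi1^{1-k} Theta_k = Psi1 @ Theta0 + Theta1
A = Psi1 @ Theta0 + Theta1
assert np.array_equal(A, np.array([[1.0, -1.0], [1.0, -1.0]]))

# With Z0 = (R0, R0 + R0')^T the component R0 cancels exactly:
# A @ Z0 = (-R0', -R0')^T, so E log^+ ||A Z0|| is finite even when R0
# (and hence Z0) has an infinite log-moment.
R0, R0p = 1.0e6, 0.5
Z0 = np.array([R0, R0 + R0p])
assert np.allclose(A @ Z0, [-R0p, -R0p])
```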
An example like the one in the remark above cannot occur when $m=d$ and the matrix $\sum_{k=0}^q \Psi_1^{q-k} \Theta_k$ is invertible. More generally, we have the following result:
\[cor-2\] Let the assumptions of Theorem \[thm-main\] be satisfied and suppose that $\Psi_1$ has only eigenvalues of absolute value in $(0,1) \cup
(1,\infty)$. Suppose further that $d\leq m$ and that $\sum_{k=0}^q
\Psi_1^{q-k} \Theta_k$ has full rank $d$. Then a strictly stationary solution of exists if and only if $\mathbb{E} \log^+
\|Z_0\| < \infty$.
The sufficiency of the condition has been observed in Remark \[rem-2\], and for the necessity, observe that with $A:=
\sum_{k=0}^q \Psi_1^{q-k} \Theta_k$ and $U:= A Z_0$ we must have $\mathbb{E} \log^+ \|U\| < \infty$ by . Since $A$ has full column rank $d$, the matrix $A^* A \in {\mathbb{C}}^{d\times d}$ is invertible and we have $Z_0 = (A^* A)^{-1} A^* U$, i.e. the components of $Z_0$ are linear combinations of those of $U$. It follows that $\mathbb{E}
\log^+ \|Z_0\| < \infty$.
Next, we shall discuss the conditions of Theorem \[thm-5\] in more detail. The following remark is obvious from Theorem \[thm-5\]. It implies in particular the well known fact that ${\mathbb{E}}\log^+
\|Z_0\|<\infty$ together with $\det P(z)\neq 0$ for all $z$ on the unit circle is sufficient for the existence of a strictly stationary solution.
\[rem-2a\] (a) ${\mathbb{E}}\log^+ \|Z_0\|<\infty$ is a sufficient condition for (ii) of Theorem \[thm-5\].\
(b) $\det P(1) \neq 0$ is a sufficient condition for (iii) of Theorem \[thm-5\].\
(c) $\det P(z) \neq 0$ for all $z$ on the unit circle is a sufficient condition for (i) and (iii) of Theorem \[thm-5\].
With the notations of Theorem \[thm-5\], denote $$\label{def-tilde-Q}
\widetilde{Q}(z) := Q(z) U^* \left(
\begin{array}{ll} \mbox{\rm Id}_s & 0_{s,d-s}
\\ 0_{d-s,s} & 0_{d-s,d-s} \end{array} \right) ,$$ so that $M(z) = P^{-1}(z) \widetilde{Q}(z)$. It is natural to ask if conditions (i) and (iii) of Theorem \[thm-5\] can be replaced by a removability condition on the singularities on the unit circle of $(\det P(z))^{-1} \det (\widetilde{Q}(z))$ if $d=m$. The following corollary shows that this condition is indeed necessary, but it is not sufficient as pointed out in Remark \[rem-2c\].
\[cor-3\] Under the assumptions of Theorem \[thm-main\], with $\widetilde{Q}(z)$ as defined in , a necessary condition for a strictly stationary solution of the ARMA$(p,q)$ equation to exist is that the function $z \mapsto |\det
{P}(z)|^{-2} {\det (\widetilde{Q}(z)\widetilde{Q}(z)^*)}$ has only removable singularities on the unit circle. If additionally $d=m$, then a necessary condition for a strictly stationary solution to exist is that the matrix rational function $z \mapsto (\det
P(z))^{-1} \det (\widetilde{Q}(z))$ has only removable singularities on the unit circle.
The second assertion is immediate from Theorem \[thm-5\], and the first assertion follows from the fact that if $M(z)$ as defined in Theorem \[thm-5\] has only removable singularities on the unit circle, then so does $M(z) M(z)^*$ and hence $\det (M(z) M(z)^*)$.
\[rem-2c\] In the case $d=m$ and ${\mathbb{E}}\log^+\|Z_0\| < \infty$, the condition that the matrix rational function $z \mapsto (\det {P}(z))^{-1}
{\det \widetilde{Q}(z)}$ has only removable singularities on the unit circle is not sufficient for the existence of a strictly stationary solution of . For example, let $p=q=1$, $m=d=2$ and $\Psi_1 = \Theta_0 = \mbox{\rm Id}_{2}$, $\Theta_1 =
\begin{pmatrix} -1 & 0\\ 1 & -1 \end{pmatrix}$, $(Z_t)_{t\in {\mathbb{Z}}}$ be i.i.d. standard normally distributed and $U=\mbox{\rm Id}_2$. Then $\det P(z) = \det \widetilde{Q}(z) = (1-z)^2$, but it does not hold that $\Psi_1 \Theta_0 + \Theta_1 = 0$, so that condition (iii) of Theorem \[thm-main\] is violated and no strictly stationary solution can exist.
Next, we shall discuss condition (i) of Theorem \[thm-5\] in more detail. Recall (e.g. Kailath [@Kailath]) that a ${\mathbb{C}}^{m\times
m}$ matrix polynomial $R(z)$ is a [*left-divisor*]{} of $P(z)$, if there is a matrix polynomial $P_1(z)$ such that $P(z) = R(z)
P_1(z)$. The matrix polynomials $P(z)$ and $\widetilde{Q}(z)$ are [*left-coprime*]{}, if every common left-divisor $R(z)$ of $P(z)$ and $\widetilde{Q}(z)$ is [*unimodular*]{}, i.e. the determinant of $R(z)$ is constant in $z$. In that case, the matrix rational function $P^{-1}(z) \widetilde{Q}(z)$ is also called [*irreducible*]{}. With $\widetilde{Q}$ as defined in , it is then easy to see that condition (i) of Theorem \[thm-5\] is equivalent to
(i’) There exist ${\mathbb{C}}^{m\times m}$-valued matrix polynomials $P_1(z)$ and $R(z)$ and a ${\mathbb{C}}^{m\times d}$-valued matrix polynomial $Q_1(z)$ such that $P(z) = R(z) P_1(z)$, $\widetilde{Q}(z) = R(z) Q_1(z)$ for all $z\in {\mathbb{C}}$ and $\det P_1(z) \neq 0$ for all $z$ on the unit circle.
That (i’) implies (i) is obvious, and that (i) implies (i’) follows by taking $R(z)$ as the greatest common left-divisor (cf. [@Kailath], p. 377) of $P(z)$ and $\widetilde{Q}(z)$. The thus remaining right-factors $P_1(z)$ and $Q_1(z)$ are then left-coprime, and since the matrix rational function $M(z) = P^{-1}(z)
\widetilde{Q}(z) = P_1^{-1}(z) Q_1(z)$ has no poles on the unit circle, it follows from page 447 in Kailath [@Kailath] that $\det P_1(z) \neq 0$ for all $z$ on the unit circle, which establishes (i’). As an immediate consequence, we have:
\[rem-BP-1\] With the notation of Theorem \[thm-5\] and , assume additionally that $P(z)$ and $\widetilde{Q}(z)$ are left-coprime. Then condition (i) of Theorem \[thm-5\] is equivalent to $\det P(z) \neq 0$ for all $z$ on the unit circle.
Next we show how a slight extension of Theorem 4.1 of Bougerol and Picard [@BP], which characterized the existence of a strictly stationary non-anticipative solution of the ARMA$(p,q)$ equation , can be deduced from Theorem \[thm-5\]. By a [*non-anticipative*]{} strictly stationary solution we mean a strictly stationary solution $Y=(Y_t)_{t\in {\mathbb{Z}}}$ such that for every $t\in
{\mathbb{Z}}$, $Y_t$ is independent of the sigma algebra generated by $(Z_s)_{s> t}$, and by a [*causal*]{} strictly stationary solution we mean a strictly stationary solution $Y=(Y_t)_{t\in {\mathbb{Z}}}$ such that for every $t\in {\mathbb{Z}}$, $Y_t$ is measurable with respect to the sigma algebra generated by $(Z_s)_{s\leq t}$. Clearly, since $(Z_t)_{t\in {\mathbb{Z}}}$ is assumed to be i.i.d., every causal solution is also non-anticipative. The equivalence of (i) and (iii) in the theorem below was already obtained by Bougerol and Picard [@BP] under the additional assumption that ${\mathbb{E}}\log^+ \|Z_0\| < \infty$.
\[cor-BP\] In addition to the assumptions and notations of Theorem \[thm-5\], assume that the matrix polynomials $P(z)$ and $\widetilde{Q}(z)$ are left-coprime, with $\widetilde{Q}(z)$ as defined in . Then the following are equivalent:
1. There exists a non-anticipative strictly stationary solution of .
2. There exists a causal strictly stationary solution of .
3. $\det P(z) \neq 0$ for all $z\in {\mathbb{C}}$ such that $|z|\leq 1$ and if $M(z) = \sum_{j=0}^\infty M_j z^j$ denotes the Taylor expansion of $M(z) = P^{-1}(z) \widetilde{Q}(z)$, then $$\label{eq-logfinite2}
{\mathbb{E}}\log^+ \| M_j UZ_0\| < \infty \quad \forall\; j \in \{ mp+q-p+1,
\ldots, mp+q\} .$$
The implication “(iii) $\Rightarrow$ (ii)” is immediate from Theorem \[thm-5\] and equation , and “(ii) $\Rightarrow$ (i)” is obvious since $(Z_t)_{t\in {\mathbb{Z}}}$ is i.i.d. Let us show that “(i) $\Rightarrow$ (iii)”: since a strictly stationary solution exists, the function $M(z)$ has only removable singularities on the unit circle by Theorem \[thm-5\]. Since $P(z)$ and $\widetilde{Q}(z)$ are left-coprime, this implies by Remark \[rem-BP-1\] that $\det
P(z) \neq 0$ for all $z\in {\mathbb{C}}$ such that $|z|=1$. In particular, by Theorem \[thm-5\], the strictly stationary solution is unique and given by . By assumption, this solution must then be non-anticipative, so that we conclude that the distribution of $M_j
U Z_{t-j}$ must be degenerate to a constant for all $j\in
\{-1,-2,\ldots\}$. But since $U Z_0 = (w_0^T, u^T)^T$ and $M_j =
(M_j', 0_{m,d-s})$ with certain matrices $M_j' \in {\mathbb{C}}^{m\times s}$, it follows for $j\leq -1$ that $M_j UZ_0 = M_j' w_0$, so that $M_j' =
0$ since no non-trivial linear combination of the components of $w_0$ is constant a.s. It follows that $M_j = 0$ for $j\leq -1$, i.e. $M(z)$ has only removable singularities for $|z|\leq 1$. Since $P(z)$ and $\widetilde{Q}(z)$ are assumed to be left-coprime, it follows from page 447 in Kailath [@Kailath] that $\det P(z) \neq
0$ for all $|z|\leq 1$. Equation is an immediate consequence of Theorem \[thm-5\].
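For concrete models, the Taylor coefficients $M_j$ appearing in condition (iii) can be generated recursively: comparing coefficients of $z^j$ in $P(z) M(z) = \widetilde{Q}(z)$ gives $M_j = \widetilde{Q}_j + \sum_{k=1}^p \Psi_k M_{j-k}$, with $M_j = 0$ for $j<0$. A minimal Python sketch (NumPy assumed; the coefficient matrices are hypothetical, and for simplicity $d=m=s$ and $U = \mbox{Id}$, so that $\widetilde{Q} = Q$):

```python
import numpy as np

# Sketch of the coefficient recursion M_j = Q_j + sum_k Psi_k M_{j-k}
# (hypothetical coefficient matrices; d = m, U = Id, so Q~ = Q).
def taylor_coefficients(Psi, Theta, n):
    """First n Taylor coefficients M_j of M(z) = P(z)^{-1} Q(z)."""
    m, d = Psi[0].shape[0], Theta[0].shape[1]
    M = []
    for j in range(n):
        Mj = Theta[j].copy() if j < len(Theta) else np.zeros((m, d))
        for k, Psik in enumerate(Psi, start=1):   # k = 1, ..., p
            if j - k >= 0:
                Mj = Mj + Psik @ M[j - k]
        M.append(Mj)
    return M

# Example with p = q = 1 and a stable Psi_1 (eigenvalues inside the disk):
Psi = [np.array([[0.5, 0.1], [0.0, 0.2]])]
Theta = [np.eye(2), 0.3 * np.eye(2)]
M = taylor_coefficients(Psi, Theta, 20)

# Cross-check the truncated series against P(z)^{-1} Q(z) at a point:
z = 0.4
Pz = np.eye(2) - z * Psi[0]
Qz = Theta[0] + z * Theta[1]
series = sum(Mj * z**j for j, Mj in enumerate(M))
assert np.allclose(series, np.linalg.inv(Pz) @ Qz, atol=1e-6)
```

The recursion is just the causal expansion valid when $\det P(z) \neq 0$ on the closed unit disk; the log-moment condition is then checked on the finitely many $M_j$ named in (iii).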
It may be possible to extend Theorem \[cor-BP\] to situations without assuming that $P(z)$ and $\widetilde{Q}(z)$ are left-coprime, but we did not investigate this question.
The last result is on the interplay of the existence of strictly and of weakly stationary solutions of when the noise is i.i.d. with finite second moments:
\[cor-strict-weak\] Let $m,d, p\in {\mathbb{N}}$, $q\in {\mathbb{N}}_0$, and let $(Z_t)_{t\in {\mathbb{Z}}}$ be an i.i.d. sequence of ${\mathbb{C}}^d$-valued random vectors with finite second moment. Let $\Psi_1, \ldots, \Psi_p \in {\mathbb{C}}^{m\times m}$ and $\Theta_0, \ldots, \Theta_q \in {\mathbb{C}}^{m\times d}$. Then the ARMA$(p,q)$ equation admits a strictly stationary solution if and only if it admits a weakly stationary solution, and in that case, the solution given by is both a strictly stationary and weakly stationary solution of .
It follows from Theorem \[thm-4\] that if a weakly stationary solution exists, then one choice of such a solution is given by , which is clearly also strictly stationary. On the other hand, if a strictly stationary solution exists, then by Theorem \[thm-5\], one such solution is given by , which is clearly weakly stationary.
Finally, we remark that most of the results presented in this paper can be applied also to the case when $(Z_t)_{t\in {\mathbb{Z}}}$ is an i.i.d. sequence of ${\mathbb{C}}^{d\times d'}$ random matrices and $(Y_t)_{t\in
{\mathbb{Z}}}$ is ${\mathbb{C}}^{m\times d'}$-valued. This can be seen by stacking the columns of $Z_t$ into a ${\mathbb{C}}^{dd'}$-variate random vector $Z_t'$, those of $Y_t$ into a ${\mathbb{C}}^{md'}$-variate random vector $Y_t'$, and considering the matrices $$\Psi_k' := \begin{pmatrix} \Psi_k & & \\ & \ddots & \\ & & \Psi_k
\end{pmatrix} \in {\mathbb{C}}^{md'\times md'}
\quad \mbox{and} \quad \Theta_k' := \begin{pmatrix} \Theta_k & & \\
& \ddots & \\ & & \Theta_k
\end{pmatrix} \in {\mathbb{C}}^{md'\times dd'}.$$ The question of existence of a strictly stationary solution of with matrix-valued $Z_t$ and $Y_t$ is then equivalent to the existence of a strictly stationary solution of $Y_t' -
\sum_{k=1}^p \Psi_k' Y_{t-k}' = \sum_{k=0}^q \Theta_k' Z_{t-k}'$.
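The stacking construction rests on the identity $\mathrm{vec}(\Psi_k Y) = (\mbox{Id}_{d'} \otimes \Psi_k)\,\mathrm{vec}(Y)$, so the block-diagonal matrices $\Psi_k'$ above are Kronecker products. A small Python check (NumPy; the sizes are hypothetical):

```python
import numpy as np

# Sketch of the column-stacking argument: vec(Psi_k Y) equals
# (Id_{d'} kron Psi_k) vec(Y), i.e. the block-diagonal Psi_k' above.
def stack(A, dprime):
    return np.kron(np.eye(dprime), A)   # dprime diagonal copies of A

rng = np.random.default_rng(1)
Psi_k = rng.standard_normal((2, 2))     # m = 2
Y = rng.standard_normal((2, 3))         # d' = 3 columns
vecY = Y.flatten(order='F')             # stack the columns of Y
lhs = (Psi_k @ Y).flatten(order='F')
rhs = stack(Psi_k, 3) @ vecY
assert np.allclose(lhs, rhs)
```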
### Acknowledgements {#acknowledgements .unnumbered}
We would like to thank Jens-Peter Kreiß for helpful comments. Support from an NTH-grant of the state of Lower Saxony and from National Science Foundation Grant DMS-1107031 is gratefully acknowledged.
[99]{}
Athanasopoulos, G. and Vahid, F. (2008) VARMA versus VAR for macroeconomic forecasting. [*J. Bus. Econ. Statistics*]{} [**26**]{}, 237–252.
Bougerol, P. and Picard, N. (1992) Strict stationarity of generalized autoregressive processes. [*Ann. Probab.*]{} [**20**]{}, 1714–1730.
Brockwell, P.J. and Davis, R.A. (1991) [*Time Series: Theory and Methods*]{}, 2nd ed. Springer, New York.
Brockwell, P.J. and Lindner, A. (2010) Strictly stationary solutions of autoregressive moving average equations. [*Biometrika*]{} [**97**]{}, 765–772.
Kailath, T. (1980) [*Linear Systems*]{}. Prentice Hall, Englewood Cliffs.
Kallenberg, O. (2002) [*Foundations of Modern Probability*]{}. Second Edition, Springer, New York.
Gohberg, I., Lancaster, P. and Rodman, L. (1982) [*Matrix Polynomials.*]{} Academic Press, New York.
Golub, G. H. and van Loan, C. F. (1996) [*Matrix Computations*]{}. Third Edition, Johns Hopkins, Baltimore and London.
[^1]: Colorado State University, Fort Collins, Colorado and Columbia University, New York. `[email protected]`.
[^2]: Institut für Mathematische Stochastik, TU Braunschweig, Pockelsstraße 14, D-38106 Braunschweig, Germany `[email protected]`

[^3]: Institut für Mathematische Stochastik, TU Braunschweig, Pockelsstraße 14, D-38106 Braunschweig, Germany `[email protected]`
---
abstract: 'Extensive molecular dynamics simulations show that a short-range central potential, suited to model C$_{60}$, undergoes a high temperature transition to a glassy phase characterized by the positional disorder of the constituent particles. Crystallization, melting and sublimation, which also take place during the simulation runs, are illustrated in detail. It turns out that vitrification and the mentioned phase transitions occur when the packing fraction of the system — defined in terms of an effective hard-core diameter — equals that of hard spheres at their own glass and melting transition, respectively. A close analogy also emerges between our findings and recent mode coupling theory calculations of structural arrest lines in a similar model of protein solutions. We argue that the conclusions of the present study might hold for a wide class of potentials currently employed to mimic interactions in complex fluids (some of which of biological interest), suggesting how to achieve at least qualitative predictions of vitrification and crystallization in those systems.'
author:
- 'Maria C. Abramo, Carlo Caccamo[^1], Dino Costa, and Romina Ruberto'
title: |
Phase and glass transitions in short-range central\
potential model systems: the case of C$_{60}$
---
Introduction
============
Central, short-range potential models of simple analytic form have been the object of intense investigation over recent years. The main reason for such interest is that they provide an approximate representation of the effective interactions and phase behavior experimentally observed in a variety of macrosized particle systems, spanning from colloidal suspensions and protein or polymer solutions [@zamora; @lomakin; @piazza2; @poon1; @tenwolde; @louis; @foffi; @pellicane] to other macromolecular systems and fullerenes [@cheng; @hagen; @ashcroft; @hasegawa; @costa; @costa1]. A paradigmatic example in this respect is offered by protein crystallization [@chayen; @mcpherson; @broide]: indeed, if one adopts a representation in which the macromolecules are spherical particles interacting through a central short-range potential, and the solvent is assumed to be a structureless continuum, the liquid-vapor binodal and the sublimation line thereby calculated exhibit all features of the protein-rich/protein-poor coexistence and crystallization lines observed in real globular protein solutions. These models thus offer an invaluable tool for the study, by the usual means of statistical mechanics and simulation, of the relative location of phase coexistence lines in real systems.
This scenario has been further enriched by recent investigations of the glass transition in one of the most studied short-range models, namely the attractive hard-core Yukawa fluid. In this context the mode coupling theory predicts the onset of two glassy phases deeply inside the metastable liquid-solid region of the model [@foffi]. The formation of distinct vitreous phases with peculiar internal characteristics has also been reported in recent experiments on colloid-polymer mixtures [@bartsch; @chen; @poon].
These findings prompted us to carry out an extensive investigation of phase coexistence and glass transition conditions in short-range models, with particular attention to the rich phenomenology which characterizes the metastable region enveloped between the freezing and the melting line of such systems. The paper focuses in particular on the Girifalco potential for fullerene C$_{60}$ [@girifalco]. This model can be considered “marginally” short-range in the sense that, whereas in colloidal and protein systems the interaction between (model) macroparticles reduces practically to zero within a fraction of the particle diameter and the liquid-vapor equilibrium is only metastable, in the Girifalco model the decay length of interactions is slightly greater than the fullerene diameter and a stable liquid phase does exist, albeit confined to a temperature interval smaller than 100K [@cheng; @hasegawa; @costa; @costa1], with no experimental evidence of it hitherto reported [@fischer; @xu; @sundar; @noi; @nota]. Such characteristics appear consistent with early speculations about the “elusive diffusive” nature of liquid C$_{60}$ [@ashcroft], as well as with recent evidence of the absence, in this peculiar fluid, of cage effects in the velocity autocorrelation functions, and of collective phenomena, ordinarily observed in simple liquids [@alemany].
We have recently reported in a preliminary communication [@vetro] (hereafter referred to as I) Molecular Dynamics (MD) evidence of the onset of a glass transition in C$_{60}$ at $T=1100$K at a pressure of 3.5MPa, associated with positional disorder. Such a glassy phase is quite distinct from the experimentally detected C$_{60}$ orientational glass, which forms at 90K as a consequence of the freezing at low temperature of the residual disorder in the orientation of the fullerene cages [@gugenberger; @matsuo].
As preliminarily documented in I, vitrification of C$_{60}$ at high temperature is assessed through quenching cycles of the liquid phase at various pressures, by analyzing a number of thermodynamic, structural and dynamical quantities. Here, we report and discuss in detail the MD results we accumulated to support our conclusion that a positional glass of C$_{60}$ is indeed formed. In the same context, we discuss other transitions undergone by the system in the course of cooling or heating cycles, such as crystallization, melting and sublimation. As we shall detail, such transitions take place deeply inside the metastable region in coincidence with the crossing of two density-vs-temperature [*loci*]{} over which the “effective” packing fraction of the system equals that of hard spheres at their melting and glass transition, respectively. The analysis of such hard-sphere-like behavior highlights the analogies of our study with the case, discussed in reference [@foffi], of the onset of glassy phases in model protein solutions, suggesting that our results can be useful in the more general context of complex-fluid investigations.
The paper is structured as follows: in section II we introduce the model and describe the computational strategies. In section III results for thermodynamic, structural and dynamical properties are reported. The discussion and conclusions follow in section IV.
Model and simulation strategies
===============================
We study a system composed of particles interacting via the Girifalco potential [@girifalco], $$\begin{aligned}
\label{eq:pot}
v(r) &=& -\alpha_1 \left[ \frac{1}{s(s-1)^3} +
\frac{1}{s(s+1)^3} - \frac{2}{s^4} \right] \nonumber\\[4pt]
& & \quad +\alpha_2 \left[ \frac{1}{s(s-1)^9} +
\frac{1}{s(s+1)^9} - \frac{2}{s^{10}} \right] \end{aligned}$$ where $s=r/d$, $\alpha_1=N^2A/(12d^6)$, and $\alpha_2=N^2B/(90d^{12})$; $N=60$ and $d=0.71$nm are the number of carbon atoms and the diameter, respectively, of the fullerene particles, $A=32\times10^{-60}$erg cm$^6$ and $B=55.77\times10^{-105}$erg cm$^{12}$ are constants entering the 12-6 potential $\phi(r) = -A/r^6 + B/r^{12}$ through which two carbon sites on different spherical molecules are assumed to interact. The distance where the potential (\[eq:pot\]) crosses zero, the position of the potential well minimum and its depth, are $\sigma=0.959$nm, $r_{\rm min}= 1.005$nm, and $\varepsilon = 0.444\times10^{-12}$erg, respectively [@girifalco]. We assume $v(r)=\infty$ for $s<1$.
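As a consistency check, the quoted well parameters can be recomputed directly from equation (\[eq:pot\]) and the tabulated constants. The following Python sketch (NumPy assumed; cgs units, distances in nm) evaluates $v(r)$ on a fine grid and should recover $\sigma$, $r_{\rm min}$ and $\varepsilon$ to within the quoted precision:

```python
import numpy as np

# Girifalco potential, eq. (eq:pot), with the constants quoted in the text.
N, d_cm = 60, 0.71e-7                    # carbon atoms per cage; diameter (cm)
A, B = 32e-60, 55.77e-105                # erg cm^6, erg cm^12
alpha1 = N**2 * A / (12 * d_cm**6)       # approx. 7.5e-14 erg
alpha2 = N**2 * B / (90 * d_cm**12)      # approx. 1.4e-16 erg

def v(r_nm):
    """Potential in erg; r_nm is the center-to-center distance in nm."""
    s = r_nm / 0.71                      # reduced separation s = r/d
    att = 1/(s*(s-1)**3) + 1/(s*(s+1)**3) - 2/s**4
    rep = 1/(s*(s-1)**9) + 1/(s*(s+1)**9) - 2/s**10
    return -alpha1 * att + alpha2 * rep

# Recover the quoted well parameters on a fine grid outside the hard core:
r = np.linspace(0.90, 1.30, 400001)      # nm
vr = v(r)
i_min = int(np.argmin(vr))
sigma = r[int(np.argmax(vr < 0))]        # first zero crossing of v(r)
r_min, eps = r[i_min], -vr[i_min]
assert abs(sigma - 0.959) < 2e-3         # nm
assert abs(r_min - 1.005) < 2e-3         # nm
assert abs(eps - 0.444e-12) < 3e-15      # erg
```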
We have used the MD method introduced by Andersen [@andersen] to simulate constant-pressure, constant-enthalpy (NPH) conditions, and the Verlet algorithm to integrate the equations of motion over time steps $\Delta t=5$fs; cubic boxes and periodic boundary conditions are assumed. Most results are obtained for samples of $N=864$ and $N=1000$ particles; the smaller sample is fully compatible with an initial fcc lattice arrangement of the particles in a cubic simulation box, whereas the 1000-particle distribution obviously implies defects. Both sizes are part of a more general analysis employing also 500 and 1372 particles, to exclude any dependence of simulation results on the sample size. A wide range of pressures, spanning from close-to-atmospheric conditions up to 250MPa (2.5kbar), has been explored to verify the effects on crystallization and glass transition conditions. Cooling and quenching cycles start from an initial supercritical or liquid configuration (see figure \[fig:1\]), identified as such on the basis of the known phase diagram of C$_{60}$ [@costa1]. Heating cycles begin either from the defective crystals eventually obtained at the end of the cooling cycles, or from a perfect fcc crystal at room temperature with 864 particles. Other details are given in the presentation of results.
Results
=======
For the sake of clarity, we shall first detail the transitions undergone by the system during slow cooling or heating sequences, followed by a presentation of results obtained during the quenching routines. Before doing so, we introduce the concept of “effective” packing fraction for the system at issue, together with two associated loci of thermodynamic states.
Effective hard sphere diameters and\
constant “packing” loci
------------------------------------
We introduce an effective hard sphere diameter for the C$_{60}$-C$_{60}$ interaction with the definition of a reference potential for $v(r)$ of equation (\[eq:pot\]) according to the well-known Weeks-Chandler-Andersen (WCA) prescription [@weeks],
$$\label{eq:wca}
v_{\rm ref}(r) = \begin{cases}
v(r)+\varepsilon & \quad {\rm if} \ r \le r_{\rm min} \\
0 & \quad {\rm if} \ r > r_{\rm min} \,,
\end{cases}$$
and adopt the Barker and Henderson expression for the effective hard-core diameter [@BH], $$\label{eq:sigma1}
\sigma_{\rm BH}=\int_0^\infty \{1 -\exp[-\beta v_{\rm ref}(r)]\}dr \,.$$ The Barker-Henderson prescription for the hard sphere diameter is by no means unique. Other more sophisticated recipes, hinging for instance on the use of the cavity distribution function $y(r)=\exp[\beta v(r)]g(r)$ [@report] (where $g(r)$ is the radial distribution function), have been proposed in the literature. We have verified however that the resulting changes in the estimate of the effective diameter with respect to the simpler BH prescription are quite minor.
We now look for densities $\rho=\rho(T)$ which make the effective packing fraction $\eta=\pi/6\rho\sigma_{\rm BH}^3$ constant with respect to temperature variations, and equal to a prefixed value. In particular, we impose $\eta$ to be equal to the packing of hard spheres at their own melting and glass transition, respectively. The value $\eta^{\rm HS}_{\rm m}=0.545$ has been reported for the packing fraction of hard spheres at melting [@hoover]. A packing of 0.55 has also been reported in constant pressure MD simulation of hard spheres [@gruhn]. Here we choose an intermediate value (see I), by imposing $$\label{eq:etamelt}
\eta_{\rm m}=\pi/6\rho_{\rm m}(T)\sigma_{\rm BH}^3=0.548\,.$$ As for the packing of hard spheres at the glass transition, $\eta_{\rm g}^{\rm HS}$, the value $0.58$ has been reported in reference [@woodcock], whereas effective packing fractions at vitrification for continuous potentials, mostly of Lennard-Jones-like form (see, e.g. [@shumway] and references therein), approach 0.58 with a considerable dispersion of values due to the different prescriptions either for the effective diameter or for the determination of the glass transition temperature. We require in particular (see I): $$\label{eq:etaglass}
\eta_{\rm g}=\pi/6\rho_{\rm g}(T)\sigma_{\rm BH}^3=0.574\,.$$ On the basis of equations (\[eq:etamelt\]) and (\[eq:etaglass\]) we can determine the two functions $\rho_{\rm m}(T)$ and $\rho_{\rm g}(T)$ which are shown in figure \[fig:1\]. As we shall see, these two $\rho(T)$ loci correlate significantly with the crystallization and vitrification conditions of the C$_{60}$ model.
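A possible numerical route to these loci is sketched below in Python (NumPy assumed; cgs units, lengths in nm). The hard core $r<d$ contributes its full length $0.71$nm to the Barker-Henderson integral, and the remainder is integrated over the WCA reference well; the tolerances in the spot checks against the transition parameters of table 1 reflect the quadrature and the rounding of the quoted constants:

```python
import numpy as np

# Sketch of eqs. (eq:wca)-(eq:etaglass): Barker-Henderson effective
# diameter of the WCA reference potential, and constant-packing loci.
kB = 1.380649e-16                        # Boltzmann constant (erg/K)
N, d_cm = 60, 0.71e-7
A, B = 32e-60, 55.77e-105
a1 = N**2 * A / (12 * d_cm**6)
a2 = N**2 * B / (90 * d_cm**12)

def v(r_nm):                             # Girifalco potential (erg), r in nm
    s = r_nm / 0.71
    return (-a1 * (1/(s*(s-1)**3) + 1/(s*(s+1)**3) - 2/s**4)
            + a2 * (1/(s*(s-1)**9) + 1/(s*(s+1)**9) - 2/s**10))

R_MIN, EPS = 1.005, 0.444e-12            # well position (nm) and depth (erg)

def sigma_BH(T):
    """Eq. (eq:sigma1); the hard core r < d contributes its full length."""
    r = np.linspace(0.71 + 1e-6, R_MIN, 20001)
    y = 1.0 - np.exp(-(v(r) + EPS) / (kB * T))   # WCA reference, r <= r_min
    return 0.71 + float(np.sum((y[1:] + y[:-1]) * np.diff(r)) / 2.0)

def rho_locus(T, eta):                   # density (nm^-3) at fixed packing
    return 6.0 * eta / (np.pi * sigma_BH(T)**3)

# Spot checks against the crystallization row of table 1 at P = 3.5 MPa:
assert abs(sigma_BH(1307.0) - 0.976) < 0.01
assert abs(rho_locus(1307.0, 0.548) - 1.124) < 0.025
assert sigma_BH(2200.0) < sigma_BH(1100.0)   # effective core shrinks with T
```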
Cooling cycles and crystallization; heating cycles and sublimation or melting
-----------------------------------------------------------------------------
We first illustrate the case in which the pressure $P=3.5$MPa. Cooling starts from a high temperature fluid configuration at $T=2000$K (see figure \[fig:1\], top). In sequential stepwise drops $\Delta T=30$K, the system evolves over 20000 steps (corresponding to 100 ps) at fixed temperature, followed by 10000 cumulation steps where the system evolves freely. The final temperature $T$, box volume $V$, enthalpy $H$, and other thermodynamic, structural and dynamical properties are recorded at each $\Delta T$ (see below). Cooling is always arrested when room temperature is achieved or slightly before this threshold. Statistical uncertainties on $T$, $\rho$, and $P$ turn out to be 0.8K, 0.0003nm$^{-3}$ and 0.1MPa, respectively.
Cooling paths are visualized in figure \[fig:1\] (top) in the $\rho$-$T$ representation of the phase diagram of C$_{60}$. Volume and enthalpy changes with temperature are displayed in figure \[fig:2\] (top); as visible (and already shown in I for the case $N=1000$), for both system sizes investigated $V$ and $H$ undergo a marked drop at $T\simeq 1300$K, accompanied by a temporary increase of the temperature. Such a highly nonmonotonic behaviour of the two thermodynamic quantities is fully consistent with a transition of the system to the solid phase, as further documented by [*(i)*]{} the behavior of the radial distribution function $g(r)$ for $T \le 1307$K (figure \[fig:3\], top), [*(ii)*]{} the temperature dependence of the diffusion coefficient $D$ (figure \[fig:4\], top) and [*(iii)*]{} the snapshots of the C$_{60}$ configurations at different temperatures (figure \[fig:5\]). The incipient solid phase is characterized by an fcc crystalline arrangement, as visible from an inspection of the peaks’ positions in the $g(r)$.
  $P$   $T_{\rm c}$   $\rho_{\rm c}$   $\sigma_{\rm BH}$   $\eta_{\rm c}$
  ----- ------------- ---------------- ------------------- ----------------
  3.5   1307          1.124            0.976               0.547
  40    1464          1.130            0.975               0.548
  150   1880          1.138            0.972               0.547
  250   2200          1.150            0.970               0.550

  $P$   $T_{\rm m}$   $\rho_{\rm m}$   $\sigma_{\rm BH}$   $\eta_{\rm m}$
  ----- ------------- ---------------- ------------------- ----------------
  3.5   1931          1.144            0.9716              0.549

  $P$   $T_{\rm g}$   $\rho_{\rm g}$   $\sigma_{\rm BH}$   $\eta_{\rm g}$
  ----- ------------- ---------------- ------------------- ----------------
  3.5   1100          1.168            0.978               0.572
  40    1170          1.177            0.977               0.575
  150   1480          1.187            0.975               0.576
  250   1700          1.190            0.973               0.574
: Crystallization ($c$), melting ($m$) and glass ($g$) transition parameters. Melting data refer to the defective crystal with $N=1000$. Pressures are given in MPa, temperatures in K, densities in nm$^{-3}$, and diameters in nm; in the last column the packing fraction $\eta_{\rm x}=\pi/6\rho_{\rm x}\sigma_{\rm BH}^3$. []{data-label="tab:1"}
The variation of $N$ from 1000 to 864 does not produce any significant effect either on the overall cooling path or on the location of the turning points which mark the onset of crystallization. Such an outcome is obtained if we evolve the smaller system for a longer time; in fact, for equal elapsed times the turning point with 864 particles would fall at lower temperatures. We interpret this result as a manifestation of the fact, also emerging in heating cycles, that the 1000-particle sample, characterized by a larger simulation box and by defects with respect to the perfect fcc arrangement, more easily allows for the internal particle rearrangements which trigger the phase transitions. We also note that the first of the turning points, signalling the onset of crystallization, falls practically at the crossing between the cooling path and the $\eta^{\rm HS}_{\rm m}=0.548$ locus introduced in equation (\[eq:etamelt\]) (compare figure \[fig:1\] and figure \[fig:2\]); we shall discuss this point in the next section.
No appreciable effect on the system behavior is exerted by the cooling rate, since cooling cycles with $\Delta T=15$K lead to substantially similar results (see I). Significant changes are instead associated with the value of the imposed pressure: the modifications undergone by the cooling paths as $P$ varies from 3.5 to 150 and 250MPa are shown in figure \[fig:1\] (middle panel, see also the inset) and figure \[fig:2\] (compare top and middle panels); the radial distribution functions for the case $P=250$MPa are shown in the bottom panel of figure \[fig:3\], and numerical values of the crystallization parameters are reported in table \[tab:1\]. It appears that the increase of the pressure dramatically raises the temperature at which the onset of crystallization occurs. The densities attained at crystallization, and at the end of the cooling paths, are close to each other and exhibit a trend to increase with the pressure. These densities are all smaller than those of the perfect crystal heated at the corresponding temperature, indicating that defective crystals are formed. One can realize that this is so by comparing the physical characteristics of such crystals with those typical of a perfect crystal of 864 particles. We consider to this aim crystals at room temperature, as previously obtained through the cooling from high temperatures of samples with $N=1000$ and $N=864$, and a perfect crystal of 864 particles under the same temperature and pressure conditions. As visible from figure \[fig:1\] and figure \[fig:2\], the final densities of the samples obtained from the cooling procedure are smaller than that of the perfect 864-particle crystal. Such lower density values must be attributed to the presence of voids in the crystalline matrix formed during the cooling process. The comparison of radial distribution functions in figure \[fig:6\] highlights the smoother structure of the cooled samples with respect to the perfect crystal.
At $P=250$MPa, after the onset of crystallization, the cooling path runs almost overlapped with the melting curve of C$_{60}$ [@costa1] (see figure \[fig:1\], middle). The high pressure is thus able to force the system to evolve out of the metastable region, approximately along the true coexistence line.
The transition to the crystal takes place under strong supercooling conditions, well beyond the freezing line. Indeed, when the latter is crossed, no feature in the radial distribution function signals the onset of any structural order typical of the solid phase (see figure \[fig:3\]). Signatures, albeit faint, of such a crossing can be recognized, however, in the $V$ and $H$ patterns, which show a corresponding tiny nonmonotonicity (see top panels of figure \[fig:2\] and insets), as well as in the diffusion coefficient (see figure \[fig:4\] and insets). As visible, runs with different numbers of particles exhibit the same features, ruling out the possibility that our observations are merely a consequence of statistical noise.
A similar situation emerges when the system approaches the metastable portion of the liquid-vapor binodal of C$_{60}$, calculated in reference [@fucile] and reported in figure \[fig:1\]. It appears that the cooling paths get closer to the binodal the lower the pressure becomes. At $P=3.5$MPa a crossing can be extrapolated to take place at $T=1700$K and $\rho=0.95$nm$^{-3}$, a state point where the $V$ and $H$ patterns show another nonmonotonicity, magnified in the insets of figure \[fig:2\]. Remarkably, the diffusion coefficient in figure \[fig:4\] is also nonmonotonic at the estimated crossing of the binodal. We have carried out very long simulation runs (up to several million time steps) at the crossings of the freezing and binodal lines, to check whether the system is able to develop any signal of the incipient transitions. In no case, however, could we monitor transformations in the thermodynamic quantities or in the diffusion coefficient comparable to those heralding crystallization at the temperature $T=1307$K, as previously described. Several similarities emerge in comparison with the simulation study that two of us have performed [@ballone] on a modified Lennard-Jones potential used to model globular protein solutions [@tenwolde]. In that case, the transition to the crystalline arrangement takes place during isochoric coolings of the system deeply beneath the freezing line, when the system reaches the metastable binodal line. In the present case, the system must be cooled even substantially below the metastable binodal before crystallization can start.
Starting from the two solid simulation samples eventually obtained through the cooling cycles with $N=1000$ and $864$, and from a third one built as a perfect crystal with 864 particles, we gradually heat all these systems through successive $\Delta T$=30K increases, up to the temperature at which they undergo an abrupt transition to a fluid configuration. The bottom panels of figures \[fig:1\] and \[fig:2\] refer to these sequences; it appears that the defective sample with one thousand particles undergoes a true melting, since the system jumps to a thermodynamic point inside the liquid pocket of the phase diagram, whereas the 864-particle crystal, either defective or perfect, is instead substantially overheated until it jumps directly to a vapor phase, thus exhibiting sublimation. The role of defects at intermediate temperatures emerges from the comparison of the radial distribution functions obtained from the heating of the perfect crystal at relatively high temperature with the corresponding $g(r)$ of a crystal at an equal temperature obtained from cooling: in figure \[fig:6\] one can see that all features are less sharp in the latter case, with some peaks even missing with respect to the perfect crystal case. As can be seen in the corresponding panels of figure \[fig:2\], an evident “hysteresis” characterizes the heating cycles with respect to the cooling sequences.
Quenching and glass transition
------------------------------
Quenching of the liquid at different pressures is carried out for $N=1000$ and 864, starting from an initial temperature $T=1950$K, either through a sequence of $\Delta T= 150$K decrease steps, or through a sudden $\Delta T$=1000K drop. Temperature variations are imposed over 20000 simulation steps, followed by 10000 steps of free evolution. The quenching paths are displayed in figure \[fig:1\] (top and middle panels) while the volume and enthalpy behavior is visible in figure \[fig:2\] (top and middle panels).
The onset of a glassy phase during both quenching procedures is visually documented in the snapshots shown in figure \[fig:5\] and is supported by several pieces of evidence concerning thermodynamic, structural and dynamical quantities. We first observe that the $V$ and $H$ patterns in figure \[fig:2\] show a glass branch running above the crystallization line; moreover, at variance with what we have observed during the slow cooling, the radial distribution functions in figure \[fig:3\] do not develop the peak structure of the fcc arrangement at low temperatures; rather, down to $\simeq 300$K they exhibit the twofold structure of the second peak typical of the glass. Such structural evidence has a counterpart in the diffusion coefficient behavior which, as visible in figure \[fig:4\], does not exhibit the drop associated with the onset of the crystallization process, but manifests only a change of slope. On the other hand, the mean square displacements show a plateau for temperatures $T \le 1100$K, signalling structural arrest, also confirmed by the shape variations of the velocity autocorrelation function with the temperature (see figure \[fig:7\]). Changes of slope as the temperature decreases, typically associated with the glass transition, occur also in $V$ and $H$ (figure \[fig:2\]), as well as in the thermal expansivity $\alpha$, the specific heat $C_{\rm p}$, and the Wendt-Abraham ratio $R$ of the first peak to the first minimum height in $g(r)$ (figure \[fig:8\]).
Evidence of the glass formation also comes from the shear viscosity which we have calculated via the Green-Kubo relation [@hansen]: $$\label{eq:visc1}
\eta_{\rm sh}=\frac{1}{Vk_{\rm B}T}\int_0^\infty
\langle \sigma^{xy}(0)\sigma^{xy}(t)\rangle \diff t \,,$$ where $$\label{eq:visc2}
\sigma^{xy}=\sum_{i=1}^N \left[
m_iv_i^xv_i^y+\frac{1}{2}\sum_{j\ne i}x_{ij}f_y(r_{ij}) \right]$$ is the off-diagonal component of the stress tensor; $v_i^x$ and $x_{ij}$ represent, respectively, the $x$ component of the velocity and the $x$ component of the center-of-mass separation, $r_{ij}$, between molecules $i$ and $j$, and $f_y$ is the $y$ component of the force exerted on molecule $i$ by molecule $j$. As can be appreciated in figure \[fig:9\], $\eta_{\rm sh}$ exhibits a marked increase across the glass transition region and displays an Arrhenius behavior as a function of $T$ (see inset), indicating that the glass formed is a “strong” one [@angell]; the orientational glass of C$_{60}$ is similarly “strong” [@gugenberger; @matsuo].
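A minimal numerical sketch of the Green-Kubo estimate of equation (\[eq:visc1\]) from a recorded stress time series is shown below; the function name is hypothetical, and the infinite time integral is truncated at the run length with a simple trapezoidal rule:

```python
import numpy as np

def green_kubo_viscosity(sigma_xy, dt, V, T, kB=1.380649e-23):
    """Shear viscosity via the Green-Kubo relation:
    eta = (1/(V kB T)) * Int_0^inf <sigma_xy(0) sigma_xy(t)> dt,
    truncated at the length of the recorded stress series."""
    s = np.asarray(sigma_xy, dtype=float)
    n = len(s)
    # stress autocorrelation <sigma(0) sigma(t)>, averaged over time origins
    acf = np.array([np.mean(s[:n - k] * s[k:]) for k in range(n)])
    # trapezoidal rule for the time integral of the ACF
    integral = (acf.sum() - 0.5 * (acf[0] + acf[-1])) * dt
    return integral / (V * kB * T)
```

In practice the ACF is averaged over several independent runs and truncated where it has decayed into noise.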
The glass transition parameters are obtained from the intersection of the two (extrapolated) branches in the $V$, $H$, $C_{\rm P}$, $\alpha$ and $R$ patterns as functions of the temperature (see reference [@angell] for details): we estimate at $P=3.5$MPa $T_{\rm g} \simeq 1100$K and $\rho_{\rm g} \simeq 1.168$nm$^{-3}$.
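The branch-intersection construction can be sketched numerically as follows; this is an illustrative least-squares version of the graphical procedure, with hypothetical function and variable names:

```python
import numpy as np

def branch_intersection(T_liquid, y_liquid, T_glass, y_glass):
    """Estimate T_g by fitting straight lines to the liquid and glass
    branches of a property (V, H, alpha, C_p, or the Wendt-Abraham
    ratio R) and returning the temperature where the lines cross."""
    a1, b1 = np.polyfit(T_liquid, y_liquid, 1)   # liquid branch: y = a1*T + b1
    a2, b2 = np.polyfit(T_glass, y_glass, 1)     # glass branch:  y = a2*T + b2
    return (b2 - b1) / (a1 - a2)
```

Applying this to the $V(T)$ or $H(T)$ data on either side of the slope change reproduces the quoted $T_{\rm g}$ estimates.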
As visible from figure \[fig:2\], a glass transition is observed also at pressures higher than $3.5$MPa, at the cost of increasing the quenching rate with respect to the $1.5 \times 10^{12}$K/s adopted at 3.5MPa. For instance, at $P=250$MPa no glassy phase is obtained if $\Delta T \le 300$K over 20000 time steps. Glass transition temperatures and densities at different pressures are reported in table \[tab:1\]. It is straightforward to verify that, similarly to what is observed for the crystallization transitions, the glass transitions (at different pressures) occur for states falling at the crossing of the quenching paths with the $\rho(T)$ locus defined by equation (\[eq:etaglass\]), where $\eta_{\rm g}$= $\eta_{\rm g}^{\rm HS}$. We shall comment on this evidence in the next section.
The quenching rates we adopt in our simulations are much higher than those presently achieved in experiments (typically $10^8$K/s). We have, however, preliminary simulation evidence [@miscela] that in equimolar Girifalco C$_{60}$/C$_{70}$ and C$_{60}$/C$_{96}$ mixtures crystallization does not occur even at cooling rates one or two orders of magnitude lower than those adopted here; the C$_{60}$/C$_{70}$ mixture, for instance, remains liquid during the cooling procedure down to 1100K, a temperature lower than that at which amorphization of pure C$_{60}$ fullerite occurs [@fischer] (see also note [@nota]). These findings make it plausible that a glassy phase might be formed experimentally, at least from mixed fullerene systems.
Discussion and conclusions
==========================
The effective packings at crystallization for different pressures have values close to the packing fraction of hard spheres at melting, $\eta^{\rm HS}_{\rm m}$=0.545, as reported in table \[tab:1\]. As a matter of fact, it thus appears that the $\eta_{\rm m}(T)$ locus introduced in equation (\[eq:etamelt\]) acts as an interpolation curve among the transition points determined [*a posteriori*]{}, and can be used to forecast crystallization temperatures and densities at pressures other than those considered here. Similar evidence is found for the effective packing at the glass transition, $\eta_{\rm g}(T)$ of equation (\[eq:etaglass\]), also reported in the table, which is quite close to the hard-sphere glass transition value $\eta_{\rm g}^{\rm HS}=0.58$ [@woodcock]. As for the onset of the crystallization and glass transitions, an obvious deduction is that the Girifalco model follows rather closely a hard-sphere-like behavior. Our evidence, however, allows us to get a deeper insight into the role of the $\rho_{\rm m}(T)$ and $\rho_{\rm g}(T)$ loci. Indeed, we have found that when supercooling is pushed beyond $\rho_{\rm m}(T)$ the system invariably crystallizes; by contrast, a state close to, but on the left of, $\rho_{\rm m}(T)$ remains in the supercooled phase even over the longest simulation run we could perform, namely 12 million simulation steps, corresponding to 60ns. It thus appears that the $\rho_{\rm m}(T)$ locus approximately corresponds to an instability boundary of the metastable region. This observation also emerges from Monte Carlo calculations of the free energy of the Girifalco model at $T=2100$K [@costa] (see inset of figure \[fig:1\]); in fact, $\rho_{\rm m}$(2100K) coincides with the point where the free energy shows an inflection associated with a sudden drop of the pressure of the simulation sample, i.e., with a mechanical instability of the system.
A similar property might characterize the free energy branch generated by approaching the metastable region from the solid side; since the interval separating the two branches in figure \[fig:1\] is quite narrow, it is conceivable that our simulation strategy is unable to discriminate between the densities at the crystallization and melting instabilities. It is interesting to observe that in Molecular Dynamics simulations of the Lennard-Jones potential, crystallization occurs much beyond the melting line, deeply inside the solid region [@nose]. Indeed, we have carried out MD simulations of the same potential and verified that states located inside the metastable fluid-solid region do not undergo crystallization even after 35 million time steps ($\simeq$ 180ns).
Our results for the glass transition positively agree with the predictions made by Foffi and coworkers [@foffi] on the glass transition in protein solutions modeled through the attractive hard-core Yukawa (HCY) potential. These authors find that for an inverse decay length [*z*]{} of the Yukawian term spanning from 5 to 60 times the reciprocal of the particle diameter, the glass transition line is an almost vertical locus with a $T= \infty$ asymptote $\eta=\eta_{\rm g}^{\rm HS}=0.58$. On the other hand, as discussed in [@hagen2], the physical properties of the Girifalco model can be reasonably reproduced by a Yukawa potential with $z \simeq 4$ (though with moderate differences in the phase diagram), indeed close to the lowest value investigated in reference [@foffi]. A qualitative agreement thus emerges between the predictions of the glass transition in the two different systems. In the HCY model the formation of “repulsive” and “attractive” glasses depends on the value of $z$ [@foffi]; since at $z$ as low as five only the repulsive glass would be formed, we identify our C$_{60}$ “positional” glass as a repulsive one.
In conclusion, we have documented that effective hard-core exclusion effects play an important role in the determination of the glass transition in a short-range potential well suited to model various fullerenes. Mode Coupling Theory calculations of the onset of the glass transition in a short-range model of globular protein solutions lead to similar conclusions, at least for relatively low values of the potential decay parameters. In this context, it could be worthwhile to carry out specific Mode Coupling calculations for the present model fullerene, in order to compare the structural arrest line thereby predicted with our results for the glass line based on simulations and $\rho_{\rm g}(T)$ locus predictions. Moreover, colloidal suspensions and colloid-polymer mixtures show a behavior characterized by several similarities with the observations reported in this work [@louis]. We argue that the basic equation (\[eq:etaglass\]) might hold to a high accuracy also for other interaction potentials, and hence it could provide a simple framework for a qualitative but immediate prediction of the glass line in a variety of model systems.
This work has been done in the framework of the Marie Curie Network on Dynamical Arrest of Soft Matter and Colloids, Contract Nr MRTN-CT-2003-504712.
[99]{}
Rosenbaum, D. F.; Zamora, P. C.; Zukoski, C. F. [*Phys. Rev. Lett.*]{} [**1996**]{}, [*76*]{}, 150.
Lomakin, A.; Asherie, N.; Benedek, G. B. [*J. Chem. Phys.*]{} [**1996**]{}, [*104*]{}, 1646.
Piazza, R.; Peyre, V.; Degiorgio, V. [*Phys. Rev. E*]{} [**1998**]{}, [*58*]{}, R2733.
Poon, W. C. K. [*Phys. Rev. E*]{} [**1997**]{}, [*55*]{}, 3762.
ten Wolde, P. R.; Frenkel, D. [*Science*]{} [**1997**]{}, [*277*]{}, 1975.
Louis, A. A. [*Phil. Trans. R. Soc. Lond. A*]{} [**2001**]{}, [*359*]{}, 939; Pham, K. N.; Egelhaaf, S. U.; Pusey, P. N.; Poon, W. C. K. [*Phys. Rev. E*]{} [**2004**]{}, [*69*]{}, 011503.
Foffi, G.; McGullagh, G. D.; Lawlor, A.; Zaccarelli, E.; Dawson, K. A.; Sciortino, F.; Tartaglia, P.; Pini, D.; Stell, G. [*Phys. Rev. E*]{} [**2002**]{}, [*65*]{}, 031407; Dawson, K. A. [*Current Opinion in Colloid and Interface Science*]{} [**2002**]{}, [*7*]{}, 218.
Pellicane, G.; Costa, D.; Caccamo, C. [*J. Phys. Chem. B*]{} [**2004**]{}, [*104*]{}, 7538.
Cheng, A.; Klein, M. L.; Caccamo, C. [*Phys. Rev. Lett.*]{} [**1993**]{}, [*71*]{}, 1200.
Hagen, M. H. J.; Meijer, E. J.; Mooij, G. C. A. M.; Frenkel, D.; Lekkerkerker, H. N. W. [*Nature*]{} [**1993**]{}, [*365*]{}, 425.
Ashcroft, N. W. [*Nature*]{} [**1993**]{}, [*365*]{}, 387.
Hasegawa, M.; Ohno, K. [*J. Phys.: Cond. Matter*]{} [**1997**]{}, [*9*]{}, 3361; [*J. Chem. Phys.*]{} [**1999**]{}, [*111*]{}, 5955.
Abramo, M. C.; Caccamo, C.; Costa, D.; Pellicane, G. [*Europhys. Lett.*]{} [**2001**]{}, [*54*]{}, 468.
Costa, D.; Pellicane, G.; Abramo, M. C.; Caccamo, C. [*J. Chem. Phys.*]{} [**2003**]{}, [*118*]{}, 304.
Chayen, N. E. [*Trends in Biotechnology*]{} [**2002**]{}, [*20*]{}, 98.
McPherson, A. [*Preparation and Analysis of Protein Crystals*]{}; Krieger: Malabar, 1982.
Broide, M.; Tominc, T. M.; Saxowsky, M. D. [*Phys. Rev. E*]{} [**1996**]{}, [*53*]{}, 6325.
Grigsby, J. J.; Blanch, H. W.; Prausnitz, J. M. [*Biophys. Chem.*]{} [**2001**]{}, [*91*]{}, 231.
Eckert, T.; Bartsch, E. [*Phys. Rev. Lett.*]{} [**2002**]{}, [*89*]{}, 125701.
Chen, S. H.; Chen, W. R.; Mallamace, F. [*Science*]{} [**2003**]{}, [*300*]{}, 619.
Pham, K. N.; Egelhaaf, S. U.; Pusey, P. N.; Poon, W. C. K. [*Phys. Rev. E*]{} [**2004**]{}, [*69*]{}, 011503.
Girifalco, L. F. [*J. Phys. Chem.*]{} [**1992**]{}, [*96*]{}, 858.
Stetzer, M. R.; Heiney, P. A.; Fischer, J. E.; McGhie, A. R. [*Phys. Rev. B*]{} [**1997**]{}, [*55*]{}, 127.
Xu, C.; Scuseria, G. E. [*Phys. Rev. Lett.*]{} [**1994**]{}, [*72*]{}, 669; Kim, S. C.; Tomanek, D. [*Phys. Rev. Lett.*]{} [**1994**]{}, [*72*]{}, 2418.
Sundar, C. S.; Bharathi, A.; Hariharan, Y.; Janaki, J.; Sankara Sastri, V.; Radhakrishnan, T. [*Solid State Comm.*]{} [**1992**]{}, [*84*]{}, 823.
Abramo, M. C.; Caccamo, C. [*J. Chem. Phys.*]{} [**1997**]{}, [*106*]{}, 6475; Ruberto, R.; Abramo, M. C. [*J. Chem. Phys.*]{} [**2004**]{}, submitted for publication.
Fullerite (crystalline C$_{60}$) heated at $T>1200$K mostly transforms into amorphous carbon due to the disruption of the C$_{60}$ cages (see ref [@fischer]). The latter, however, are predicted to be stable up to 3500-4000K [@xu]. Solid state effects through molecule collision [@sundar], but also and alternatively residual impurities trapped in the crystalline matrix [@noi], might trigger the cage instability.
Alemany, M. M. G.; Rey, C.; Dieguez, O.; Gallego, L. J. [*J. Chem. Phys.*]{} [**2000**]{}, [*112*]{}, 10711.
Abramo, M. C.; Caccamo, C.; Costa, D.; Ruberto, R. [*J. Phys. Chem. B*]{} [**2004**]{}, [*108*]{}, 13576.
Gugenberger, F.; Heid, R.; Meingast, C.; Adelmann, P.; Braun, M.; Wuhl, M.; Haluska, M.; Kuzmany, H. [*Phys. Rev. Lett.*]{} [**1992**]{}, [*69*]{}, 1774.
Matsuo, T.; Tsuo, T.; Suga, H.; David, W. I. F.; Ibberson, R. M.; Benrier, P.; Zahab, A.; Fabre, C.; Rassat, A.; Dworkin, A. [*Solid State Comm.*]{} [**1992**]{}, [*83*]{}, 711.
Andersen, H. C. [*J. Chem. Phys.*]{} [**1980**]{}, [*72*]{}, 2384.
Weeks, J. D.; Chandler, D.; Andersen, H. C. [*J. Chem. Phys.*]{} [**1971**]{}, [*54*]{}, 5237.
Barker, J. A.; Henderson, D. [*J. Chem. Phys.*]{} [**1967**]{}, [*47*]{}, 2856.
Caccamo, C. [*Phys. Rep.*]{} [**1996**]{}, [*274*]{}, 1.
Alder, B. J.; Wainwright, T. E. [*J. Chem. Phys.*]{} [**1959**]{}, [*31*]{}, 459; Hoover, W. G.; Ree, F. H. [*J. Chem. Phys.*]{} [**1968**]{}, [*49*]{}, 3609.
Gruhn, T.; Monson, P. A. [*Phys. Rev. E*]{} [**2001**]{}, [*64*]{}, 061703, and references therein.
Woodcock, L. V. [*Ann. N. Y. Acad. Sci.*]{} [**1981**]{}, [*37*]{}, 274.
Shumway, S. L.; Clarke, A. S.; Jonsson, H. [*J. Chem. Phys.*]{} [**1995**]{}, [*102*]{}, 1796.
Caccamo, C.; Costa, D.; Fucile, A. [*J. Chem. Phys.*]{} [**1997**]{}, [*106*]{}, 255.
Costa, D.; Ballone, P.; Caccamo, C. [*J. Chem. Phys.*]{} [**2002**]{}, [*116*]{}, 3327.
Hansen, J.-P.; McDonald, I. R. [*Theory of Simple Liquids*]{}, 2nd ed.; Academic Press: London, 1986.
Angell, C. A. [*J. Non-Cryst. Solids*]{} [**1991**]{}, [*131-133*]{}, 13.
Abramo, M. C.; Caccamo, C.; Ruberto, R. (unpublished).
Nosè, S.; Yonezawa, F. [*Solid State Comm.*]{} [**1985**]{}, [*56*]{}, 1005; [*J. Chem. Phys.*]{} [**1986**]{}, [*840*]{}, 1803.
Hagen, M. H. J.; Frenkel, D. [*J. Chem. Phys.*]{} [**1994**]{}, [*101*]{}, 4093.
[^1]: Email: [[email protected]]{}
---
abstract: 'Fix an alphabet $A=\{0,1,\dots,M\}$ with $M\in\mathbb{N}$. The univoque set $\mathscr{U}$ of bases $q\in(1,M+1)$ in which the number $1$ has a unique expansion over the alphabet $A$ has been well studied. It has Lebesgue measure zero but Hausdorff dimension one. This paper investigates how the set $\mathscr{U}$ is distributed over the interval $(1,M+1)$ by determining the limit $$f(q):=\lim_{\delta\to 0}\dim_H\big(\mathscr{U}\cap(q-\delta,q+\delta)\big)$$ for all $q\in(1,M+1)$. We show in particular that $f(q)>0$ if and only if $q\in\overline{\mathscr{U}}\backslash\mathscr{C}$, where $\mathscr{C}$ is an uncountable set of Hausdorff dimension zero, and $f$ is continuous at those (and only those) points where it vanishes. Furthermore, we introduce a countable family of pairwise disjoint subsets of $\mathscr{U}$ called [*relative bifurcation sets*]{}, and use them to give an explicit expression for the Hausdorff dimension of the intersection of $\mathscr{U}$ with any interval, answering a question of Kalle et al. \[[*arXiv:1612.07982; to appear in Acta Arithmetica*]{}, 2018\]. Finally, the methods developed in this paper are used to give a complete answer to a question of the first author \[[*Adv. Math.*]{}, 308:575–598, 2017\] about strongly univoque sets.'
address:
- 'Mathematics Department, University of North Texas, 1155 Union Cir \#311430, Denton, TX 76203-5017, U.S.A.'
- 'College of Mathematics and Statistics, Chongqing University, 401331, Chongqing, P.R.China'
author:
- Pieter Allaart
- Derong Kong
title: Relative bifurcation sets and the local dimension of univoque bases
---
Introduction {#s1}
============
Fix an integer $M\ge 1$. For $q\in(1,M+1]$, any real number $x$ in the interval $I_{M,q}:=[0, M/(q-1)]$ can be represented as $$\label{eq:projection-pi-q}
x=\pi_q((d_i)):=\sum_{i=1}^{\infty}\frac{d_i}{q^i},$$ where $d_i\in{\left\{0,1,\ldots, M\right\}}$ for all $i\ge 1.$ The infinite sequence $(d_i)=d_1d_2\ldots$ is called a *$q$-expansion* of $x$ with *alphabet* ${\left\{0,1,\ldots, M\right\}}$. Such non-integer base expansions have been studied since the pioneering work of Rényi [@Renyi_1957] and Parry [@Parry_1960]. In the 1990’s, work by Erdős et al. [@Erdos_Joo_Komornik_1990; @Erdos_Horvath_Joo_1991; @Erdos_Joo_1992] inspired an explosion of research papers on the subject, covering unique expansions [@AlcarazBarrera-Baker-Kong-2016; @DeVries_Komornik_2008; @Glendinning_Sidorov_2001; @Komornik-Kong-Li-17], finitely or countably many expansions [@Baker_2015; @Baker_Sidorov_2014; @Komornik-Kong-2018; @Sidorov_2009], uncountably many expansions and random expansions [@Dajani_DeVries_2007; @Sidorov_2003]. Non-integer base expansions have furthermore been connected with Bernoulli convolutions [@Jordan-Shmerkin-Solomyak-2011], Diophantine approximation [@Lu-Wu-2016], singular self-affine functions [@Allaart-2016], open dynamical systems [@Sidorov_2007], and intersections of Cantor sets [@Kong_Li_Dekking_2010].
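As a small illustration of expansions of the form (\[eq:projection-pi-q\]) (a sketch added here, not taken from the paper), the following generates digits with the standard greedy algorithm and evaluates a finite prefix; the function names are hypothetical:

```python
def greedy_expansion(x, q, M, n=30):
    """First n digits of the greedy q-expansion of x over {0,...,M},
    assuming 1 < q <= M+1 and 0 <= x <= M/(q-1)."""
    digits = []
    for _ in range(n):
        x *= q
        d = min(int(x), M)   # largest admissible digit at this step
        digits.append(d)
        x -= d
    return digits

def pi_q(digits, q):
    """Evaluate pi_q((d_i)) = sum_i d_i / q^i for a finite digit prefix."""
    return sum(d / q ** (i + 1) for i, d in enumerate(digits))
```

For example, with $q=2$ and $M=1$ this recovers ordinary binary expansions.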
Let $${\mathscr{U}}:={\left\{q\in(1, M+1]: 1\textrm{ has a unique }q\textrm{-expansion of the form } (\ref{eq:projection-pi-q})\right\}}.$$ Thus for each $q\in{\mathscr{U}}$ there exists a unique sequence $(a_i)\in\Omega_M:={\left\{0,1,\ldots, M\right\}}^{\ensuremath{\mathbb{N}}}$ such that $1=\pi_q((a_i))$. The set ${\mathscr{U}}$ has been studied extensively for over 25 years. Erdős et al. [@Erdos_Joo_Komornik_1990] showed that ${\mathscr{U}}$ is uncountable and of zero Lebesgue measure. Daróczy and Kátai [@Darczy_Katai_1995] proved that ${\mathscr{U}}$ has full Hausdorff dimension (see also [@Komornik-Kong-Li-17]). Komornik and Loreti [@Komornik-Loreti-1998; @Komornik_Loreti_2002] found its smallest element $q_{KL}=q_{KL}(M)$, which is now called the *Komornik-Loreti constant* and is related to the Thue-Morse sequence (see (\[eq:lambda\]) below). Later in [@Komornik_Loreti_2007] the same authors proved that its topological closure $\overline{{\mathscr{U}}}$ is a Cantor set, i.e., a non-empty compact set having neither interior nor isolated points. Recently, Dajani et al. [@Dajani-Komornik-Kong-Li-2018] proved that the algebraic difference ${\mathscr{U}}-{\mathscr{U}}$ contains an interval. Furthermore, the set ${\mathscr{U}}$ also has intimate connections with kneading sequences of unimodal expanding maps (cf. [@Allouche_Cosnard_1983; @Allouche-Cosnard-2001]), and even with the real slice of the boundary of the Mandelbrot set [@Bon-Car-Ste-Giu-2013].
The main purpose of this paper is to describe the distribution of ${\mathscr{U}}$. More precisely, we are interested in the *local dimensional function* $$f(q):=\lim_{\delta{\rightarrow}0}\dim_H({\mathscr{U}}\cap(q-\delta, q+\delta)), \qquad q\in(1, M+1],$$ as well as its one-sided analogs $$f_-(q):=\lim_{\delta{\rightarrow}0}\dim_H ({\mathscr{U}}\cap(q-\delta, q)),\qquad f_+(q):=\lim_{\delta{\rightarrow}0}\dim_H({\mathscr{U}}\cap(q, q+\delta)),$$ which we call the *left and right local dimensional functions* of ${\mathscr{U}}$. Note that $f=\max{\left\{f_-, f_+\right\}}$, and if $q\notin\overline{{\mathscr{U}}}$, then $f(q)=f_-(q)=f_+(q)=0$. [Extending a recent result by the authors and Baker [@Allaart-Baker-Kong-17],]{} we compute $f(q)$, $f_-(q)$ and $f_+(q)$ for every $q\in(1,M+1]$ in terms of a kind of localized entropy. As an application we compute the Hausdorff dimension of the intersection of ${\mathscr{U}}$ with any interval, answering a question of Kalle et al. [@Kalle-Kong-Li-Lv-2016]. In addition, our methods allow us to give a complete answer to a question of the first author [@Allaart-2017] about strongly univoque sets.
Univoque set, entropy plateaus and the bifurcation set
------------------------------------------------------
In order to state our main results, some notation is necessary. For $q\in(1,M+1]$ let ${\ensuremath{\mathcal{U}}}_q$ be the *univoque set* of $x\in I_{M,q}$ having a unique $q$-expansion as in (\[eq:projection-pi-q\]). Let ${\mathbf{U}}_q$ be the set of corresponding sequences, i.e., $${\mathbf{U}}_q:={\left\{(d_i)\in\Omega_M: \pi_q((d_i))\in{\ensuremath{\mathcal{U}}}_q\right\}}.$$ A useful tool in the study of unique expansions is the lexicographical characterization of ${\mathbf{U}}_q$ (cf. [@Baiocchi_Komornik_2007; @DeVries_Komornik_2008]): $(d_i)\in{\mathbf{U}}_q$ if and only if $(d_i)\in\Omega_M$ satisfies $$\label{eq:characterization-unique expansion}
\begin{split}
d_{n+1}d_{n+2}\ldots&\prec {\alpha}(q)\qquad\textrm{if}\quad d_n<M,\\
d_{n+1}d_{n+2}\ldots&\succ\overline{{\alpha}(q)}\qquad\textrm{if}\quad d_n>0,
\end{split}$$ where ${\alpha}(q)=({\alpha}_i(q))\in\Omega_M$ is the lexicographically largest $q$-expansion of $1$ not ending with $0^{\infty}$, called the *quasi-greedy* $q$-expansion of $1$, and $\overline{{\alpha}(q)}:=(M-{\alpha}_i(q))$. Here and throughout the paper we will use the lexicographical order between sequences and blocks in a natural way.
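The characterization (\[eq:characterization-unique expansion\]) can be tested on finite prefixes; the sketch below (illustrative only, with a hypothetical function name) reports a violation only when it is provable from the available digits, since ties on a finite prefix are inconclusive:

```python
def certainly_violates(d, alpha, M):
    """Finite-prefix test of the lexicographic characterization:
    return True when some tail d_{n+1} d_{n+2} ... provably fails
    'tail < alpha(q) whenever d_n < M' or 'tail > conj(alpha(q))
    whenever d_n > 0'. Here alpha is a prefix of the quasi-greedy
    expansion alpha(q), assumed at least as long as d."""
    bar = [M - a for a in alpha]          # conj(alpha(q)), digitwise
    for n in range(1, len(d)):
        tail = d[n:]
        k = len(tail)
        if d[n - 1] < M and tail > alpha[:k]:
            return True
        if d[n - 1] > 0 and tail < bar[:k]:
            return True
    return False
```

Python's lexicographic list comparison matches the order $\prec$ on blocks of equal length, which is why comparing `tail` against a prefix of `alpha` of the same length suffices.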
Note by (\[eq:characterization-unique expansion\]) that any sequence $(d_i)\in{\mathbf{U}}_q\setminus{\left\{0^{\infty}, M^{\infty}\right\}}$ has a tail sequence in the set $$\label{eq:def-widetilde-uq}
{\widetilde{\mathbf{U}}}_q:={\left\{(d_i)\in\Omega_M: \overline{{\alpha}(q)}\prec{\sigma}^n((d_i))\prec {\alpha}(q)\ \forall n\ge 0\right\}},$$ where $\sigma$ denotes the left shift map on $\Omega_M$. Furthermore, ${\mathbf{U}}_q$ and ${\widetilde{\mathbf{U}}}_q$ have the same topological entropy, i.e., $h({\mathbf{U}}_q)=h({\widetilde{\mathbf{U}}}_q)$, where the *topological entropy* of a subset $X\subset\Omega_M$ is defined by $$h(X):=\liminf_{n{\rightarrow}{\infty}}\frac{\log \# B_n(X)}{n}$$ (cf. [@Lind_Marcus_1995]). Here $\#B_n(X)$ denotes the number of all length $n$ blocks occurring in sequences from $X$, and “$\log$" denotes the natural logarithm. We may thus obtain all the relevant information about ${\mathbf{U}}_q$ by studying the simpler set ${\widetilde{\mathbf{U}}}_q$.
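The block-counting definition of topological entropy can be illustrated on a toy subshift; the golden-mean shift below (binary words avoiding the factor $11$, standing in for ${\widetilde{\mathbf{U}}}_q$ but not equal to it) has entropy $\log((1+\sqrt{5})/2)$, and a brute-force count over short blocks already approximates it:

```python
import math
from itertools import product

def block_count(n, forbidden="11"):
    """#B_n(X) for the golden-mean shift: number of binary words of
    length n avoiding the factor '11' (equals the Fibonacci number F_{n+2})."""
    return sum(1 for w in product("01", repeat=n)
               if forbidden not in "".join(w))

# h(X) = lim (1/n) log #B_n(X); here the limit is log((1+sqrt(5))/2).
entropy_estimate = math.log(block_count(16)) / 16
```

The estimate converges slowly in $n$; transfer-matrix methods give the exact value as the log of the largest eigenvalue.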
Since the map $q\mapsto {\alpha}(q)$ is strictly increasing on $(1, M+1]$ (see Lemma \[lem:quasi-greedy expansion-alpha-q\] below), (\[eq:def-widetilde-uq\]) implies that the set-valued map $q\mapsto {\widetilde{\mathbf{U}}}_q$ is non-decreasing, and hence the entropy function $H: q\mapsto h({\widetilde{\mathbf{U}}}_q)$ is non-decreasing. Recently, Komornik et al. [@Komornik-Kong-Li-17] and the present authors [@Allaart-Kong-2018] proved the following:
\[thm:devil-staircase\] The graph of $H$ is a Devil’s staircase:
(i) $H$ is non-decreasing and continuous on $(1, M+1]$;
(ii) $H$ is locally constant almost everywhere on $(1, M+1]$;
(iii) $H(q)>0$ if and only if $q>q_{KL}$, where $q_{KL}$ is the [Komornik-Loreti constant]{}.
An interval $[p_L, p_R]\subset(1, M+1]$ is called an *entropy plateau* (or simply, a *plateau*) if it is a maximal interval (in the partial order of set inclusion) on which $H$ is constant and positive. A complete characterization of all entropy plateaus was given by Alcaraz Barrera et al. [@AlcarazBarrera-Baker-Kong-2016] (see also [@Alcaraz_Barrera_2014] for the case $M=1$). Equivalently, they described the *bifurcation set* $$\label{eq:bifurcation set}
{\mathscr{B}}:={\left\{q\in(1, M+1]: H(p)\ne H(q)~\forall p\ne q\right\}},$$ and showed that ${\mathscr{B}}\subset {\mathscr{U}}$, ${\mathscr{B}}$ is Lebesgue null, and $\dim_H {\mathscr{B}}=1$. From Theorem \[thm:devil-staircase\] and the definition of ${\mathscr{B}}$ it follows that $$\label{eq:relation-entropy plateaus-bifurcation set}
(1, M+1]\setminus{\mathscr{B}}=(1, q_{KL}]\cup\bigcup[p_L, p_R],$$ where the union is taken over all plateaus $[p_L, p_R]\subset(q_{KL}, M+1]$ of $H$. We emphasize that the plateaus are pairwise disjoint and therefore the union is countable.
Recall that our main objective is to find the local dimensional functions $f$, $f_+$ and $f_-$. The following result is due to the authors and Baker [@Allaart-Baker-Kong-17].
\[prop:local dimension-B\] For any $q\in{\mathscr{B}}\setminus{\left\{M+1\right\}}$ we have $$f(q)=f_-(q)=f_+(q)=\dim_H{\ensuremath{\mathcal{U}}}_q>0,$$ and for any $q\in(1,M+1]$ we have $f(q)\leq\dim_H{\ensuremath{\mathcal{U}}}_q$. Furthermore, for $q=M+1$ we have $f(q)=f_-(q)=1$ and $f_+(q)=0$.
Relative bifurcation sets and relative plateaus
-----------------------------------------------
In order to describe the local dimensional function $f$ of ${\mathscr{U}}$ we introduce the relative bifurcation sets, which provide finer information about the growth of $q\mapsto{\widetilde{\mathbf{U}}}_q$ inside entropy plateaus.
\[def:admissible-words\] A word $a_1\dots a_m\in\{0,1,\dots,M\}^m$ with $m\geq 2$ is [*admissible*]{} if $$\label{eq:admissible word}
\overline{a_1\ldots a_{m-i}}{\preccurlyeq}a_{i+1}\ldots a_m\prec a_1\ldots a_{m-i}\quad\forall ~1\le i<m.$$ When $M\geq 2$, the “word" $a_1\in\{0,1,\dots,M\}$ is [*admissible*]{} if $\overline{a_1}\leq a_1<M$.
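The admissibility condition (\[eq:admissible word\]) can be checked mechanically; the following sketch (illustrative, not part of the paper) exploits the fact that the lexicographic order on equal-length words coincides with Python's list comparison. Here `comp` is the digit-wise reflection $d\mapsto M-d$.

```python
def is_admissible(word, M):
    """Check the admissibility condition for a word over {0, ..., M}."""
    comp = lambda w: [M - d for d in w]   # digit-wise reflection d -> M - d
    m = len(word)
    if m == 1:                            # single-letter case, M >= 2 only
        return M >= 2 and M - word[0] <= word[0] < M
    for i in range(1, m):
        head, tail = list(word[:m - i]), list(word[i:])
        # lexicographic order on equal-length words = Python list order
        if not (comp(head) <= tail < head):
            return False
    return True

print(is_admissible([1, 1, 1, 0], 1))  # True
print(is_admissible([1, 1], 1))        # False: the tail 1 is not < the head 1
```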
For any admissible word ${\mathbf{a}}$, there are bases $q_L$ and $q_R$ such that $$\alpha(q_L)={\mathbf{a}}^{\infty}, \qquad \alpha(q_R)={\mathbf{a}}^+(\overline{{\mathbf{a}}})^{\infty}.$$ Here, for a word $\mathbf c:=c_1\ldots c_n\in{\left\{0,1,\ldots, M\right\}}^n$ with $c_n<M$ we set $\mathbf c^+:=c_1\ldots c_{n-1}(c_n+1)$. Similarly, for a word $\mathbf c:=c_1\ldots c_n\in{\left\{0,1,\ldots, M\right\}}^n$ with $c_n>0$ we shall write $\mathbf c^-:=c_1\ldots c_{n-1}(c_n-1)$. We call $[q_L,q_R]$ a [*basic interval*]{} and say it is [*generated by*]{} the word ${\mathbf{a}}$. By [@AlcarazBarrera-Baker-Kong-2016 Lemma 4.8], any two basic intervals are either disjoint, or else one contains the other. For any basic interval $I$ generated by an admissible word ${\mathbf{a}}$, we define the associated [*de Vries-Komornik number*]{} $q_c(I)$ by $\alpha(q_c(I))=(\theta_i)$ (cf. [@Kong_Li_2015]), where $(\theta_i)$ is given recursively by
(i) $\theta_1\dots \theta_m={\mathbf{a}}^+$;
(ii) $\theta_{2^{k-1}m+1}\dots\theta_{2^k m}=\overline{\theta_1\dots \theta_{2^{k-1}m}}^+$, for $k=1,2,\dots$.
Thus, $$\alpha(q_c(I))={\mathbf{a}}^+\overline{{\mathbf{a}}}\overline{{\mathbf{a}}^+}{\mathbf{a}}^+\overline{{\mathbf{a}}^+}{\mathbf{a}}{\mathbf{a}}^+\overline{{\mathbf{a}}}\cdots.
\label{eq:q_c}$$ Note that $q_c(I)$ lies in the interior of $I$ for each basic interval $I$; this is a direct consequence of Lemma \[lem:quasi-greedy expansion-alpha-q\] below. Observe also that different basic intervals can have the same associated de Vries-Komornik number.
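The doubling recursion (i)-(ii) is easy to run. As an illustrative sketch (not from the paper), the code below appends at each step the "complement-plus" of the current prefix. With $M=1$ and the formal choice ${\mathbf{a}}=0$ (so ${\mathbf{a}}^+=1$; note that a single letter is not admissible when $M=1$, so this is purely formal), it reproduces the shifted Thue-Morse sequence, which is known to be $\alpha(q_{KL})$.

```python
def alpha_qc_prefix(a, M, levels):
    """Prefix of alpha(q_c) via the doubling recursion (i)-(ii):
    theta -> theta . (complement of theta)^+."""
    plus = lambda w: w[:-1] + [w[-1] + 1]   # increment the last digit
    comp = lambda w: [M - d for d in w]     # digit-wise reflection
    theta = plus(list(a))                   # theta_1 ... theta_m = a^+
    for _ in range(levels):
        theta = theta + plus(comp(theta))   # length doubles each step
    return theta

# Formal choice a = 0 (so a^+ = 1) with M = 1 gives the shifted
# Thue-Morse sequence alpha(q_KL):
print(alpha_qc_prefix([0], 1, 4))
# [1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
```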
We now construct a nested tree $${\left\{J_{\mathbf i}: \mathbf i\in{\left\{1,2,\ldots\right\}}^n; n\ge 1\right\}}$$ of intervals, which we call [*relative entropy plateaus*]{}, or simply [*relative plateaus*]{}, as follows. At level $0$, we set $J_\emptyset=[1,M+1]$. Next, at level $1$, we put $J_0=[1,q_{KL}]$ and let $J_1,J_2,\dots$ be an arbitrary enumeration of the entropy plateaus $[p_L,p_R]$ from (\[eq:relation-entropy plateaus-bifurcation set\]). Note by [@AlcarazBarrera-Baker-Kong-2016] that these entropy plateaus are precisely the maximal basic intervals which lie completely to the right of $q_{KL}$. We call $J_0$ a [*null interval*]{}, since ${\mathscr{U}}\cap (1,q_{KL})=\emptyset$.
From here, we proceed inductively as follows. Let $n\geq 1$, and for each $\mathbf{i}\in\{1,2,\dots\}^n$, assume $J_{\mathbf i}$ has already been defined and is a basic interval $[q_L,q_R]$. Then we set $J_{\mathbf{i}0}=[q_L,q_c(J_{\mathbf i})]$, and let $J_{\mathbf{i}1},J_{\mathbf{i}2},\dots$ be an arbitrary enumeration of the maximal basic intervals inside $[q_c(J_{\mathbf i}),q_R]$. (It is not difficult to see that infinitely many such basic intervals exist.)
Note that for each fixed $n\ge 1$ the relative plateaus $J_{\mathbf i}, \mathbf i\in{\left\{1,2,\ldots\right\}}^n$ are pairwise disjoint. Furthermore, for any word $\mathbf i\in\bigcup_{n=1}^{\infty}{\left\{1,2,\ldots\right\}}^n$ we call $J_{\mathbf i 0}$ a [*null interval*]{}, because it intersects ${\mathscr{U}}$ only in the single point $q_c(J_{\mathbf i})$. We emphasize that any basic interval generated by a word ${\mathbf{a}}$ not of the form ${\mathbf{b}}\overline{{\mathbf{b}}}$ is a relative plateau.
We now define the sets $${ {\mathscr{C}}}_{\infty}:= \bigcap_{n=1}^{\infty}\bigcup_{\mathbf i\in{\left\{1,2,\ldots\right\}}^n} J_{\mathbf i}
$$ and $${ {\mathscr{C}}}_0:=\left\{q_c(J_{\mathbf i}): \mathbf{i}\in\bigcup_{n=0}^{\infty}{\left\{1,2,\ldots\right\}}^n\right\}.$$ Thus ${ {\mathscr{C}}}_{\infty}$ is the set of points which are contained in infinitely many relative plateaus, and ${ {\mathscr{C}}}_0$ is the set of all de Vries-Komornik numbers (cf. [@Kong_Li_2015]). The smallest element of ${ {\mathscr{C}}}_0$ is the Komornik-Loreti constant $q_{KL}=q_c(J_\emptyset)$. Finally, let $${ {\mathscr{C}}}:={ {\mathscr{C}}}_0\cup{ {\mathscr{C}}}_{\infty}.
$$ For the proof of the following proposition, as well as examples of points in ${ {\mathscr{C}}}$, we refer to Section \[sec:C\].
\[prop:property of C-infity\]
1. ${ {\mathscr{C}}}\subset{\mathscr{U}}$.
2. ${ {\mathscr{C}}}$ is uncountable and has no isolated points.
3. $\dim_H{ {\mathscr{C}}}=0$.
Main results
------------
Let $J=[q_L, q_R]$ be a relative plateau with $J\neq[1,M+1]$. Then there is an admissible word ${\mathbf{a}}=a_1\ldots a_m$ such that ${\alpha}(q_L)={\mathbf{a}}^{\infty}$ and ${\alpha}(q_R)={\mathbf{a}}^+(\overline{{\mathbf{a}}})^{\infty}$. In particular, $\alpha(q)$ begins with the prefix ${\mathbf{a}}^+$ for each $q\in (q_L,q_R]$. Let $${\widetilde{\mathbf{U}}}_q(J):={\left\{(x_i)\in{\widetilde{\mathbf{U}}}_q: x_1\ldots x_m={\alpha}_1(q)\ldots {\alpha}_m(q)=a_1\dots a_m^+\right\}}, \qquad q\in (q_L,q_R].
\label{eq:U_q(J)}$$ For the special case when $J=J_\emptyset=[1,M+1]$, we set ${\widetilde{\mathbf{U}}}_q(J):={\widetilde{\mathbf{U}}}_q$. We are now ready to give a characterization of the local dimensional functions $f$, $f_-$ and $f_+$.
\[main1\]
1. Let $q\in\overline{{\mathscr{U}}}$. Then $$f(q)=0\quad\Longleftrightarrow\quad f_-(q)=0\quad\Longleftrightarrow\quad
q\in{ {\mathscr{C}}}.$$
2. Let $q\in \overline{{\mathscr{U}}} \setminus{ {\mathscr{C}}}$. Then $$f_-(q)=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}>0,$$ where $J=[q_L, q_R]$ is the smallest relative plateau such that $q\in(q_L, q_R]$. Furthermore, $$f_+(q)=\begin{cases}
0 & \textrm{if}\ q\in\overline{{\mathscr{U}}}\setminus{\mathscr{U}},\\
\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}>0 & \textrm{if}\ q\in{\mathscr{U}}\setminus{ {\mathscr{C}}},
\end{cases}$$ where $J=[q_L, q_R]$ is the smallest relative plateau such that $q\in(q_L, q_R)$. As a consequence, $$f(q)=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}>0,$$ where $J=[q_L, q_R]$ is the smallest relative plateau such that $q\in(q_L, q_R)$.
Note the asymmetry between $f_-$ and $f_+$. This is caused by the very different roles played by the left and right endpoints $q_L$ and $q_R$ of a relative plateau. On the one hand, we have $f(q_L)=f_-(q_L)>0$ while $f_+(q_L)=0$. On the other hand, suppose $[q_L,q_R]=J$, and let $I$ be the parent interval of $J$, that is, the relative plateau one level above $J$ that contains $J$. Then $$f_-(q_R)=\frac{h({\widetilde{\mathbf{U}}}_{q_R}(J))}{\log q_R}>0 \qquad\mbox{and} \qquad f_+(q_R)=\frac{h({\widetilde{\mathbf{U}}}_{q_R}(I))}{\log q_R}>0,$$ and since ${\widetilde{\mathbf{U}}}_{q_R}(J)\subset {\widetilde{\mathbf{U}}}_{q_R}(I)$, we have $f_-(q_R)\leq f_+(q_R)$ so $f(q_R)=f_+(q_R)$. In fact, the inequality between $f_-(q_R)$ and $f_+(q_R)$ is almost always strict, with just one possible exception; see Example \[ex:endpoints\] below for more details.
Theorem \[main1\] suggests a closer investigation of the sets ${\widetilde{\mathbf{U}}}_q(J)$. Our next result gives a detailed description.
Recall the definition of $q_c(J)$, and let $q_G(J)$ and $q_F(J)$ be the bases in $(q_L, q_R)$ with $${\alpha}(q_G(J))=({\mathbf{a}}^+\overline{{\mathbf{a}}^+})^{\infty}, \qquad {\alpha}(q_F(J))=({\mathbf{a}}^+\overline{{\mathbf{a}}}\;\overline{{\mathbf{a}}^+}{\mathbf{a}})^{\infty}.$$ Then $q_G(J)<q_F(J)<q_c(J)$.
\[main2\] Let $J=[q_L, q_R]$ be a relative plateau generated by the admissible word ${\mathbf{a}}$. Then the entropy function $$H_J: q\mapsto h({\widetilde{\mathbf{U}}}_q(J))$$ is a Devil’s staircase on $(q_L, q_R]$, i.e., $H_J$ is continuous, non-decreasing and locally constant almost everywhere on $(q_L, q_R]$. Furthermore, the set ${\widetilde{\mathbf{U}}}_q(J)$ has the following structure:
1. If $q_L<q\le q_G(J)$, then ${\widetilde{\mathbf{U}}}_q(J)=\emptyset$.
2. If $q_G(J)<q\le q_F(J)$, then ${\widetilde{\mathbf{U}}}_q(J)=\big\{\big({\mathbf{a}}^+\overline{{\mathbf{a}}^+}\big)^{\infty}\big\}$.
3. If $q_F(J)<q<q_c(J)$, then ${\widetilde{\mathbf{U}}}_q(J)$ is countably infinite.
4. If $q=q_c(J)$, then ${\widetilde{\mathbf{U}}}_q(J)$ is uncountable but $H_J(q)=0$.
5. If $q_c(J)<q\le q_R$, then $H_J(q)>0$.
Theorem \[main2\] can be viewed as a generalization of Theorem \[thm:devil-staircase\] and the classical result of Glendinning and Sidorov [@Glendinning_Sidorov_2001] for the set ${\ensuremath{\mathcal{U}}}_q$ with $q\in(1,2]$ and alphabet $\{0,1\}$ (see Proposition \[prop:unique expansion-two digits case\] below).
Note that, while the function $H: q\mapsto h({\widetilde{\mathbf{U}}}_q)$ is constant on each relative plateau $J$, the set-valued map $F: q\mapsto {\widetilde{\mathbf{U}}}_q$ is [*not*]{} constant on $J$. Since $F$ is non-decreasing, it is natural to investigate the variation of the map $q\mapsto \dim_H({\widetilde{\mathbf{U}}}_q\setminus{\widetilde{\mathbf{U}}}_{q_L})$ on $J=[q_L,q_R]$, where the Hausdorff dimension is well defined by equipping the symbolic space $\Omega_M$ with the metric $\rho$ defined by $$\rho((x_i), (y_i))={2^{-\inf{\left\{i\ge 0: x_{i+1}\ne y_{i+1}\right\}}}}.
\label{eq:rho-metric}$$ As an application of Theorem \[main2\] we have the following.
\[cor:variation-in-plateau\] Let $J=[q_L, q_R]$ be a relative plateau. Then the function $$D_J: J\to [0,\infty), \quad q\mapsto \dim_H({\widetilde{\mathbf{U}}}_q\setminus{\widetilde{\mathbf{U}}}_{q_L})$$ is a Devil’s staircase on $J$. Furthermore, $D_J(q)=0$ if and only if $q\le q_c(J)$.
Unfortunately, the analogous statement for topological entropy in place of Hausdorff dimension fails: Since sequences in ${\widetilde{\mathbf{U}}}_q\setminus{\widetilde{\mathbf{U}}}_{q_L}$ can have arbitrarily long prefixes from any sequence in ${\widetilde{\mathbf{U}}}_q$, the difference set ${\widetilde{\mathbf{U}}}_q\setminus{\widetilde{\mathbf{U}}}_{q_L}$ has the same entropy as ${\widetilde{\mathbf{U}}}_q$ for all $q\in J\backslash\{q_L\}$.
Theorems \[main1\] and \[main2\] show that the local dimensional functions $f$, $f_-$ and $f_+$ are highly discontinuous on $\overline{{\mathscr{U}}}$ (of course, they are everywhere continuous (and equal to zero) on $(1,M+1]\backslash \overline{{\mathscr{U}}}$):
\[cor:continuity-of-f\] The local dimensional function $f$ is continuous at $q\in\overline{{\mathscr{U}}}$ if and only if $q\in{ {\mathscr{C}}}$. The same statement holds for $f_-$ and $f_+$.
Next, for any relative entropy plateau $J$ we define the [*relative bifurcation set*]{} $${\mathscr{B}}(J):=\big\{q\in J: h({\widetilde{\mathbf{U}}}_p(J))\neq h({\widetilde{\mathbf{U}}}_q(J))\ \forall p\in J, p\neq q\big\}.$$ As a special case, for $J=J_\emptyset=[1, M+1]$ we have ${\mathscr{B}}(J)={\mathscr{B}}$.
\[main-b\] Let $J=J_{\mathbf{i}}=[q_L,q_R]$ be a relative plateau with generating word ${\mathbf{a}}=a_1\dots a_m$. Then
(i) ${\mathscr{B}}(J)={\mathscr{B}}(J_{\mathbf i})=J_{\mathbf i}\backslash \bigcup_{j=0}^{\infty}J_{\mathbf{i}j}$;
(ii) ${\mathscr{B}}(J)\subset {\mathscr{U}}\cap J$;
(iii) ${\mathscr{B}}(J)$ is Lebesgue null;
(iv) ${\mathscr{B}}(J)$ has full Hausdorff dimension. Precisely, $$\dim_H {\mathscr{B}}(J)=\dim_H({\mathscr{U}}\cap J)=\frac{\log 2}{m\log q_R};$$
(v) Let $p_0$ be the base with $\alpha(p_0)={\mathbf{a}}^+\overline{{\mathbf{a}}}^2\big(\overline{{\mathbf{a}}^+}{\mathbf{a}}{\mathbf{a}}^+\big)^{\infty}$. Then $$\dim_H \big(({\mathscr{U}}\cap J)\backslash {\mathscr{B}}(J)\big)=\frac{\log 2}{3m\log p_0}.$$
The representation of ${\mathscr{B}}(J)$ in (i) explains why we call the intervals $J_{\mathbf{i}j}$ relative entropy plateaus: They are the maximal intervals on which $h({\widetilde{\mathbf{U}}}_q(J_{\mathbf i}))$ is positive and constant. Comparing statements (i)-(iv) above with the properties of ${\mathscr{B}}$ given after (\[eq:bifurcation set\]), we can say that the set ${\mathscr{B}}(J)$ plays the same role on a local level (i.e. within $J$) as the bifurcation set ${\mathscr{B}}$ does on a global level. We may observe also that (v) is similar to [@Allaart-Baker-Kong-17 Theorem 4], which gives the Hausdorff dimension of ${\mathscr{U}}\backslash{\mathscr{B}}$.
From Proposition \[prop:property of C-infity\](i) and Theorem \[main-b\](i),(ii) we obtain the following decomposition of ${\mathscr{U}}$ into mutually disjoint subsets (recall that ${\mathscr{U}}\cap[q_L,q_c(J))=\emptyset$ while $q_c(J)\in{ {\mathscr{C}}}$ for any relative plateau $J=[q_L,q_R]$): $${\mathscr{U}}={ {\mathscr{C}}}\cup{\mathscr{B}}\cup \bigcup_{n=1}^{\infty}\bigcup_{\mathbf i\in\{1,2,\ldots\}^n}{\mathscr{B}}(J_{\mathbf i}).$$
Using Theorems \[main1\] and \[main2\] we can answer an open question of Kalle et al. [@Kalle-Kong-Li-Lv-2016], who asked for the Hausdorff dimension of ${\mathscr{U}}\cap[t_1, t_2]$ for any $t_1<t_2$.
\[main3\] For any $1<t_1<t_2\le M+1$ we have $$\dim_H({\mathscr{U}}\cap[t_1, t_2])=\max{\left\{\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}: q\in\overline{{\mathscr{B}}(J)\cap[t_1, t_2]}\right\}},$$ where $J=[q_L, q_R]$ is the smallest relative plateau containing $[t_1, t_2]$.
If $(t_1,t_2)$ intersects the bifurcation set ${\mathscr{B}}$, then $J=[1,M+1]$ and the expression in Theorem \[main3\] simplifies to $$\begin{aligned}
\dim_H({\mathscr{U}}\cap[t_1, t_2])&=\max{\left\{\frac{h({\widetilde{\mathbf{U}}}_q)}{\log q}: q\in\overline{{\mathscr{B}}\cap[t_1, t_2]}\right\}}\\
&=\max{\left\{\dim_H {\ensuremath{\mathcal{U}}}_q: q\in \overline{{\mathscr{B}}\cap[t_1, t_2]}\right\}}.\end{aligned}$$ Setting $t_1=1$ and noting that the map $q\mapsto \dim_H {\ensuremath{\mathcal{U}}}_q$ is continuous on $(1,M+1]$ and is decreasing inside each entropy plateau, we obtain Theorem 3 of [@Kalle-Kong-Li-Lv-2016], namely $$\dim_H({\mathscr{U}}\cap[1, t])=\max_{q\leq t}\dim_H {\ensuremath{\mathcal{U}}}_q, \qquad t\in[1,M+1].$$
Application to strongly univoque sets
-------------------------------------
In 2011, Jordan et al. [@Jordan-Shmerkin-Solomyak-2011] introduced the sets $$\check{\mathbf{U}}_q:=\bigcup_{k=1}^{\infty}{\left\{(x_i)\in\Omega_M: \overline{{\alpha}_1(q)\ldots{\alpha}_k(q)}\prec x_{n+1}\ldots x_{n+k}\prec {\alpha}_1(q)\ldots {\alpha}_k(q)~\forall n\ge 0\right\}}.
\label{eq:strongly-univoque-set}$$ (In fact, their definition was slightly different in that they require the above inequalities only for all sufficiently large $n$. They also defined $\check{\mathbf{U}}_q$ in a dynamical, rather than a symbolic way, but the definitions are easily seen to be equivalent.) Jordan et al. used the sets $\check{\mathbf{U}}_q$ to study the multifractal spectrum of Bernoulli convolutions. Recently, the first author [@Allaart-2016] used them to characterize the infinite derivatives of certain self-affine functions, and studied them in more detail in [@Allaart-2017] where they were called *strongly univoque sets*.
In view of (\[eq:def-widetilde-uq\]) it is clear that $\check{\mathbf{U}}_q\subseteq{\widetilde{\mathbf{U}}}_q$ for all $q\in(1, M+1]$. On the other hand, $\check{\mathbf{U}}_q\supset{\widetilde{\mathbf{U}}}_p$ for all $p<q$ (see [@Jordan-Shmerkin-Solomyak-2011] or [@Allaart-2017 Lemma 2.1]). It follows that $$\label{eq:kkk-1}
\check{\mathbf{U}}_q=\bigcup_{p<q}{\widetilde{\mathbf{U}}}_p,$$ and, since the function $q\mapsto \dim_H {\widetilde{\mathbf{U}}}_q$ is continuous, that $\dim_H \check{{\mathbf{U}}}_q=\dim_H {\widetilde{\mathbf{U}}}_q$ for every $q$. A natural question now, is whether $\check{\mathbf{U}}_q$ could in fact equal ${\widetilde{\mathbf{U}}}_q$. Following [@Allaart-2017], we define the difference set $$\begin{aligned}
\begin{split}
\mathbf W_q:&={\widetilde{\mathbf{U}}}_q\setminus\check {\mathbf{U}}_q\\ &=\bigcap_{k=1}^{\infty}\bigcup_{n=0}^\infty{\left\{(x_i)\in{\widetilde{\mathbf{U}}}_q: x_{n+1}\ldots x_{n+k}={\alpha}_1(q)\ldots {\alpha}_k(q)\textrm{ or }\overline{{\alpha}_1(q)\ldots {\alpha}_k(q)}\right\}},
\end{split}
\label{eq:Wq-def}\end{aligned}$$ and its projection, $\mathcal W_q:=\pi_q(\mathbf W_q)$. One of the main results in [@Allaart-2017] is that $\mathcal W_q\neq\emptyset$ if and only if $q\in\overline{{\mathscr{U}}}$, and then $\mathcal W_q$ is in fact uncountable. It is also shown in [@Allaart-2017] that $\dim_H\mathcal W_q=0$ whenever $q\in\mathscr C_0$ is a de Vries-Komornik number.
Using the techniques developed in this paper, we can improve on the results of [@Allaart-2017] and completely characterize the Hausdorff dimension of $\mathcal W_q$.
\[main4\] For any $q\in(1, M+1]$ we have $$\dim_H\mathcal W_q=f_-(q).$$
1. By Proposition \[prop:local dimension-B\] and Theorem \[main4\] it follows that for each $q\in{\mathscr{B}}$ we have $$\dim_H\mathcal W_q=\dim_H{\ensuremath{\mathcal{U}}}_q>0.$$ This provides a negative answer to Question 1.8 of [@Allaart-2017], where it was conjectured that $\dim_H\mathcal W_q<\dim_H{\ensuremath{\mathcal{U}}}_q$ for all $q>q_{KL}$. Looking at (\[eq:kkk-1\]), the above result is not too surprising, since the set-valued function $q\mapsto {\widetilde{\mathbf{U}}}_q$ is “most discontinuous” at points of ${\mathscr{B}}$.
2. Let $q\in\overline{{\mathscr{U}}}$. By Theorem \[main1\] (i) and Theorem \[main4\] it follows that $\dim_H\mathcal W_q=0$ if and only if $q\in{ {\mathscr{C}}}$. This completely characterizes the set $\{q:\dim_H\mathcal W_q=0\}$, extending Theorem 1.5 of [@Allaart-2017].
3. In view of (\[eq:kkk-1\]) and remark (2) above, we could say that, at points of $\overline{{\mathscr{U}}}\backslash { {\mathscr{C}}}$, the set-valued function $q\mapsto {\widetilde{\mathbf{U}}}_q$ “jumps” by a set of positive Hausdorff dimension.
The remainder of this article is organized as follows. In Section \[sec:C\] we prove Proposition \[prop:property of C-infity\] and give some examples of points in ${ {\mathscr{C}}}_{\infty}$. In Section \[sec:map Phi-J\] we introduce for each relative plateau $J$ a bijection $\Phi_J$ between symbol spaces and its induced map $\hat{\Phi}_J$ between suitable sets of bases, and develop their properties. These maps allow us to answer questions about relative plateaus and relative bifurcation sets by relating them directly to entropy plateaus $[p_L,p_R]$ and the bifurcation set ${\mathscr{B}}$ for the alphabet $\{0,1\}$. This is done in Section \[sec: proofs of th-1-2\], where we prove Theorems \[main1\], \[main2\] and \[main-b\]. Section \[sec:proof-of-theorem3\] contains a short proof of Theorem \[main3\], and Section \[sec:proof-of-theorem4\] is devoted to the proof of Theorem \[main4\].
Properties of the set ${ {\mathscr{C}}}$ {#sec:C}
========================================
In this section we prove Proposition \[prop:property of C-infity\]. Recall that $\alpha(q)$ is the quasi-greedy expansion of $1$ in base $q$. The following useful result is well known (cf. [@Baiocchi_Komornik_2007]).
\[lem:quasi-greedy expansion-alpha-q\] The map $q\mapsto {\alpha}(q)$ is strictly increasing and bijective from $(1, M+1]$ to the set of sequences $(a_i)\in\Omega_M$ not ending with $0^{\infty}$ and satisfying $${\sigma}^n((a_i)){\preccurlyeq}(a_i)\quad\forall n\ge 0.$$
(i). It is known that all de Vries-Komornik numbers belong to ${\mathscr{U}}$ (cf. [@Kong_Li_2015]), i.e., ${ {\mathscr{C}}}_0\subset{\mathscr{U}}$. Now let $q\in{ {\mathscr{C}}}_{\infty}$ and ${\alpha}(q)={\alpha}_1{\alpha}_2\ldots$. Then $q$ belongs to infinitely many relative plateaus. Hence, there are infinitely many integers $m_1<m_2<\cdots$ such that for each $k$, ${\alpha}_1\ldots{\alpha}_{m_k}^-$ is admissible, since, if $q$ lies in the relative plateau generated by $b_1\ldots b_n$, then ${\alpha}(q)$ must begin with $b_1\ldots b_n^+$. It follows by (\[eq:admissible word\]) that for each $k$, $$\overline{{\alpha}_1\ldots {\alpha}_{m_k-i}}\prec {\alpha}_{i+1}\ldots {\alpha}_{m_k}{\preccurlyeq}{\alpha}_1\ldots {\alpha}_{m_k-i}\quad\forall ~1\le i<m_k.$$ This implies by induction that $\overline{{\alpha}(q)}\prec {\sigma}^i({\alpha}(q)){\preccurlyeq}{\alpha}(q)$ for all $i\in{\ensuremath{\mathbb{N}}}$, and hence $q\in\overline{{\mathscr{U}}}$ (cf. [@Komornik_Loreti_2007]). But $\overline{{\mathscr{U}}}\setminus{\mathscr{U}}$ contains only left endpoints of relative plateaus, and these points do not lie in ${ {\mathscr{C}}}_{\infty}$. Therefore, $q\in{\mathscr{U}}$.
(ii). Clearly, by the construction of ${ {\mathscr{C}}}_{\infty}$ it follows that ${ {\mathscr{C}}}_{\infty}$ is uncountable, because each relative plateau of level $n$ contains infinitely many pairwise disjoint relative plateaus of level $n+1$. That ${ {\mathscr{C}}}$ has no isolated points follows since any right neighborhood of a de Vries-Komornik number contains infinitely many relative plateaus.
(iii). In [@Allaart-Baker-Kong-17], the following was proved: If $J=[q_L, q_R]$ is a relative plateau generated by $a_1\ldots a_m$, then $$\label{eq:dimension-local-u}
\dim_H(\overline{{\mathscr{U}}}\cap [p, q_R])=\frac{\log 2}{m\log q_R}\quad\textrm{for any }p\in[q_L, q_R).$$ (This was stated in [@Allaart-Baker-Kong-17] only for entropy plateaus, i.e., the first-level relative plateaus, but the proof carries over verbatim to any relative plateau.) Observe that ${ {\mathscr{C}}}_0$ is countable. Furthermore, for a relative plateau $J_{\mathbf i}$ with $\mathbf{i}\in\{1,2,\dots\}^n$, its generating block $a_1\dots a_m$ satisfies $m\geq n$. That $\dim_H{ {\mathscr{C}}}=0$ now follows from the definition of ${ {\mathscr{C}}}_{\infty}$, the countable stability of Hausdorff dimension, and (\[eq:dimension-local-u\]).
It is easy to create specific examples of points in ${ {\mathscr{C}}}_{\infty}$. For instance, let ${\mathbf{a}}=a_1\ldots a_m$ be an admissible word not of the form ${\mathbf{b}}\overline{{\mathbf{b}}}$ (e.g. ${\mathbf{a}}=1110010$ when $M=1$), and construct a sequence ${\alpha}_1{\alpha}_2\ldots$ as follows: Set ${\alpha}_1\ldots{\alpha}_m={\mathbf{a}}^+$, and recursively for $k=0,1,\ldots$, let $${\alpha}_{3^k m+1}\ldots {\alpha}_{2\cdot 3^k m}={\alpha}_{2\cdot 3^k m+1}\ldots {\alpha}_{3^{k+1}m}=\overline{\alpha_1\dots\alpha_{3^k m}}^+.$$ Then ${\alpha}_1{\alpha}_2\ldots ={\alpha}(q)$ for some $q$, and this $q$ lies in ${ {\mathscr{C}}}_{\infty}$.
More generally, one can create many more examples by the following procedure. Let again ${\mathbf{a}}=a_1\dots a_m$ be any admissible word not of the form ${\mathbf{b}}\overline{{\mathbf{b}}}$. Now let $\mathbf w$ be a word using the letters ${\mathbf{a}}^+, \overline{{\mathbf{a}}}, \overline{{\mathbf{a}}^+}, {\mathbf{a}}$ beginning with ${\mathbf{a}}^+$ such that $\mathbf w^-$ is admissible (e.g. $\mathbf w={\mathbf{a}}^+\overline{{\mathbf{a}}}^2\overline{{\mathbf{a}}^+}{\mathbf{a}}^+\overline{{\mathbf{a}}^+}{\mathbf{a}}^+$). Put $\mathbf v_0:=\mathbf w$, and recursively, for $i=0,1,2,\dots$, let $\mathbf v_{i+1}$ be the word obtained from $\mathbf v_{i}$ by performing the substitutions $${\mathbf{a}}\mapsto \mathbf v_i^-, \qquad {\mathbf{a}}^+ \mapsto \mathbf{v}_i, \qquad \overline{{\mathbf{a}}}\mapsto \overline{\mathbf{v}_i}^+, \qquad \overline{{\mathbf{a}}^+}\mapsto \overline{\mathbf{v}_i}.$$ Since $\mathbf v_{i+1}$ extends $\mathbf v_i$, the limit $\mathbf{v}:=\lim_{i\to\infty}\mathbf v_i$ exists, and $\mathbf{v}=\alpha(q)$ for some $q$, as the interested reader may check using Lemma \[lem:quasi-greedy expansion-alpha-q\]. Some reflection reveals that $q\in\mathscr{C}$. The de Vries-Komornik numbers are obtained from $\mathbf w={\mathbf{a}}^+\overline{{\mathbf{a}}}$; all other examples obtained this way lie in ${ {\mathscr{C}}}_{\infty}$, including the example given at the beginning of this remark, which is obtained from $\mathbf w={\mathbf{a}}^+\overline{{\mathbf{a}}}^2$. (For each $i$, $q$ lies in the relative plateau $[q_L(i),q_R(i)]$ given by $\alpha(q_L(i))=(\mathbf{v}_i^-)^{\infty}$ and $\alpha(q_R(i))=\mathbf{v}_i\big(\overline{\mathbf{v}_i}^+\big)^{\infty}$; we leave the details for the interested reader.)
Descriptions of the map $\Phi_J$ and the induced map $\hat\Phi_J$ {#sec:map Phi-J}
=================================================================
In this section we fix a relative plateau $J=[q_L, q_R]$ with $${\alpha}(q_L)={\mathbf{a}}^{\infty}\quad\textrm{and}\quad {\alpha}(q_R)={\mathbf{a}}^+(\overline{{\mathbf{a}}})^{\infty}$$ for some admissible word ${\mathbf{a}}=a_1\ldots a_m$. Note by Definition \[def:admissible-words\] and Lemma \[lem:quasi-greedy expansion-alpha-q\] that $q_L$ and $q_R$ are well defined and $q_L<q_R$.
A special role in this paper is played by sets associated with the alphabet $\{0,1\}$. When the alphabet $\{0,1\}$ is intended, we will affix a superscript $^*$ to our notation. Thus, ${\mathscr{B}}^*={\mathscr{B}}$ when $M=1$, ${\mathscr{U}}^*={\mathscr{U}}$ when $M=1$, etc. We call ${\mathscr{B}}^*$ the [*reference bifurcation set*]{}. The key to the proofs of our main results, and the main methodological innovation of this paper, is the construction of a bijection $\hat{\Phi}_J$ from ${\mathscr{B}}(J)$ to ${\mathscr{B}}^*$. More generally, $\hat{\Phi}_J$ maps important points of $J$ to important points of $(1,2]$ for the case $M=1$. Associated with $\hat{\Phi}_J$ is a symbolic map $\Phi_J$ which maps each set ${\widetilde{\mathbf{U}}}_q(J)$ to the symbolic univoque set ${\widetilde{\mathbf{U}}}_{\hat{q}}^*$ for $M=1$, where $\hat{q}=\hat{\Phi}_J(q)$. By using properties of the maps $\Phi_J$ and $\hat{\Phi}_J$, many classical results on univoque sets with alphabet $\{0,1\}$ can be transferred to the relative entropy plateaus and the sets ${\widetilde{\mathbf{U}}}_q(J)$.
Figure \[fig1\] shows a directed graph $G$ with two sets of labels. The labeled graph $\mathcal G=(G, \mathcal L)$ with labels in $\mathcal L:=\big\{{\mathbf{a}}, {\mathbf{a}}^+, \overline{{\mathbf{a}}}, \overline{{\mathbf{a}}^+}\big\}$ is right-resolving, i.e. the out-going edges from the same vertex in $\mathcal G$ have different labels. Let $X(J)$ be the set of infinite sequences determined by the automaton $\mathcal G=(G, \mathcal L)$, beginning at the “Start” vertex (cf. [@Lind_Marcus_1995]). We emphasize that each digit ${\mathbf{d}}$ in $\mathcal L$ is a block of length $m$, and any sequence in $X(J)$ is an infinite concatenation of blocks from $\mathcal{L}$.
Likewise, the [*reference labeled graph*]{} $\mathcal G^*=(G, \mathcal L^*)$ with labels in $\mathcal L^*:={\left\{0,1\right\}}$ is right-resolving. Hence for each $q\in(1,2]$ the quasi-greedy expansion ${\alpha^*}(q)$ of $1$ in base $q$ is uniquely represented by an infinite path determined by the automaton $\mathcal G^*$. Let $X^*\subset \{0,1\}^{\ensuremath{\mathbb{N}}}$ be the set of all infinite sequences determined by the automaton $\mathcal G^*$, and note that $X^*=\{(x_i)\in\{0,1\}^{\ensuremath{\mathbb{N}}}:x_1=1\}$. Then ${\left\{{\alpha^*}(q): q\in(1, 2]\right\}}\subset X^*$, the inclusion being proper in view of Lemma \[lem:quasi-greedy expansion-alpha-q\].
[Figure \[fig1\]: the directed graph $G$ with vertices $Start$, $A$ and $B$, whose edges carry a pair of labels ($\mathcal L$-label $/$ $\mathcal L^*$-label): $Start\to A$ labeled ${\mathbf{a}}^+\,/\,1$; a loop at $A$ labeled $\overline{{\mathbf{a}}}\,/\,1$; $A\to B$ labeled $\overline{{\mathbf{a}}^+}\,/\,0$; a loop at $B$ labeled ${\mathbf{a}}\,/\,0$; and $B\to A$ labeled ${\mathbf{a}}^+\,/\,1$.]
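The transition structure of the automaton can be sketched as follows (an illustrative check, not from the paper; the four length-$m$ blocks are abbreviated by the tags `'a'`, `'a+'`, `'abar'`, `'a+bar'`).

```python
# States and transitions of G, read off from Figure [fig1].
transitions = {
    'Start': {'a+': 'A'},
    'A':     {'abar': 'A', 'a+bar': 'B'},
    'B':     {'a': 'B', 'a+': 'A'},
}

def in_XJ(blocks):
    """Is this finite block sequence the label of a path from Start?"""
    state = 'Start'
    for d in blocks:
        if d not in transitions[state]:
            return False
        state = transitions[state][d]
    return True

print(in_XJ(['a+', 'abar', 'a+bar', 'a', 'a+']))  # True
print(in_XJ(['a+', 'a']))  # False: after a+ only abar or a+bar may follow
```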
\[prop:property of Uq(J)\] ${\widetilde{\mathbf{U}}}_q(J)\subset X(J)$ for every $q\in(q_L, q_R]$.
To prove the proposition we need the following.
\[lem:uq-xb\] Any sequence $(x_i)\in\Omega_M$ satisfying $x_1\ldots x_m={\mathbf{a}}^+$ and $$\label{eq:inequality-1}
\overline{{\mathbf{a}}^+}{\mathbf{a}}^{\infty}{\preccurlyeq}{\sigma}^n((x_i)){\preccurlyeq}{\mathbf{a}}^+(\overline{{\mathbf{a}}})^{\infty}\quad\forall ~ n\ge 0$$ belongs to $X(J)$.
Take a sequence $(x_i)$ satisfying $x_1\ldots x_m={\mathbf{a}}^+=a_1\ldots a_m^+$ and (\[eq:inequality-1\]). Then by (\[eq:inequality-1\]) with $n=0$ and $n=m$ it follows that $$\overline{{\mathbf{a}}^+}{\preccurlyeq}x_{m+1}\ldots x_{2m}{\preccurlyeq}\overline{{\mathbf{a}}}.$$ So, either $x_{m+1}\ldots x_{2m}=\overline{{\mathbf{a}}^+}$ or $x_{m+1}\ldots x_{2m}=\overline{{\mathbf{a}}}$.
1. If $x_{m+1}\ldots x_{2m}=\overline{{\mathbf{a}}^+}$, then by (\[eq:inequality-1\]) with $n=m$ and $n=2m$ it follows that the next block $x_{2m+1}\ldots x_{3m}={\mathbf{a}}^+$ or ${\mathbf{a}}$.
2. If $x_{m+1}\ldots x_{2m}=\overline{{\mathbf{a}}}$, then $x_{1}\ldots x_{2m}={\mathbf{a}}^+\overline{{\mathbf{a}}}$. By (\[eq:inequality-1\]) with $n=0$ and $n=2m$ it follows that the next block $x_{2m+1}\ldots x_{3m}=\overline{{\mathbf{a}}^+}$ or $\overline{{\mathbf{a}}}$.
Iterating the above reasoning and referring to Figure \[fig1\] we conclude that $(x_i)\in X(J)$.
Take $q\in(q_L, q_R]$. Since ${\alpha}(q_L)={\mathbf{a}}^{\infty}$ and ${\alpha}(q_R)={\mathbf{a}}^+(\overline{{\mathbf{a}}})^{\infty}$, Lemma \[lem:quasi-greedy expansion-alpha-q\] implies that ${\alpha}_1(q)\ldots {\alpha}_m(q)={\mathbf{a}}^+$ and ${\alpha}(q){\preccurlyeq}{\alpha}(q_R)={\mathbf{a}}^+(\overline{{\mathbf{a}}})^{\infty}$. Hence, by (\[eq:def-widetilde-uq\]), (\[eq:U_q(J)\]) and Lemma \[lem:uq-xb\], it follows that ${\widetilde{\mathbf{U}}}_q(J)\subset X(J)$.
We next introduce the right bifurcation set ${\mathscr{V}}$ for the set-valued map $q\mapsto {\widetilde{\mathbf{U}}}_q$ (cf. [@DeVries_Komornik_2008]): $${\mathscr{V}}:={\left\{q\in(1, M+1]: {\widetilde{\mathbf{U}}}_r\ne {\widetilde{\mathbf{U}}}_q\ \forall r>q\right\}}.$$ Recall that ${\mathscr{U}}$ is the set of univoque bases. The following characterizations of ${\mathscr{U}}$ and ${\mathscr{V}}$ are proved in [@Vries-Komornik-Loreti-2016].
\[lem:characterization of V-U\]
1. $q\in{\mathscr{U}}\setminus{\left\{M+1\right\}}$ if and only if $\overline{{\alpha}(q)}\prec {\sigma}^n({\alpha}(q))\prec {\alpha}(q)$ for all $n\ge 1$.
2. $q\in{\mathscr{V}}$ if and only if $\overline{{\alpha}(q)}{\preccurlyeq}{\sigma}^n({\alpha}(q)){\preccurlyeq}{\alpha}(q)$ for all $n\ge 1$.
Clearly, Lemma \[lem:characterization of V-U\] implies that ${\mathscr{U}}\subset{\mathscr{V}}$. Furthermore, ${\mathscr{V}}\setminus{\mathscr{U}}$ is at most countable. Set $$\begin{aligned}
{\mathbf{U}}(J):={\left\{{\alpha}(q): q\in{\mathscr{U}}\cap (q_L, q_R]\right\}}\quad\textrm{and}\quad
{\mathbf{V}}(J):={\left\{{\alpha}(q): q\in{\mathscr{V}}\cap (q_L, q_R]\right\}}.\end{aligned}$$ Then ${\mathbf{U}}(J)\subset{\mathbf{V}}(J)$. As a consequence of Lemmas \[lem:uq-xb\] and \[lem:characterization of V-U\] we have the following.
\[proposition:U(J)-V(J)\] ${\mathbf{U}}(J)\subset{\mathbf{V}}(J)\subset X(J)$. Furthermore, $$\begin{gathered}
{\mathbf{U}}(J)=\big\{({\mathbf{c}}_i)\in X(J): \overline{({\mathbf{c}}_i)}\prec {\sigma}^n(({\mathbf{c}}_i))\prec ({\mathbf{c}}_i)~\forall n\ge 1\big\},\\
{\mathbf{V}}(J)=\big\{({\mathbf{c}}_i)\in X(J): \overline{({\mathbf{c}}_i)}{\preccurlyeq}{\sigma}^n(({\mathbf{c}}_i)){\preccurlyeq}({\mathbf{c}}_i)~\forall n\ge 1\big\}. \end{gathered}$$
We shall also need the following sets. For $M=1$ we denote by $$\begin{aligned}
&{\mathscr{U}}^*:={\mathscr{U}}\quad\textrm{and}\quad {\mathbf{U}}^*:={\left\{{\alpha}^*(q): q\in{\mathscr{U}}^*\right\}},\\
&{\mathscr{V}}^*:={\mathscr{V}}\quad\textrm{and}\quad {\mathbf{V}}^*:={\left\{{\alpha}^*(q): q\in{\mathscr{V}}^*\right\}}.\end{aligned}$$ Then by Lemma \[lem:characterization of V-U\] with $M=1$ it follows that $$\label{eq:U-star-V-star}
\begin{split}
{\mathbf{U}}^*\setminus{\left\{1^{\infty}\right\}}&=\left\{(a_i)\in{\left\{0,1\right\}}^{\ensuremath{\mathbb{N}}}: (1-a_i)\prec {\sigma}^n((a_i))\prec (a_i)~\forall n\ge 1\right\},\\
{\mathbf{V}}^*&={\left\{(a_i)\in{\left\{0,1\right\}}^{\ensuremath{\mathbb{N}}}: (1-a_i){\preccurlyeq}{\sigma}^n((a_i)){\preccurlyeq}(a_i)~\forall n\ge 1\right\}}.
\end{split}$$
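These lexicographic characterizations can be checked mechanically for periodic sequences. The following Python sketch is an illustration added here (the helper names are ours, not from the text); it tests the weak and strict inequalities on the first `depth` shifts of a periodic 0-1 sequence, so it is a finite-depth verification rather than a proof:

```python
def prefix(word, n):
    """First n symbols of the periodic 0-1 sequence word^infinity (as a string)."""
    return (word * (n // len(word) + 1))[:n]

def reflect(s):
    """Digit-wise reflection a_i -> 1 - a_i (case M = 1)."""
    return "".join("1" if c == "0" else "0" for c in s)

def in_V_star(word, depth=64):
    """Check (1-a_i) <= sigma^n((a_i)) <= (a_i) for n = 1..depth-1,
    comparing `depth` symbols lexicographically each time."""
    a = prefix(word, 2 * depth)
    top, bottom = a[:depth], reflect(a[:depth])
    return all(bottom <= a[n:n + depth] <= top for n in range(1, depth))

def in_U_star(word, depth=64):
    """Strict version: finite-depth test for membership in U* \\ {1^infinity}."""
    a = prefix(word, 2 * depth)
    top, bottom = a[:depth], reflect(a[:depth])
    return all(bottom < a[n:n + depth] < top for n in range(1, depth))
```

For instance, the periods `10` and `1100` pass the weak test, consistent with $(10)^{\infty}, (1100)^{\infty}\in{\mathbf{V}}^*$, while no periodic sequence passes the strict test: ${\sigma}^p$ fixes a $p$-periodic sequence, so the upper inequality becomes an equality.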
Description of $\Phi_J$
-----------------------
We now define a map $\phi: \mathcal{L}\to\mathcal{L}^*$ by $$\phi(\overline{{\mathbf{a}}^+})=\phi({\mathbf{a}})=0, \qquad\mbox{and} \qquad \phi(\overline{{\mathbf{a}}})=\phi({\mathbf{a}}^+)=1.
\label{eq:phi}$$ Then $\phi$ induces a block map $\Phi_J: X(J){\rightarrow}X^*$ defined by $$\Phi_J(({\mathbf{d}}_i)):=\phi({\mathbf{d}}_1)\phi({\mathbf{d}}_2)\ldots.$$
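To make the block map concrete, here is a small Python sketch (our illustration; the function names are invented for this example). Take $M=1$ and, for illustration, the word ${\mathbf{a}}=10$, so that ${\mathbf{a}}^+=11$, $\overline{{\mathbf{a}}}=01$ and $\overline{{\mathbf{a}}^+}=00$; then $\phi$ sends $10\mapsto 0$, $00\mapsto 0$, $11\mapsto 1$, $01\mapsto 1$:

```python
def four_blocks(a, M=1):
    """The blocks a, a+, reflection(a), reflection(a+) as digit strings."""
    a_plus = a[:-1] + str(int(a[-1]) + 1)  # increment the last digit
    refl = lambda w: "".join(str(M - int(c)) for c in w)
    return a, a_plus, refl(a), refl(a_plus)

def Phi_J(block_seq, a, M=1):
    """Induced block map: phi(refl(a+)) = phi(a) = 0 and phi(refl(a)) = phi(a+) = 1,
    applied block by block to a sequence of m-blocks."""
    blk, blk_p, rblk, rblk_p = four_blocks(a, M)
    phi = {rblk_p: "0", blk: "0", rblk: "1", blk_p: "1"}
    return "".join(phi[b] for b in block_seq)
```

In particular, $({\mathbf{a}}^+\overline{{\mathbf{a}}^+})^{\infty}$ maps to $(10)^{\infty}$ and ${\mathbf{a}}^+(\overline{{\mathbf{a}}})^{\infty}$ maps to $1^{\infty}$, and on such examples one can observe the order-preserving behaviour asserted below.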
\[prop:chareacterization of Phi-J\] The map $\Phi_J: X(J){\rightarrow}X^*$ is strictly increasing and bijective. Furthermore, $$\Phi_J({\mathbf{U}}(J))={\mathbf{U}}^*\quad\textrm{and}\quad \Phi_J({\mathbf{V}}(J))={\mathbf{V}}^*.$$
First we verify that $\Phi_J$ is a bijection.
\[lem:bijective map-Phi\] The map $\Phi_J: X(J){\rightarrow}X^*$ is strictly increasing and bijective.
Note by Definition \[def:admissible-words\] that the blocks in $\mathcal{L}$ are ordered by $\overline{{\mathbf{a}}^+}\prec\overline{{\mathbf{a}}}\prec{\mathbf{a}}\prec{\mathbf{a}}^+$. Take two sequences $({\mathbf{c}}_i), ({\mathbf{d}}_i)\in X(J)$ with $({\mathbf{c}}_i)\prec ({\mathbf{d}}_i)$. Then ${\mathbf{c}}_1={\mathbf{d}}_1={\mathbf{a}}^+$, and there is an integer $k\ge 2$ such that ${\mathbf{c}}_1\ldots {\mathbf{c}}_{k-1}={\mathbf{d}}_1\ldots {\mathbf{d}}_{k-1}$ and ${\mathbf{c}}_k\prec {\mathbf{d}}_k$. We will show that $\phi({\mathbf{c}}_k)<\phi({\mathbf{d}}_k)$. To this end we consider two cases (see Figure \[fig1\]):
- If ${\mathbf{c}}_{k-1}={\mathbf{a}}^+$ or $\overline{{\mathbf{a}}}$, then ${\mathbf{c}}_k=\overline{{\mathbf{a}}^+}$ and ${\mathbf{d}}_k=\overline{{\mathbf{a}}}$, and so $\phi({\mathbf{c}}_k)=0$ and $\phi({\mathbf{d}}_k)=1$.
- If ${\mathbf{c}}_{k-1}={\mathbf{a}}$ or $\overline{{\mathbf{a}}^+}$, then ${\mathbf{c}}_k={\mathbf{a}}$ and ${\mathbf{d}}_k={\mathbf{a}}^+$, so again $\phi({\mathbf{c}}_k)=0$ and $\phi({\mathbf{d}}_k)=1$.
Thus, $\Phi_J$ is strictly increasing on $X(J)$. Finally, since the labeled graphs $\mathcal G$ and $\mathcal G^*$ are both right-resolving, the definitions of $X(J)$ and $X^*$ imply that $\Phi_J$ is bijective.
\[lem:description-Phi-J\] The following statements are equivalent for sequences $({\mathbf{c}}_i), ({\mathbf{d}}_i)\in X(J)$.
1. $
\overline{({\mathbf{d}}_i)}\prec{\sigma}^n(({\mathbf{c}}_i))\prec ({\mathbf{d}}_i)~~\forall n\ge 0.
$
2. $
\overline{({\mathbf{d}}_i)}\prec{\sigma}^{mn}(({\mathbf{c}}_i))\prec ({\mathbf{d}}_i)~~\forall n\ge 0.
$
3. The image sequences $(x_i):=\Phi_J(({\mathbf{c}}_i)), (y_i):=\Phi_J(({\mathbf{d}}_i))$ in $X^*$ satisfy $${(1-y_i)}\prec {\sigma}^n((x_i))\prec (y_i)~~\forall n\ge 0.$$
Since ${\mathbf{a}}=a_1\ldots a_m$ is admissible, Definition \[def:admissible-words\] implies $$\overline{a_1\ldots a_m^+}\prec a_{i+1}\ldots a_m^+\overline{a_1\ldots a_i}\prec a_1\ldots a_m^+$$ and $$\overline{a_1\ldots a_m^+}\prec a_{i+1}\ldots a_m a_1\ldots a_i\prec a_1\ldots a_m^+$$ for all $1\le i<m$. Using ${\mathbf{d}}_1={\mathbf{a}}^+=a_1\dots a_m^+$ and $({\mathbf{c}}_i)\in X(J)$ this proves the equivalence (i) $\Leftrightarrow$ (ii).
Next, we prove (ii) $\Rightarrow$ (iii). We only verify the second inequality in (iii); the first one can be proved in the same way. Take $({\mathbf{c}}_i), ({\mathbf{d}}_i)\in X(J)$ satisfying the inequalities in (ii), and let $(x_i):=\Phi_J(({\mathbf{c}}_i))$, $(y_i):=\Phi_J(({\mathbf{d}}_i))$. Fix $n\geq 0$. If ${\mathbf{c}}_{n+1}\in\{\overline{{\mathbf{a}}^+}, {\mathbf{a}}\}$, then ${x}_{n+1}=\phi({\mathbf{c}}_{n+1})=0<1=\phi({\mathbf{d}}_1)=y_1$, using that ${\mathbf{d}}_1={\mathbf{a}}^+$. Furthermore, if ${\mathbf{c}}_{n+1}={\mathbf{a}}^+$, then $\sigma^{mn}(({\mathbf{c}}_i))\in X(J)$ and the second inequality in (iii) follows from Lemma \[lem:bijective map-Phi\]. Therefore, the critical case is when ${\mathbf{c}}_{n+1}=\overline{{\mathbf{a}}}$, which we assume for the remainder of the proof.
Since $({\mathbf{c}}_i)\in X(J)$, there is $0\le j<n$ such that ${\mathbf{c}}_{j+1}\ldots {\mathbf{c}}_{n+1}={\mathbf{a}}^+(\overline{{\mathbf{a}}})^{n-j}$. (See Figure \[fig1\].) Furthermore, since ${\mathbf{c}}_{j+1}{\mathbf{c}}_{j+2}\ldots \prec {\mathbf{d}}_1{\mathbf{d}}_2\ldots{\preccurlyeq}{\mathbf{a}}^+(\overline{{\mathbf{a}}})^{\infty}$, there is a number $k\geq n-j$ such that $${\mathbf{c}}_{j+1}\dots {\mathbf{c}}_{j+k+2}={\mathbf{a}}^+(\overline{{\mathbf{a}}})^k\overline{{\mathbf{a}}^+}, \qquad\mbox{and} \qquad {\mathbf{d}}_1\dots {\mathbf{d}}_{k+1}={\mathbf{a}}^+(\overline{{\mathbf{a}}})^k.
\label{c-and-d}$$ The second equality in yields $y_1\dots y_{k+1}=\phi({\mathbf{d}}_1)\ldots\phi({\mathbf{d}}_{k+1})=1^{k+1}$, and the first equality implies $${\mathbf{c}}_{n+1}\dots {\mathbf{c}}_{j+k+2}=(\overline{{\mathbf{a}}})^{k-(n-j)+1}\overline{{\mathbf{a}}^+}.$$ Hence, $$x_{n+1}\dots x_{j+k+2}=\phi({\mathbf{c}}_{n+1})\ldots \phi({\mathbf{c}}_{j+k+2})=1^{k-(n-j)+1}0\prec 1^{k-(n-j)+2}=y_1\dots y_{k-(n-j)+2},$$ since $j<n$ implies $k-(n-j)+2\leq k+1$. Therefore, $\sigma^n((x_i))\prec (y_i)$, which gives the second inequality in (iii).
Finally, we prove (iii) $\Rightarrow$ (ii). First we verify the second inequality of (ii). Let $({x}_i), (y_i)\in X^*$ and let $({\mathbf{c}}_i), ({\mathbf{d}}_i)\in X(J)$ such that $\Phi_J(({\mathbf{c}}_i))=({x}_i), \Phi_J(({\mathbf{d}}_i))=(y_i)$. Fix $n\ge 0$. We may assume ${\mathbf{c}}_{n+1}={\mathbf{a}}^+$, as otherwise the inequality is trivial. But then ${\mathbf{c}}_{n+1}{\mathbf{c}}_{n+2}\ldots \in X(J)$, and since ${x}_{n+1}{x}_{n+2}\ldots\prec y_1 y_2\ldots$ it follows from Lemma \[lem:bijective map-Phi\] that ${\mathbf{c}}_{n+1}{\mathbf{c}}_{n+2}\ldots \prec {\mathbf{d}}_1{\mathbf{d}}_2\ldots$. This proves the second inequality of (ii). The first inequality is verified analogously.
In view of Lemma \[lem:bijective map-Phi\] it remains to prove $$\Phi_J({\mathbf{U}}(J))={\mathbf{U}}^*\quad \textrm{and}\quad \Phi_J({\mathbf{V}}(J))={\mathbf{V}}^*.$$ Since the proof of the second equality is similar, we only prove the first one.
Let $({\mathbf{c}}_i)\in{\mathbf{U}}(J)$, and $(x_i):=\Phi_J(({\mathbf{c}}_i))$. Then by Proposition \[proposition:U(J)-V(J)\] it follows that $${\mathbf{c}}_1={\mathbf{a}}^+,\quad\textrm{and}\quad \overline{({\mathbf{c}}_i)}\prec {\sigma}^n(({\mathbf{c}}_i))\prec ({\mathbf{c}}_i)~\forall n\ge 1.$$ By Lemma \[lem:description-Phi-J\] with $({\mathbf{c}}_i)=({\mathbf{d}}_i)$ this is equivalent to $$x_1=1,\quad\textrm{and}\quad (1-x_i)\prec {\sigma}^n((x_i))\prec (x_i)~\forall n\ge 1.$$ So, by (\[eq:U-star-V-star\]) we have $(x_i)\in{\mathbf{U}}^*$, and thus $\Phi_J({\mathbf{U}}(J))\subseteq {\mathbf{U}}^*$.
Conversely, take $(x_i)\in{\mathbf{U}}^*\subset X^*$. By Lemma \[lem:bijective map-Phi\] there exists a (unique) sequence $({\mathbf{c}}_i)\in X(J)$ such that $\Phi_J(({\mathbf{c}}_i))=(x_i)$. If $(x_i)=1^{\infty}$, then $({\mathbf{c}}_i)={\mathbf{a}}^+(\overline{{\mathbf{a}}})^{\infty}={\alpha}(q_R)\in{\mathbf{U}}(J)$. If $(x_i)\in{\mathbf{U}}^*\setminus{\left\{1^{\infty}\right\}}$, then by (\[eq:U-star-V-star\]), Lemma \[lem:description-Phi-J\] and the same argument as above it follows that $({\mathbf{c}}_i)\in{\mathbf{U}}(J)$. Hence, $\Phi_J({\mathbf{U}}(J))={\mathbf{U}}^*$.
[Figure \[figure:2\]: commutative diagram of the maps ${\alpha}$, $\Phi_J$ and $({\alpha}^*)^{-1}$; their composition $\hat\Phi_J=({\alpha}^*)^{-1}\circ\Phi_J\circ{\alpha}$ sends $(q_L, q_R]$ to ${\mathscr{V}}^*$ via ${\mathbf{V}}(J)$ and ${\mathbf{V}}^*$.]
Description of the induced map $\hat\Phi_J$
-------------------------------------------
Recall from Proposition \[proposition:U(J)-V(J)\] that ${\mathbf{V}}(J)\subset X(J)$. Hence Proposition \[prop:chareacterization of Phi-J\] implies that the bijective map $\Phi_J: {\mathbf{V}}(J){\rightarrow}{\mathbf{V}}^*$ induces an increasing bijective map (see Figure \[figure:2\]) $$\hat\Phi_J: {\mathscr{V}}\cap(q_L, q_R]{\rightarrow}{\mathscr{V}}^*;\qquad q\mapsto ({\alpha}^*)^{-1}\circ\Phi_J\circ{\alpha}(q).$$
The relevance of the map $\hat{\Phi}_J$ is made clear by the following proposition. Here, for $M=1$ and $q\in(1,2]$ we write ${\widetilde{\mathbf{U}}}_q^*:={\widetilde{\mathbf{U}}}_q$.
\[th:characterization of hat-Phi-J\]
1. $\hat\Phi_J: {\mathscr{V}}\cap(q_L, q_R]{\rightarrow}{\mathscr{V}}^*$ is a strictly increasing homeomorphism.
2. $\hat\Phi_J({\mathscr{U}}\cap(q_L, q_R])=\hat\Phi_J({\mathscr{U}}\cap J)={\mathscr{U}}^*$.
3. For any $q\in{\mathscr{V}}\cap(q_L, q_R]$ and $\hat q:=\hat\Phi_J(q)$ we have $$\Phi_J\big({\widetilde{\mathbf{U}}}_q(J)\big)=\big\{(x_i)\in{\widetilde{\mathbf{U}}}_{\hat q}^*: x_1=1\big\} \qquad \textrm{and} \qquad h({\widetilde{\mathbf{U}}}_q(J))=\frac{h({\widetilde{\mathbf{U}}}_{\hat q}^*)}{m}.$$
In the special case when $M=1$, Proposition \[th:characterization of hat-Phi-J\](ii) implies that ${\mathscr{U}}$ can be viewed as an attractor of an inhomogeneous infinite iterated function system: Since ${\mathscr{U}}^*={\mathscr{U}}$ in this case, we can write $${\mathscr{U}}=\bigcup_{i=1}^\infty \hat{\Phi}_{J_i}^{-1}({\mathscr{U}})\cup \big({\mathscr{B}}\cup \{q_{KL}\}\big),$$ using and the definition of $J_i$.
Part (i) of Proposition \[th:characterization of hat-Phi-J\] follows from the following lemma, which proves something stronger: it implies Hölder properties of the maps $\hat{\Phi}_J$ and $\hat\Phi_J^{-1}$. These will be important later for Hausdorff dimension calculations.
\[lem:Holder continuity of hatPhi\] There exist constants $c_1, c_2>0$ such that for any $q_1, q_2\in{\mathscr{V}}\cap(q_L, q_R]$ with $q_1<q_2$ we have $$\label{eq:continuity-hat-Phi}
c_1(q_2-q_1)^{\frac{\log \hat q_2}{m\log q_2}}\le \hat\Phi_J(q_2)-\hat\Phi_J(q_1) \le c_2 (q_2-q_1)^{\frac{\log \hat q_2}{m\log q_2}},$$ where $\hat q_i:=\hat\Phi_J(q_i)$ for $i=1, 2$.
We only demonstrate the second inequality of (\[eq:continuity-hat-Phi\]), since the proof of the first inequality is very similar.
Let $q_1, q_2\in{\mathscr{V}}\cap(q_L, q_R]$ with $q_1<q_2$, and let $\hat q_i:=\hat\Phi_J(q_i)$, $i=1,2$. Then $\hat q_1<\hat q_2$ by the monotonicity of $\hat\Phi_J$. Furthermore, Lemma \[lem:quasi-greedy expansion-alpha-q\] gives ${\alpha}(q_1)\prec {\alpha}(q_2)$. Note by Proposition \[proposition:U(J)-V(J)\] that ${\alpha}(q_1), {\alpha}(q_2)\in{\mathbf{V}}(J)\subset X(J)$. Therefore, ${\alpha}(q_1), {\alpha}(q_2)$ can be written as ${\alpha}(q_1)=({\mathbf{c}}_i)$ and ${\alpha}(q_2)=({\mathbf{d}}_i)$ with ${\mathbf{c}}_i, {\mathbf{d}}_i\in\big\{{\mathbf{a}}, {\mathbf{a}}^+, \overline{{\mathbf{a}}}, \overline{{\mathbf{a}}^+}\big\}$ for all $i\ge 1$. In view of Figure \[fig1\], there exists $n\ge 2$ such that $$\label{eq:kd-1}
{\mathbf{c}}_1\ldots {\mathbf{c}}_{n-1}={\mathbf{d}}_1\ldots{\mathbf{d}}_{n-1}\quad\textrm{and}\quad {\mathbf{c}}_n\prec {\mathbf{d}}_n.$$ Observe that ${\alpha}(q_2)\in{\mathbf{V}}(J)$. By Proposition \[proposition:U(J)-V(J)\] it follows that $${\sigma}^{mn}({\alpha}(q_2)){\succcurlyeq}\overline{{\alpha}(q_2)}{\succcurlyeq}\overline{{\mathbf{a}}^+}{\mathbf{a}}^{\infty}{\succcurlyeq}0^m 10^{\infty},$$ which implies $$1=\sum_{i=1}^{\infty}\frac{{\alpha}_i(q_2)}{q_2^i}\ge \sum_{i=1}^{mn}\frac{{\alpha}_i(q_2)}{q_2^i}+\frac{1}{q_2^{mn+m+1}}.$$ Therefore, by (\[eq:kd-1\]) with ${\alpha}(q_1)=({\mathbf{c}}_i)$ and ${\alpha}(q_2)=({\mathbf{d}}_i)$ it follows that $$\begin{aligned}
\frac{1}{q_2^{mn+m+1}}&\le 1-\sum_{i=1}^{mn}\frac{{\alpha}_i(q_2)}{q_2^i}=\sum_{i=1}^{{\infty}}\frac{{\alpha}_i(q_1)}{q_1^i}-\sum_{i=1}^{mn}\frac{{\alpha}_i(q_2)}{q_2^i}\\
&\le \sum_{i=1}^{mn}\left(\frac{{\alpha}_i(q_2)}{q_1^i}-\frac{{\alpha}_i(q_2)}{q_2^i}\right)\\
&\le \sum_{i=1}^{\infty}\left(\frac{M}{q_1^i}-\frac{M}{q_2^i}\right)=\frac{M(q_2-q_1)}{(q_1-1)(q_2-1)}.\end{aligned}$$ Since $q_L<q_1<q_2<q_R$, we obtain $$\label{eq:kd-2}
\frac{1}{q_2^{mn}}\le \frac{M q_R^{m+1}}{(q_L-1)^2}(q_2-q_1).$$
Write $({x}_i):=\Phi_J(({\mathbf{c}}_i))$ and $(y_i):=\Phi_J(({\mathbf{d}}_i))$. Then (\[eq:kd-1\]) and Lemma \[lem:bijective map-Phi\] imply $$\label{eq:kd-3}
{x}_1\ldots {x}_{n-1}=y_1\ldots y_{n-1}\quad\textrm{and}\quad {x}_n<y_n.$$ Note that $({x}_i), (y_i)\in{\mathbf{V}}^*$. By the definition of $\hat\Phi_J$ we have $({x}_i)=\Phi_J({\alpha}(q_1))={\alpha}^*\big(\hat{\Phi}_J(q_1)\big)={\alpha}^*(\hat q_1)$, and similarly $(y_i)={\alpha}^*(\hat q_2)$. So, by (\[eq:kd-3\]) it follows that $$\begin{aligned}
\hat\Phi_J(q_2)-\hat\Phi_J(q_1)=\hat q_2-\hat q_1
&=\sum_{i=1}^{\infty}\frac{y_i}{\hat q_2^{i-1}}-\sum_{i=1}^{\infty}\frac{{x}_i}{\hat q_1^{i-1}}\\
&\le \sum_{i=1}^{n-1}\left(\frac{y_i}{\hat q_2^{i-1}}-\frac{{x}_i}{\hat q_1^{i-1}}\right)+\sum_{i=n}^{\infty}\frac{y_i}{\hat q_2^{i-1}}\\
&\le \frac{1}{\hat q_2^{n-2}}\le \frac{4}{\hat q_2^n}.\end{aligned}$$ Here the second inequality follows from the definition of the quasi-greedy expansion ${\alpha}^*(\hat q_2)=(y_i)$. This, together with (\[eq:kd-2\]), yields $$\hat\Phi_J(q_2)-\hat\Phi_J(q_1) \le 4\left(\frac{1}{q_2^{mn}}\right)^{\frac{\log \hat q_2}{m\log q_2}}\le c_2 (q_2-q_1)^{\frac{\log \hat q_2}{m\log q_2}}$$ for some constant $c_2$ independent of $q_1$ and $q_2$.
That $\hat{\Phi}_J$ is increasing and bijective follows since it is the composition of increasing and bijective maps. By Lemma \[lem:Holder continuity of hatPhi\], $\hat{\Phi}_J$ and $\hat{\Phi}_J^{-1}$ are continuous. Thus, we have proved (i). Since $q_L\not\in{\mathscr{U}}$, we have ${\mathbf{U}}(J)=\{\alpha(q): q\in {\mathscr{U}}\cap J\}$. Thus, statement (ii) is a direct consequence of Proposition \[prop:chareacterization of Phi-J\]. It remains only to establish (iii).
Take $q\in{\mathscr{V}}\cap(q_L, q_R]$. Then by Proposition \[proposition:U(J)-V(J)\] we have ${\alpha}(q)\in {\mathbf{V}}(J)\subset X(J)$. Note by Proposition \[prop:property of Uq(J)\] that ${\widetilde{\mathbf{U}}}_q(J)\subset X(J)$. Now take a sequence $({\mathbf{c}}_i)\in X(J)$ and let $(x_i):=\Phi_J(({\mathbf{c}}_i))\in X^*$. Then we have the equivalences $$\begin{aligned}
({\mathbf{c}}_i)\in{\widetilde{\mathbf{U}}}_q(J) \quad &\Longleftrightarrow \quad {\mathbf{c}}_1={\mathbf{a}}^+ \quad\mbox{and} \quad
\overline{{\alpha}(q)}\prec {\sigma}^n(({\mathbf{c}}_i))\prec {\alpha}(q)\quad\forall n\ge 0 \\
&\Longleftrightarrow \quad x_1=1 \quad\ \ \mbox{and} \quad
\Phi_J\left(\overline{{\alpha}(q)}\right)\prec {\sigma}^n(({x}_i))\prec \Phi_J({\alpha}(q))\quad\forall n\ge 0 \\
&\Longleftrightarrow \quad x_1=1 \quad\ \ \mbox{and} \quad
(1-{\alpha}_i^*(\hat q))\prec {\sigma}^n(({x}_i))\prec {\alpha}^*(\hat q)\quad\forall n\ge 0 \\
&\Longleftrightarrow \quad x_1=1 \quad\ \ \mbox{and} \quad (x_i)\in {\widetilde{\mathbf{U}}}^*_{\hat q}\end{aligned}$$ Here the second equivalence follows by Lemma \[lem:description-Phi-J\] with $({\mathbf{d}}_i)=\alpha(q)$, and the third equivalence follows since ${\alpha}^*(\hat q)=\Phi_J({\alpha}(q))$. As a result, $\Phi_J({\widetilde{\mathbf{U}}}_q(J))=\big\{(x_i)\in{\widetilde{\mathbf{U}}^*}_{\hat q}: x_1=1\big\}$.
For the entropy statement we observe that the map $$\Phi_J: {\widetilde{\mathbf{U}}}_q(J) {\rightarrow}{\widetilde{\mathbf{U}}}^*_{\hat q}(1):=\big\{(x_i)\in{\widetilde{\mathbf{U}}}^*_{\hat q}: x_1=1\big\};\qquad ({\mathbf{c}}_i)\mapsto (\phi({\mathbf{c}}_i))$$ is a bijective $m$-block map. Furthermore, ${\widetilde{\mathbf{U}}}^*_{\hat q}$ is the disjoint union of ${\widetilde{\mathbf{U}}}^*_{\hat q}(1)$ with its reflection $\big\{(1-x_i): (x_i)\in{\widetilde{\mathbf{U}}}_{\hat q}^*(1)\big\}$. This implies $h({\widetilde{\mathbf{U}}}_q(J))=h({\widetilde{\mathbf{U}}}^*_{\hat q})/m$.
Proofs of Theorems \[main1\], \[main2\] and \[main-b\] {#sec: proofs of th-1-2}
======================================================
Our first goal is to prove Theorem \[main1\]. We begin with a useful lemma.
\[lem:dense-plateaus\] Let $J=J_{\mathbf i}=[q_L,q_R]$ be a relative entropy plateau. Then the union $\bigcup_{j=1}^\infty J_{\mathbf{i}j}$ is dense in $(q_c(J),q_R]$.
Recall from [@AlcarazBarrera-Baker-Kong-2016] that the entropy plateaus $J_j^*$, $j\in{\ensuremath{\mathbb{N}}}$, are dense in $(q_{KL}^*,2]$. Note that we may order the intervals $J_{\mathbf{i}j}$, $j\in{\ensuremath{\mathbb{N}}}$, so that $\hat{\Phi}_J({\mathscr{V}}\cap J_{\mathbf{i}j})={\mathscr{V}}^*\cap J_j^*$ for each $j$. Hence, the result follows from the continuity of $\hat{\Phi}_J^{-1}$ (cf. Lemma \[lem:Holder continuity of hatPhi\]).
For $M=1$ and $q\in(1,2]$ we denote the left and right local dimensional functions by $$f_-^*(q):=\lim_{\delta{\rightarrow}0}\dim_H({\mathscr{U}}^*\cap(q-\delta, q)) \qquad \mbox{and} \qquad
f_+^*(q):=\lim_{\delta{\rightarrow}0}\dim_H({\mathscr{U}}^*\cap(q, q+\delta)),$$ respectively.
\[lem:f-minus-bridge\] Let $J=[q_L,q_R]$ be a relative plateau generated by a word $a_1\dots a_m$, and $q\in\overline{{\mathscr{U}}}\cap(q_L,q_R]$. Then $$f_-(q)=\frac{\log \hat{q}}{m\log q}f_-^*(\hat{q}),$$ where $\hat{q}:=\hat{\Phi}_J(q)$.
By the assumption on $q$, we have that $q\in{\mathscr{V}}$ and there is a sequence $(p_i)$ in ${\mathscr{V}}\cap J$ such that $p_i<q$ for each $i$, and $p_i\nearrow q$ (cf. [@Vries-Komornik-Loreti-2016]). Let $\hat{p}_i:=\hat{\Phi}_J(p_i)$; then $\hat{p}_i<\hat{q}$ for each $i$, and $\hat{p}_i\nearrow \hat{q}$.
Observe from Lemma \[lem:Holder continuity of hatPhi\] that for each $i$, $\hat{\Phi}_J$ is Hölder continuous on $[p_i,q]$ with exponent $\log\hat{p}_i/(m\log q)$, and $\hat{\Phi}_J^{-1}$ is Hölder continuous on $[\hat{p}_i,\hat{q}]$ with exponent $m\log p_i/\log\hat{q}$. It follows on the one hand that $$\dim_H({\mathscr{U}}^*\cap(\hat{p}_i,\hat{q}))=\dim_H \hat{\Phi}_J({\mathscr{U}}\cap(p_i,q))
\leq \frac{m\log q}{\log \hat{p}_i}\dim_H({\mathscr{U}}\cap(p_i,q)),$$ so letting $i\to\infty$ we obtain $$f_-^*(\hat{q})\leq \frac{m\log q}{\log\hat{q}}f_-(q).$$ On the other hand, $$\dim_H({\mathscr{U}}\cap(p_i,q))=\dim_H \hat{\Phi}_J^{-1}\big({\mathscr{U}}^*\cap(\hat{p}_i,\hat{q})\big)
\leq \frac{\log\hat{q}}{m\log p_i}\dim_H\big({\mathscr{U}}^*\cap(\hat{p}_i,\hat{q})\big),$$ so letting $i\to\infty$ gives $$f_-(q)\leq \frac{\log\hat{q}}{m\log q}f_-^*(\hat{q}).$$ Hence, the lemma follows.
For the right local dimensional function $f_+$ we have a similar relationship, but with a subtle difference for the domain of $q$.
\[lem:f-plus-bridge\] Let $J=[q_L,q_R]$ be a relative plateau generated by a word $a_1\dots a_m$, and $q\in\overline{{\mathscr{U}}}\cap(q_L,q_R)$. Then $$f_+(q)=\frac{\log \hat{q}}{m\log q}f_+^*(\hat{q}),$$ where $\hat{q}:=\hat{\Phi}_J(q)$.
The proof is analogous to that of Lemma \[lem:f-minus-bridge\]. If $q\in{\mathscr{U}}$, then we can approximate $q$ from the right by a sequence of points $(r_i)$ from ${\mathscr{V}}\cap J$, and use the Hölder properties of $\hat{\Phi}_J$ and $\hat{\Phi}_J^{-1}$ in much the same way as before. On the other hand, if $q\in\overline{{\mathscr{U}}}\backslash{\mathscr{U}}$, then $q$ is a left endpoint of some relative plateau inside $J$. In this case, $\hat{q}$ is the left endpoint of an entropy plateau in $(1,2]$, and we have $f_+(q)=f_+^*(\hat{q})=0$, so the identity in the lemma holds trivially.
Motivated by [@Allaart-Baker-Kong-17] we introduce the *left and right bifurcation sets* ${\mathscr{B}}_L$ and ${\mathscr{B}}_R$, defined by $$\label{eq:left-right-bifurcation set}
\begin{split}
&{\mathscr{B}}_L:={\left\{q\in(1,M+1]: h({\mathbf{U}}_p)\neq h({\mathbf{U}}_q)\quad\textrm{for all } p<q\right\}},\\
&{\mathscr{B}}_R:={\left\{q\in(1, M+1]: h({\mathbf{U}}_r)\neq h({\mathbf{U}}_q)\quad \textrm{for all }r>q\right\}}.
\end{split}$$ Then ${\mathscr{B}}\subset{\mathscr{B}}_L$ and ${\mathscr{B}}\subset{\mathscr{B}}_R$. Furthermore, any $q\in{\mathscr{B}}_L\setminus{\mathscr{B}}$ is a left endpoint of an entropy plateau, and any $q\in{\mathscr{B}}_R\setminus{\mathscr{B}}$ is a right endpoint of an entropy plateau. As usual, when $M=1$ we write ${\mathscr{B}}^*_L={\mathscr{B}}_L$ and ${\mathscr{B}}^*_R={\mathscr{B}}_R$.
Below, we will need the following extension of Proposition \[prop:local dimension-B\], which follows from the main results of [@Allaart-Baker-Kong-17].
\[prop:left-and-right-bifurcation-results\]
(i) If $q\in{\mathscr{B}}_L$, then $f_-(q)=\dim_H {\ensuremath{\mathcal{U}}}_q$.
(ii) If $q\in{\mathscr{B}}_R$, then $f_+(q)=\dim_H {\ensuremath{\mathcal{U}}}_q$.
Note by (\[eq:dimension-local-u\]) that for any $q\in{ {\mathscr{C}}}_{\infty}$ we have $f(q)=f_-(q)=f_+(q)=0$. Suppose $q\in{ {\mathscr{C}}}_0$, i.e., $q$ is a de Vries-Komornik number. Then $f_-(q)=0$ since ${\mathscr{U}}\cap(q-{\varepsilon},q)=\emptyset$ for sufficiently small ${\varepsilon}>0$. Furthermore, $q=q_c(J)$ for some relative plateau $J$, so $\hat{\Phi}_J(q)=q_{KL}^*$. Since $f_+^*(q_{KL}^*)=0$ (see [@Allaart-Baker-Kong-17 Theorem 2]) and $q\in\overline{{\mathscr{U}}}$, it follows by Lemma \[lem:f-plus-bridge\] that $f_+(q)=0$. Thus, the proof will be complete once we establish (ii).
Consider first $f_-$. Take $q\in\overline{{\mathscr{U}}}\backslash{ {\mathscr{C}}}$, and let $J=[q_L,q_R]$ be the smallest relative plateau such that $q\in(q_L,q_R]$. If $J=[1,M+1]$, then $q\in{\mathscr{B}}_L$ and by Proposition \[prop:left-and-right-bifurcation-results\](i), $$f_-(q)=\dim_H {\ensuremath{\mathcal{U}}}_q=\frac{h({\mathbf{U}}_q)}{\log q}=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}>0.$$
Otherwise, put $\hat{q}:=\hat{\Phi}_J(q)$. Then $\hat{q}\in{\mathscr{B}}_L^*$, so by Proposition \[prop:left-and-right-bifurcation-results\](i) it follows that $$f_-^*(\hat{q})=\dim_H {\ensuremath{\mathcal{U}}}_{\hat q}^*=\frac{h({\mathbf{U}}_{\hat q}^*)}{\log \hat{q}}=\frac{h({\widetilde{\mathbf{U}}^*}_{\hat q})}{\log\hat{q}}>0.$$ Hence, Lemma \[lem:f-minus-bridge\] along with Proposition \[th:characterization of hat-Phi-J\](iii) gives $$f_-(q)=\frac{h({\widetilde{\mathbf{U}}^*}_{\hat q})}{m\log q}=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}>0.$$
Consider next $f_+$. Take again $q\in\overline{{\mathscr{U}}}\backslash{ {\mathscr{C}}}$. If $q\in\overline{{\mathscr{U}}}\backslash{\mathscr{U}}$, then $q$ is a left endpoint of a relative plateau and $f_+(q)=0$. So assume $q\in{\mathscr{U}}\backslash{ {\mathscr{C}}}$. Let $J=[q_L,q_R]$ now be the smallest relative plateau such that $q\in(q_L,q_R)$. If $J=[1,M+1]$, then $q\in{\mathscr{B}}_R$ and by Proposition \[prop:left-and-right-bifurcation-results\](ii), $$f_+(q)=\dim_H {\ensuremath{\mathcal{U}}}_q=\frac{h({\mathbf{U}}_q)}{\log q}=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}>0.$$ Otherwise, put $\hat{q}:=\hat{\Phi}_J(q)$. Then $\hat{q}\in{\mathscr{B}}_R^*$, and using Proposition \[prop:left-and-right-bifurcation-results\](ii) and Lemma \[lem:f-plus-bridge\] it follows in the same way as above that $$f_+(q)=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}>0.$$
The statement about $f(q)$ is a direct consequence of the statements about $f_-$ and $f_+$.
We next prepare to prove Theorem \[main2\]. Fix a relative plateau $J=[q_L, q_R]$ generated by ${\mathbf{a}}=a_1\ldots a_m$. Recall from Section \[s1\] that the bases $q_G(J), q_F(J)\in J$ satisfy $${\alpha}(q_G(J))=\left({\mathbf{a}}^+\overline{{\mathbf{a}}^+}\right)^{\infty}\quad \textrm{and} \quad {\alpha}(q_F(J))=\left({\mathbf{a}}^+\overline{{\mathbf{a}}}\overline{{\mathbf{a}}^+}{\mathbf{a}}\right)^{\infty}.$$ Furthermore, the de Vries-Komornik number $q_c(J)=\min({\mathscr{U}}\cap J)$ satisfies $${\alpha}(q_c(J))={\mathbf{a}}^+\overline{{\mathbf{a}}}\overline{{\mathbf{a}}^+}{\mathbf{a}}^+\overline{{\mathbf{a}}^+}{\mathbf{a}}{\mathbf{a}}^+\overline{{\mathbf{a}}}\cdots.$$ By Lemma \[lem:characterization of V-U\], the bases $q_G(J), q_F(J)$ and $q_c(J)$ all belong to ${\mathscr{V}}\cap(q_L, q_R]$, so we may define their image bases in ${\mathscr{V}}^*$ by $$\hat q_G:=\hat\Phi_J(q_G(J)), \quad\hat q_F:=\hat\Phi_J(q_F(J))\quad\textrm{and}\quad \hat q_c:=\hat\Phi_J(q_c(J)).$$ The quasi-greedy expansions of these bases are given by $${\alpha}^*(\hat q_G)=(10)^{\infty},\quad {\alpha}^*(\hat q_F)=(1100)^{\infty},\quad\textrm{and}\quad {\alpha}^*(\hat q_c)=11010011\;00101101\cdots.$$ We have $\hat q_G={(1+\sqrt{5})/2}\approx 1.61803, \hat q_F\approx 1.75488$ and $\hat q_c\approx 1.78723$. Note that $\hat{q}_c$ is simply the Komornik-Loreti constant $q_{KL}^*$. The following result is due to Glendinning and Sidorov [@Glendinning_Sidorov_2001] and Komornik et al. [@Komornik-Kong-Li-17]; see also [@Allaart-Kong-2018].
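The numerical values quoted above can be recovered directly from the periodic quasi-greedy expansions by solving $1=\sum_{i\ge1}{\alpha}_i(q)/q^i$. Below is a short Python sketch (ours, for illustration; restricted to $M=1$ and bases in $(1,2]$) that does this by bisection, using that the series is strictly decreasing in $q$:

```python
def base_from_alpha(period, lo=1.0001, hi=2.0, tol=1e-12):
    """Solve 1 = sum_{i>=1} alpha_i / q^i for the periodic expansion
    alpha = period^infinity, by bisection (the sum is decreasing in q)."""
    digits = [int(c) for c in period]
    p = len(digits)

    def f(q):
        # closed form of the periodic sum: (sum_i d_i q^{p-i}) / (q^p - 1)
        num = sum(d * q ** (p - 1 - i) for i, d in enumerate(digits))
        return num / (q ** p - 1)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With period `10` this returns the golden ratio $\hat q_G\approx 1.61803$, and with period `1100` it returns $\hat q_F\approx 1.75488$, matching the values above.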
\[prop:unique expansion-two digits case\] Let $q\in(1,2]$. Then the entropy function $$H: q\mapsto h({\widetilde{\mathbf{U}}}_q^*)$$ is a Devil’s staircase, i.e., $H$ is continuous, non-decreasing and locally constant almost everywhere on $(1,2]$.
1. If $1<q\le \hat q_G$, then ${\widetilde{\mathbf{U}}}_q^*=\emptyset$.
2. If $\hat q_G<q\le \hat q_F$, then ${\widetilde{\mathbf{U}}}_q^*=\big\{(01)^{\infty}, (10)^{\infty}\big\}$.
3. If $\hat q_F<q< \hat q_{c}$, then ${\widetilde{\mathbf{U}}}_q^*$ is countably infinite.
4. If $q=\hat q_{c}$, then ${\widetilde{\mathbf{U}}}_q^*$ is uncountable but $h({\widetilde{\mathbf{U}}}_q^*)=0$.
5. If $\hat q_c<q\le 2$, then $h({\widetilde{\mathbf{U}}}_q^*)>0$.
Recall from Proposition \[th:characterization of hat-Phi-J\] (iii) that for each $q\in{\mathscr{V}}\cap(q_L, q_R]$ we have $$\label{eq:jh-4}
{\widetilde{\mathbf{U}}}_q(J)=\Phi_J^{-1}\left(\big\{(x_i)\in{\widetilde{\mathbf{U}}^*}_{\hat q}: x_1=1\big\}\right)\quad\textrm{and}\quad h({\widetilde{\mathbf{U}}}_q(J))=\frac{h({\widetilde{\mathbf{U}}^*}_{\hat q})}{m},$$ where $\hat{q}:=\hat{\Phi}_J(q)$. Since ${\mathscr{B}}^*\subset{\mathscr{U}}^*$, the function $q\mapsto h({\widetilde{\mathbf{U}}^*}_q)$ is constant on each connected component of $(1,2]\backslash {\mathscr{U}}^*$. Recalling from Proposition \[th:characterization of hat-Phi-J\] (ii) that $\hat{\Phi}_J({\mathscr{U}}\cap J)={\mathscr{U}}^*$, it follows by that the function $H_J: q\mapsto h({\widetilde{\mathbf{U}}}_q(J))$ is constant on each connected component of $(q_L,q_R]\backslash ({\mathscr{U}}\cap J)$. Since ${\mathscr{U}}$ is Lebesgue null, this implies that $H_J$ is almost everywhere locally constant on $J$. That $H_J$ is also continuous follows since ${\mathscr{U}}\cap J$ has no isolated points, and the restriction of $H_J$ to ${\mathscr{U}}\cap J$ is the composition of the map $q\mapsto h({\widetilde{\mathbf{U}}^*}_q)$ with $\hat{\Phi}_J$; the former is continuous by Proposition \[prop:unique expansion-two digits case\], the latter by Lemma \[lem:Holder continuity of hatPhi\]. Therefore, the entropy function $H_J$ is a Devil’s staircase.
Statements (i)-(v) of Theorem \[main2\] now follow from the corresponding statements of Proposition \[prop:unique expansion-two digits case\]. For example, if $q_L<q\le q_G(J)$, then by (\[eq:jh-4\]) it follows that $${\widetilde{\mathbf{U}}}_q(J)\subset{\widetilde{\mathbf{U}}}_{q_G(J)}(J)=\Phi_J^{-1}\left(\big\{(x_i)\in{\widetilde{\mathbf{U}}^*}_{\hat q_G}: x_1=1\big\}\right)=\emptyset,$$ where the last equality follows from Proposition \[prop:unique expansion-two digits case\] (i).
Similarly, for (ii) we take $q\in(q_G(J), q_F(J)]$. Then by (\[eq:jh-4\]) and Proposition \[prop:unique expansion-two digits case\] (ii) it follows that $${\widetilde{\mathbf{U}}}_q(J)\subset{\widetilde{\mathbf{U}}}_{q_F(J)}(J)=\Phi_J^{-1}\left(\big\{(x_i)\in{\widetilde{\mathbf{U}}^*}_{\hat q_F}: x_1=1\big\}\right)=\Phi_J^{-1}(\big\{(10)^{\infty}\big\})=\big\{({\mathbf{a}}^+\overline{{\mathbf{a}}^+})^{\infty}\big\}.$$ Vice versa, one checks easily using that $\big\{({\mathbf{a}}^+\overline{{\mathbf{a}}^+})^{\infty}\big\}\subset{\widetilde{\mathbf{U}}}_q(J)$.
For (iii), we take $q\in (q_F(J),q_c(J))$. Then $$\left\{\big({\mathbf{a}}^+\overline{{\mathbf{a}}^+}\big)^k\big({\mathbf{a}}^+\overline{{\mathbf{a}}}\overline{{\mathbf{a}}^+}{\mathbf{a}}\big)^{\infty}: k\in{\ensuremath{\mathbb{N}}}\right\}\subset {\widetilde{\mathbf{U}}}_q(J).$$ On the other hand, we can find a sequence $(q_n)$ in ${\mathscr{V}}$ that converges from the left to $q_c(J)$: if $\alpha(q_c(J))=\theta_1\theta_2\dots$, we can take $q_n$ with $\alpha(q_n)=(\theta_1\dots \theta_{2^nm}^-)^{\infty}$. Then for large enough $n$, $q<q_n$ and $\hat{q}_n:=\hat{\Phi}_J(q_n)<q_{KL}^*$, so ${\widetilde{\mathbf{U}}}_q(J)$ is countable by Proposition \[prop:unique expansion-two digits case\] (iii) and .
Statement (iv) is immediate from and Proposition \[prop:unique expansion-two digits case\](iv), since $q_c(J)\in{\mathscr{V}}$.
For (v) we first note that $q_c(J)\in{\mathscr{V}}\cap(q_L, q_R]$ and $\hat q_c\in{\mathscr{V}}^*$. Furthermore, there exists a sequence $(r_i)$ in ${\mathscr{V}}\cap(q_L, q_R]$ such that $r_i\searrow q_c(J)$. This follows from Lemma \[lem:dense-plateaus\], since the endpoints of relative plateaus lie in ${\mathscr{V}}$. Accordingly, the image sequence $(\hat r_i)$ in ${\mathscr{V}}^*$ satisfies $\hat r_i\searrow \hat q_c$, where $\hat r_i=\hat \Phi_J(r_i)$. So, for any $q\in(q_c(J), q_R]$ there exists $r_i\in{\mathscr{V}}\cap (q_c(J), q)$ such that $${\widetilde{\mathbf{U}}}_q(J)\supset{\widetilde{\mathbf{U}}}_{r_i}(J)\quad\textrm{and}\quad h({\widetilde{\mathbf{U}}}_{r_i}(J))=\frac{h({\widetilde{\mathbf{U}}^*}_{\hat r_i})}{m}>0.$$ This proves (v).
Take $q\in(q_L, q_R]$. Then ${\alpha}_1(q)\ldots {\alpha}_m(q)=a_1\ldots a_m^+$. Note that ${\alpha}(q_L)=(a_1\ldots a_m)^{\infty}$. Then by the definitions of ${\widetilde{\mathbf{U}}}_q$ and ${\widetilde{\mathbf{U}}}_{q_L}$ it follows that ${\widetilde{\mathbf{U}}}_q(J)\subset {\widetilde{\mathbf{U}}}_q\setminus{\widetilde{\mathbf{U}}}_{q_L}$. Furthermore, any sequence $(x_i)\in{\widetilde{\mathbf{U}}}_q\setminus{\widetilde{\mathbf{U}}}_{q_L}$ or its reflection $\overline{(x_i)}$ has a tail sequence in ${\widetilde{\mathbf{U}}}_q(J)\cup\big\{{\mathbf{a}}^{\infty}\big\}$. Therefore, $$\dim_H({\widetilde{\mathbf{U}}}_q\setminus{\widetilde{\mathbf{U}}}_{q_L})= \dim_H{\widetilde{\mathbf{U}}}_q(J).$$ Hence, the result follows from Theorem \[main2\].
Fix $q_0\in\overline{{\mathscr{U}}}$. If $q_0\in{ {\mathscr{C}}}_{\infty}$, the same argument based on that we used to prove $f(q_0)=0$ shows also that $f(q)\to 0$ as $q\to q_0$. Hence $f$ is continuous at $q_0$. If $q_0\in{ {\mathscr{C}}}_0$, then $q_0=q_c(J)$ for some relative plateau $J$. Since ${\widetilde{\mathbf{U}}}_q(I)\subset {\widetilde{\mathbf{U}}}_q(J)$ whenever $I\subset J$, Theorem \[main1\] implies that $$f(q)\leq \frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}, \qquad\mbox{for all $q\in J, q\neq q_0$}.$$ But by Theorem \[main2\], $h({\widetilde{\mathbf{U}}}_q(J))\to 0$ as $q\to q_0=q_c(J)$. Hence, $f(q)\to 0=f(q_0)$. This shows that $f$ is continuous on ${ {\mathscr{C}}}$.
Now suppose $q_0\in\overline{{\mathscr{U}}}\backslash{ {\mathscr{C}}}$. Then, using Lemma \[lem:dense-plateaus\], there is a sequence of relative plateaus $[p_L(i),p_R(i)]$ such that $p_L(i)\nearrow q_0$ as $i\to\infty$. Each of these plateaus contains a point $q_i\in{ {\mathscr{C}}}$ (in fact, infinitely many), so that $q_i\nearrow q_0$. By Theorem \[main1\], we obtain $f(q_0)>0=\lim_{i\to \infty} f(q_i)$. Therefore, $f$ is discontinuous at $q_0$. The corresponding statements for $f_-$ and $f_+$ follow in the same way.
Finally, we prove Theorem \[main-b\].
\(i) Let $J=J_{\mathbf i}=[q_L, q_R]$ be a relative plateau generated by ${\mathbf{a}}=a_1\ldots a_m$. We show that the next level relative plateaus $J_{\mathbf{i}j}$, $j=1,2,\dots$ are exactly the maximal intervals on which $h({\widetilde{\mathbf{U}}}_q(J))$ is positive and constant; this, along with Theorem \[main2\], will imply (i). Fix $j\in{\ensuremath{\mathbb{N}}}$, and write $I:=J_{\mathbf{i}j}=[p_L,p_R]$. Then $p_L,p_R\in{\mathscr{V}}$, so we may put $\hat{p}_L:=\hat{\Phi}_J(p_L)$ and $\hat{p}_R:=\hat{\Phi}_J(p_R)$. Then $\hat{I}:=[\hat{p}_L,\hat{p}_R]$ is an entropy plateau in $(1,2]$, and so $h({\widetilde{\mathbf{U}}}_{\hat q}^*)$ is positive and constant on $\hat{I}$. By Proposition \[th:characterization of hat-Phi-J\](iii), it follows that $h({\widetilde{\mathbf{U}}}_q(J))$ is positive and constant on $I$.
By Lemma \[lem:dense-plateaus\] the union $\bigcup_{j\in{\ensuremath{\mathbb{N}}}}J_{\mathbf{i}j}$ is dense in $(q_c(J),q_R]$. As a result, $I$ is a [*maximal*]{} interval on which $h({\widetilde{\mathbf{U}}}_q(J))$ is constant.
\(ii) Since $\bigcup_{j\in{\ensuremath{\mathbb{N}}}}J_{\mathbf{i}j}$ is dense in $(q_c(J),q_R]$, each $q\in{\mathscr{B}}(J)$ is an accumulation point of the set of endpoints of the intervals $J_{\mathbf{i}j}$. Since these endpoints lie in ${\mathscr{V}}$ and ${\mathscr{V}}$ is closed, it follows that ${\mathscr{B}}(J)\subset {\mathscr{V}}$. Hence, $\hat{\Phi}_J(q)$ is well defined for all $q\in{\mathscr{B}}(J)$. It now follows immediately from part (i) that $$\hat{\Phi}_J({\mathscr{B}}(J))={\mathscr{B}}^*.
\label{eq:bifurcation-bridge}$$ Since ${\mathscr{B}}^*\subset {\mathscr{U}}^*$, it follows from Proposition \[th:characterization of hat-Phi-J\](ii) that ${\mathscr{B}}(J)\subset {\mathscr{U}}\cap J$.
\(iii) That ${\mathscr{B}}(J)$ is Lebesgue null is now obvious from (ii), since ${\mathscr{U}}$ is Lebesgue null.
\(iv) Note by (\[eq:dimension-local-u\]) that $$\dim_H({\mathscr{U}}\cap J)=\frac{\log 2}{m\log q_R}.$$ Since ${\mathscr{B}}(J)\subset{\mathscr{U}}\cap J$, it therefore suffices to prove $$\label{eq:k15-1}
\dim_H{\mathscr{B}}(J)\ge \frac{\log 2}{m\log q_R}.$$ Observe that $2=\hat\Phi_J(q_R)\in{\mathscr{B}}^*$. Furthermore, the proof of [@AlcarazBarrera-Baker-Kong-2016 Theorem 3] shows that $\dim_H({\mathscr{B}}^*\cap[2-\eta,2])=1$ for every $\eta>0$. Given ${\varepsilon}>0$, we can choose a point $q_0\in {\mathscr{V}}\cap(q_R-{\varepsilon},q_R)$; let $\hat{q}_0:=\hat{\Phi}_J(q_0)$. By Lemma \[lem:Holder continuity of hatPhi\], $\hat{\Phi}_J$ is Hölder continuous with exponent $\log \hat{q}_0/(m\log q_R)$ on $[q_0,q_R]$, and so, using (\[eq:bifurcation-bridge\]), $$1=\dim_H({\mathscr{B}}^*\cap[\hat{q}_0,2])=\dim_H \hat{\Phi}_J({\mathscr{B}}(J)\cap[q_0,q_R])\leq \frac{m\log q_R}{\log \hat{q}_0}\dim_H {\mathscr{B}}(J).
$$ Letting ${\varepsilon}\to 0$, $\hat{q}_0\to 2$ and we obtain (\[eq:k15-1\]), as desired.
\(v) By (i) and the countable stability of Hausdorff dimension, $$\dim_H\big(({\mathscr{U}}\cap J)\backslash {\mathscr{B}}(J)\big)=\sup_{j\in{\ensuremath{\mathbb{N}}}} \dim_H({\mathscr{U}}\cap J_{\mathbf{i}j}).$$ If $J_{\mathbf{i}j}=[p_L,p_R]$ is generated by the block $b_1\dots b_l$, then $$\dim_H({\mathscr{U}}\cap J_{\mathbf{i}j})=\frac{\log 2}{l\log p_R}.
\label{eq:relative-plateau-dimension}$$ Furthermore, $b_1\dots b_l$ must be a concatenation of words from $\mathcal{L}=\big\{{\mathbf{a}},{\mathbf{a}}^+,\overline{{\mathbf{a}}},\overline{{\mathbf{a}}^+}\big\}$, so $l$ is a multiple of $m$. Since $b_1\dots b_l$ is admissible and $\alpha(p_L)\succ\alpha(q_c(J))$, it follows that $l\geq 3m$. (See Figure \[fig1\].) Moreover, the only relative plateau among the $J_{\mathbf{i}j}$ with $l=3m$ is the one with generating word $b_1\dots b_l={\mathbf{a}}^+\overline{{\mathbf{a}}}\overline{{\mathbf{a}}^+}$, whose right endpoint is $p_0$.
It remains to check that this plateau maximizes the expression in . To this end, take any other relative plateau $[p_L,p_R]\subset J$ generated by a block of length $l=km$. If $p_R\geq p_0$, then $l\log p_R\geq 3m\log p_0$. On the other hand, suppose $p_R<p_0$. Then $\alpha(q_c(J))\prec \alpha(p_L)\prec \big({\mathbf{a}}^+\overline{{\mathbf{a}}}\overline{{\mathbf{a}}^+}\big)^{\infty}$, and since $\alpha(p_L)$ must correspond to an infinite path in the labeled digraph $\mathcal{G}=(G,\mathcal{L})$ from Figure \[fig1\], this is only possible when $k\geq 5$. In [@Allaart-Baker-Kong-17] it was observed that $q_{KL}\geq (M+2)/2$. Estimating $p_R$ below by $q_{KL}$ and $p_0$ above by $M+1$, we thus obtain for all $M\geq 2$, $$l\log p_R \geq 5m\log q_{KL} \geq 5m\log\left(\frac{M+2}{2}\right) \geq 3m\log(M+1)>3m\log p_0,$$ where we used the algebraic inequality $(M+2)^5\geq 32(M+1)^3$, valid for $M\geq 2$. For the case $M=1$ we can use the better estimate $q_{KL}>1.78$, giving $5\log q_{KL}>2.8>3\log 2>3\log p_0$, where we have used the natural logarithm. Thus, in all cases, $l\log p_R\geq 3m\log p_0$, as was to be shown.
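The two elementary numerical estimates used in the last step are easy to confirm by machine; the following sketch is a sanity check only, not part of the argument:

```python
from math import log

# (M+2)^5 >= 32(M+1)^3 for all M >= 2; check a large range of integers.
ineq_holds = all((M + 2)**5 >= 32 * (M + 1)**3 for M in range(2, 10000))

# M = 1 case: 5 log(1.78) > 3 log 2, using the estimate q_KL > 1.78
# (natural logarithms, as in the text).
m1_case = 5 * log(1.78) > 3 * log(2)
```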
Note by (\[eq:bifurcation-bridge\]) and Proposition \[th:characterization of hat-Phi-J\](i) that the [relative bifurcation sets ${\mathscr{B}}(J_{\mathbf i}): \mathbf{i}\in\{1,2,\dots\}^n$]{}, $n\in{\ensuremath{\mathbb{N}}}$, are mutually homeomorphic.
To end this section, we illustrate how Theorem \[main1\] can be combined with the entropy “bridge" of Proposition \[th:characterization of hat-Phi-J\](iii) to compute $f(q)$ explicitly at some special points.
\[ex:endpoints\] Let $J=[p_L,p_R]$ be a relative plateau generated by the word ${\mathbf{a}}=a_1\dots a_m$. For any integer $k\geq 3$, let $[q_L,q_R]$ be the relative plateau generated by the admissible word $\mathbf{b}:={\mathbf{a}}^+\overline{{\mathbf{a}}}^{k-2}\overline{{\mathbf{a}}^+}$. Then $[q_L,q_R]\subset J$, and $J$ is the parent interval of $[q_L,q_R]$. Note that $q_L\in{\mathscr{V}}$. Hence, by Theorem \[main1\] and Proposition \[th:characterization of hat-Phi-J\](iii), $$f(q_L)=f_-(q_L)=\frac{h\big({\widetilde{\mathbf{U}}}_{q_L}(J)\big)}{\log q_L}=\frac{h\big({\widetilde{\mathbf{U}}}_{\hat{q}_L}^*\big)}{m\log q_L},$$ where $\hat{q}_L:=\hat{\Phi}_J(q_L)$. Note that $$\alpha^*(\hat{q}_L)=\Phi_J(\alpha(q_L))=\Phi_J\left(\big({\mathbf{a}}^+\overline{{\mathbf{a}}}^{k-2}\overline{{\mathbf{a}}^+}\big)^{\infty}\right)=(1^{k-1}0)^{\infty}.$$ Define the sets $${\widetilde{\mathbf{V}}}_{\hat{q}}^*:=\big\{(x_i)\in\{0,1\}^{\ensuremath{\mathbb{N}}}: \overline{{\alpha}^*(\hat{q})}\preceq{\sigma}^n((x_i))\preceq {\alpha}^*(\hat{q})\ \forall n\ge 0\big\}, \qquad \hat{q}\in(1,2].$$ It is well known (see [@Komornik-Kong-Li-17] or [@Allaart-Kong-2018]) that $h\big({\widetilde{\mathbf{U}}}_{\hat{q}}^*\big)=h\big({\widetilde{\mathbf{V}}}_{\hat{q}}^*\big)$. Moreover, ${\widetilde{\mathbf{V}}}_{\hat{q}_L}^*$ is a subshift of finite type and it consists of precisely those sequences in $\{0,1\}^{\ensuremath{\mathbb{N}}}$ which do not contain the word $1^{k}$ or $0^{k}$. A standard argument (see [@Lind_Marcus_1995] or [@Allaart-Baker-Kong-17 Lemma 4.2]) now shows that $h\big({\widetilde{\mathbf{V}}}_{\hat{q}_L}^*\big)=\log\varphi_{k-1}$, where for each $j\in{\ensuremath{\mathbb{N}}}$, $\varphi_j$ is the unique root in $(1,2)$ of $1+x+\dots+x^{j-1}=x^j$. Therefore, $$f(q_L)=f_-(q_L)=\frac{\log\varphi_{k-1}}{m\log q_L}.$$ Of course, $f_+(q_L)=0$. 
Similarly, since $h\big({\widetilde{\mathbf{U}}}_q(J)\big)$ is constant on $[q_L,q_R]$, Theorem \[main1\] gives $$f(q_R)=f_+(q_R)=\frac{\log\varphi_{k-1}}{m\log q_R}.$$ On the other hand, by , $$f_-(q_R)=\frac{\log 2}{mk\log q_R},$$ since the generating word $\mathbf{b}$ of $[q_L,q_R]$ has length $mk$. Observe that $f_-(q_R)<f_+(q_R)$. This last inequality holds generally, for any relative plateau $[q_L,q_R]$ in $J$: If $[q_L,q_R]$ has generating block $\mathbf{b}$ of length $l$, then $l=mk$ for some $k\in{\ensuremath{\mathbb{N}}}$. Again putting $\hat{q}_L:=\hat{\Phi}_J(q_L)$, Lemma 3.1(ii) in [@Allaart-Baker-Kong-17] gives $$h\big({\widetilde{\mathbf{V}}}_{\hat{q}_L}^*\big)>\frac{\log 2}{k},$$ and so $$f_-(q_R)=\frac{\log 2}{mk\log q_R}<\frac{h\big({\widetilde{\mathbf{V}}}_{\hat{q}_L}^*\big)}{m\log q_R}=\frac{h\big({\widetilde{\mathbf{V}}}_{\hat{q}_R}^*\big)}{m\log q_R}=f_+(q_R).$$ (There is one exception: If $[q_L,q_R]$ is a first-level relative plateau (i.e. an entropy plateau) generated by ${\mathbf{a}}=a_1\dots a_m$, then the parent interval $J$ is $J_\emptyset=[1,M+1]$. In this case, there is no map $\Phi_J$ relating $h({\widetilde{\mathbf{U}}}_{q_L}(J))$ to the alphabet $\{0,1\}$. Instead, $$f_+(q_R)=\frac{h({\widetilde{\mathbf{U}}}_{q_R})}{\log q_R}=\frac{h({\widetilde{\mathbf{U}}}_{q_L})}{\log q_R}, \qquad\mbox{and} \qquad
f_-(q_R)=\frac{\log 2}{m\log q_R}.$$ As shown in [@Allaart-Baker-Kong-17 Lemma 3.1(ii)], these two quantities are equal if (and only if) $M=2j+1\geq 3$, and ${\mathbf{a}}=a_1:=j+1$.)
The above procedure generalizes to other relative plateaus: ${\widetilde{\mathbf{V}}}_{\hat{q}_L}^*$ is always a subshift of finite type of $\{0,1\}^{\ensuremath{\mathbb{N}}}$, so its topological entropy can be calculated, numerically at least, by writing down the corresponding adjacency matrix and computing its spectral radius; [see [@Lind_Marcus_1995 Chap. 5].]{}
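To make this concrete, the following sketch (our own illustration; the function names and the de Bruijn-style vertex construction are not taken from the text) builds the adjacency matrix of the subshift of $\{0,1\}^{{\ensuremath{\mathbb{N}}}}$ avoiding the words $1^k$ and $0^k$, and confirms numerically that its spectral radius equals $\varphi_{k-1}$, as in Example \[ex:endpoints\]:

```python
import itertools
from math import log
import numpy as np

def sft_spectral_radius(k):
    """Spectral radius of the SFT on {0,1} forbidding the words 0^k and 1^k.

    Vertices are all words of length k-1; an edge u -> v exists when u and v
    overlap in k-2 symbols and the resulting word of length k is allowed.
    """
    verts = list(itertools.product((0, 1), repeat=k - 1))
    idx = {w: i for i, w in enumerate(verts)}
    A = np.zeros((len(verts), len(verts)))
    for u in verts:
        for b in (0, 1):
            w = u + (b,)                      # candidate word of length k
            if w != (0,) * k and w != (1,) * k:
                A[idx[u], idx[u[1:] + (b,)]] = 1
    return max(abs(np.linalg.eigvals(A)))

def phi(j):
    """Unique root in (1,2) of 1 + x + ... + x^(j-1) = x^j, by bisection."""
    f = lambda x: x**j - sum(x**i for i in range(j))
    lo, hi = 1.0, 2.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return (lo + hi) / 2

# The entropy of the subshift avoiding 1^k and 0^k is log(phi(k-1));
# e.g. phi(2) is the golden ratio, corresponding to k = 3.
entropies = {k: log(sft_spectral_radius(k)) for k in (3, 4, 5)}
```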
Proof of Theorem \[main3\] {#sec:proof-of-theorem3}
==========================
[Let $1<t_1<t_2\leq M+1$, and let $J=J_{\mathbf i}=[q_L, q_R]$ be the smallest relative plateau containing $[t_1,t_2]$. Define]{} $$g_J(t_1,t_2):=\max{\left\{\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}: q\in\overline{{\mathscr{B}}(J)\cap[t_1, t_2]}\right\}},$$ so we need to show that $$\dim_H({\mathscr{U}}\cap[t_1,t_2])=g_J(t_1,t_2).
\label{eq:Th3-equation}$$ Note first that, if $t_1=q_L$, then there exists $\delta>0$ such that ${\mathscr{U}}\cap[t_1,t_1+\delta]=\emptyset$, and hence ${\mathscr{B}}(J)\cap [t_1, t_2]={\mathscr{B}}(J)\cap[t_1+\delta, t_2]$. Therefore, both sides of (\[eq:Th3-equation\]) remain unchanged upon replacing $t_1$ with $t_1+\delta$. Consequently, we may assume that $t_1>q_L$.
We first demonstrate the lower bound. Since ${\mathscr{B}}(J)\subset {\mathscr{U}}$, we may assume without loss of generality that ${\mathscr{U}}\cap(t_1, t_2)\neq\emptyset$. Then by the definition of $J$ we also have ${\mathscr{B}}(J)\cap[t_1, t_2]\ne \emptyset$. Since $t_1>q_L$, Theorem \[main1\] gives for any $q\in{\mathscr{B}}(J)\cap[t_1, t_2]$ that $$f_-(q)=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}>0.$$ Since ${\mathscr{B}}(J)\cap[t_1, t_2]\subset{\mathscr{U}}\cap[t_1, t_2]$, this implies $$\dim_H({\mathscr{U}}\cap[t_1, t_2])\ge \sup{\left\{f_-(q): q\in{\mathscr{B}}(J)\cap[t_1, t_2]\right\}}=g_J(t_1,t_2),
$$ where in the last step we used the continuity of the map $q\mapsto h({\widetilde{\mathbf{U}}}_q(J))$ (cf. Theorem \[main2\]).
This proves the lower bound. For the upper bound, we use a compactness argument similar to that used in [@Kalle-Kong-Li-Lv-2016]. Recall from Theorem \[main-b\](i) that $$(q_L,q_R]={\mathscr{B}}(J)\cup(q_L, q_c(J)]\cup\bigcup_{j=1}^\infty J_{\mathbf{i}j}.$$ Let $J_{\mathbf{i}j}=[p_L,p_R]$ be a relative plateau that intersects $[t_1,t_2]$ in more than one point. Then either $p_L$ or $p_R$ lies in $(t_1,t_2)$, so at least one of these two points lies in $\overline{{\mathscr{B}}(J)\cap[t_1, t_2]}$. Then by the proof of [@Allaart-Baker-Kong-17 Theorem 4.1] it follows that $$\dim_H({\mathscr{U}}\cap[p_L,p_R])=\frac{\log 2}{m\log p_R}\le \min{\left\{\frac{h({\widetilde{\mathbf{U}}}_{p_L})}{\log p_L}, \frac{h({\widetilde{\mathbf{U}}}_{p_R})}{\log p_R}\right\}} \le g_J(t_1,t_2).$$ By the countable stability of Hausdorff dimension, we obtain $$\dim_H\left({\mathscr{U}}\cap \bigcup_{j=1}^\infty J_{\mathbf{i}j}{\cap[t_1,t_2]}\right)\leq g_J(t_1,t_2).
\label{eq:bound-1}$$ Now let ${\varepsilon}>0$. Then for each $q\in \overline{{\mathscr{B}}(J)\cap[t_1, t_2]}$ there is a number $\delta(q)>0$ such that $$\dim_H\big({\mathscr{U}}\cap(q-\delta(q),q+\delta(q))\big)\leq f(q)+{\varepsilon}\leq \frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}+{\varepsilon}\leq g_J(t_1,t_2)+{\varepsilon}.$$ The intervals $(q-\delta(q),q+\delta(q))$ form an open cover of $\overline{{\mathscr{B}}(J)\cap[t_1, t_2]}$, and since $\overline{{\mathscr{B}}(J)\cap[t_1, t_2]}$ is compact, this open cover contains a finite subcover. Therefore, $$\dim_H\left({\mathscr{U}}\cap \overline{{\mathscr{B}}(J)\cap[t_1, t_2]}\right)\leq g_J(t_1,t_2)+{\varepsilon}.
\label{eq:bound-2}$$ Letting ${\varepsilon}\to 0$, (\[eq:bound-1\]) and (\[eq:bound-2\]) together give the upper bound in (\[eq:Th3-equation\]), since ${\mathscr{U}}\cap(q_L,q_c(J))=\emptyset$.
Proof of Theorem \[main4\] {#sec:proof-of-theorem4}
==========================
Recall the definitions of $\check{{\mathbf{U}}}_q$ and ${\mathbf{W}}_q$, and that $\mathcal{W}_q=\pi_q({\mathbf{W}}_q)$. When $M=1$ we write ${\mathbf{W}}_q^*:={\mathbf{W}}_q$. We will prove Theorem \[main4\] indirectly, by showing that $\dim_H\mathcal W_q=0$ for $q\in{ {\mathscr{C}}}$, and if $q\in\overline{{\mathscr{U}}}\backslash { {\mathscr{C}}}$, then $$\dim_H\mathcal W_q=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q},$$ where $J=[q_L, q_R]$ is the smallest relative plateau such that $q\in(q_L, q_R]$. The result then follows from Theorem \[main1\].
Recall that on the sequence space $\Omega_M$ we are using the metric $\rho$ introduced earlier. The following lemma allows us to work with subsets of $\Omega_M$ rather than sets in Euclidean space.
\[lem:bi-Lipschitz\] Let $q\in(1,M+1]$. For any subset $F\subset {\widetilde{\mathbf{U}}}_q$, we have $$\dim_H \pi_q(F)=\frac{\log 2}{\log q}\dim_H F.$$
It is well known (see [@Jordan-Shmerkin-Solomyak-2011 Lemma 2.7] or [@Allaart-2017 Lemma 2.2]) that $\pi_q$ is bi-Lipschitz on ${\widetilde{\mathbf{U}}}_q$ with respect to the metric $$\rho_q((x_i), (y_i)):=q^{-\inf{\left\{i\ge 0: x_{i+1}\ne y_{i+1}\right\}}}.$$ Hence, with respect to the metric $\rho_q$ on $\Omega_M$, $F$ and $\pi_q(F)$ have the same Hausdorff dimension for any $F\subset {\widetilde{\mathbf{U}}}_q$. The lemma now follows since $\rho_q=\rho^{\log q/\log 2}$.
In view of Lemma \[lem:bi-Lipschitz\], it suffices to compute $\dim_H {\mathbf{W}}_q$. The next lemma facilitates this.
\[lem:W-bridge\] Let $J=[q_L,q_R]$ be a relative plateau generated by ${\mathbf{a}}=a_1\dots a_m$, and $q\in{\mathscr{V}}\cap(q_L,q_R]$. Then $$\dim_H {\mathbf{W}}_q=\frac{1}{m}\dim_H {\mathbf{W}}_{\hat{q}}^*,$$ where $\hat{q}:=\hat{\Phi}_J(q)$.
Since ${\mathbf{W}}_q\subset{\widetilde{\mathbf{U}}}_q$ and every sequence in ${\mathbf{W}}_q$ must eventually contain the word $\alpha_1(q)\dots\alpha_m(q)$, we have $$\dim_H {\mathbf{W}}_q=\dim_H ({\mathbf{W}}_q\cap {\widetilde{\mathbf{U}}}_q(J)).
$$ By a trivial extension of Proposition \[th:characterization of hat-Phi-J\](iii), $$\Phi_J\big({\mathbf{W}}_q\cap {\widetilde{\mathbf{U}}}_q(J)\big)={\mathbf{W}}_{\hat{q}}^*\cap {\widetilde{\mathbf{U}}}_{\hat{q}}^*(1),$$ where ${\widetilde{\mathbf{U}}}_{\hat{q}}^*(1):=\{(x_i)\in {\widetilde{\mathbf{U}}}_{\hat{q}}^*: x_1=1\}$. Since $\Phi_J$ is bi-Hölder continuous with exponent $1/m$, it follows that $$\dim_H {\mathbf{W}}_q=\frac{1}{m}\dim_H\left({\mathbf{W}}_{\hat{q}}^*\cap {\widetilde{\mathbf{U}}}_{\hat{q}}^*(1)\right)=\frac{1}{m}\dim_H {\mathbf{W}}_{\hat{q}}^*,$$ as desired.
We first consider the case when $q\in{ {\mathscr{C}}}$.
\[prop:W-q-C\] If $q\in{ {\mathscr{C}}}$, then $\dim_H {\mathbf{W}}_q=0$.
If $q=q_{KL}$, then $\dim_H{\mathbf{W}}_q\leq\dim_H {\widetilde{\mathbf{U}}}_q=0$ by Proposition \[prop:unique expansion-two digits case\], which holds also for larger alphabets (cf. [@Kong_Li_Dekking_2010]). And if $q\in{ {\mathscr{C}}}_0\backslash \{q_{KL}\}$, then $q=q_c(J)$ for some relative plateau $J$, so that $\hat{q}:=\hat{\Phi}_J(q)=q_{KL}^*$ and the result follows from Lemma \[lem:W-bridge\] and Proposition \[prop:unique expansion-two digits case\].
Suppose $q\in{ {\mathscr{C}}}_{\infty}$. Then $q\in{\mathscr{U}}\subset {\mathscr{V}}$ by Proposition \[prop:property of C-infity\], and there are infinitely many relative plateaus $J=[q_L,q_R]$ such that $q\in(q_L,q_R]$. If $J$ is one such relative plateau generated by a word of length $m$, then Lemma \[lem:W-bridge\] gives $$\dim_H {\mathbf{W}}_q=\frac{1}{m}\dim_H {\mathbf{W}}_{\hat{q}}^*\leq \frac{1}{m}\dim_H \{0,1\}^{\ensuremath{\mathbb{N}}}=\frac{1}{m}.$$ Letting $m\to\infty$, we obtain $\dim_H {\mathbf{W}}_q=0$.
Recall the definition of ${\mathscr{B}}_L$ (and ${\mathscr{B}}_L^*$) given earlier.
\[prop:W-q-B\] Let $q\in{\mathscr{B}}_L$. Then $$\dim_H{\mathbf{W}}_q=\frac{h({\widetilde{\mathbf{U}}}_q)}{\log 2}=\dim_H {\widetilde{\mathbf{U}}}_q.$$
The proof uses the following lemma.
\[lem:number-of-paths\] Let $G=(V,E)$ be a strongly connected directed graph with adjacency matrix $A$, and let $\gamma$ be the spectral radius of $A$. Let $\mathbf P_k^{u, v}$ be the set of all directed paths of length $k$ in $G$ starting from vertex $u$ and ending at vertex $v$. Then there are constants $0<C_1<C_2$ such that the following hold:
(i) For each vertex $v\in V$ and for each $K\in{\ensuremath{\mathbb{N}}}$, there is an integer $k\geq K$ such that $$\#\mathbf P_k^{v, v}\geq C_1\gamma^k.$$
(ii) For all $k\in{\ensuremath{\mathbb{N}}}$, $$\sum_{u,v\in V} \#\mathbf P_k^{u, v}\leq C_2\gamma^k.$$
By the Perron-Frobenius theorem, $\gamma$ is an eigenvalue of $A$ and there is a strictly positive left eigenvector $\mathbf{v}=[v_1 \dots v_N]$ of $A$ corresponding to $\gamma$, where $N:=\#V$. We may normalize $\mathbf{v}$ so that $\max v_i=1$. Set $$C_1:=\frac{\min_i v_i}{N\gamma^N}.$$ Clearly, for any two vertices $u$ and $v$ in $V$, there is a path from $u$ to $v$ of length at most $N$. Fix $v\in V$ and $K\in{\ensuremath{\mathbb{N}}}$. Without loss of generality order $V$ so that $v$ is the first vertex. [Let $\mathbf{e}_1=[1\ 0\ \dots\ 0]^T$ be the first standard unit vector in ${\ensuremath{\mathbb{R}}}^N$, and let $\mathbf{1}=[1\ 1\ \dots\ 1]$ be the row vector of all $1$’s in ${\ensuremath{\mathbb{R}}}^N$. ]{} The number of paths in $G$ of length $K$ starting anywhere in $G$ but ending at $v$ is $$\mathbf{1}A^K \mathbf{e}_1\geq \mathbf{v}A^K\mathbf{e}_1=\gamma^K \mathbf{v}\mathbf{e}_1\geq \gamma^K\min v_i.$$ Hence there is a vertex $u$ in $V$ such that $$\#\mathbf P_K^{u, v}\geq N^{-1}\gamma^K \min v_i.$$ Let $L$ be the length of the shortest path in $G$ from $v$ to $u$; then $L\leq N$. Set $k:=K+L$. It follows that $$\#\mathbf P_k^{v, v}\geq N^{-1}\gamma^K \min v_i=N^{-1}\gamma^{k-L}\min v_i\geq \frac{\min v_i}{N\gamma^N}\gamma^k=C_1\gamma^k.$$ This proves (i). The proof of (ii) is standard (cf. [@Lind_Marcus_1995 Chap. 4]).
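The path counts in the lemma are simply entries of powers of the adjacency matrix: $\#\mathbf P_k^{u,v}=(A^k)_{u,v}$. A brief sketch illustrates the $\gamma^k$ growth of return paths (the golden-mean graph is our illustrative choice, not taken from the text):

```python
import numpy as np

# Golden-mean shift graph (forbidden word "11"); strongly connected.
A = np.array([[1, 1],
              [1, 0]])
gamma = max(abs(np.linalg.eigvals(A)))   # spectral radius = golden ratio

def path_counts(k):
    """(A^k)[u, v] counts directed paths of length k from u to v."""
    return np.linalg.matrix_power(A, k)

# Returns to vertex 0: (A^k)[0, 0] is the Fibonacci number F_{k+1}, so the
# ratio (A^k)[0, 0] / gamma^k stays between positive constants, matching
# the bounds C_1 gamma^k and C_2 gamma^k in parts (i) and (ii).
ratios = [path_counts(k)[0, 0] / gamma**k for k in range(5, 25)]
```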
Recall from [@Komornik_Loreti_2002] that the Komornik-Loreti constant $q_{KL}=q_{KL}(M)$ satisfies $$\label{eq:lambda}
{\alpha}(q_{KL})={\lambda}_1{\lambda}_2\ldots,$$ where for each $i\ge 1$, $${\lambda}_i={\lambda}_i(M):=\begin{cases}
k+\tau_i-\tau_{i-1} & \qquad\textrm{if \quad$M=2k$},\\
k+\tau_i & \qquad\textrm{if \quad$M=2k+1$}.
\end{cases}$$ Here $(\tau_i)_{i=0}^{\infty}=0110100110010110\ldots$ is the classical Thue-Morse sequence.
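For concreteness, the digits ${\lambda}_i$ are easy to generate from this definition; a minimal sketch (function names are ours):

```python
def thue_morse(n):
    """tau_0, ..., tau_n: parity of the binary digit sum of i."""
    return [bin(i).count("1") % 2 for i in range(n + 1)]

def kl_digits(M, n):
    """First n digits lambda_1, ..., lambda_n of alpha(q_KL(M))."""
    tau = thue_morse(n)
    k, odd = divmod(M, 2)
    if odd:                                                    # M = 2k + 1
        return [k + tau[i] for i in range(1, n + 1)]
    return [k + tau[i] - tau[i - 1] for i in range(1, n + 1)]  # M = 2k
```

For $M=1$ this reproduces the truncated Thue-Morse sequence $1101\,0011\ldots$, and for $M=2$ the digits $2102\,0121\ldots$.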
In the proof below we use the sets $${\widetilde{\mathbf{V}}}_q:=\big\{(x_i)\in\Omega_M: \overline{{\alpha}(q)}\preceq{\sigma}^n((x_i))\preceq {\alpha}(q)\ \forall n\ge 0\big\}, \qquad q\in(1,M+1].$$ It is well known (see [@Komornik-Kong-Li-17] or [@Allaart-Baker-Kong-17]) that $\dim_H {\widetilde{\mathbf{U}}}_q=\dim_H {\widetilde{\mathbf{V}}}_q$ for every $q$.
Fix $q\in{\mathscr{B}}_L$. Then $q>q_{KL}$, so $\alpha(q)\succ{\lambda}_1{\lambda}_2\dots$, and hence there is a number $l_0\geq 1$ such that $\alpha_1\dots\alpha_{l_0-1}={\lambda}_1\dots{\lambda}_{l_0-1}$ and $\alpha_{l_0}>{\lambda}_{l_0}$, where for brevity we put $\alpha_i:=\alpha_i(q)$.
By [@AlcarazBarrera-Baker-Kong-2016 Lemma 3.16] (see also [@Allaart-Baker-Kong-17]), there is an increasing sequence $(l_n)$ of integers with $l_n>l_0$ such that for each $n$, there is an entropy plateau $[p_L(n),p_R(n)]$ with $\alpha(p_L(n))=(\alpha_1\dots\alpha_{l_n}^-)^{\infty}$, and moreover $p_L(n)\nearrow q$. By the continuity of the function $p\mapsto \dim_H {\widetilde{\mathbf{U}}}_p$ it is enough to prove that $\dim_H {\mathbf{W}}_q\geq \dim_H {\widetilde{\mathbf{U}}}_{p_L(n)}$ for each $n$.
Fix therefore an integer $n$, and put $p:=p_L(n)$, and $l:=l_n$. Then ${\widetilde{\mathbf{V}}}_p$ is a subshift of finite type, characterized by $$(x_i)\in {\widetilde{\mathbf{V}}}_p \qquad\Leftrightarrow \qquad \overline{\alpha_1\dots \alpha_l}\prec x_{k+1}\dots x_{k+l}\prec \alpha_1\dots \alpha_l \quad\forall k\geq 0.$$ We represent ${\widetilde{\mathbf{V}}}_p$ by a labeled directed graph $G=(V,E,L)$ in the usual way: the set $V$ of vertices consists of allowed words in ${\widetilde{\mathbf{V}}}_p$ of length $l-1$, and there is an edge $uv$ from $u=x_1\dots x_{l-1}$ to $v=y_1\dots y_{l-1}$ if and only if $x_2\dots x_{l-1}=y_1\dots y_{l-2}$ and $x_1\dots x_{l-1}y_{l-1}$ is an allowed word in ${\widetilde{\mathbf{V}}}_p$, in which case we label the edge $uv$ with $y_{l-1}$.
Assume first that ${\widetilde{\mathbf{V}}}_p$ is transitive, so the graph $G$ is strongly connected. [Let $\gamma$ be the spectral radius of the adjacency matrix of $G$, and]{} let $C_1,C_2$ be the constants from Lemma \[lem:number-of-paths\](i). [Put $C:=\max\{C_2,C_1^{-1}\}$.]{} Let $\mathbf{u}=\alpha_1\dots\alpha_{l-1}$ be the lexicographically largest vertex in $V$.
Next, let $0<s<\dim_H {\widetilde{\mathbf{U}}}_p$. We will construct a subset $\mathbf{Y}$ of ${\mathbf{W}}_q$ such that $\dim_H\mathbf{Y}\geq s$. Since the Hausdorff dimension of a subshift of finite type [is]{} given by its entropy, we have $$s<\dim_H {\widetilde{\mathbf{U}}}_p=\dim_H {\widetilde{\mathbf{V}}}_p=\log_2\gamma.
\label{eq:s-small-enough}$$
Let $(m_j)_{j\in{\ensuremath{\mathbb{N}}}}$ be any strictly increasing sequence of positive integers with $m_1>l$ such that $\alpha_1\dots\alpha_{m_j}^-$ is admissible for each $j$. We claim that for each $j$ there exists a connecting block $b_1\ldots b_{n_j}$ such that ${\alpha}_1\ldots {\alpha}_{m_j}^-b_1\ldots b_{n_j}{\mathbf u}$ is an allowed word in ${\widetilde{\mathbf{U}}}_q$. This follows essentially from the proof of [@AlcarazBarrera-Baker-Kong-2016 Proposition 3.17], but for the reader’s convenience we sketch the main idea.
Set $i_0:=m_j$. Recursively, for $\nu=0,1,2,\dots$, proceed as follows. If $i_\nu<l_0$, then stop; otherwise, let $i_{\nu+1}$ be the largest integer $i$ such that $$\alpha_{i_\nu-i+1}\dots\alpha_{i_\nu}=\overline{\alpha_1\dots \alpha_i}^+.$$ (If no such $i$ exists, set $i_{\nu+1}=0$.) We now argue that $$i_{\nu+1}<i_\nu \qquad\mbox{for every $\nu$}.
\label{eq:i-nu-decreasing}$$ This will follow once we show that $\alpha_1\dots\alpha_k\succ \overline{\alpha_1\dots\alpha_k}^+$ for every $k\geq l_0$. This inequality is clear for $k\geq 2$, since $q>q_{KL}$ implies $\alpha_1>\overline{\alpha_1}$. On the other hand, if $l_0=1$, then $\alpha_1>{\lambda}_1\geq\overline{{\lambda}_1}^+> \overline{\alpha_1}^+$, yielding the inequality for $k=1$ as well.
In view of (\[eq:i-nu-decreasing\]), this process eventually stops, say after $N=N(j)$ steps, with $i_N<l_0$. It is easy to check that $\alpha_1\dots\alpha_{i_\nu}^-$ is admissible for each $\nu<N$. Since $q\in{\mathscr{B}}_L$ and $\alpha(q)\succ(\alpha_1\dots\alpha_{i_\nu}^-)^{\infty}$, it follows that $$\alpha(q)\succ\alpha_1\dots\alpha_{i_\nu}(\overline{\alpha_1\dots\alpha_{i_\nu}}^+)^\infty, \qquad \nu=1,2,\dots,N-1.$$ Hence there is a positive integer $k_\nu$ such that $$\alpha(q)\succ\alpha_1\dots\alpha_{i_\nu}(\overline{\alpha_1\dots\alpha_{i_\nu}}^+)^{k_\nu}, \qquad \nu=1,2,\dots,N-1,
\label{eq:representation-of-alpha-q}$$ where by $\alpha(q)\succ\beta_1\dots\beta_i$ we mean that $\alpha_1\dots\alpha_i\succ\beta_1\dots\beta_i$. Put $$B_\nu:=(\alpha_1\dots \alpha_{i_\nu}^-)^{k_\nu}, \qquad \nu=1,\dots,N-1,$$ and $b_1\dots b_{n_{j}}:=B_1 B_2\dots B_{N-1}$, where if $N=1$ we take $B_1 B_2\dots B_{N-1}$ to be the empty word.
Since $|\mathbf{u}|=l-1\geq l_0$, it can be verified using the admissibility of $\alpha_1\dots\alpha_{i_\nu}^-$ for each $\nu$ that ${\alpha}_1\ldots {\alpha}_{m_j}^-b_1\ldots b_{n_j}{\mathbf u}$ is an allowed word in ${\widetilde{\mathbf{U}}}_q$. Here we emphasize that the length $n_j$ of the connecting block depends only on $m_j$, since the word $\mathbf{u}$ is fixed throughout.
We now construct sequences $(r_j)$ and $(R_j)$ as follows: set $R_0=m_1+n_1$, and inductively, for $j=1,2,\dots$, we can choose by [(\[eq:s-small-enough\]) and]{} Lemma \[lem:number-of-paths\] an integer $r$ large enough so that $${(\log_2 \gamma-s)r\geq (R_{j-1}+m_{j+1}+n_{j+1}+l-1)s+(j+2)\log_2 C}
\label{eq:choice-of-rj}$$ and $$\#\mathbf P_r^{\mathbf{u},\mathbf{u}}\geq C^{-1}\gamma^r.
\label{eq:P_r-lower-bound}$$ Put $$r_j:=r, \qquad\mbox{and} \qquad R_j:=R_{j-1}+r_j+m_{j+1}+n_{j+1},$$ to complete the induction step. We also set $$M_j:=\sum_{i=1}^j(m_i+n_i+r_i), \qquad N_j:=M_j+m_{j+1}, \qquad\mbox{for $j\geq 0$}.$$
Now let $\mathbf{Y}$ be the set of sequences $(y_i)$ in $\Omega_M$ satisfying the following requirements for all $j\geq 0$:
1. $y_{M_j+1}\dots y_{M_j+m_{j+1}}=\alpha_1\dots\alpha_{m_{j+1}}^-$;
2. $y_{N_j+1}\dots y_{N_j+n_{j+1}}=b_1\dots b_{n_{j+1}}$;
3. [$y_{R_j+1}\dots y_{R_j+l-1}=\mathbf{u}$;]{}
4. $y_{R_j+l}\dots y_{M_{j+1}+l-1}=$ the word formed by reading the labels of any path of length $r_{j+1}$ in $G$ that starts and ends at $\mathbf{u}$.
[Note that (4) is consistent with (1) despite the overlapping definitions, since for each $j$, $\mathbf{u}$ is a prefix of $\alpha_1\dots\alpha_{m_j}^-$.]{} By the construction of the connecting block $b_1\dots b_{n_{j+1}}$, the word $y_{M_j+1}\dots y_{M_{j+1}}$ is allowed in ${\widetilde{\mathbf{U}}}_q$, for each $j$. It now follows easily that $\mathbf{Y}\subset {\mathbf{W}}_q$.
Next, we construct a mass distribution on $\mathbf{Y}$. Let $t_j$ denote the number of words satisfying the requirement of [(4)]{} above, and note that by (\[eq:P_r-lower-bound\]), $$t_j\geq C^{-1}\gamma^{r_{j+1}}, \qquad j\geq 0.
\label{eq:t-lower-bound}$$ Define a measure $\mu$ on $\mathbf{Y}$ by $$\mu([y_1\dots y_k])=\frac{\tilde{t}_j(y_1\dots y_k)}{\prod_{i=0}^j t_i}, \qquad\mbox{for $j\geq 0$ and $R_j+l-1\leq k\leq M_{j+1}$},
\label{eq:definition-of-mu}$$ where [$[y_1\dots y_k]:=\{(x_i)\in\mathbf{Y}: x_1\dots x_k=y_1\dots y_k\}$ is the cylinder generated by $y_1\dots y_k$, and]{} $\tilde{t}_j(y_1\dots y_k)$ is the number of paths in $G$ of length $M_{j+1}+l-1-k$ starting at vertex $y_{k-l+2}\dots y_k$ and ending at $\mathbf{u}$. Observe that $$\tilde{t}_j(y_1\dots y_k)\leq C\gamma^{M_{j+1}+l-1-k}.
\label{eq:t-tilde-bound}$$ We complete the definition of $\mu$ by setting $\mu([y_1\dots y_k])=1$ for $k<R_0+l-1$, and $$\mu([y_1\dots y_k])=\mu([y_1\dots y_{M_{j}}]), \qquad\mbox{for $j\geq 1$ and $M_{j}<k<R_{j}+l-1$}.
\label{eq:definition-of-mu-complete}$$ [It is easy to see that Kolmogorov’s consistency conditions are satisfied, so that $\mu$ defines a unique mass distribution on $\mathbf{Y}$. We claim that]{} $$\mu([y_1\dots y_k])\leq {\tilde{C}}\big({\operatorname{diam}}([y_1\dots y_k])\big)^s
\label{eq:mass-distribution-inequality}$$ for any $k\in{\ensuremath{\mathbb{N}}}$ and any cylinder $[y_1\dots y_k]$, [where $\tilde{C}:=C^2 2^{(R_0+l-1)s}$. Observe that ${\operatorname{diam}}([y_1\dots y_k])=2^{-k}$. It is clearly sufficient to check for $R_j+l-1\leq k\leq M_{j+1}$, where $j\geq 0$. Assuming $k$ is in this range, (\[eq:definition-of-mu\]) and the estimates (\[eq:t-lower-bound\]) and (\[eq:t-tilde-bound\]) give $$\begin{aligned}
\log_2\mu([y_1\dots y_k])+ks &\leq (j+2)\log_2 C+\left(M_{j+1}+l-1-k-\sum_{i=1}^{j+1}r_i\right)\log_2\gamma+ks\\
&\leq (R_j+l-1)s+(j+2)\log_2 C-\sum_{i=1}^j r_i\log_2\gamma,\end{aligned}$$ using that $\log_2\gamma>s$ and $M_{j+1}=R_j+r_{j+1}$. For $j=0$ this last expression reduces to $(R_0+l-1)s+2\log_2 C=\log_2\tilde{C}$. For $j\geq 1$, it can be written as $$\begin{aligned}
(R_{j-1}+m_{j+1}+n_{j+1}+l-1)s+(j+2)\log_2 C-\sum_{i=1}^{j-1}r_i\log_2\gamma-r_j(\log_2\gamma-s),\end{aligned}$$ which is $\leq 0$ by (\[eq:choice-of-rj\]). Thus, in either case, we obtain (\[eq:mass-distribution-inequality\]).]{}
By the mass distribution principle, (\[eq:mass-distribution-inequality\]) implies $\dim_H {\mathbf{W}}_q\geq\dim_H \mathbf{Y}\geq s$, as required. Finally, since $s<\dim_H {\widetilde{\mathbf{U}}}_p$ was arbitrary, we conclude that $\dim_H {\mathbf{W}}_q\geq \dim_H {\widetilde{\mathbf{U}}}_p$.
When ${\widetilde{\mathbf{V}}}_p$ is not transitive, ${\widetilde{\mathbf{V}}}_p$ contains by [@AlcarazBarrera-Baker-Kong-2016 Lemma 5.9] a transitive subshift $\mathbf{Z}_p$ of finite type with the same entropy $\log\gamma$, and $\alpha(p)\in\mathbf{Z}_p$. Hence the directed graph associated with $\mathbf{Z}_p$ still contains the vertex $\alpha_1\dots\alpha_{l-1}$, and the above argument goes through with $\mathbf{Z}_p$ replacing ${\widetilde{\mathbf{V}}}_p$.
[Note that ${\mathbf{W}}_q=\emptyset$ for any $q\in(1, M+1]\setminus\overline{{\mathscr{U}}}$. In view of Propositions \[prop:W-q-C\] and \[prop:W-q-B\] it remains to prove the theorem for $q\in\overline{{\mathscr{U}}}\setminus({ {\mathscr{C}}}\cup{\mathscr{B}}_L)$. Then $q\in\overline{{\mathscr{U}}}\cap(q_L,q_R]$ for some relative plateau $[q_L,q_R]$. Assume $J=J_{\mathbf{i}}=[q_L,q_R]$ is the [*smallest*]{} such plateau, and let its generating word be $a_1\dots a_m$. Then either $q\in{\mathscr{B}}(J)$ or $q$ is the left endpoint of $J_{\mathbf{i}j}$ for some $j\in{\ensuremath{\mathbb{N}}}$.]{} Let $\hat{q}:=\hat{\Phi}_J(q)$. Then $\hat{q}\in{\mathscr{B}}_L^*$, so using Lemma \[lem:W-bridge\], Proposition \[prop:W-q-B\], and Proposition \[th:characterization of hat-Phi-J\](iii) we obtain $$\dim_H {\mathbf{W}}_q=\frac{1}{m}\dim_H {\mathbf{W}}_{\hat{q}}^*=\frac{1}{m}\dim_H {\widetilde{\mathbf{U}}}_{\hat{q}}^*
=\frac{h\big({\widetilde{\mathbf{U}}}_{\hat{q}}^*\big)}{m\log 2}=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log 2}.$$ By Lemma \[lem:bi-Lipschitz\] and Theorem \[main1\] this implies $$\dim_H\mathcal W_q=\frac{\log 2}{\log q}\dim_H{\mathbf{W}}_q=\frac{h({\widetilde{\mathbf{U}}}_q(J))}{\log q}=f_-(q),$$ completing the proof.
Acknowledgments {#acknowledgments .unnumbered}
===============
Allaart was partially sponsored by NWO visitor’s travel grant 040.11.647/4701. Allaart furthermore wishes to thank the mathematics department of Utrecht University, and in particular Karma Dajani, for their warm hospitality during a sabbatical visit in the spring of 2018 when much of this work was undertaken. Kong was supported by NSFC No. 11401516. [Kong would like to thank the Mathematical Institute of Leiden University.]{}
R. Alcaraz Barrera. Topological and ergodic properties of symmetric sub-shifts. , 34(11):4459–4486, 2014.
R. Alcaraz Barrera, S. Baker, and D. Kong. Entropy, topological transitivity, and dimensional properties of unique q-expansions. , 2018.
P. C. Allaart. The infinite derivatives of Okamoto’s self-affine functions: an application of [$\beta$]{}-expansions. , 3(1):1–31, 2016.
P. C. Allaart. On univoque and strongly univoque sets. , 308:575–598, 2017.
P. C. Allaart, S. Baker, and D. Kong. Bifurcation sets arising from non-integer base expansions. , To appear in [*J. Fractal Geom.*]{}, 2017.
P. C. Allaart and D. Kong. On the continuity of the Hausdorff dimension of the univoque set. , 2018.
J.-P. Allouche and M. Cosnard. Itérations de fonctions unimodales et suites engendrées par automates. , 296(3):159–162, 1983.
J.-P. Allouche and M. Cosnard. Non-integer bases, iteration of continuous real maps, and an arithmetic self-similar set. , 91(4):325–332, 2001.
C. Baiocchi and V. Komornik. Greedy and quasi-greedy expansions in non-integer bases. , 2007.
S. Baker. On small bases which admit countably many expansions. , 147:515–532, 2015.
S. Baker and N. Sidorov. Expansions in non-integer bases: lower order revisited. , 14:Paper No. A57, 15, 2014.
C. Bonanno, C. Carminati, S. Isola, and G. Tiozzo. Dynamics of continued fractions and kneading sequences of unimodal maps. , 33(4):1313–1332, 2013.
K. Dajani and M. de Vries. Invariant densities for random [$\beta$]{}-expansions. , 9(1):157–176, 2007.
K. Dajani, V. Komornik, D. Kong, and W. Li. Algebraic sums and products of univoque bases. 29:1087–1104, 2018.
Z. Daróczy and I. Kátai. On the structure of univoque numbers. , 46(3-4):385–408, 1995.
M. de Vries and V. Komornik. Unique expansions of real numbers. , 221(2):390–427, 2009.
M. de Vries, V. Komornik, and P. Loreti. Topology of the set of univoque bases. , 205:117–137, 2016.
P. Erdős, I. Joó, and V. Komornik. Characterization of the unique expansions $1=\sum_{i=1}^\infty
q^{-n_i}$ and related problems. , 118:377–390, 1990.
P. Erd[ő]{}s, M. Horv[á]{}th, and I. Jo[ó]{}. On the uniqueness of the expansions [$1=\sum q^{-n_i}$]{}. , 58(3-4):333–342, 1991.
P. Erd[ő]{}s and I. Jo[ó]{}. On the number of expansions [$1=\sum q^{-n_i}$]{}. , 35:129–132, 1992.
P. Glendinning and N. Sidorov. Unique representations of real numbers in non-integer bases. , 8:535–543, 2001.
T. Jordan, P. Shmerkin, and B. Solomyak. Multifractal structure of [B]{}ernoulli convolutions. , 151(3):521–539, 2011.
C. Kalle, D. Kong, W. Li, and F. Lü. On the bifurcation set of unique expansions. , 2018.
V. Komornik and D. Kong. Bases in which some numbers have exactly two expansions. , 2018.
V. Komornik, D. Kong, and W. Li. Hausdorff dimension of univoque sets and devil’s staircase. , 305:165–196, 2017.
V. Komornik and P. Loreti. Unique developments in non-integer bases. , 105(7):636–639, 1998.
V. Komornik and P. Loreti. Subexpansions, superexpansions and uniqueness properties in non-integer bases. , 44(2):197–218, 2002.
V. Komornik and P. Loreti. On the topological structure of univoque sets. , 122(1):157–183, 2007.
D. Kong and W. Li. Hausdorff dimension of unique beta expansions. , 28(1):187–209, 2015.
D. Kong, W. Li, and F. M. Dekking. Intersections of homogeneous [C]{}antor sets and beta-expansions. , 23(11):2815–2834, 2010.
D. Lind and B. Marcus. . Cambridge University Press, Cambridge, 1995.
F. Lü and J. Wu. Diophantine analysis in beta-dynamical systems and [H]{}ausdorff dimensions. , 290:919–937, 2016.
W. Parry. On the $\beta$-expansions of real numbers. , 11:401–416, 1960.
A. Rényi. Representations for real numbers and their ergodic properties. , 8:477–493, 1957.
N. Sidorov. Almost every number has a continuum of [$\beta$]{}-expansions. , 110(9):838–842, 2003.
N. Sidorov. Combinatorics of linear iterated function systems with overlaps. , 20(5):1299–1312, 2007.
N. Sidorov. Expansions in non-integer bases: lower, middle and top orders. , 129(4):741–754, 2009.
---
author:
- |
Yazid Delenda, Robert Appleby, Mrinal Dasgupta\
School of Physics and Astronomy, University of Manchester,\
Oxford road, Manchester M13 9PL, U.K.\
, ,
- |
Andrea Banfi\
Università degli studi di Milano Bicocca and INFN, Sezione di Milano, Italy.\
title: 'On QCD resummation with $k_t$ clustering'
---
Introduction
============
One of the most commonly studied QCD observables is the flow of transverse energy ($E_t$) into gaps between jets in various QCD hard processes. Since the $E_t$ flow away from jets is infrared and collinear safe it is possible to make perturbative predictions for the same, which can be compared to experimental data for a given hard process. However since one is typically examining configurations where $E_t$ is small compared to the hard scale $Q$ of the process (e.g. jet transverse momenta in hadronic collisions) the perturbative predictions involve large logarithms in the ratio $Q/E_t$. Resummation of logarithmically enhanced terms of the form $\alpha_s^n \ln^{n} (Q/E_t)$ has proved a challenge that is still to be fully met – complete calculations are available only in the large $N_c$ limit [@DSNG1; @DSNG2; @BMS]. Studies of the $E_t$ flow have in fact directly led to developments in the theoretical understanding of QCD radiation and this process is still ongoing [@FKS].
Another feature of the energy flow away from jets is its sensitivity to non-perturbative effects. Thus one may expect significant $1/Q$ power corrections to energy flow distributions of a similar origin to those extensively studied for various jet-shape observables [@DSreview]. Moreover the $E_t$ flow in hadronic collisions is a standard observable used to develop an understanding of the underlying event and to assess its role after accounting for perturbatively calculable QCD radiation [@MW; @BKS].
Given that $E_t$ flow studies potentially offer so much valuable information on QCD over disparate scales, involving perturbative parameters such as the strong coupling $\alpha_s$, QCD evolution, coherence properties of QCD radiation and non-perturbative effects, it is not surprising that they have been the subject of substantial theoretical effort over the past few years.
In this paper we wish to focus on the aspect of resummed predictions for the $E_t$ flow into gaps between jets. Perhaps the most significant problem involved in making such predictions is the non-global nature of the observable [@DSNG1; @DSNG2]. More precisely in order to resum the leading single logarithms involved, one has to address not just a veto on soft emissions coupled to the underlying primary hard parton antennae (known as the primary emission term), but additionally correlated emission or non-global contributions, where a clump of energy-ordered soft gluons coherently emit a still softer gluon into the gap region $\Omega$. For this latter contribution the highly non-trivial colour structure of the multi-gluon “hedgehog" configuration has proved at present too significant an obstacle to overcome. One thus has to resort to the large $N_c$ limit to provide merely a leading-log estimate for the away-from–jet $E_t$ flow. This situation can be contrasted with the case of event-shapes and Drell-Yan $q_T$ resummations which have been pushed to next-to–leading and next-to–next-to–leading logarithmic accuracy respectively. The impact of finite $N_c$ corrections in non-global observables is thus a factor in the theoretical uncertainty involved in the corresponding resummed predictions.
Given that the non-global component has a substantial quantitative impact over a significant range of $E_t$ values for a given hard scale $Q$ and that it is computable only in the large $N_c$ approximation, it is clearly desirable to reduce the sensitivity of a given observable to non-global logarithms. An important observation in this regard was made in Ref. [@AS1]: if one employs the $k_t$ clustering algorithm [@ktclus; @ktclusinc] to define the final state such that the energy flow into a gap between jets is due to soft $k_t$-clustered mini-jets (rather than individual hadrons), the non-global logarithms are significantly reduced in magnitude[^1]. This observation was exploited to study the case of $E_t$ flow in dijet photoproduction where a result was provided for the primary emission component of the $E_t$ distribution and the reduced non-global component was modeled [@AS2].
However it has subsequently been found that $k_t$ clustering also has a non-trivial impact on the primary emission component of the result [@BD05]. This was not taken into account in Refs. [@AS1; @AS2] and also affects the ability to make resummed predictions for a host of other jet observables such as azimuthal correlation between jets $\Delta \phi_{jj}$. In fact the findings of Ref. [@BD05] are not just specific to the $k_t$ algorithm but would also crop up in the case of jet observables defined using iterative cone algorithms.
In the present paper we wish to shed more light on the resummation of the primary or independent emission component of the result and its dependence on the clustering algorithm. While the leading ${\mathcal{O}} \left (\alpha_s^2 \ln^2 (Q/E_t) \right)$ clustering dependent behaviour was computed analytically in Ref. [@BD05], the full resummed result for the primary emission component was computed only numerically in the case of a single hard emitting dipole ($e^{+}e^{-} \to 2$ jets or DIS $1+1$ jets). Here while sticking to a single hard dipole we shed more light on the structure of the primary emission term and analytically compute it to an accuracy that is sufficient for a wide range of phenomenological applications.
The analytical insight and calculations we provide here will also make the generalisation of the $k_t$-clustered primary emission result to the case of several hard emitters (dijets produced in photoproduction or hadron-hadron processes), involving a non-trivial colour flow, relatively straightforward.
The above resummation of the primary component of the answer assumes greater significance when we discuss our second observation: once an error is corrected in the numerical code used for the purposes of Refs. [@AS1; @AS2] the non-global component of the result is reduced even more compared to the earlier estimate. With a very small non-global component (which can be numerically computed in the large $N_c$ limit) and a primary emission component that correctly treats the dependence on the jet algorithm, one is better placed to make more accurate resummed predictions than has been the case till now. This is true not just for the $E_t$ flow but also as we mentioned for a variety of jet observables for which there are either no resummed predictions as yet, or only those employing jet algorithms not directly used in experimental studies [@KS].
This paper is organised as follows. In the following section we define the observable in question and revisit the issue of the dependence of the primary and non-global pieces on the jet clustering algorithm. Following this we demonstrate how the primary or independent emission piece can be computed at all orders in $\alpha_s$, accounting to sufficient accuracy for the effects of the clustering algorithm. We explicitly describe the case of three and four-gluon contributions to demonstrate the steps leading to our all-order results. Following this we re-examine the non-global component of the answer and find that this is significantly smaller than earlier calculations of the same [@AS1]. We put our findings together to examine their impact on photoproduction data from the ZEUS collaboration [@ZEUS] and lastly point to the conclusions one can draw and future extensions of our work.
Resummation of the primary emissions
====================================
Let us consider for simplicity the process $e^{+}e^{-} \to 2$ jets. The calculations for processes involving a larger number of jets and more complex jet topologies can be done along similar lines.
We wish to examine the $E_t$ flow in a region $\Omega$ which we choose as a slice in rapidity[^2] of width $\Delta \eta$, centred on $\eta=0$. We then define the gap transverse energy as: $$\label{defn} E_t = \sum_{i \in \Omega}{E_{t,i}}\,,$$ where the index $i$ refers to soft jets obtained after $k_t$ clustering of the final state. We shall concentrate on the integrated $E_t$ cross-section, which is defined as: $$\Sigma(Q,Q_\Omega) = \frac{1}{\sigma}\int_0^{Q_\Omega}
\frac{d\sigma}{d E_t} d E_t\,,$$ with $\sigma$ the total cross-section for $e^+e^-\rightarrow$ hadrons, with center-of-mass energy $Q$.
The single-logarithmic result for the above, without $k_t$ clustering (where the sum over $i$ in Eq. \[defn\] refers to hadrons in the gap rather than jet clusters), was computed in Ref. [@DSNG2] and can be expressed as: $$\label{eq:signoclus} \Sigma(Q,Q_\Omega) = \Sigma_P (t)\,
S(t)\,,\qquad t = \frac{1}{2\pi} \int_{Q_\Omega}^{Q/2}
\frac{dk_t}{k_t}\, \alpha_s(k_t)\,.$$ The above result contains a primary emission or “Sudakov” term[^3] $\Sigma_P(t)$ and a non-global term $S(t)$.
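For orientation, the evolution variable $t$ can be evaluated once a form of the running coupling is fixed. The sketch below assumes a one-loop coupling with $n_f=5$ and $\Lambda \approx 0.2$ GeV (illustrative choices, not fixed by the text); with these, $Q=100$ GeV and $Q_\Omega=1$ GeV give $t \approx 0.16$, consistent with the value $t \simeq 0.15$ quoted later for this kinematics.

```python
import math

def alpha_s(kt, lambda_qcd=0.2, nf=5):
    """One-loop running coupling (lambda_qcd and nf are illustrative assumptions)."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(kt ** 2 / lambda_qcd ** 2))

def t_variable(Q, Q_omega, lambda_qcd=0.2, nf=5):
    """t = (1/2pi) * int_{Q_omega}^{Q/2} dkt/kt alpha_s(kt), evaluated
    analytically for the one-loop coupling above."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    num = math.log((Q / 2.0) ** 2 / lambda_qcd ** 2)
    den = math.log(Q_omega ** 2 / lambda_qcd ** 2)
    return math.log(num / den) / (4.0 * math.pi * b0)
```

Lowering $Q_\Omega$ at fixed $Q$ increases $t$, so the large-$t$ region of the resummed curves corresponds to small gap energies.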
The primary emission piece is built up by considering only emissions attached to the primary hard partons, namely those emitted from the hard initiating $q\bar{q}$ dipole in our example, while the non-global term arises from coherent soft emission from a complex ensemble of soft emitters alongside the hard initiating dipole. More precisely we have: $$\label{eq:sudakov} \Sigma_P(t) = e^{-4 C_F t\Delta \eta }\,,$$ which is the result of resumming uncancelled $k_t$-ordered virtual-emission contributions in the gap region. The non-global component, as we stated before, is computed numerically in the large $N_c$ limit.
Next we turn to the $k_t$-clustered case. The result stated in Ref. [@AS1] assumes that the primary or Sudakov piece is left unchanged by clustering since it appears to be the exponentiation of a single gluon emitted inside the gap. The non-global piece is recomputed numerically implementing clustering [@AS1]. As already shown in Ref. [@BD05] however, the assumption regarding the primary emission piece being unaffected is in fact untrue and this too needs to be recomputed in the presence of clustering. The corrections to the primary emission term first appear while considering two gluons emitted by the hard $q \bar{q}$ dipole and persist at all orders. Below we provide a reminder of the two-gluon case discussed in Ref. [@BD05] and subsequently consider explicitly the three and four-gluon emission cases before writing down the result to all orders as a function of the radius parameter $R$.
Two-gluon emission
------------------
In order to examine the role of the $k_t$ algorithm we point out that in our case ($k_t$-ordered soft limit) one can start the clustering procedure with the lowest transverse-energy parton or equivalently the softest parton. One examines the “distances” of this particle, $i$, from its neighbours, defined by $d_{ij} =
E_{t,i}^2 \left(\left(\Delta \eta_{ij} \right)^2 + \left( \Delta
\phi_{ij}\right)^2\right)$, where $E_{t,i}$ is the transverse energy of the softest parton. If the smallest of these distances is less than $E_{t,i}^2 R^2$, particle $i$ is recombined or clustered into its nearest neighbour and the algorithm is iterated. On the other hand, if all $d_{ij}$ are greater than $E_{t,i}^2 R^2$, $i$ is counted as a jet and removed from the process of further clustering. The process continues until the entire final state is made up of jets. Also in the limit of strong energy-ordering, which is sufficient to obtain the leading logarithms we are concerned with here, the recombination of a softer particle with a harder one gives a jet that is aligned along the harder particle.
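In the strong energy-ordering limit used throughout, the procedure just described reduces to a single sweep from the softest parton upwards. A minimal sketch (the dictionary-based parton representation is our own choice, not from the text):

```python
import math

def delta_r2(p, q):
    """Squared (eta, phi) separation, with the azimuth wrapped to (-pi, pi]."""
    dphi = (p["phi"] - q["phi"] + math.pi) % (2 * math.pi) - math.pi
    return (p["eta"] - q["eta"]) ** 2 + dphi ** 2

def cluster_soft_limit(partons, R):
    """k_t clustering in the energy-ordered soft limit: each parton, taken from
    the softest upwards, is either absorbed by its nearest harder neighbour
    (if closer than R, the jet staying along the harder direction) or is
    declared a jet and removed from further clustering."""
    ordered = sorted((dict(p) for p in partons),
                     key=lambda p: p["Et"], reverse=True)  # hardest first
    jets = []
    for i in range(len(ordered) - 1, -1, -1):  # sweep from the softest parton
        soft, harder = ordered[i], ordered[:i]
        if harder:
            nearest = min(harder, key=lambda q: delta_r2(soft, q))
            if delta_r2(soft, nearest) <= R ** 2:
                nearest["Et"] += soft["Et"]  # jet stays along `nearest`
                continue
        jets.append(soft)
    return jets
```

With this definition, a soft gluon inside the gap that lies within $R$ of a harder out-of-gap gluon no longer contributes to $E_t$, which is precisely the real-emission configuration analysed in the two-gluon discussion below.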
The dependence of the primary emission term on the jet clustering algorithm starts naturally enough from the two-gluon level. While the Sudakov result $\exp{\left(-4 C_F t\Delta \eta\right)}$ comes about due to assuming real-virtual cancellations such that one is left with only virtual emissions with $k_t \geq Q_{\Omega}$ in the gap region (for the integrated distribution), $k_t$ clustering spoils this assumed cancellation.
Specifically let us take two real gluons $k_1$ and $k_2$ that are ordered in energy ($\omega_1 \gg \omega_2$). We consider as in Ref. [@BD05] the region where the softer gluon $k_2$ is in the gap whilst the harder $k_1$ is outside. Additionally we take the case that the gluons are clustered by the jet algorithm which happens when $\left(\Delta \eta \right)^2 + \left( \Delta \phi
\right)^2 \leq R^2$ with $\Delta\eta = \eta_2-\eta_1$ and similarly for $\Delta \phi$, a condition we shall denote with the symbol $\theta_{21}$. Since $k_2$ is clustered to $k_1$, it gets pulled outside the gap, the recombined jet being essentially along $k_1$. Thus in this region the double real-emission term does not contribute to the gap energy *differential* distribution $d\sigma/dE_t$. Now let $k_1$ be a virtual gluon. In this case it cannot cluster $k_2$ out of the gap and we do get a contribution to the gap energy differential distribution. Thus a real-virtual cancellation which occurs in the unclustered case fails here, and the mismatch for the integrated quantity $\Sigma(t)$ amounts to: $$\label{eq:twog} C_2^{p} = \frac{(-4 C_F t)^2}{2!}
\int_{k_1\notin\Omega} d\eta_1 \frac{d \phi_1}{2 \pi} \int_{ k_2 \in
\Omega} d \eta_2 \frac {d \phi_2}{2 \pi} \theta_{21} = \frac{(-4C_F
t)^2}{2!} \frac{2}{3 \pi} R^3\,,$$ where we have quoted the result computed for $R \leq \Delta \eta$ in Ref. [@BD05]. Here we introduced the primary emission term $C_n^{p}$ that corrects the Sudakov result at $\mathcal{O}(\alpha_s^n)$ due to the clustering requirement.
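The $\frac{2}{3\pi}R^3$ value of the two-gluon mismatch integral above can be checked by direct Monte Carlo integration. The sketch below is our own check, with an illustrative gap of width $2$; the out-of-gap gluon is sampled only within $R$ of a gap edge, since only there can clustering occur.

```python
import math
import random

def c2_coefficient_mc(R=1.0, gap=2.0, n=300_000, seed=1):
    """Monte Carlo estimate of
    int_{k1 outside gap} d(eta1) d(phi1)/2pi int_{k2 in gap} d(eta2) d(phi2)/2pi
    with the clustering constraint (eta2-eta1)^2 + (phi2-phi1)^2 <= R^2;
    should approach (2/(3 pi)) R^3 for R <= gap width."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        eta2 = rng.uniform(0.0, gap)          # in-gap gluon rapidity
        # out-of-gap gluon: only the two bands of width R beside the edges count
        if rng.random() < 0.5:
            eta1 = -rng.uniform(0.0, R)
        else:
            eta1 = gap + rng.uniform(0.0, R)
        dphi = rng.uniform(-math.pi, math.pi)  # relative azimuth
        if (eta2 - eta1) ** 2 + dphi ** 2 <= R ** 2:
            hits += 1
    # sampled measure: gap (eta2) * 2R (eta1) * 1 (one azimuth is trivial)
    return gap * 2.0 * R * hits / n
```

The estimate is independent of the gap width (for $R \leq \Delta\eta$) because the clustering region is localised at the two gap edges, as the analytic computation shows.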
The fact that the result scales as the third power of the jet radius parameter is interesting in that by choosing a sufficiently small value of $R$ one may hope to virtually eliminate this piece and thus the identification of the primary result with the Sudakov exponent would be at least numerically accurate. However the non-global term would then be significant which defeats the main use of clustering. If one chooses to minimise the non-global component by choosing e.g. $R=1$, then one must examine the primary emission terms in higher orders in order to estimate their role. To this end we start by looking at the three and four-gluon cases below.
Three-gluon emission
--------------------
Consider the emission of three energy-ordered gluons $k_1$, $k_2$ and $k_3$ with $\omega_3\ll \omega_2 \ll \omega_1$, off the primary $q\bar{q}$ dipole, and employing the inclusive $k_t$ clustering algorithm [@ktclus; @ktclusinc] as explained previously.
We consider all the various cases that arise when the gluons (which could be real or virtual) are in the gap region or outside. We also consider all the configurations in which the gluons are affected by the clustering algorithm. We then look for all contributions where a real-virtual mismatch appears due to clustering, that is not included in the exponential Sudakov term. The Sudakov itself is built up by integrating just virtual gluons in the gap, above the scale $Q_\Omega$. The corrections to this are summarised in table \[tab:cont\].
In order to obtain the various entries of the table one just looks at the angular configuration in question, draws all possible real and virtual contributions and looks for a mismatch between them generated by the action of clustering. We translate table \[tab:cont\] to: $$\begin{aligned}
\label{eq:cont}
C_3^{p} & = & \frac{1}{3!}(-4C_F t)^3\times \nonumber\\
& & \times \bigg\{ \int_{k_1\notin\Omega} d\eta_1
\frac{d\phi_1}{2\pi} \int_{k_2\notin\Omega} d\eta_2
\frac{d\phi_2}{2\pi} \int_{k_3\in\Omega} d\eta_3\,
\theta_{32}\, \theta_{31} + \nonumber\\
& & + \int_{k_1\notin\Omega} d\eta_1 \frac{d\phi_1}{2\pi}
\int_{k_2\in\Omega} d\eta_2 \frac{d\phi_2}{2\pi} \int_{k_3\in\Omega}
d\eta_3 \left[\theta_{31}+(1-\theta_{31})(1-\theta_{32})
\theta_{21}\right] + \nonumber\\
& & + \int_{k_1\in\Omega} d\eta_1 \frac{d\phi_1}{2\pi}
\int_{k_2\notin\Omega} d\eta_2 \frac{d\phi_2}{2\pi}
\int_{k_3\in\Omega} d\eta_3\, \theta_{32}\bigg\}\,,\end{aligned}$$ where we used the freedom to set $\phi_3=0$. We identify three equal contributions consisting of the integrals in which there is only one theta function constraining only two particles: the last integral over $\theta_{32}$, the integral over $\theta_{31}$ and that over $\theta_{21}$ in the third line. The set of configurations $\theta_{32}$, $\theta_{31}$ and $\theta_{21}$ is just the set of constraints on all possible pairs of gluons, and in fact we can generalise the factor 3 to the case of any number $n$ of gluons by $n(n-1)/2$, which will enable us to resum $R^3$ terms. We shall return to this observation later. The integrals of the above type reduce essentially to the clustered two-gluon case as calculated in Eq. \[eq:twog\], and the integral over the third “unconstrained” gluon is just $\Delta\eta$.
Explicitly we write Eq. \[eq:cont\] as: $$\begin{aligned}
\label{eq:cont2}
C_3^{p} & = & \frac{1}{3!} (-4C_F t)^3 \times \nonumber\\
& & \times\Bigg\{ \int_{k_1\notin\Omega} d\eta_1
\frac{d\phi_1}{2\pi} \int_{k_2\notin\Omega} d\eta_2
\frac{d\phi_2}{2\pi} \int_{k_3\in\Omega} d\eta_3\,
\theta_{32}\,\theta_{31}+\nonumber\\
& & + \int_{k_1\notin\Omega} d\eta_1 \frac{d\phi_1}{2\pi}
\int_{k_2\in\Omega} d\eta_2 \frac{d\phi_2}{2\pi} \int_{k_3\in\Omega}
d\eta_3 \left[\theta_{31}\theta_{32}-\theta_{31}-\theta_{32}
\right] \theta_{21}+ \nonumber\\
&& + 3\times \int_{k_1\in\Omega} d\eta_1 \frac{d\phi_1}{2\pi}
\int_{k_2\notin\Omega} d\eta_2 \frac{d\phi_2}{2\pi}
\int_{k_3\in\Omega} d\eta_3\,\theta_{32}\Bigg\}\,.\end{aligned}$$ Computing the various integrals above (for simplicity we take $R
\leq \Delta \eta/2$, which is sufficient for our phenomenological purposes) one obtains: $$\begin{gathered}
C_3^{p} = \frac{1}{3!} (-4C_F t)^3 \times\\
\times \left\{ \left( \frac{\pi}{3}-\frac{32}{45} \right)
\frac{R^5}{\pi^2} + f \frac{R^5}{\pi^2}- \left(\frac{\pi}{3} -
\frac{32}{45} \right) \frac{R^5}{\pi^2} - \frac{32}{45}
\frac{R^5}{\pi^2} + 3 \times \frac{2}{3\pi} \Delta\eta \, R^3
\right\},\end{gathered}$$ with $f \simeq 0.2807$ and we have written the results in the same order as the five integrals that arise from the various terms in Eq. \[eq:cont2\]. Hence: $$C_3^{p}=\frac{1}{3!}(-4C_F t)^3 \left\{3\times \frac{2}{3\pi}
\Delta\eta \, R^3 + f_2\, R^5 \right\},$$ where $f_2 \simeq -0.04360$. We note the appearance of an $R^5$ term which, as we shall presently see, persists at higher orders. This term is related to a clustering constraint on *three* gluons at a time via the product of step functions $\theta_{32}\, \theta_{21}
(\theta_{31}-1)\,$ with $k_2,\,k_3\in\Omega$ and $k_1\notin\Omega$.
Next we look at the emission of four soft, real or virtual energy-ordered gluons. This will help us move to a generalisation with any number of gluons.
Four-gluon case and beyond
--------------------------
Now we take the case of four-gluon emission and identify the patterns that appear at all orders. A table corresponding to table \[tab:cont\] is too lengthy to present here. The result can however be expressed in an equation similar to that for the three-gluon case. We have: $$\begin{aligned}
C_4^{p} & = & \frac{1}{4!} (-4C_Ft)^4 \times \nonumber\\
& & \times \bigg \{ \int_{1\,\mathrm{in}} \int_{2\,\mathrm{in}}
\int_{3\,\mathrm{out}} \int_{4\,\mathrm{in}} \theta_{43}+\nonumber\\
& & + \int_{1\,\mathrm{in}} \int_{2\,\mathrm{out}}
\int_{3\,\mathrm{in}} \int_{4\,\mathrm{in}}
\left[\theta_{42}+\theta_{32} (1-\theta_{43})
(1-\theta_{42})\right] + \nonumber\\
& & + \int_{1\,\mathrm{out}} \int_{2\,\mathrm{in}}
\int_{3\,\mathrm{in}} \int_{4\,\mathrm{in}} \left\{ \theta_{41} +
\theta_{-41} \left[ \theta_{31} \, \theta_{-43} + \theta_{43} \,
\theta_{21} \, \theta_{-42}+ \theta_{21} \, \theta_{-42}\,
\theta_{-43}\,\theta_{-31} \theta_{-32} \right] \right\} +
\nonumber\\ & & + \int_{1\,\mathrm{in}} \int_{2\,\mathrm{out}}
\int_{3\,\mathrm{out}} \int_{4\,\mathrm{in}}
\theta_{42}\, \theta_{43} + \nonumber\\
& & + \int_{1\,\mathrm{out}} \int_{2\,\mathrm{in}}
\int_{3\,\mathrm{out}} \int_{4\,\mathrm{in}} \theta_{43} \left[
\theta_{41}+ \theta_{-41} \, \theta_{-42}\, \theta_{21} \right] +
\nonumber\\ & & + \int_{1\,\mathrm{out}} \int_{2\,\mathrm{out}}
\int_{3\,\mathrm{in}} \int_{4\,\mathrm{in}} \theta_{41} \,
\theta_{42} + \theta_{41}\, \theta_{-42} \, \theta_{-43}\,
\theta_{32} + \theta_{-41}\, \theta_{-43} \, \theta_{31}
\left[\theta_{42} +
\theta_{-42}\, \theta_{32} \right] + \nonumber\\
& & + \int_{1\,\mathrm{out}} \int_{2\,\mathrm{out}}
\int_{3\,\mathrm{out}} \int_{4\,\mathrm{in}} \theta_{41}\,
\theta_{42}\, \theta_{43} \bigg\}\,,\end{aligned}$$ where $\theta_{-ij}=1-\theta_{ij}$ and “in” or “out” pertains to whether the gluon is inside the gap region or out. For brevity we did not write the differential phase-space factor for each gluon which is as always $d\eta\,d\phi/(2\pi)$. We identify six $R^3$ terms exactly of the same kind as computed before and similarly four $R^5$ terms. Explicitly we have: $$\begin{aligned}
C_4^{p} & = & \frac{1}{4!} (-4C_Ft)^4 \times \nonumber\\
& & \times \bigg\{ 6 \times \int_{1\,\mathrm{in}}
\int_{2\,\mathrm{in}} \int_{3\,\mathrm{out}}
\int_{4\,\mathrm{in}} \theta_{43}+\nonumber\\
& & + 4 \times \left( \int_{1\,\mathrm{in}} \int_{2\,\mathrm{out}}
\int_{3\,\mathrm{out}} \int_{4\,\mathrm{in}} \theta_{42}\,
\theta_{43} + \int_{1\,\mathrm{in}} \int_{2\,\mathrm{out}}
\int_{3\,\mathrm{in}} \int_{4\,\mathrm{in}} \theta_{32}
\left[\theta_{43} \, \theta_{42} - \theta_{43} - \theta_{42}
\right] \right) + \nonumber\\
& & + 3 \times \int_{1\,\mathrm{out}} \int_{2\,\mathrm{in}}
\int_{3\,\mathrm{out}} \int_{4\,\mathrm{in}} \theta_{21}\,
\theta_{43}
\left[1-\theta_{41}-\theta_{42}+\theta_{41}\,\theta_{42}\right]
+\nonumber\\
& & + \int_{1\,\mathrm{out}} \int_{2\,\mathrm{in}}
\int_{3\,\mathrm{in}} \int_{4\,\mathrm{in}} \theta_{21}
\big[\theta_{42}\,\theta_{43}-\theta_{42}-\theta_{43}-\theta_{41}\,
\theta_{-42}\, \theta_{-43}\big] \big[\theta_{31} \,
\theta_{32}-\theta_{31} -\theta_{32} \big] + \nonumber\\
& & + \int_{1\,\mathrm{out}} \int_{2\,\mathrm{out}}
\int_{3\,\mathrm{in}} \int_{4\,\mathrm{in}} \theta_{32}\,
\theta_{31} \left[\theta_{41}(1-\theta_{43})(\theta_{42}-2)
-\theta_{43}\right] +\nonumber\\
&&+\int_{1\,\mathrm{out}}\int_{2\,\mathrm{out}}\int_{3\,\mathrm{out}}
\int_{4\,\mathrm{in}}\theta_{41}\,\theta_{42}\,\theta_{43}
\bigg\}\,.\end{aligned}$$ We discuss below each set of integrals, generalise the result to the case of $n$ emitted gluons and then resum all orders.
- The integral: $$\frac{1}{4!} (-4C_Ft)^4\,6 \times \int_{1\,\mathrm{in}}
\int_{2\,\mathrm{in}} \int_{3\,\mathrm{out}} \int_{4\,\mathrm{in}}
\theta_{43}\,.$$
The integrals over particles 1 and 2 give $\left (\Delta\eta
\right)^2$. The remaining integrals reduce to the result computed for the two-gluon case, i.e. the $R^3$ term, multiplied by a factor of 6 accounting for the number of pairs of gluons $n(n-1)/2$, for $n=4$. Explicitly we have for this term: $$\frac{1}{4!} (-4C_Ft)^4 \frac{4\times
3}{2}\Delta\eta^{4-2}\frac{2}{3\pi}R^3\,.$$ For $n$ emitted gluons the $R^3$ term, which is always related to the clustering of two gluons, is given by: $$\frac{1}{n!} \frac{n(n-1)}{2} (-4C_Ft\Delta\eta)^n \Delta\eta^{-2}
\frac{2}{3\pi} R^3\,, \quad n\geq 2\,.$$ Hence to all orders one can sum the above to obtain: $$e^{-4C_Ft\Delta\eta}\frac{(-4C_Ft)^2}{2}\frac{2}{3\pi}R^3\,.$$
- The integrals: $$\begin{gathered}
\frac{1}{4!}(-4C_Ft)^4\,4\times \bigg( \int_{1\,\mathrm{in}}
\int_{2\,\mathrm{out}} \int_{3\,\mathrm{out}}
\int_{4\,\mathrm{in}} \theta_{42}\,\theta_{43}+\\
+\int_{1\,\mathrm{in}} \int_{2\,\mathrm{out}}\int_{3\,\mathrm{in}}
\int_{4\,\mathrm{in}}\theta_{32} \left[\theta_{43}\, \theta_{42}-
\theta_{43} - \theta_{42} \right] \bigg)\,.\end{gathered}$$
The integral over particle 1 gives $\Delta\eta$, while the rest of the integrals reduce to the ones calculated earlier which gave the $R^5$ result, accompanied with a factor of $4$ standing for the number of triplet combinations formed by four gluons. For $n$ emitted gluons this factor is $n(n-1)(n-2)/3!$. Explicitly we have for this case: $$\frac{1}{4!} (-4C_Ft)^4\frac{4\times 3\times 2}
{6}\Delta\eta^{4-3}f_2\,R^5\,.$$ At the $n^{\mathrm{th}}$ order we obtain: $$\frac{1}{n!} (-4C_Ft\Delta\eta)^n \frac{n(n-1)(n-2)} {6}
\Delta\eta^{-3} f_2\,R^5\,, \quad n\geq 3\,.$$ Summing all orders we get: $$e^{-4C_Ft\Delta\eta} \frac{(-4C_Ft)^3}{6} f_2\, R^5\,.$$
- The integral: $$\frac{1}{4!}(-4C_Ft)^4\,3\times \int_{1\,\mathrm{out}}
\int_{2\,\mathrm{in}} \int_{3\,\mathrm{out}} \int_{4\,\mathrm{in}}
\theta_{21}\, \theta_{43}\,.$$
This integral can be factored into two separate integrals involving the constraint on $k_1$ and $k_2$ and over $k_3$ and $k_4$ respectively. Each of these reduces to the $R^3$ result obtained in the two-gluon case. Thus we get: $$\frac{1}{4!} (-4C_Ft)^4\, 3\times\left(\frac{2}{3\pi}\right)^2R^6\,.$$ At $n^{\mathrm{th}}$ order this becomes: $$\frac{1}{n!} \frac{n(n-1)(n-2)(n-3)}{8}(-4C_F t\Delta\eta)^n
\Delta\eta^{-4} \left(\frac{2}{3\pi}\right)^2R^6\,,\quad n\geq 4\,,$$ which can be resummed to: $$e^{-4C_Ft\Delta\eta}\frac{(-4C_Ft)^4}{8}
\left(\frac{2}{3\pi}\right)^2R^6.$$ The factor 3 (and generally $n(n-1)(n-2)(n-3)/8$) is the number of configurations formed by four (and generally $n$) gluons such that we have two pairs of gluons, each formed by an out-of-gap gluon connected to a softer in-gap one.
- The remaining integrals
These integrals give at most an $\mathcal{O}(R^7)$ term because they constrain all four gluons at once. In fact for gap sizes $\Delta\eta \geq 3R$, these integrals go purely as $R^7$ with no dependence on $\Delta \eta$. Since here, however, we wish to use the condition $\Delta \eta \geq 2R$, which allows us to make use of the whole range of HERA data, these integrals do not depend purely on $R$ but are a function of $R$ and $\Delta \eta$ that is bounded from above by a term of order $R^7$. This can be seen by noting that there are three azimuthal integrations that each produce a function with a maximum value proportional to $R$, so the result of integrating over all azimuthal variables is a factor that is bounded from above by $R^3$. Similarly there are four rapidity integrations with a clustering constraint on all four gluons, implying that they can produce an $R^4$ term at most. In general the result at $n^\mathrm{th}$ order of constraining $n$ gluons at once is bounded from above by a factor of order $R^{2n-1}$.
We can write the result for all these as $(-4C_Ft)^4/4!\,
y(R,\Delta\eta)$, and resum such terms to all orders (in the same manner as before) to: $$e^{-4C_Ft\Delta\eta}\frac{(-4C_Ft)^4}{4!} y(R,\Delta \eta)\,,$$ where $y(R,\Delta \eta)$ is at most ${\mathcal{O}}(R^7)$. We do not calculate these terms (though it is possible to do so) since the accuracy we achieve by retaining the $R^3$, $R^5$ and $R^6$ terms, we have already computed, is sufficient as we shall show.
The five-gluon case is too lengthy to analyse here. The same patterns as pointed out above persist, but new terms that are at most ${\mathcal{O}}(R^{9})$ appear when all five gluons are constrained. There is also an $R^8$ term, coming from the combination of $R^3$ and $R^5$ terms in the same manner that the $R^6$ term arose as a combination of two $R^3$ terms.
All-orders result
=================
From the above observations we can assemble an all-orders result to $R^6$ accuracy, where we shall consider $R$ to be at most equal to unity. The final result for primary emissions alone and including the usual Sudakov logarithms (for $\Delta \eta \geq 2R$) is: $$\begin{gathered}
\label{eq:result}
\Sigma_{P}(t) = e^{-4C_Ft\Delta\eta} \times\\ \times \left( 1+
(-4C_Ft)^2 \frac{1}{3\pi} R^3 + (-4C_Ft)^3 \frac{f_2} {6} R^5 +
(-4C_Ft)^4 \frac{1}{18\pi^2} R^6 + \frac{(-4C_Ft)^4}{4!}
\mathcal{O}(R^7) \right).\end{gathered}$$
Formally one may wish to extend this accuracy by computing a few more terms, such as those integrals that directly give or are bounded by an $R^7$ behaviour; this is possible though cumbersome. It should also be unnecessary from a practical viewpoint, especially keeping in mind that $R=0.7$ is preferable to $R=1$ in the important case of hadron collisions[^4], and that even at $R=1$ the $R^3$ term significantly dominates the result over the range of $t$ values of phenomenological interest, as we shall see below.
We further note that if one keeps track of all the terms that come about as a combination of $R^3$ and $R^5$ terms in all possible ways at all orders, one ends up with the following form for Eq. \[eq:result\]: $$\Sigma_{P}(t)= e^{-4C_Ft\Delta\eta} \exp
\left(\frac{(-4C_Ft)^2}{2!}\frac{2}{3\pi}R^3 +
\frac{(-4C_Ft)^3}{3!}f_2\,
R^5+\frac{(-4C_Ft)^4}{4!}\mathcal{O}(R^7)\right),$$ the expansion of which agrees with Eq. \[eq:result\]. In the above by ${\mathcal{O}}(R^7)$ we mean terms that, while they may depend on $\Delta \eta$, are at most as significant as an $R^7$ term. We also mention that in the formal limit $\Delta \eta \to
\infty$, there is no dependence of the clustering terms on $\Delta
\eta$ and they are a pure power series in $R$. The limit of an infinite gap appears in calculations where the region considered includes one of the hard emitting partons. An example of such cases (which have a leading double-logarithmic behaviour) is once again the quantity $\Delta \phi_{jj}$ between jets in e.g. DIS or hadron collisions.
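That the expansion of this exponentiated form reproduces the $R^3$, $R^5$ and $R^6$ coefficients of the $R^6$-accuracy result above can be checked with a small polynomial expansion in $x = -4C_Ft$ and $R$ (a sketch; the $\mathcal{O}(R^7)$ and higher terms are not tracked):

```python
import math
from collections import defaultdict

F2 = -0.04360  # numerical constant quoted in the text

def poly_mul(p, q):
    """Multiply polynomials stored as {(power_x, power_R): coefficient}."""
    out = defaultdict(float)
    for (ax, ar), ac in p.items():
        for (bx, br), bc in q.items():
            out[(ax + bx, ar + br)] += ac * bc
    return dict(out)

# exponent of the resummed form: x^2 R^3/(3 pi) + f2 x^3 R^5/6
E = {(2, 3): 1.0 / (3.0 * math.pi), (3, 5): F2 / 6.0}

# exp(E) expanded through second order in the exponent: 1 + E + E^2/2
expanded = defaultdict(float, {(0, 0): 1.0})
for k, c in E.items():
    expanded[k] += c
for k, c in poly_mul(E, E).items():
    expanded[k] += c / 2.0
```

The $(x^4, R^6)$ entry comes out as $1/(18\pi^2)$, i.e. half the square of the $R^3$ coefficient; the next entries, starting at $(x^5, R^8)$, belong to the neglected higher orders.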
Fig. \[fig:results\] represents a comparison between the leading $R^3$ result (i.e. the pure fixed-order result of Ref. [@BD05] combined with the resummed Sudakov exponent), the resummed $R^3$, $R^5$ and $R^6$ result (Eq. \[eq:result\]) and a numerical Monte Carlo estimate with and without clustering. The Monte Carlo program in question is essentially that described in Ref. [@DSNG1], with the modification of $k_t$ clustering, where we computed just the emissions off the primary dipole, “switching off” the non-global correlated emission.
We note that the resummed analytical form is in excellent agreement with the numerical result which contains the full $R$ dependence. We have tested this agreement with a range of values of $R$. We take this agreement as indicating that uncomputed $R^7$ and higher terms can safely be ignored even at $R=1$ and even more so at fractional values of $R$, e.g. $R = 0.7$. To provide an idea about the relative role of terms at different powers of $R$ in Eq. we note that for $R=1$ and $t = 0.25$ the resummed $R^3$ term increases the Sudakov result $\exp\left(-4 C_Ft
\Delta \eta \right)$ by $19 \%$, the $R^5$ term represents a further increase of $1.5 \%$ to the result after inclusion of the resummed $R^3$ term and the $R^6$ term has a similar effect on the result obtained after including up to $R^5$ terms.
Next we comment on the size of the non-global component at different values of $R$.
Revisiting the non-global contribution
======================================
We have seen above how the primary emission piece is dependent on the jet clustering algorithm. It was already noted in Ref. [@AS1] that the non-global contribution is significantly reduced by clustering. Here we wish to point out that after correction of an oversight in the code used there, the non-global component is even more significantly reduced than previously stated in the literature. Indeed for $R=1$ and the illustrative value of $t=0.15$, which corresponds to gap energy $Q_\Omega=1$ GeV for a hard scale $Q =100$ GeV, the non-global logarithms are merely a $5
\%$ effect as opposed to the $20 \%$ reported previously [@AS1] and the over $65 \%$ effect in the unclustered case.
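The correspondence quoted above between the evolution variable $t$ and the gap energy can be checked numerically. The sketch below evaluates $t=\frac{1}{2\pi}\int_{Q_\Omega}^{Q/2}\frac{dk_t}{k_t}\,\alpha_s(k_t)$ with a one-loop running coupling; the choices $n_f=4$, $\Lambda=0.2$ GeV and the upper limit $Q/2$ are illustrative assumptions rather than the exact conventions of the paper, but they reproduce the quoted correspondence $t\simeq 0.15$ for $Q_\Omega = 1$ GeV and $Q = 100$ GeV.

```python
import math

# One-loop running coupling.  The values nf = 4 and Lambda = 0.2 GeV are
# illustrative assumptions, not conventions taken from the paper.
def alpha_s(kt, Lam=0.2, nf=4):
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return 1.0 / (b0 * math.log(kt ** 2 / Lam ** 2))

def t_of(Q_Omega, Q, steps=20000):
    """t = (1/2pi) * int_{Q_Omega}^{Q/2} dkt/kt alpha_s(kt),
    via trapezoidal quadrature in ln(kt)."""
    lo, hi = math.log(Q_Omega), math.log(Q / 2)
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        total += w * alpha_s(math.exp(lo + i * h))
    return total * h / (2 * math.pi)

print(t_of(1.0, 100.0))   # roughly 0.15 with these assumptions
```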
In Fig. \[fig:ngs\] we plot the curves for the primary and full results (in the large $N_c$ limit) for the integrated quantity $\Sigma(t)$ as a function of $t$ defined earlier. We note that for $R=0.5$ the primary result is essentially identical to the Sudakov result. The non-global contribution (which is the ratio of the full and primary curves) is however still quite significant. Neglecting it leads to an overestimate of $40 \%$ for $t =0.15$. Increasing the jet radius in a bid to lower the non-global component we note that for $R=0.7$ the impact of the non-global component is now just over $20 \%$ while the difference between the full primary result and the Sudakov result is small (less than $5 \%$). The situation for $R=1$ is a bit different. Here it is the non-global logarithms that are only a $5\%$ effect (compared to the $20 \%$ claimed earlier [@AS1]) while the full primary result is bigger than the Sudakov term by around $11 \%$.
The value $R=1$ is in fact the one used in the HERA analyses of gaps-between–jets in photoproduction. It is now clear that such analyses will have a very small non-global component and a moderate effect on primary emissions due to clustering. In order to completely account for the primary emission case for dijet photoproduction one would need to generalise the calculations presented here for a single $q\bar{q}$ dipole to the case of several hard emitting dipoles. An exactly similar calculation would be needed for the case of hadron-hadron collisions and this is work in progress. It is straightforward however to at least estimate the effect of our findings on the photoproduction case and we deal with this in the following section.
Gaps between jets at HERA – the ZEUS analysis
=============================================
We can test the perturbative framework presented in this paper with energy flow measurements in the photoproduction of dijets. These energy flow observables are defined with two hard jets in the central detector region separated by a gap in pseudorapidity. A gap event is defined when the sum of the hadronic transverse energy in the gap is less than a cut-off, and the gap fraction is defined as the ratio of the gap cross-section to the total inclusive cross-section. The energy flow observables measured by ZEUS [@ZEUS] and H1 [@h1] use the $k_t$ clustering definition of the hadronic final state, and the transverse energy in the gap is given by the sum of the mini-jet contributions. In this paper we focus on the ZEUS measurements and provide revised theoretical estimates for them. These revisions lead to changes that are minor in the context of the overall theoretical uncertainty but should become more significant once the matching to fixed higher-orders is carried out and an estimate of the next-to–leading logarithms is obtained. The H1 data was considered in Ref. [@AS2], where the theoretical analysis consisted of only the resummed primary emission contribution without taking account of the effect of $k_t$ clustering.
The ZEUS data was obtained by colliding 27.5 GeV positrons with 820 GeV protons, with a total integrated luminosity of 38.6 $\pm$ 1.6 pb$^{-1}$ in the 1996-1997 HERA running period. The full details of the ZEUS analysis can be found in Ref. [@ZEUS], but the cuts relevant to the calculations in this paper are: $$\begin{aligned}
0.2 < y < 0.75\,, \nonumber \\
Q^2 < 1\,\mathrm{GeV}^2\,, \nonumber \\
6,5 \, \mathrm{GeV} < E_T(1,2)\,, \nonumber \\
|\eta(1,2)| < 2.4\,, \nonumber \\
|0.5(\eta_1 + \eta_2)| < 0.75\,, \nonumber \\
2.5 < \Delta\eta < 4\,,\nonumber\end{aligned}$$ where $y$ is the inelasticity, $Q^2$ is the virtuality of the photon, $E_T(1,2)$ are the transverse energies of the two hard jets, $\eta(1,2)$ are the pseudorapidities of the two hardest jets and $\Delta\eta$ is the jet rapidity difference. The further requirement for the gap sample is $E_{t,\,\mathrm{gap}}\, <Q_\Omega=$ 0.5, 1, 1.5, 2 GeV, and the clustering parameter $R$ is always taken to be unity.
The theoretical prediction for the gap fraction is composed of the primary piece, with corrections due to clustering, and the non-global piece. We shall now describe each in turn.
The resummed primary contribution, ignoring the clustering corrections, is obtained from the factorisation methods of Sterman et al [@KS] and is described in Ref. [@AS2]. The four-jet case of photoproduction requires a matrix formalism, and the exponents of the Sudakov factors in the gap cross-section are anomalous dimension matrices over the basis of possible colour flows of the hard sub-process. The emission of soft gluons causes mixing of the colour basis. Consideration of the eigenvectors and eigenvalues of the anomalous dimension matrices, together with sub-process–dependent hard and soft matrices, allows the resummed four-jet primary emission differential cross-section to be written as [@AS2]: $$\label{primary} \frac{d\sigma}{d\eta}=\sum_{L,I}H_{IL}S_{LI}\exp
\left\{\left(\lambda_L^{*} (\eta,\Omega)+\lambda_I
(\eta,\Omega)\right)\int_{p_t}^{Q_{\Omega}} \frac{d\mu}{\mu}
\alpha_s(\mu)\right\},$$ where $H$ and $S$ denote the hard and soft matrices (expanded over the colour basis), $\lambda$ denotes the eigenvalues of the anomalous dimension matrices, $\eta=\Delta\eta/2$ and $p_t$ is the hard scale of the process. This was computed in Ref. [@AS2] for the case of photoproduction and energy flow observables measured by H1. In this paper we have recomputed this differential gap cross-section for the observable defined by the ZEUS collaboration. The uncertainty in the renormalisation scale is quantified by varying the hard scale in the resummation by a factor of 2 (upper bound) and 0.5 (lower bound).
We now need to account for the effect of clustering on Eq. . Since we do not have as yet the full results for the four-jet case of photoproduction we simply estimate the full correction as the square of the correction arising in the two-jet case dealt with here, using the appropriate colour factors for each hard sub-process. This was also the method used to approximate the non-global contribution for the four-jet case in Ref. [@AS2]. While we emphasise that this is only a rough way of examining the impact of the clustering dependent terms computed here, given the size of the effects we are dealing with, it is clear that no significant differences ought to emerge if one were to properly compute the various dipole sub-processes we need to account for. We also include the revised and virtually negligible non-global component in an identical fashion to arrive at the best current theoretical estimates.
The results for the ZEUS gap-fraction with a $k_t$-defined final state are shown in Figs. \[Fig:ZEUS1\] and \[Fig:ZEUS2\]. We consider here two different values for the gap energy $Q_\Omega$. For the value of $Q_\Omega = 0.5$ GeV one notes that the full prediction accounting approximately for all additional sources of single-logarithmic enhancements, is somewhat higher than the pure “Sudakov” type prediction. This is due to the extra primary terms we compute here, non-global corrections being negligible. For a larger value of $Q_\Omega = 1.0$ GeV the difference between the clustered and unclustered primary results is negligible. We also note the large theoretical uncertainty on the prediction as represented by the renormalisation scale dependence. This is to be expected in light of the fact that the predictions here are not matched to fixed-order and account only for the leading logarithms. Improvements along both these directions should be possible in the immediate future after which the role of the various effects we highlighted here should be revisited.
Conclusions
===========
In the present paper we have shed further light on resummations of $k_t$-clustered final states. We have shown that both the primary and non-global components of the resummed result are affected by clustering and dealt with the resummation of each in turn. For the non-global component we find that the results after applying clustering are different from those presented earlier [@AS1]. The new results we present here indicate an even smaller non-global component than previously believed.
We have also shown how the primary emission clustering effects can be resummed to all orders as an expansion in the clustering parameter $R$ and computed a few terms of the series. The analytical results we have provided here for a single emitting dipole should be generalisable to the case of several hard dipoles (multi-jet processes). This should then enable one to write a correct resummed result for primary emissions to a high accuracy and deal with the reduced non-global component in the large $N_c$ limit. Such progress is relevant not just to energy-flow studies but to any jet observable of a non-global nature, requiring resummation. An example is the azimuthal angle $\Delta \phi_{jj}$ between jets, mentioned previously. The work we have carried out should enable next-to–leading log calculations of such jet observables to sufficient accuracy to enable phenomenological studies of the same.
Lastly we have also mentioned the impact of the new findings on the ZEUS gaps-between–jets analysis. Since the non-global effects are very small for $R=1$ the main new effect is the additional clustering dependent primary terms we computed here. Approximating the effect of these terms for the case of photoproduction, somewhat changes the theoretical predictions but this change is insignificant given the large theoretical uncertainty that arises due to missing higher orders and unaccounted for next-to–leading logarithms. We consider both these areas as avenues for further work and hope that more stringent comparisons can thus be made in the very near future.
[99]{}
M. Dasgupta and G.P. Salam, *Resummation of non-global QCD observables*. M. Dasgupta and G.P. Salam, *Accounting for coherence in interjet $E_t$ flow: a case study*. A. Banfi, G. Marchesini and G. Smye, *Away-from-jet energy flow*. J.R. Forshaw, A. Kyrieleis and M.H. Seymour, *Super-leading logarithms in non-global observables in QCD?*. M. Dasgupta and G.P. Salam, *Resummed event-shape variables in DIS*. G. Marchesini and B.R. Webber, *Associated transverse energy in hadronic jet production*.
C.F. Berger, T. Kucs and G. Sterman, *Energy flow in interjet radiation*. R.B. Appleby and M.H. Seymour, *Non-global logarithms in inter-jet energy flow with $k_t$ clustering requirement*. S. Catani, Yu.L. Dokshitzer, M.H. Seymour and B.R. Webber, *Longitudinally-invariant $k_{\perp}$-clustering algorithms for hadron-hadron collisions*.
S.D. Ellis and D.E. Soper, *Successive combination jet algorithm for hadron collisions*. M. Cacciari and G.P. Salam, *Dispelling the $N^3$ myth for the $k_t$ jet-finder*.
R.B. Appleby and M.H. Seymour, *The resummation of inter-jet energy flow for gaps-between-jets processes at HERA*. A. Banfi and M. Dasgupta, *Problems in resumming interjet energy flows with $k_t$ clustering*. N. Kidonakis, G. Oderda and G. Sterman, *Threshold resummation for dijet cross sections*. ZEUS Collaboration, *Photoproduction of events with rapidity gaps between jets at HERA*, in preparation.
C. Adloff et al. \[H1 Collaboration\], *Energy flow and rapidity gaps between jets in photoproduction at HERA*.
[^1]: For recent progress on aspects of the $k_t$ algorithm itself see Ref. [@CacSal].
[^2]: Since we are here dealing with back-to–back jets we can define the rapidity with respect to the jet axis or equivalently, for our purposes, the thrust axis.
[^3]: We use the term “Sudakov” in a loose sense since the primary emission result leads to an exponential that is analogous to a Sudakov form-factor.
[^4]: This is because the underlying event will contaminate jets less if one chooses a smaller $R$.
---
abstract: 'We consider linear preferential attachment trees, and show that they can be regarded as random split trees in the sense of Devroye (1999), although with infinite potential branching. In particular, this applies to the random recursive tree and the standard preferential attachment tree. An application is given to the sum over all pairs of nodes of the common number of ancestors.'
address: 'Department of Mathematics, Uppsala University, PO Box 480, SE-751 06 Uppsala, Sweden'
author:
- Svante Janson
date: '16 June, 2017'
title: Random recursive trees and preferential attachment trees are random split trees
---
Introduction {#S:intro}
============
The purpose of this paper is to show that the linear preferential attachment trees, a class of random trees that includes and generalises both the random recursive tree and the standard preferential attachment tree, can be regarded as random split trees in the sense of @Devroye, although with infinite (potential) branching.
Recall that the random recursive tree is an unordered rooted tree that is constructed by adding nodes one by one, with each node attached as the child of an existing node chosen uniformly at random; see [e.g[.=1000]{}]{} [@Drmota Section 1.3.1]. The general preferential attachment tree is constructed in a similar way, but for each new node, its parent is chosen among the existing nodes with the probability of choosing a node $v$ proportional to $w_{d(v)}$, where $d(v)$ is the outdegree (number of existing children) of $v$, and $w_0,w_1,\dots$ is a given sequence of weights. The constant choice $w_k=1$ thus gives the random recursive tree. The preferential attachment tree made popular by @BarabasiA (as a special case of more general preferential attachment graphs) is given by the choice $w_k=k+1$; this coincides with the plane oriented recursive tree earlier introduced by @Szymanski. We shall here consider the more general linear case $$\label{chirho}
w_k=\chi k+\rho$$ for some real parameters $\chi$ and $\rho>0$, which was introduced (at least for $\chi{\geqslant}0$) by @Pittel. Thus the random recursive tree is obtained for $\chi=0$ and $\rho=1$, while the standard preferential attachment tree is the case $\chi=\rho=1$. We allow $\chi<0$, but in that case we have to assume that $\rho/|\chi|$ is an integer, say $m$, in order to avoid negative weights. (We then have $w_m=0$ so a node never gets more than $m$ children, and $w_k$ for $k>m$ are irrelevant; see further [Section \[S<0\]]{}.) See also [@SJ306 Section 6] and the further references given there. We denote by $T{^{\chi,\rho}}_n$ the random linear preferential attachment tree with $n$ nodes and the weights above.
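The growth rule just described is easy to simulate directly. The following minimal sketch (the function name `pa_tree` is ours, not standard notation) attaches each new node to an existing node with probability proportional to $w_{d(v)}=\chi d(v)+\rho$; it assumes $\chi{\geqslant}0$, so the $\chi<0$ case with capped degrees is not handled.

```python
import random

def pa_tree(n, chi, rho, rng):
    """Grow a linear preferential attachment tree on nodes 0..n-1.

    Node 0 is the root.  Each new node v chooses its parent u among the
    existing nodes with probability proportional to w_{d(u)} = chi*d(u)+rho,
    where d(u) is the current outdegree of u.  Assumes chi >= 0.
    """
    parent, outdeg = [None], [0]
    for v in range(1, n):
        weights = [chi * d + rho for d in outdeg]
        u = rng.choices(range(v), weights=weights)[0]
        parent.append(u)
        outdeg[u] += 1
        outdeg.append(0)
    return parent

rng = random.Random(2017)
rrt = pa_tree(10, 0, 1, rng)   # chi=0, rho=1: a random recursive tree
pat = pa_tree(10, 1, 1, rng)   # chi=rho=1: the standard PA tree
```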
\[R1\] Note that multiplying all $w_k$ by the same positive constant will not change the trees, so only the ratio $\chi/\rho$ is important. Hence we may normalize the parameters in some way when convenient; however, different normalizations are convenient in different situations, and therefore we keep the general and more flexible assumptions above unless we say otherwise.
Note also that our assumptions imply $w_1=\chi+\rho>0$ except in the case $\chi=-\rho$, when $w_1=0$ and $T_n{^{\chi,\rho}}$ deterministically is a path. We usually ignore that trivial case in the sequel, and assume $\chi+\rho>0$.
The three cases $\chi>0$, $\chi=0$ and $\chi<0$ give the three classes of very simple increasing trees defined and characterized by @PP-verysimple, see also [@BergeronFS92] and [@Drmota Section 1.3.3]. In fact, it suffices to consider $\chi=1$, $\chi=0$ and $\chi=-1$, see [Remark \[R1\]]{}. Then, $\chi=0$ yields the random recursive tree, as said above; $\chi=1$ yields the generalised plane oriented recursive tree; $\chi=-1$ (and $\rho=m\in{\mathbb N}$) yields the [$m$-ary increasing tree]{}, see further [Section \[S<0\]]{}.
Random split trees were defined by @Devroye as rooted trees generated by a certain recursive procedure using a stream of balls added to the root. We only need a simple but important special case (the case $s=1$, $s_0=1$, $s_1=0$ in the notation of [@Devroye]), in which case the general definition simplifies to the following (we use ${\mathcal P}$ and $P_i$ instead of ${\mathcal V}$ and $V_i$ in [@Devroye]):
Let $b{\geqslant}2$ be fixed and let ${\mathcal P}=(P_i)_1^b$ be a random vector of probabilities: in other words, $P_i{\geqslant}0$ and $\sum_{i=1}^b P_i=1$. Let ${{{\mathcal T}}_b}$ be the infinite rooted tree where each node has $b$ children, labelled $1,\dots,b$, and give each node $v\in {{{\mathcal T}}_b}$ an independent copy ${\mathcal P}{^{(v)}}=(P_i{^{(v)}})_1^b$ of ${\mathcal P}$. (These vectors are thus random, but chosen only once and fixed during the construction.) Each node in ${{{\mathcal T}}_b}$ may hold one ball; if it does, we say that the node is *full*. Initially all nodes are empty. Balls arrive, one by one, to the root of ${{{\mathcal T}}_b}$, and move (instantaneously) according to the following rules.
\[split1\] A ball arriving at an empty node stays there, making the node full.
\[split2\] A ball arriving at a node $v$ that already is full continues to a child of $v$; the child is chosen at random, with child $i$ chosen with probability $P_i{^{(v)}}$. Given the vectors ${\mathcal P}{^{(v)}}$, all these choices are made independently of each other.
The random split tree $T_n=T_n{^{{\mathcal P}}}$ is the subtree of ${{{\mathcal T}}_b}$ consisting of the nodes that contain the first $n$ balls. Note that the parameters apart from $n$ in (this version of) the construction are $b$ and the random $b$-dimensional probability vector ${\mathcal P}$ (or rather its distribution); ${\mathcal P}$ is called the *split vector*. @Devroye gives several examples of this construction (and also of other instances of his general definition). One of them is the random binary search tree, which is obtained with $b=2$ and ${\mathcal P}=(U,1-U)$, with $U\sim U(0,1)$, the uniform distribution on ${\ensuremath{[0,1]}}$. The main purpose of the definition of random split trees is that they encompass many different examples of random trees that have been studied separately; the introduction of split trees made it possible to treat them together. Some general results were proved in [@Devroye], and further results and examples have been added by other authors, see for example [@BroutinH; @Holmgren].
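A minimal simulation of the ball-dropping construction, for finite $b$, may clarify the two rules above. The helper names are ours; the split vector shown is the one for the random binary search tree just mentioned ($b=2$, ${\mathcal P}=(U,1-U)$). Drawing each node's split vector lazily, the first time that node must pass a ball on, realises the same process as fixing all the vectors in advance.

```python
import random

def split_tree(n, b, draw_split, rng):
    """Drop n balls into the infinite b-ary tree.

    Nodes are tuples of child indices (the root is ()).  draw_split(rng)
    returns one split vector (P_1,...,P_b); each node draws its vector
    once, the first time it has to pass a ball on.  Returns the set of
    full nodes, i.e. the split tree T_n.
    """
    splits, full = {}, set()
    for _ in range(n):
        v = ()
        while v in full:                      # rule 2: pass the ball on
            if v not in splits:
                splits[v] = draw_split(rng)
            i = rng.choices(range(b), weights=splits[v])[0]
            v = v + (i,)
        full.add(v)                           # rule 1: stay at an empty node
    return full

def bst_split(rng):                           # P = (U, 1-U): binary search tree
    u = rng.random()
    return (u, 1.0 - u)
```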
@Devroye considers only finite $b$, yielding trees $T_n$ where each node has at most $b$ children, but the definition above of random split trees extends to $b=\infty$, when each node can have an unlimited number of children. This is the case that we shall use. (Note that random recursive trees and linear preferential attachment trees with $\chi>0$ do not have bounded degrees; see [Section \[S<0\]]{} for the case $\chi<0$.) Our purpose is to show that with this extension, also linear preferential attachment trees are random split trees.
\[Rlabel\] The general preferential attachment tree is usually considered as an unordered tree. However, it is often convenient to label the children of each node by $1,2,3,\dots$ in the order that they appear; hence we can also regard the tree as a ordered tree. Thus both the preferential attachment trees and the split trees considered in the present paper can be regarded as subtrees of the infinite Ulam–Harris(–Neveu) tree ${{\mathcal T}}_\infty$, which is the infinite rooted ordered tree where every node has a countably infinite set of children, labelled $1,2,3,\dots$. (The nodes of ${{\mathcal T}}_\infty$ are all finite strings $\iota_1\dots\iota_m \in {\mathbb N}^*:=\bigcup_0^\infty{\mathbb N}^m$, with the empty string as the root.)
One advantage of this is that it makes it possible to talk unambiguously about inclusions among the trees. We note that both constructions above yield random sequences of trees $(T_n{^{\chi,\rho}})_{n=1}^\infty$ and $(T_n{^{{\mathcal P}}})_{n=1}^\infty$ that are increasing: $T{^{\chi,\rho}}_n\subset T{^{\chi,\rho}}_{n+1}$ and $T{^{{\mathcal P}}}_n\subset T{^{{\mathcal P}}}_{n+1}$.
\[Rsplit\] The random split tree, on the other hand, is defined as an ordered tree, with the potential children of a node labelled $1,2,\dots$. Note that these do not have to appear in order; child 2 may appear before child 1, for example.
We can always consider the random split tree as unordered by ignoring the labels. If we do so, any (possibly random) permutation of the random probabilities $P_i$ yields the same unordered split tree. (In particular, if $b$ is finite, then it is natural to permute $(P_i)_1^b$ uniformly at random, thus making all $P_i$ having the same (marginal) distribution [@Devroye]. However, we cannot do that when $b=\infty$.)
Using the GEM and Poisson–Dirichlet distributions defined in [Section \[Snot\]]{}, we can state our main result as follows. The proof is given in [Section \[Spf\]]{}, using Kingman’s paintbox representation of exchangeable partitions. ([Appendix \[AT\]]{} gives an alternative, but related, argument using exchangeable sequences instead.) In fact, the result can be said to be implicit in [@Pitman] and [@Bertoin], see [e.g[.=1000]{}]{} [@Bertoin Corollary 2.6].
\[T1\] Let $(\chi,\rho)$ be as above, and assume $\chi+\rho>0$. Then, provided the trees are regarded as unordered trees, the linear preferential attachment tree $T_n{^{\chi,\rho}}$ has, for every $n$, the same distribution as the random split tree $T_n{^{{\mathcal P}}}$ with $b=\infty$ and ${\mathcal P}\sim{\operatorname{GEM}}{\bigl(\chi/(\chi+\rho),\rho/(\chi+\rho)\bigr)}$.
Moreover, (re)labelling the children of each node in order of appearance, the sequences $(T_n{^{\chi,\rho}})_1^\infty$ and $(T_n{^{{\mathcal P}}})_1^\infty$ of random trees have the same distribution.
The same results hold also if we instead let ${\mathcal P}$ have the Poisson–Dirichlet distribution ${\operatorname{PD}}{\bigl(\chi/(\chi+\rho),\rho/(\chi+\rho)\bigr)}$.
The result extends to the trivial case $\chi+\rho=0$, with ${\mathcal P}\sim{\operatorname{GEM}}(0,0)={\operatorname{PD}}(0,0)$, [i.e[.=1000]{}]{}, $P_1=1$; in this case $T_n$ is a path.
\[CR\] The sequence of random recursive trees $(T_n){_1^\infty}=(T_n^{0,1}){_1^\infty}$ has the same distribution as the sequence of random split trees $(T_n{^{{\mathcal P}}}){_1^\infty}$ with ${\mathcal P}\sim{\operatorname{GEM}}(0,1)$ or ${\mathcal P}\sim{\operatorname{PD}}(0,1)$ (as unordered trees).
Recall that the split vector ${\operatorname{PD}}(0,1)$ appearing here also appears as, for example, the asymptotic distribution of the (scaled) sizes of the cycles in a random permutation; see [e.g[.=1000]{}]{} [@Pitman Section 3.1].
\[CPA\] The sequence of standard [preferential attachment]{} trees $(T_n){_1^\infty}\allowbreak=(T_n^{1,1}){_1^\infty}$ has the same distribution as the sequence of random split trees $(T_n{^{{\mathcal P}}}){_1^\infty}$ with ${\mathcal P}\sim{\operatorname{GEM}}(\frac12,\frac12)$ or ${\mathcal P}\sim{\operatorname{PD}}(\frac12,\frac12)$ (as unordered trees).
Note that in [Theorem \[T1\]]{} and its corollaries above, it is important that we ignore the original labels, and either regard the trees as unordered, or (re)label the children of each node in order of appearance (see [Remark \[Rlabel\]]{}); random split trees with the original labelling are different (see [Remark \[Rsplit\]]{}). In the case $\chi<0$, there is also a version for labelled trees, see [Theorem \[T2\]]{}.
We give an application of [Theorem \[T1\]]{} in [Section \[Sapp\]]{}.
Notation {#Snot}
========
If $T$ is a rooted tree, and $v$ is a node in $T$, then $T^v$ denotes the subtree of $T$ consisting of $v$ and all its descendants. (Thus $T^v$ is rooted at $v$.)
A *principal subtree* (also called branch) of $T$ is a subtree $T^v$ where $v$ is a child of the root $o$ of $T$. Thus the node set $V(T)$ of $T$ is partitioned into ${\ensuremath{\{o\}}}$ and the node sets $V(T^{v_i})$ of the principal subtrees.
For a (general) preferential attachment tree, with a given weight sequence $(w_k)_k$, the weight of a node $v$ is $w_{d(v)}$, where $d(v)$ is the outdegree of $v$. The (total) weight $w(S)$ of a set $S$ of nodes is the sum of the weights of the nodes in $S$; if $T'$ is a tree, we write $w(T')$ for $w(V(T'))$.
The Beta distribution $B({\alpha},{\beta})$ is for ${\alpha},{\beta}>0$, as usual, the distribution on ${\ensuremath{[0,1]}}$ with density function $c x^{{\alpha}-1}(1-x)^{{\beta}-1}$, with the normalization factor $c={\Gamma}({\alpha}+{\beta})/{\bigl({\Gamma}({\alpha}){\Gamma}({\beta})\bigr)}$. We allow also the limiting cases $B(0,{\beta}):={\delta}_0$ (${\beta}>0$) and $B({\alpha},0):={\delta}_1$ (${\alpha}>0$), [i.e[.=1000]{}]{}, the distributions of the deterministic variables 0 and 1, respectively.
The GEM distribution ${\operatorname{GEM}}({\alpha},\theta)$ is the distribution of a random infinite vector of probabilities $(P_i)_1^\infty$ that can be represented as $$\label{gem}
P_i=Z_i\prod_{j=1}^{i-1}(1-Z_j),
\qquad i{\geqslant}1,$$ where the $Z_j$ are independent random variables with Beta distributions $$\label{gemZ}
Z_j\sim B(1-{\alpha},\theta+j{\alpha}).$$ Note that has the interpretation that $P_1=Z_1$, $P_2$ is a fraction $Z_2$ of the remaining probability $1-P_1$, $P_3$ is a fraction $Z_3$ of the remainder $1-P_1-P_2=(1-Z_1)(1-Z_2)$, and so on. Here the parameters ${\alpha}$ and $\theta$ are assumed to satisfy $-\infty<{\alpha}<1$ and $\theta+{\alpha}{\geqslant}0$; furthermore, if ${\alpha}<0$, then ${\theta}/|{\alpha}|$ has to be an integer. (If ${\alpha}<0$ and ${\theta}=m|{\alpha}|$, then $Z_m=1$, and thus yields $P_{i}=0$ for all $i>m$; hence it does not matter that $Z_j$ really is defined only for $j{\leqslant}m$ in this case.) See further [e.g[.=1000]{}]{} [@Pitman Section 3.2].
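The stick-breaking description translates directly into a sampler. The sketch below (our own, standard library only) returns the first $k$ coordinates of a ${\operatorname{GEM}}({\alpha},\theta)$ vector; the degenerate Beta conventions needed when ${\alpha}<0$ are not handled.

```python
import random

def sample_gem(alpha, theta, k, rng):
    """First k coordinates of a GEM(alpha, theta) vector:
    P_i = Z_i * prod_{j < i} (1 - Z_j),  Z_j ~ Beta(1 - alpha, theta + j*alpha).
    Requires 0 <= alpha < 1 and theta + alpha > 0 (the degenerate Beta
    cases arising for alpha < 0 are not handled here)."""
    P, stick = [], 1.0
    for j in range(1, k + 1):
        z = rng.betavariate(1.0 - alpha, theta + j * alpha)
        P.append(stick * z)          # a fraction z of the remaining mass
        stick *= 1.0 - z
    return P

rng = random.Random(7)
P = sample_gem(0.5, 0.5, 20, rng)    # the split vector for the standard PA tree
```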
The Poisson–Dirichlet distribution ${\operatorname{PD}}({\alpha},\theta)$ is the distribution of the random infinite vector $(\hat P_i){_1^\infty}$ obtained by reordering $(P_i){_1^\infty}\sim{\operatorname{GEM}}({\alpha},\theta)$ in decreasing order.
Proof of [Theorem \[T1\]]{} {#Spf}
===========================
\[L1\] With the linear weights , a tree $T$ with $m$ nodes has total weight $w(T)=(m-1)\chi+m\rho=m(\chi+\rho)-\chi$.
Let the nodes have outdegrees $d_1,\dots,d_m$. Then ${\sum_{i=1}^m}d_i=m-1$, and the weight of the tree is thus $$w(T)=
{\sum_{i=1}^m}(\chi d_i+\rho)
= \chi {\sum_{i=1}^m}d_i+m\rho=(m-1)\chi+m\rho.
\qedhere$$
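The identity $w(T)=(m-1)\chi+m\rho$ is easy to verify numerically on any concrete tree; a small check, using our own parent-array encoding of a rooted tree:

```python
def total_weight(parent, chi, rho):
    """w(T) for a tree given as a parent array (parent[0] = None is the root)."""
    outdeg = [0] * len(parent)
    for v in range(1, len(parent)):
        outdeg[parent[v]] += 1
    return sum(chi * d + rho for d in outdeg)

# Since the outdegrees of an m-node tree sum to m - 1 (every node except
# the root is counted once as a child), this always equals (m-1)*chi + m*rho.
parent = [None, 0, 0, 1, 1, 1, 2]   # an arbitrary tree on m = 7 nodes
assert total_weight(parent, 1.0, 1.0) == 6 * 1.0 + 7 * 1.0
```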
\[L2\] Consider the sequence of [linear [preferential attachment]{}]{} trees $(T_n)_1^\infty=(T{^{\chi,\rho}}_n)_1^\infty$, with the children of the root labelled in order of appearance. Let $N_j(n):=|T^j_n|$, the size of the $j$-th principal subtree of $T_n$. Then $N_j(n)/n\to P_j$ [a.s[.=1000]{}]{} as [${n\to\infty}$]{}, for every $j{\geqslant}1$ and some random variables $P_j$ with the distribution ${\operatorname{GEM}}{\bigl(\chi/(\chi+\rho),\rho/(\chi+\rho)\bigr)}$. (In the trivial case $\chi+\rho=0$, interpret this as ${\operatorname{GEM}}(0,0)$.)
The case $\chi+\rho=0$ is trivial, with $N_1(n)=n-1$ and $P_1=1$. Hence we may assume that $\chi+\rho>0$. Furthermore, see [Remark \[R1\]]{}, we may and shall, for convenience, assume that $$\label{=1}
\chi+\rho=1.$$
The lemma now follows from @Pitman [Theorem 3.2], which is stated for “the Chinese restaurant with the $({\alpha},\theta)$ seating plan”, since we may regard the principal subtrees as tables in a Chinese restaurant (ignoring the root), and then the [preferential attachment]{} model with translates into the $(\chi,\rho)$ seating plan as defined in [@Pitman]. (Cf. the bijection between recursive trees and permutations in [@Drmota Section 6.1.1], which yields this correspondence; the uniform case treated there is the case $(\chi,\rho)=(0,1)$, which yields the usual Chinese restaurant process.)
For completeness, we give a direct proof using [Pólya]{} urns in [Appendix \[AL2\]]{}.
Recall that in the (general) preferential attachment tree, the parent $u$ of a new node is chosen to be a node $v$ with probability proportional to the current weight $w_{d(v)}$ of the node. We can make this random choice in several steps, by first deciding randomly whether $u$ is the root or not, and if not, which principal subtree it belongs to, making this choice with probabilities proportional to the total weights of these sets of nodes. If $u$ is chosen to be in a subtree $T^w$, we then continue recursively inside this tree, by deciding randomly whether $u$ is the root of $T^w$ or not, and if not, which principal subtree of $T^w$ it belongs to, again with probabilities proportional to the total weights, and so on.
Consequently, the general preferential attachment tree can be constructed recursively using a stream of new nodes (or balls) similarly to the random split tree, with the rules:
\[pax1\] A ball arriving at an empty node stays there, making the node full.
\[pax2\] A ball arriving at a node $v$ that already is full continues to a child of $v$. The child is chosen at random; if $v$ has $d$ children $v_1,\dots,v_d$, then the ball is passed to child $i$ with probability $cw(T^{v_i})$ for each $i=1,\dots,d$, and to the new child $d+1$ with probability $cw(v)=c(\chi
d+\rho)$, where $c=1/w(T^v)$ is a positive normalization factor.
Thus both the random split trees and the linear preferential attachment trees can be constructed recursively, and in order to show [Theorem \[T1\]]{}, it suffices to show that the two constructions yield the same result at the root, i.e., that balls after the first are passed on to the children of the root in the same way in both random trees. (Provided we ignore the order of the children, or (re)label the children in order of appearance.)
Consider the linear preferential attachment tree with the construction above. As in the proof of [Lemma \[L2\]]{}, we may assume that holds.
Label the children of the root in order of appearance, see [Remark \[Rlabel\]]{}. The first ball stays at the root, while all others are passed on; we label each ball after the first by the label of the child of the root that it is passed to. This gives a random sequence $(X_i)_{i=1}^\infty$ of labels in ${\mathbb N}$, (where $X_i$ is the label of ball $i+1$, the $i$th ball that is passed on). By construction, the random sequence $(X_i)_i$ is such that the first 1 appears before the first 2, which comes before the first 3, and so on; we call a finite or infinite sequence $(x_i)_i$ of labels in ${\mathbb N}$ *acceptable* if it has this property.
Let $(x_i)_1^n$ be a finite acceptable sequence of length $n{\geqslant}0$, and let $n_k$ be the number of times $k$ appears in the sequence; further, let $d_n$ be the largest label in the sequence, so $n_k{\geqslant}1$ if $1{\leqslant}k{\leqslant}d_n$, but $n_k=0$ if $k>d_n$. If $(X_i)_1^n=(x_i)_1^n$, then the subtree $T^k$ with label $k$ has $n_k$ nodes, and thus by [Lemma \[L1\]]{} and our assumption weight $n_k(\chi+\rho)-\chi=n_k-\chi$, provided $k{\leqslant}d_n$, while the root has weight $\chi d_n+\rho$. Hence, by the construction above, noting that the tree has $n+1$ nodes and thus by [Lemma \[L1\]]{} weight $(n+1)-\chi=n+\rho$, $${\operatorname{\mathbb P{}}}{\bigl(X_{n+1}=k\mid (X_i)_1^n=(x_i)_1^n\bigr)}=
\begin{cases}
(n_k-\chi)/(n+\rho), & 1{\leqslant}k{\leqslant}d_n,
\\
(d_n\chi+\rho)/(n+\rho), & k= d_n+1.
\end{cases}$$ It follows by multiplying these probabilities for $n=0$ to $N-1$ and rearranging factors in the numerator (or by induction) that, letting $d:=d_N$ and $N_k:=n_k$ for $n=N$, $$\begin{split}
{\operatorname{\mathbb P{}}}{\bigl((X_i)_1^N=(x_i)_1^N\bigr)}
= \frac{\prod_{j=0}^{d-1}(j\chi+\rho)\prod_{k=1}^d \prod_{n_k=1}^{N_k-1}(n_k-\chi)}
{\prod_{n=0}^{N-1}(n+\rho)}.
\end{split}$$ In particular, note that this probability depends on the sequence $(x_i)_1^N$ only through the numbers $N_k$. Consequently, if $(x_i')_1^N$ is another acceptable sequence that is a permutation of $(x_i)_1^N$, then $$\label{exch}
{\operatorname{\mathbb P{}}}{\bigl((X_i)_1^N=(x_i)_1^N\bigr)}={\operatorname{\mathbb P{}}}{\bigl((X_i)_1^N=(x_i')_1^N\bigr)}.$$
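The transition rule above is precisely the seating rule of the two-parameter Chinese restaurant process with parameters $(\chi,\rho)$ (see [@Pitman]), so the label sequence $(X_i)$ can be sampled directly; a small illustrative Python sketch (ours, not used in the proof):

```python
import random

def label_sequence(N, chi, rho, rng):
    """Sample X_1, ..., X_N: given counts n_k and d_n distinct labels so far,
    X_{n+1} = k with probability (n_k - chi)/(n + rho) for an existing label k,
    and X_{n+1} = d_n + 1 with probability (d_n*chi + rho)/(n + rho)."""
    counts = []       # counts[k-1] = n_k
    seq = []
    for n in range(N):
        x = rng.random() * (n + rho)
        for k, nk in enumerate(counts):
            if x < nk - chi:
                counts[k] += 1
                seq.append(k + 1)
                break
            x -= nk - chi
        else:          # open the new label d_n + 1
            counts.append(1)
            seq.append(len(counts))
    return seq

seq = label_sequence(300, 0.5, 1.0, random.Random(3))
```

By construction every sampled sequence is acceptable: the first occurrence of label $k$ precedes the first occurrence of $k+1$.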
Return to the infinite sequence $(X_i)_1^\infty$. This sequence encodes a partition of ${\mathbb N}$ into the sets $A_j:={\ensuremath{\{k\in{\mathbb N}:X_k=j\}}}$, and interpreted in this way, the exchangeability relation above says that the random partition ${\ensuremath{\{A_j\}}}_j$ of ${\mathbb N}$ is an exchangeable random partition; see [e.g[.=1000]{}]{} [@Bertoin Section 2.3.2] or [@Pitman Chapter 2]. (See [Appendix \[AT\]]{} for a version of the argument without using the theory of exchangeable partitions.) By Kingman’s paintbox representation theorem [@Kingman-partition; @Kingman-coalescent; @Pitman; @Bertoin], any exchangeable random partition of ${\mathbb N}$ can be constructed as follows from some random subprobability vector $(P_i)_1^\infty$, [i.e[.=1000]{}]{}, a random vector with $P_i{\geqslant}0$ and $\sum_iP_i{\leqslant}1$: Let $P_\infty:=1-\sum_{i<\infty}P_i{\geqslant}0$. Let $Y_i\in{\mathbb N}\cup{\ensuremath{\{\infty\}}}$ be [i.i.d[.=1000]{}]{} random variables with the distribution $(P_i)_1^\infty$. Then the equivalence classes are ${\ensuremath{\{i:Y_i=k\}}}$ for each $k<\infty$, and the singletons ${\ensuremath{\{i\}}}$ for each $i$ with $Y_i=\infty$.
In the present case, [Lemma \[L2\]]{} shows that every principal subtree $T^j$ satisfies either $|T^j(n)|\to\infty$ as [${n\to\infty}$]{}, or $T^j(n)$ is empty for all $n$ (when $\chi<0$ and $\rho=m|\chi|$ with $m<j$). Hence, the equivalence classes defined by $(X_i)_1^\infty$ are either empty or infinite, so there are no singletons. Thus $P_\infty=0$, and $(P_i)_1^\infty$ is a random probability vector. Moreover, the paintbox construction is precisely what the split tree construction \[split1\]–\[split2\] does at the root, provided we ignore the labels on the children.
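In code, the paintbox construction for the case $P_\infty=0$ relevant here reads as follows; a minimal illustrative sketch (ours), for a fixed deterministic probability vector:

```python
import random

def paintbox(p, n, rng):
    """Kingman's paintbox for a probability vector p (so P_infty = 0):
    draw i.i.d. colours Y_1, ..., Y_n with distribution p and group
    equal colours into blocks of {1, ..., n}."""
    blocks = {}
    for i in range(1, n + 1):
        x, k = rng.random(), 0
        while k < len(p) - 1 and x >= p[k]:   # invert the c.d.f. of p
            x -= p[k]
            k += 1
        blocks.setdefault(k, []).append(i)
    return list(blocks.values())

part = paintbox([0.5, 0.3, 0.2], 100, random.Random(4))
```

Since $\sum_i p_i=1$, no singletons of the second kind occur, exactly as in the situation of the proof.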
Consequently, the sequence of random split trees $T_n{^{{\mathcal P}}}$ with this random split vector ${\mathcal P}=(P_i)_1^\infty$ has the same distribution as the sequence $(T_n{^{\chi,\rho}}){_1^\infty}$, provided that we ignore the labels of the children, or (equivalently) relabel the children of a node in the split trees by their order of appearance. It remains to identify the split vector ${\mathcal P}$.
Let $T^j_n$ be the principal subtree of the split tree $T_n{^{{\mathcal P}}}$ whose root is labelled $j$, and let $N_j(n):=|T^j_n|$. Then, by the law of large numbers, as [${n\to\infty}$]{}, $$\label{LLN}
N_j(n)/n{\overset{\mathrm{a.s.}}{{\longrightarrow}}}P_j,
\qquad j{\geqslant}1.$$ Recall that we may permute the probabilities $P_i$ arbitrarily, see [Remark \[Rsplit\]]{}. Let us relabel the children of the root in their order of appearance, and permute the $P_i$ correspondingly; thus still holds. Moreover, we have shown that the tree also can be regarded as a [linear [preferential attachment]{}]{} tree, and with this labelling of the children, [Lemma \[L2\]]{} applies. Consequently, and [Lemma \[L2\]]{} yield $(P_i){_1^\infty}\sim{\operatorname{GEM}}(\chi,\rho)$.
Finally, ${\operatorname{PD}}(\chi,\rho)$ is by definition a permutation of ${\operatorname{GEM}}(\chi,\rho)$, and thus these two split vectors define random split trees with the same distribution (as unordered trees).
An auxiliary result {#Saux}
===================
In the theory of random split trees, an important role is played by the random variable $W$ defined as a size-biased sample from the split vector ${\mathcal P}$; in other words, we first sample ${\mathcal P}=(P_i){_1^\infty}$, then sample $I\in{\mathbb N}$ with the distribution ${\operatorname{\mathbb P{}}}(I=i)=P_i$, and finally let $W:=P_I$. Consequently, for any $r{\geqslant}0$, $$\label{sizebiased}
{\operatorname{\mathbb E{}}}W^r ={\operatorname{\mathbb E{}}}\sum_i P_i P_i^r=\sum_i {\operatorname{\mathbb E{}}}P_i^{r+1}.$$
We have a simple result for the distribution of $W$ in our case.
\[LW\] For the random split tree in [Theorem \[T1\]]{}, $W\sim B{\bigl(\rho/(\chi+\rho),1\bigr)}$. Thus $W$ has density function ${\gamma}x^{{\gamma}-1}$ on $(0,1)$, where ${\gamma}=\rho/(\chi+\rho)$.
Let $X_n$ be the number of nodes in $T_n$ that are descendants of the first node added after the root (including that node itself). In the split tree $T_n{^{{\mathcal P}}}$, let $I$ be the label of the subtree containing the first node added after the root. Conditioned on the split vector ${\mathcal P}$ at the root, by definition ${\operatorname{\mathbb P{}}}(I=i\mid{\mathcal P})=P_i$. Furthermore, still conditioned on ${\mathcal P}$, the law of large numbers yields that if $I=i$, then $X_n/n{\overset{\mathrm{a.s.}}{{\longrightarrow}}}P_i$. Hence, $X_n/n{\overset{\mathrm{a.s.}}{{\longrightarrow}}}P_I=W$.
On the other hand, in the [preferential attachment]{} tree $T_n{^{\chi,\rho}}$ with children labelled in order of appearance, the first node after the root always gets label 1 and thus in the notation of [Lemma \[L2\]]{}, $X_n=N_1(n)$. Consequently, [Lemma \[L2\]]{} implies $X_n/n{\overset{\mathrm{a.s.}}{{\longrightarrow}}}P_1$. Since [Theorem \[T1\]]{} implies that $X_n$ has the same distribution in the two cases, $W{\overset{\mathrm{d}}{=}}P_1$. Furthermore, by –, assuming again for simplicity , $P_1=Z_1\sim B(1-\chi,\chi+\rho)=B(\rho,1)$.
Thus $W{\overset{\mathrm{d}}{=}}P_1$ for our GEM distribution. This is only a special case of the general result that rearranging the $P_i$ in size-biased order preserves ${\operatorname{GEM}}({\alpha},\theta)$ for any pair of parameters, see [@Pitman Section 3.2].
By [Lemma \[LW\]]{} we have ${\operatorname{\mathbb E{}}}W={\gamma}/({\gamma}+1)$, and thus by $$\label{hex}
{\sum_{i=1}^\infty}{\operatorname{\mathbb E{}}}P_i^2={\operatorname{\mathbb E{}}}W = \frac{\rho}{\chi+2\rho}.$$ It is possible to calculate the sum in directly, using the definitions –, but the calculation is rather complicated: $$\begin{split}
{\sum_{i=1}^\infty}{\operatorname{\mathbb E{}}}P_i^2& = {\sum_{i=1}^\infty}{\operatorname{\mathbb E{}}}Z_i^2\prod_{j<i}{\operatorname{\mathbb E{}}}(1-Z_j)^2
\\&
={\sum_{i=1}^\infty}\frac{(1-{\alpha})(2-{\alpha})\prod_{1}^{i-1}(\theta+j{\alpha})(\theta+1+j{\alpha})}
{\prod_{1}^{i}(\theta+1+(j-1){\alpha})(\theta+2+(j-1){\alpha})}
\\&
= \frac{(1-{\alpha})(2-{\alpha})}{(\theta+1)(\theta+2)}
{\sum_{i=1}^\infty}\prod_{1}^{i-1}\frac{\theta+j{\alpha}}{\theta+2+j{\alpha}}.
\end{split}$$ The last sum can be evaluated, for example by writing it as the hypergeometric function $F(\theta/{\alpha}+1,1;(\theta+2)/{\alpha}+1;1)$ and using Gauss’s formula [@NIST (15.4.20)], leading to the same result. The proof above seems simpler.
An application {#Sapp}
==============
@Devroye showed general results on the height and insertion depth for split trees, and used them to give results for various examples. The theorems in [@Devroye] assume that the split vectors are finite, so the trees have bounded degrees, but they may be extended to the present case, using [e.g[.=1000]{}]{} (for the height) results on branching random walks [@Biggins76; @Biggins77] and methods of [@BroutinDevroye], [@BroutinDevroyeEtAl2008]. However, for the [linear [preferential attachment]{}]{} trees, the height and insertion depth are well known by other methods, see [e.g[.=1000]{}]{} [@Pittel], [@SJ306]; hence we give instead another application. For a rooted tree $T$, let $h(v)$ denote the depth of a node $v$, [i.e[.=1000]{}]{}, its distance to the root. Furthermore, for two nodes $v$ and $w$, let $v\land w$ denote their last common ancestor. We define $$\label{Y}
Y=Y(T):=\sum_{v\neq w} h(v\land w),$$ summing over all pairs of distinct nodes. (For definiteness, we sum over ordered pairs; summing over unordered pairs is the same except for a factor $\frac12$. We may modify the definition by including the case $v=w$; this adds the total pathlength which [a.s[.=1000]{}]{} is of order $O(n\log n)$, see below, and thus does not affect our asymptotic result.)
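Computing $Y(T)$ for a concrete finite tree is straightforward from parent pointers; a short illustrative sketch (ours, assuming nodes are numbered so that each parent precedes its children):

```python
def Y(parent):
    """Y(T) = sum of h(v ^ w) over ordered pairs of distinct nodes, where
    parent[v] is the parent of node v and parent[root] = -1.  Assumes the
    nodes are numbered so that parent[v] < v."""
    n = len(parent)
    depth = [0] * n
    for v in range(1, n):
        depth[v] = depth[parent[v]] + 1

    def lca_depth(v, w):
        while v != w:                 # climb the deeper of the two nodes
            if depth[v] < depth[w]:
                v, w = w, v
            v = parent[v]
        return depth[v]

    return sum(lca_depth(v, w) for v in range(n) for w in range(n) if v != w)
```

For a path on $n$ nodes this gives $Y=2\sum_{0{\leqslant}i<j{\leqslant}n-1}i$, while for a star $Y=0$, since every last common ancestor is the root.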
The parameter $Y(T)$ occurs in various contexts. For example, if ${\hat W}(T)$ denotes the Wiener index and ${\hat P}(T)$ the total pathlength of $T$, then $Y(T)={\hat W}(T)-(n-1){\hat P}(T)$, see [@SJ146]. Hence, for the random recursive tree and binary search tree considered in [@Neininger-wiener], the theorems there imply convergence of $Y_n/n^2$ in distribution. We extend this to convergence [a.s[.=1000]{}]{}, and to all [linear [preferential attachment]{}]{} trees, with characterizations of the limit distribution $Q$ that are different from the one given in [@Neininger-wiener].
\[TQ\] Consider random split trees $T_n{^{{\mathcal P}}}$ of the type defined in the introduction for some random split vector ${\mathcal P}=(P_i){_1^\infty}$, and let $Y_n:=Y(T_n{^{{\mathcal P}}})$ be given by . Assume that with positive probability, $0<P_i<1$ for some $i$. Then there exists a random variable $Q$ such that $Y_n/n^2{\overset{\mathrm{a.s.}}{{\longrightarrow}}}Q$ as [${n\to\infty}$]{}. Furthermore, $Q$ has the representation given in the proof below and satisfies $$\label{EQ0}
{\operatorname{\mathbb E{}}}Q= \frac{1}{1-{\operatorname{\mathbb E{}}}\sum_i P_i^2}-1
<\infty,$$ and the distributional fixed point equation $$\label{Qeq}
Q{\overset{\mathrm{d}}{=}}{\sum_{i=1}^\infty}P_i^2(1+Q^{(i)}),$$ with all $Q^{(i)}$ independent of each other and of $(P_i){_1^\infty}$, and with $Q^{(i)}{\overset{\mathrm{d}}{=}}Q$.
If $W$ is the size-biased splitting variable defined in [Section \[Saux\]]{}, then also $$\label{EQW}
{\operatorname{\mathbb E{}}}Q
=\frac{{\operatorname{\mathbb E{}}}W}{1-{\operatorname{\mathbb E{}}}W}.$$
Higher moments may be calculated from or , with some effort.
We modify the definition of split trees by never letting a ball stay at a node; we use rule \[split2\] for all nodes, and thus each ball travels along an infinite path, chosen randomly with probabilities determined by the split vectors at the visited nodes. Let $X_{k,i}$ be the number of the child chosen by ball $k$ at the $i$th node it visits, and let ${\mathbf X}_k:=(X_{k,i})_{i=1}^\infty$. Label the nodes of ${{\mathcal T}}_\infty$ by strings in ${\mathbb N}^*$ as in [Remark \[Rlabel\]]{}. Then the path of ball $k$ is $\emptyset$, $X_{k,1}$, $X_{k,1}X_{k,2}$, …, visiting the nodes labelled by initial segments of ${\mathbf X}_k$. Note that conditioned on the split vectors ${\mathcal P}{^{(v)}}$ for all $v\in{{\mathcal T}}_\infty$, the sequences ${\mathbf X}_k$ are [i.i.d[.=1000]{}]{} random infinite sequences with the distribution $$\label{emil}
{\operatorname{\mathbb P{}}}(X_{k,j}=i_j,\, 1{\leqslant}j{\leqslant}m)
=\prod_{j=1}^m P_{i_j}^{(i_1\dotsm i_{j-1})}.$$
For two sequences ${\mathbf X},{\mathbf X}'\in{\mathbb N}^\infty$, let $$f({\mathbf X},{\mathbf X}'):=\min{\ensuremath{\{i:X_i\neq X'_i\}}}-1,$$ [i.e[.=1000]{}]{}, the length of the longest common initial segment. Let $v_k$ be the node in $T_n$ that contains ball $k$, and note that if neither $v_k$ nor $v_\ell$ is an ancestor of the other, then $h(v_k\land v_\ell)=f({\mathbf X}_k,{\mathbf X}_\ell)$.
We define, as an approximation of $Y_n$, $$\label{hY}
{\widehat Y}_n:=\sum_{k,\ell{\leqslant}n,\; k\neq\ell}f({\mathbf X}_k,{\mathbf X}_\ell)
=2\sum_{\ell<k{\leqslant}n}f({\mathbf X}_k,{\mathbf X}_\ell).$$
Condition on all split vectors ${\mathcal P}{^{(v)}}$. Then, using , $$\label{Q}
\begin{split}
& {\operatorname{\mathbb E{}}}{\bigl(f({\mathbf X}_1,{\mathbf X}_2)\mid {\ensuremath{\{{\mathcal P}{^{(v)}}, v\in{{\mathcal T}}_\infty\}}}\bigr)}
\\&\qquad
={\operatorname{\mathbb E{}}}\sum_{m=1}^\infty \sum_{i_1,\dots,i_m\in{\mathbb N}}
{\boldsymbol1{\{X_{1,j}=X_{2,j}=i_j \text{ for } j=1,\dots,m\}}}
\\&\qquad
= \sum_{m=1}^\infty \sum_{i_1,\dots,i_m\in{\mathbb N}}
{\Bigl(\prod_{j=1}^m P_{i_j}^{(i_1\dotsm i_{j-1})}\Bigr)}^2
=:Q.
\end{split}$$ Hence, since the split vectors are [i.i.d[.=1000]{}]{}, $$\label{EQ}
\begin{split}
{\operatorname{\mathbb E{}}}f({\mathbf X}_1,{\mathbf X}_2)
&= {\operatorname{\mathbb E{}}}Q
= \sum_{m=1}^\infty \sum_{i_1,\dots,i_m\in{\mathbb N}}
\prod_{j=1}^m {\operatorname{\mathbb E{}}}P_{i_j}^2
={\sum_{m=1}^\infty}{\Bigl(\sum_i {\operatorname{\mathbb E{}}}P_i^2\Bigr)}^m
\\&
= \frac{1}{1-\sum_i {\operatorname{\mathbb E{}}}P_i^2}-1.
\end{split}$$ Since $\sum_i P_i^2{\leqslant}\sum_i P_i=1$, with strict inequality with positive probability, ${\operatorname{\mathbb E{}}}\sum_i P_i^2<1$, and thus shows that ${\operatorname{\mathbb E{}}}f({\mathbf X}_1,{\mathbf X}_2)<\infty$. Consequently, [a.s[.=1000]{}]{}, $$\label{efa}
Q= {\operatorname{\mathbb E{}}}{\bigl(f({\mathbf X}_1,{\mathbf X}_2)\mid {\ensuremath{\{{\mathcal P}{^{(v)}}, v\in{{\mathcal T}}_\infty\}}}\bigr)}<\infty.$$
Condition again on all split vectors ${\mathcal P}{^{(v)}}$. Then the random sequences ${\mathbf X}_k$ are [i.i.d[.=1000]{}]{}, and thus is a $U$-statistic. Hence, we can apply the strong law of large numbers for $U$-statistics by @Hoeffding, which shows that [a.s[.=1000]{}]{} $$\label{hylim}
\frac{{\widehat Y}_n}{n(n-1)}
\to
{\operatorname{\mathbb E{}}}{\bigl(f({\mathbf X}_1,{\mathbf X}_2)\mid {\ensuremath{\{{\mathcal P}{^{(v)}}, v\in{{\mathcal T}}_\infty\}}}\bigr)}=Q.$$ Consequently, also unconditionally, $$\label{hylim2}
\frac{{\widehat Y}_n}{n(n-1)}
{\overset{\mathrm{a.s.}}{{\longrightarrow}}}Q.$$
It remains only to prove that $({\widehat Y}_n-Y_n)/n^2{\overset{\mathrm{a.s.}}{{\longrightarrow}}}0$, since we already have shown , which implies by , and follows from the representation .
As noted above, if $\ell<k$, then $h(v_k\land v_\ell)=f({\mathbf X}_k,{\mathbf X}_\ell)$ except possibly when $v_\ell$ is an ancestor of $v_k$; furthermore, in the latter case $$\label{up}
0{\leqslant}h(v_k\land v_\ell){\leqslant}f({\mathbf X}_k,{\mathbf X}_\ell).$$ Let $H_n:=\max{\ensuremath{\{ h(v):v\in T_n\}}}$ be the height of $T_n=T_n{^{{\mathcal P}}}$, and let ${H^*}_n:=\max{\ensuremath{\{f({\mathbf X}_k,{\mathbf X}_\ell):\ell<k{\leqslant}n\}}}$. Since a node $v_k$ has at most $H_n$ ancestors, it follows from that, writing $v\prec w$ when $v$ is ancestor of $w$, $$\label{puh}
0{\leqslant}{\widehat Y}_n-Y_n
= 2{\sum_{k=1}^n}\sum_{v_\ell\prec v_k} {\bigl(f({\mathbf X}_k,{\mathbf X}_\ell) - h(v_k\land
v_\ell)\bigr)}
{\leqslant}2n H_n{H^*}_n.$$ Furthermore, there is some node $v_k$ with $h(v_k)=H_n$, and if $v_\ell$ is its parent, then $f({\mathbf X}_k,{\mathbf X}_\ell){\geqslant}H_n-1$; hence, $H_n{\leqslant}{H^*}_n+1$.
Let $m=m_n:={\lceil c \log n\rceil}$, where $c>0$ is a constant chosen later. Then, arguing similarly to –, $$\label{Qm}
\begin{split}
& {\operatorname{\mathbb P{}}}{\bigl(f({\mathbf X}_1,{\mathbf X}_2){\geqslant}m\mid {\ensuremath{\{{\mathcal P}{^{(v)}}, v\in{{\mathcal T}}_\infty\}}}\bigr)}
\\&\qquad
={\operatorname{\mathbb E{}}}\sum_{i_1,\dots,i_m\in{\mathbb N}}
{\boldsymbol1{\{X_{1,j}=X_{2,j}=i_j \text{ for } j=1,\dots,m\}}}
\\&\qquad
= \sum_{i_1,\dots,i_m\in{\mathbb N}}
{\Bigl(\prod_{j=1}^m P_{i_j}^{(i_1\dotsm i_{j-1})}\Bigr)}^2
\end{split}$$ and thus, letting $a:=\sum_i{\operatorname{\mathbb E{}}}P_i^2<1$, $$\label{EQm}
\begin{split}
{\operatorname{\mathbb P{}}}{\bigl( f({\mathbf X}_1,{\mathbf X}_2){\geqslant}m\bigr)}
= \sum_{i_1,\dots,i_m\in{\mathbb N}} \prod_{j=1}^m {\operatorname{\mathbb E{}}}P_{i_j}^2
=a^m.
\end{split}$$ By symmetry, we thus have $${\operatorname{\mathbb P{}}}({H^*}_n{\geqslant}m) {\leqslant}\sum_{\ell<k{\leqslant}n}{\operatorname{\mathbb P{}}}{\bigl(f({\mathbf X}_k,{\mathbf X}_\ell){\geqslant}m\bigr)} {\leqslant}n^2 a^m
{\leqslant}n^2 a^{c\log n} {\leqslant}n{^{-2}},$$ provided we choose $c{\geqslant}4/|\log a|$. Consequently, by the Borel–Cantelli lemma, [a.s[.=1000]{}]{} ${H^*}_n{\leqslant}m-1{\leqslant}c\log n$ for all large $n$. Hence, [a.s[.=1000]{}]{} for all large $n$, $$\label{hh}
H_n{\leqslant}{H^*}_n+1{\leqslant}c\log n+1,$$ and shows that [a.s[.=1000]{}]{} ${\widehat Y}_n-Y_n=O(n\log^2 n)$. In particular, $({\widehat Y}_n-Y_n)/n^2{\overset{\mathrm{a.s.}}{{\longrightarrow}}}0$, which as said above together with completes the proof.
Let $Y_n:=Y(T_n{^{\chi,\rho}})$ be given by for the [linear [preferential attachment]{}]{} tree $T_n{^{\chi,\rho}}$, and assume $\chi+\rho>0$. Then $Y_n/n^2{\overset{\mathrm{a.s.}}{{\longrightarrow}}}Q$ for some random variable $Q$ with $$\label{EQXP}
{\operatorname{\mathbb E{}}}Q = \frac{\rho}{\chi+\rho}.$$
Immediate by Theorems \[T1\] and \[TQ\], using and to obtain .
The case $\chi<0$: [$m$-ary increasing tree]{}[s]{} {#S<0}
===================================================
In this section we consider the case $\chi<0$ of [linear [preferential attachment]{}]{} tree[s]{} further; as noted above, this case has some special features. By [Remark \[R1\]]{}, we may assume $\chi=-1$, and then by our assumptions, $\rho>0$ is necessarily an integer, say $\rho=m\in{\mathbb N}$. As said in [Remark \[R1\]]{}, the case $m=1$ is trivial, with $T_n{^{-1,1}}$ a path, so we are mainly interested in $m\in{\ensuremath{\{2,3,\dots\}}}$.
By , $w_m=0$, and thus no node in $T_n{^{-1,m}}$ will get more than $m$ children. In other words, the trees will all have outdegrees bounded by $m$. It follows from [Lemma \[L2\]]{}, or directly from –, that if, as in [Theorem \[T1\]]{}, $(P_i){_1^\infty}\sim{\operatorname{GEM}}{\bigl(-\frac{1}{m-1},\frac{m}{m-1}\bigr)}$, then $P_j=0$ for $j>m$. Consequently, in this case, the split tree can be defined using a finite split vector $(P_j)_1^b$ as in Devroye’s original definition (with $b=m$).
Recall that an [$m$-ary]{} tree is a rooted tree where each node has at most $m$ children, and the children are labelled by distinct numbers in [$\{1,\dots,m\}$]{}; in other words, a node has $m$ potential children, labelled $1,\dots,m$, although not all of these have to be present. (Potential children that are not nodes are known as external nodes.) The [$m$-ary]{} trees can also be defined as the subtrees of the infinite [$m$-ary]{} tree ${{\mathcal T}}_m$ that contain the root. Note that [$m$-ary]{} trees are ordered, but that the labelling includes more information than just the order of children (for vertices of degree less than $m$).
It is natural to regard the trees $T_n{^{-1,m}}$ as $m$-ary trees by labelling the children of a node by $1,\dots,m$ in (uniformly) random order. It is then easy to see that the construction above, with $w_k=m-k$ by , is equivalent to adding each new node at random uniformly over all positions where it may be placed in the infinite tree ${{\mathcal T}}_m$, [i.e[.=1000]{}]{}, by converting a uniformly chosen random external node to a node; see [@Drmota Section 1.3.3]. Regarded in this way, the trees $T_n{^{-1,m}}$ are called [$m$-ary increasing tree]{}[s]{} (or $m$-ary recursive trees). See also [@BergeronFS92 Example 1].
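The external-node description yields a particularly simple simulation: keep the current list of external nodes and convert a uniformly random one at each step. A minimal sketch (ours):

```python
import random

def mary_increasing_tree(n, m, rng):
    """Grow an m-ary increasing tree with n nodes: each new node replaces
    a uniformly chosen external node (free child slot) of the current tree."""
    parent = [-1]                           # node 0 is the root
    external = [(0, j) for j in range(m)]   # free (parent, child-slot) pairs
    for v in range(1, n):
        i = rng.randrange(len(external))
        p, _slot = external.pop(i)          # convert this external node ...
        parent.append(p)                    # ... into the new node v
        external.extend((v, j) for j in range(m))
    return parent

par = mary_increasing_tree(300, 3, random.Random(7))
```

After $n$ steps the list `external` has $(m-1)n+1$ entries, in accordance with the external-node count used below.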
\[EBST\] The case $\chi=-1$, $m=2$ gives, using the construction above with [$m$-ary]{} (binary) trees and external nodes, the random binary search tree. As mentioned in the introduction, the binary search tree was one of the original examples of random split trees in [@Devroye], with the split vector $(U,1-U)$ where $U\sim U(0,1)$.
Our [Theorem \[T1\]]{} also exhibits the binary search tree as a random split tree, but with split vector $(P_1,1-P_1)\sim{\operatorname{GEM}}(-1,2)$ and thus, by , $P_1=Z_1\sim B(2,1)$. There is no contradiction, since we consider the trees as unordered in [Theorem \[T1\]]{}, and thus any (possibly random) permutation of the split vector yields the same trees; in this case, it is easily seen that reordering $(P_1,P_2)$ uniformly at random yields $(U,1-U)$. ($P_1\sim B(2,1)$ has density $2x$, and $P_2=1-P_1$ thus density $2(1-x)$, leading to a density 1 for a uniformly random choice of one of them.)
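The claim that a uniformly random reordering of $(P_1,1-P_1)$ with $P_1\sim B(2,1)$ recovers $(U,1-U)$ is easily checked by simulation; a quick Monte Carlo sketch (ours; the sample size and the tested quantile are arbitrary choices):

```python
import random

rng = random.Random(8)
samples = []
for _ in range(20000):
    p1 = rng.betavariate(2, 1)   # P_1 ~ B(2,1), density 2x on (0,1)
    # pick one of the coordinates of (P_1, 1 - P_1) uniformly at random:
    samples.append(p1 if rng.random() < 0.5 else 1.0 - p1)
# if the chosen coordinate is U(0,1), about a quarter of the samples
# should fall below 1/4:
frac_below_quarter = sum(s < 0.25 for s in samples) / len(samples)
```

The densities combine as $\frac12\cdot 2x+\frac12\cdot 2(1-x)=1$, so both checks below should hold up to Monte Carlo error.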
There are many other split vectors yielding the same unordered trees. For example, [Theorem \[T1\]]{} gives ${\operatorname{PD}}(-1,2)$ as one of them. By definition, ${\operatorname{PD}}(-1,2)$ is obtained by ordering ${\operatorname{GEM}}(-1,2)$ in decreasing order; by the discussion above, this is equivalent to ordering $(U,1-U)$ in decreasing order, and it follows that the split vector $({\hat P}_1,{\hat P}_2)\sim{\operatorname{PD}}(-1,2)$ has ${\hat P}_1\sim U(\frac12,1)$ and ${\hat P}_2=1-{\hat P}_1$.
For the binary search tree, Devroye’s original symmetric choice $(U,1-U)$ for the split vector has the advantage that, by symmetry, the random split tree then coincides with the binary search tree also as binary trees.
For $m>2$, the [$m$-ary increasing tree]{} considered here is not the same as the $m$-ary search tree; the latter is also a random split tree [@Devroye], but not of the simple type studied here.
[Example \[EBST\]]{} shows that when $m=2$, we may see the [$m$-ary increasing tree]{} as a random split tree also when regarded as an [$m$-ary tree]{}, and not only as an unordered tree as in [Theorem \[T1\]]{}. We show next that this extends to $m>2$. Recall that the Dirichlet distribution ${\operatorname{Dir}}({\alpha}_1,\dots,{\alpha}_m)$ is a distribution of probability vectors $(X_1,\dots,X_m)$, [i.e[.=1000]{}]{} random vectors with $X_i{\geqslant}0$ and $\sum_1^m X_i=1$; the distribution has the density function $cx_1^{{\alpha}_1-1}\dotsm x_m^{{\alpha}_m-1}{\,\mathrm{d}}x_1\dotsm
{\,\mathrm{d}}x_{m-1}$ with the normalization factor $c={\Gamma}({\alpha}_1+\dots+{\alpha}_m)/\prod_1^m{\Gamma}({\alpha}_i)$.
\[T2\] Let $m{\geqslant}2$. The sequence of [$m$-ary increasing tree]{}[s]{} $(T_n){_1^\infty}=(T_n{^{-1,m}}){_1^\infty}$, considered as $m$-ary trees, has the same distribution as the sequence of random split trees $(T_n{^{{\mathcal P}}}){_1^\infty}$ with the split vector ${\mathcal P}=(P_i)_1^m\sim {\operatorname{Dir}}(\frac{1}{m-1},\dots,\frac{1}{m-1})$.
By [Theorem \[T1\]]{}, the sequence of [$m$-ary increasing tree]{}[s]{} $(T_n{^{-1,m}})_n$ has, as unordered trees, the same distribution as the random split trees $(T_n^{{\mathcal P}'})_n$, where ${\mathcal P}'=(P'_i){_1^\infty}\sim {\operatorname{GEM}}{\bigl(-\frac{1}{m-1},\frac{m}{m-1}\bigr)}$. As noted above, $P'_j=0$ for $j>m$, so we may as well use the finite split vector $(P'_i)_1^m$. Let ${\mathcal P}=(P_i)_1^m$ be a uniformly random permutation of $(P'_i)_1^m$. Then, as sequences of unordered trees, $(T_n^{{\mathcal P}})_n{\overset{\mathrm{d}}{=}}(T_n^{{\mathcal P}'})_n {\overset{\mathrm{d}}{=}}(T_n{^{-1,m}})_n$. Moreover, regarded as [$m$-ary]{} trees, both $(T_n^{{\mathcal P}})_n$ and $(T_n{^{-1,m}})_n$ are, by symmetry, invariant under random relabellings of the children of each node. Consequently, $(T_n^{{\mathcal P}})_n{\overset{\mathrm{d}}{=}}(T_n{^{-1,m}})_n$ also as [$m$-ary]{} trees, as claimed.
It remains to identify the split vector ${\mathcal P}$. The definition as a random permutation of $(P_i')_1^m$ does not seem very convenient; instead we use a variation of the argument in [Appendix \[AL2\]]{} for [Lemma \[L2\]]{}. We may assume that $T_n=T_n{^{-1,m}}=T_n{^{{\mathcal P}}}$, as [$m$-ary]{} trees, for all $n{\geqslant}1$. Let $N_j(n)$ be the number of nodes and $N^e_j(n)$ the number of external nodes in the principal subtree $T_n^j$ (now using the given labelling of the children of the root). It is easy to see that $N^e_j(n)=(m-1)N_j(n)+1$.
Consider first $T_n$ as the random split tree $T_n{^{{\mathcal P}}}$; then the law of large numbers yields, by conditioning on the split vector ${\mathcal P}$ at the root, $$\label{nu}
N_j(n)/n{\overset{\mathrm{a.s.}}{{\longrightarrow}}}P_j,
\qquad j=1,\dots,m.$$ Next, consider $T_n$ as the [$m$-ary increasing tree]{} $T_n{^{-1,m}}$, and regard the external nodes in $T_n^j$ as balls with colour $j$. Then the external nodes evolve as a [Pólya]{} urn with $m$ colours, starting with one ball of each colour and at each round adding $m-1$ balls of the same colour as the drawn one. Then, see [e.g[.=1000]{}]{} [@Athreya1969] or [@JohnsonKotz Section 4.7.1], the vector of proportions ${\bigl(N^e_j(n)/((m-1)n+1)\bigr)}_{j=1}^m$ of the different colours converges [a.s[.=1000]{}]{} to a random vector with a symmetric Dirichlet distribution ${\operatorname{Dir}}(\frac{1}{m-1},\dots,\frac{1}{m-1})$. Hence the vector ${\bigl(N_j(n)/n\bigr)}_j$ converges to the same limit. This combined with shows that ${\mathcal P}\sim{\operatorname{Dir}}(\frac{1}{m-1},\dots,\frac{1}{m-1})$.
If we modify the proof above by considering one $N_j$ at a time, using a sequence of two-colour [Pólya]{} urns as in [Appendix \[AL2\]]{}, we obtain a representation of the Dirichlet distributed split vector with $Z_j\sim B{\bigl(\frac{1}{m-1},\frac{m-j}{m-1}\bigr)}$, $j=1,\dots,m$; [cf[.=1000]{}]{} the similar but different . (This representation can also be seen directly.)
@BroutinDevroyeEtAl2008 study a general model of random trees that generalizes split trees (with bounded outdegrees) by allowing more general mechanisms to split the nodes (or balls) than the ones considered in the present paper. (The main difference is that the splits only asymptotically are given by a single split vector ${\mathcal V}$.) Their examples include the [$m$-ary increasing tree]{}, and also increasing trees as defined by @BergeronFS92 with much more general weights, assuming only a finite maximum outdegree $m$; they show that some properties of such trees asymptotically depend only on $m$, and in particular that the distribution of subtree sizes ${\bigl(N_j(n)/n\bigr)}_1^d$ converges to the Dirichlet distribution ${\operatorname{Dir}}(\frac{1}{m-1},\dots,\frac{1}{m-1})$ seen also in [Theorem \[T2\]]{} above. (Recall that [Theorem \[T2\]]{}, while for a special case only, is an exact representation for all $n$ and not only an asymptotic result.)
There is no analogue of [Theorem \[T2\]]{} for $\chi{\geqslant}0$, since then the split vector is infinite, and symmetrization is not possible.
Acknowledgement {#acknowledgement .unnumbered}
===============
I thank Cecilia Holmgren for helpful discussions.
Two alternative proofs {#AA}
======================
We give here two alternative arguments, a direct proof of [Lemma \[L2\]]{} and an alternative version of part of the proof of [Theorem \[T1\]]{} without using Kingman’s theory of exchangeable partitions. We do this both for completeness and because we find the alternative and more direct arguments interesting. (For the proof of [Theorem \[T1\]]{}, it should be noted that the two arguments, although stated using different concepts, are closely related, see the proof of Kingman’s paintbox theorem by @Aldous [§11].)
A direct proof of [Lemma \[L2\]]{} {#AL2}
----------------------------------
We often write $N_k$ for $N_k(n)$.
Consider first the evolution of the first principal subtree $T^1_n$. Let us colour all nodes in $T^1_n$ red and all other nodes white. If at some stage there are $r=N_1{\geqslant}1$ red nodes and $w$ white nodes, and thus $n=r+w$ nodes in total, then the total weight $R$ of the red nodes is, using [Lemma \[L1\]]{}, $$\label{R}
R=w(T^1_n)=r-\chi=N_1-\chi,$$ while the total weight of all nodes is $w(T_n)=n-\chi$, and thus the total weight $W$ of the white nodes is $$\label{W}
W=w(T_n)-w(T_n^1)=(n-\chi)-(r-\chi)=n-r=w.$$ By –, adding a new red node increases $R$ by 1, but does not change $W$, while adding a new white node increases $W$ by 1 but does not change $R$. Moreover, by definition, the probabilities that the next new node is red or white are proportional to $R$ and $W$. In other words, the total red and white weights $R$ and $W$ evolve as a [Pólya]{} urn with balls of two colours, where a ball is drawn at random and replaced together with a new ball of the same colour. (See [e.g[.=1000]{}]{} [@EggPol; @Polya] and, even earlier, [@Markov].) Note that while the classical description of [Pólya]{} urns considers the numbers of balls of different colours, and thus implicitly assumes that these are integers, the weights considered here may be arbitrary positive real numbers; however, it has been noted many times that this extension of the original definition does not change the results, see [e.g[.=1000]{}]{} [@SJ154 Remark 4.2] and [cf[.=1000]{}]{} [@Jirina] for the related case of branching processes.
In our case, the first node is the root, which is white, and the second node is its first child, which is the root of the principal subtree $T^1$ and thus is red. Hence, the [Pólya]{} urn just described starts (at $n=2)$ with $r=w=1$, and thus by – $R=1-\chi$ and $W=1$.
It is well-known that for a [Pólya]{} urn of the type just described (adding one new ball each time, of the same colour as the drawn one), with initial (non-random) values $R_0$ and $W_0$ of the weights, the red proportion in the urn, [i.e[.=1000]{}]{}, $R/(R+W)$, converges [a.s[.=1000]{}]{} to a random variable $Z\sim
B(R_0,W_0)$. (Convergence in distribution follows easily from the simple exact formula for the distribution of the sequence of the first $N $ draws [@Markov]; convergence [a.s[.=1000]{}]{} follows by the martingale convergence theorem, or by exchangeability and de Finetti’s theorem. See also [@JohnsonKotz Sections 4.2 and 6.3.3].) Consequently, in our case, $R/(R+W){\overset{\mathrm{a.s.}}{{\longrightarrow}}}Z_1\sim B(1-\chi,1)$, and thus by – $N_1(n)/n{\overset{\mathrm{a.s.}}{{\longrightarrow}}}Z_1\sim B(1-\chi,1)$. Note that this is consistent with , with $({\alpha},\theta)=(\chi,\rho)$, since we assume . Furthermore, by the definition , we have $P_1=Z_1$, and thus $N_1(n)/n{\overset{\mathrm{a.s.}}{{\longrightarrow}}}P_1$.
We next consider $N_2$, then $N_3$, and so on. In general, for the $k$th principal subtree, we suppose by induction that $N_i(n)/n{\overset{\mathrm{a.s.}}{{\longrightarrow}}}P_i$ for $1{\leqslant}i<k$, with $P_i$ given by for some independent random variables $Z_i$ satisfying , $i<k$. We now colour all nodes in the principal subtree $T^k_n$ red, all nodes in $T_n^1,\dots,T_n^{k-1}$ black, and the remaining ones white. We then ignore all black nodes, and consider only the (random) times that a new node is added and becomes red or white. Arguing as above, we see that if there are $r=N_k{\geqslant}1$ red and $w$ white nodes, then the red and white total weights $R$ and $W$ are given by $$\begin{aligned}
\label{Rk}
R&=w(T^k_n)=r-\chi=N_k-\chi,
\\
\label{Wk}
W&=w(T_n)-\sum_{i=1}^kw(T_n^i)=(n-\chi)-\sum_{i=1}^k(N_i-\chi)=w+(k-1)\chi.\end{aligned}$$ Moreover, $(R,W)$ evolve as a [Pólya]{} urn as soon as there is a red node. When the first red node appears, there is only one white node (the root), since then $T^j$ is empty for $j>k$. Consequently, then $r=w=1$, and – show that the [Pólya]{} urn now starts with $R=1-\chi$ and $W=1+(k-1)\chi=k\chi+\rho$. Since the total number of non-black nodes is $n-\sum_{i<k}N_i$, it follows that, as [${n\to\infty}$]{}, $$\label{poli}
\frac{N_k(n)}{n-\sum_{i<k} N_i(n)}
{\overset{\mathrm{a.s.}}{{\longrightarrow}}}Z_k,$$ for some random variable $Z_k\sim B(1-\chi,k\chi+\rho)$, again consistent with . Moreover, this [Pólya]{} urn is independent of what happens inside the black subtrees, and thus $Z_k$ is independent of $Z_1,\dots,Z_{k-1}$. We have, by , the inductive hypothesis and , $$\begin{split}
\frac{N_k(n)}{n}
&=
\frac{N_k(n)}{n-\sum_{i<k} N_i(n)}\cdot \frac{n-\sum_{i<k} N_i(n)}{n}
\\&
{\overset{\mathrm{a.s.}}{{\longrightarrow}}}Z_k{\Bigl(1-\sum_{i<k} P_i\Bigr)}
= Z_k\prod_{i<k}(1-Z_i)=P_k .
\end{split}$$ This completes the proof.
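The urn limit used in this proof is easy to check numerically. The following sketch (plain Python, with illustrative values of $\chi$ and $\rho$ that are not taken from the text) runs the generalized two-colour urn with real-valued initial weights $R=1-\chi$ and $W=k\chi+\rho$ for $k=1$, and verifies that the mean red fraction stays near $R_0/(R_0+W_0)$, as the martingale property guarantees:

```python
import random

def polya_red_fraction(r0, w0, steps, rng):
    """Generalized two-colour Polya urn with real-valued initial weights.

    Each step draws a colour with probability proportional to its current
    weight and adds weight 1 to that colour; the red fraction R/(R+W)
    converges a.s. to a Beta(r0, w0) random variable.
    """
    r, w = r0, w0
    for _ in range(steps):
        if rng.random() < r / (r + w):
            r += 1.0
        else:
            w += 1.0
    return r / (r + w)

rng = random.Random(1)
chi, rho = 0.3, 0.7                  # illustrative parameters only
k = 1                                # first principal subtree
r0, w0 = 1.0 - chi, k * chi + rho    # urn start, as in (R), (W) above
samples = [polya_red_fraction(r0, w0, 1000, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
# E[R/(R+W)] is conserved along the urn (martingale), so the sample
# mean should sit near r0/(r0+w0)
print(abs(mean - r0 / (r0 + w0)) < 0.05)
```

Histogramming `samples` instead of averaging them would show the full $B(R_0,W_0)$ limit law.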
An alternative argument in the proof of [Theorem \[T1\]]{} {#AT}
----------------------------------------------------------
The equality shows a kind of limited exchangeability for the infinite sequence $(X_i)_1^\infty$; limited because we only consider acceptable sequences, i.e., the first appearance of each label is in the natural order. We eliminate this restriction by a random relabelling of the principal subtrees; let $(U_i)_1^\infty$ be an i.i.d. sequence of $U(0,1)$ random variables, independent of everything else, and relabel the balls passed to subtree $i$ by $U_i$. Then the sequence of new labels is $(U_{X_i})_1^\infty$, and it follows from and symmetry that this sequence is exchangeable, i.e., its distribution is invariant under arbitrary permutations. Hence, by de Finetti’s theorem [@Kallenberg Theorem 11.10], there exists a random probability measure ${\mathbf{P}}$ on ${\ensuremath{[0,1]}}$ such that the conditional distribution of $(U_{X_i})_1^\infty$ given ${\mathbf{P}}$ a.s. equals the distribution of an i.i.d. sequence of random variables with the distribution ${\mathbf{P}}$.
As in the proof in [Section \[Spf\]]{}, every principal subtree $T^j$ satisfies by [Lemma \[L2\]]{} either $|T^j(n)|\to\infty$ as [${n\to\infty}$]{}, or $T^j(n)=\emptyset$ for all $n$. Hence, a.s. there exists some (random) index $\ell$ such that $X_\ell=X_1$, and thus $U_{X_\ell}=U_{X_1}$. It follows that the random measure ${\mathbf{P}}$ a.s. has no continuous part, so ${\mathbf{P}}=\sum_{i=1}^\infty {P}_i{\delta}_{\xi_i}$, for some random variables ${P}_i{\geqslant}0$ and (distinct) random points $\xi_i\in{\ensuremath{[0,1]}}$, with $\sum_i {P}_i=1$. (We allow ${P}_i=0$, and can thus write ${\mathbf{P}}$ as an infinite sum even if its support happens to be finite.) The labels $\xi_i$ serve only to distinguish the subtrees, and we may now relabel again, replacing $\xi_i$ by $i$. After this relabelling, the sequence $(X_i)$ has become a sequence which conditioned on ${{\mathcal P}}:=({P}_i)_1^\infty$ is an i.i.d. sequence with each variable having the distribution ${{\mathcal P}}$. In other words, up to a (random) permutation of the children, the rules \[pax1\]–\[pax2\] yield the same result as the split tree rules \[split1\]–\[split2\] given in the introduction, using the split vector ${{\mathcal P}}=({P}_i)_1^\infty$.
It remains to identify this split vector, which is done as in [Section \[Spf\]]{}, using and [Lemma \[L2\]]{}.
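The split vector identified here has the stick-breaking form $P_k=Z_k\prod_{i<k}(1-Z_i)$ with independent $Z_k\sim B(1-\chi,k\chi+\rho)$. A minimal sampler (illustrative parameter values; `random.betavariate` from the Python standard library) confirms that the components, together with the mass left on the stick, sum to one:

```python
import random

def split_vector(chi, rho, k_max, rng):
    """Stick-breaking sample of (P_1, ..., P_{k_max}).

    Z_k ~ Beta(1 - chi, k*chi + rho) independently, and
    P_k = Z_k * prod_{i<k} (1 - Z_i).
    """
    probs, stick = [], 1.0
    for k in range(1, k_max + 1):
        z = rng.betavariate(1.0 - chi, k * chi + rho)
        probs.append(z * stick)
        stick *= 1.0 - z
    return probs, stick   # `stick` = mass of subtrees beyond index k_max

rng = random.Random(0)
p, rest = split_vector(chi=0.3, rho=0.7, k_max=50, rng=rng)
print(all(x >= 0.0 for x in p), abs(sum(p) + rest - 1.0) < 1e-12)
```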
David J. Aldous. Exchangeability and related topics. *École d’Été de Probabilités de Saint-Flour XIII – 1983*, 1–198, Lecture Notes in Math. 1117, Springer, Berlin, 1985.
Krishna B. Athreya. On a characteristic property of Polya’s urn. *Studia Sci. Math. Hungar.* **4** (1969), 31–35.
Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. *Science* **286** (1999), no. 5439, 509–512.
François Bergeron, Philippe Flajolet and Bruno Salvy. Varieties of increasing trees. *CAAP ’92 (Rennes, 1992)*, 24–48, Lecture Notes in Comput. Sci. 581, Springer, Berlin, 1992.
Jean Bertoin. *Random Fragmentation and Coagulation Processes*. Cambridge Univ. Press, Cambridge, 2006.
J. D. Biggins. The first- and last-birth problems for a multitype age-dependent branching process. *Advances in Appl. Probability* **8** (1976), no. 3, 446–459.
J. D. Biggins. Chernoff’s theorem in the branching random walk. *J. Appl. Probability* **14** (1977), no. 3, 630–636.
Nicolas Broutin and Luc Devroye. Large deviations for the weighted height of an extended class of trees. *Algorithmica* **46** (2006), no. 3-4, 271–297.
Nicolas Broutin, Luc Devroye, Erin McLeish and Mikael de la Salle. The height of increasing trees. *Random Structures Algorithms* **32** (2008), no. 4, 494–518.
Nicolas Broutin and Cecilia Holmgren. The total path length of split trees. *Ann. Appl. Probab.* **22** (2012), no. 5, 1745–1777.
Luc Devroye. Universal limit laws for depths in random trees. *SIAM J. Comput.* **28** (1999), no. 2, 409–432.
Michael Drmota. *Random Trees*. Springer, Vienna, 2009.
F. Eggenberger and George Pólya. Über die Statistik verketteter Vorgänge. *Zeitschrift Angew. Math. Mech.* **3** (1923), 279–289.
Wassily Hoeffding. The strong law of large numbers for $U$-statistics. Institute of Statistics, Univ. of North Carolina, Mimeograph series 302 (1961). <https://repository.lib.ncsu.edu/handle/1840.4/2128>
Cecilia Holmgren. Novel characteristic of split trees by use of renewal theory. *Electron. J. Probab.* **17** (2012), no. 5, 27 pp.
Cecilia Holmgren and Svante Janson. Fringe trees, Crump–Mode–Jagers branching processes and $m$-ary search trees. *Probability Surveys* **14** (2017), 53–154.
Svante Janson. The Wiener index of simply generated random trees. *Random Structures Algorithms* **22** (2003), no. 4, 337–358.
Svante Janson. Functional limit theorems for multitype branching processes and generalized Pólya urns. *Stoch. Process. Appl.* **110** (2004), 177–245.
Miloslav Jiřina. Stochastic branching processes with continuous state space. *Czechoslovak Math. J.* **8 (83)** (1958), 292–313.
Norman L. Johnson and Samuel Kotz. *Urn Models and Their Application*. John Wiley & Sons, New York, 1977.
Olav Kallenberg. *Foundations of Modern Probability*. 2nd ed., Springer, New York, 2002.
John F. C. Kingman. The representation of partition structures. *J. London Math. Soc. (2)* **18** (1978), no. 2, 374–380.
John F. C. Kingman. The coalescent. *Stochastic Process. Appl.* **13** (1982), no. 3, 235–248.
A. A. Markov. Sur quelques formules limites du calcul des probabilités (Russian). *Bulletin de l’Académie Impériale des Sciences, Petrograd* **11** (1917), no. 3, 177–186.
Ralph Neininger. The Wiener index of random trees. *Combin. Probab. Comput.* **11** (2002), no. 6, 587–597.
*NIST Handbook of Mathematical Functions*. Edited by Frank W. J. Olver, Daniel W. Lozier, Ronald F. Boisvert and Charles W. Clark. Cambridge Univ. Press, 2010. Also available as *NIST Digital Library of Mathematical Functions*, <http://dlmf.nist.gov/>
Alois Panholzer and Helmut Prodinger. Level of nodes in increasing trees revisited. *Random Structures Algorithms* **31** (2007), no. 2, 203–226.
Jim Pitman. *Combinatorial Stochastic Processes*. École d’Été de Probabilités de Saint-Flour XXXII – 2002. Lecture Notes in Math. 1875, Springer, Berlin, 2006.
Boris Pittel. Note on the heights of random recursive trees and random $m$-ary search trees. *Random Structures Algorithms* **5** (1994), no. 2, 337–347.
George Pólya. Sur quelques points de la théorie des probabilités. *Ann. Inst. Poincaré* **1** (1930), 117–161.
Jerzy Szymański. On a nonuniform random recursive tree. *Annals of Discrete Math.* **33** (1987), 297–306.
|
{
"pile_set_name": "ArXiv"
}
|
---
abstract: '(Ga,In)As/GaAs/Ga(As,Sb) multi-quantum well heterostructures have been investigated using continuous wave and time-resolved photoluminescence spectroscopy at various temperatures. A complex interplay was observed between the excitonic type-II transitions with electrons in the (Ga,In)As well and holes in the Ga(As,Sb) well and the type-I excitons in the (Ga,In)As and Ga(As,Sb) wells. The type-II luminescence exhibits a strongly non-exponential temporal behavior below a critical temperature of $T_c=\SI{70}{K}$. The transients were analyzed in the framework of a rate-equation model. It was found that the exciton relaxation and hopping in the localized states of the disordered ternary Ga(As,Sb) are the decisive processes to describe the dynamics of the type-II excitons correctly.'
address: 'Department of Physics and Material Sciences Center, Philipps-Universität Marburg, Renthof 5, 35032 Marburg, Germany'
author:
- 'S Gies, B Holz, C Fuchs, W Stolz and W Heimbrodt'
bibliography:
- 'TRPLreferences.bib'
title: 'Recombination dynamics of type-II excitons in (Ga,In)As/GaAs/Ga(As,Sb) heterostructures'
---
[*Keywords*]{}: photoluminescence, type-II excitons, time resolved spectroscopy
Introduction
============
The (Ga,In)As/Ga(As,Sb) material system is used for a wide variety of applications nowadays. For example, (Ga,In)As/Ga(As,Sb) structures can be used as the active medium for mid-infrared lasing.[@Pan2010; @Chang2014; @Pan2013; @Huang2009] Furthermore, applications as light sources in the $\SI{1.6}{\micro m}$ range based on (Ga,In)As quantum dots capped with Ga(As,Sb) have been realized.[@Ripalda2005] Very recently, the (Ga,In)As/Ga(As,Sb) material system has been used to make vertical-external-cavity surface-emitting lasers emitting light at $\SI{1.2}{\micro m}$ using the type-II band alignment.[@Berger2015; @Gies2015; @Moeller2016] To further improve these devices, profound knowledge about the basic properties and processes in these materials is needed. Especially the recombination dynamics of this type-II system is an important issue and needs to be studied carefully. Only few reports exist that describe the recombination processes.[@Tatebayashi2008; @Morozov2014] In this work we aim to give a thorough analysis of the recombination processes in (Ga,In)As/GaAs/Ga(As,Sb) heterostructures. We investigate the decay behavior not only experimentally by means of time-resolved photoluminescence, but also provide a rate-equation model to reveal the important underlying processes of recombination, relaxation and tunneling. Furthermore, this comprehensive study addresses the changes in the type-II luminescence and recombination kinetics at different temperatures and detection energies, and discusses the influence of an internal GaAs barrier. The exciton relaxation turned out to be important in those type-II structures. The barrier width can be used to selectively change the tunneling and recombination times while keeping the relaxation process constant. This way we were able to determine all the important parameters of the type-II exciton dynamics independently.
Experimental
============
Our samples were grown on exact GaAs (001) substrates using metal-organic vapor-phase epitaxy (MOVPE). The sample growth was carried out in an AIXTRON AIX 200 GFR (Gas Foil Rotation) reactor system at a pressure of and using H$_2$ as carrier gas. The native oxide layer was removed from the substrates prior to the sample growth by applying a tertiarybutylarsine (TBAs)-stabilized bake-out procedure. The following growth of the active region was carried out at a temperature of using triethylgallium (TEGa) and trimethylindium (TMIn) as group-III and TBAs and triethylantimony (TESb) as group-V precursors. The active region consists of a . Each repetition is composed of a thick (Ga$_x$,In$_{1-x}$)As layer which is followed by a GaAs interlayer of variable thickness d. The active region is completed by a second thick quantum well consisting of Ga(As$_{1-y}$,Sb$_y$) which is followed by a thick GaAs barrier.
d (nm) x$_{In}$ (%) y$_{Sb}$ (%)
-------- -------------- --------------
0.4 20.7 21.1
1.5 21.5 21.7
3.5 21.0 23.8
4.8 21.0 23.3
: Compositions and interlayer thicknesses of the investigated samples.
\[tab:tab1\]
The cw-photoluminescence (PL) spectra have been measured using a liquid nitrogen cooled Ge-detector and a grating spectrometer. A frequency-doubled solid state laser at provided the light for the excitation of the sample. The time resolved PL measurements were performed using a frequency-doubled Nd:YAG at and a repetition rate of . The PL was detected using a grating spectrometer and a thermoelectrically cooled InP/(In,Ga)(As,P) photomultiplier. Due to the temporal linewidth of the laser the time-resolution of our setup is .
Results and Discussion
======================
The room-temperature PL spectra are depicted in figure \[fig:fig1\] for the four samples with different interlayer thicknesses. The spectra are normalized to the respective Ga(As,Sb) type-I emission around [@Antypas1970; @Nahory1977] and shifted vertically for clarity. At approx. an additional peak can be seen in all spectra. This is due to the recombination of electrons in the (Ga,In)As and holes in the Ga(As,Sb). The intensity of this type-II emission increases with respect to the type-I PL with decreasing interlayer barrier thickness d. Such a behavior can easily be understood, because with decreasing separation of the electrons and holes the overlap of their wavefunctions is increased and therefore the recombination probability. The slight deviation in peak position between the different samples is explained by small variations in layer composition (c.f. table \[tab:tab1\]).
![Room-temperature PL spectra for the samples with different interlayer thickness d. The spectra are normalized to the Ga(As,Sb) emission and shifted vertically for clarity.[]{data-label="fig:fig1"}](Fig_1.eps){width="8.5cm"}
To further analyze the behavior of the type-II PL, its transients are presented in figure \[fig:fig2\]. These were measured at the maximum of the type-II emission and are normalized to unity. Because of the spectrally close Ga(As,Sb) PL, the depicted transients are taken after the excitation laser pulse had reached its maximum. This guarantees that only the type-II PL is analyzed, because the Ga(As,Sb) emission has decayed to a negligible level, as its radiative lifetime is in the picosecond range. The transients were measured at a detection energy of $E_{Det} = \SI{1.016}{eV}$, which corresponds to the maximum of the type-II PL.
![Room-temperature decay curves of the type-II PL for the samples with differently thick interlayers measured at the PL maximum at $E_{Det} = \SI{1.016}{eV}$.[]{data-label="fig:fig2"}](Fig_2.eps){width="8.5cm"}
At room-temperature the decay of the type-II PL exhibits a monoexponential behavior with lifetimes in the ns-range, which is typical for type-II transitions. The PL lifetime of a given transient in figure \[fig:fig2\] can easily be determined. For the thinnest internal barrier of $d = \SI{0.4}{nm}$ we find an $e^{-1}$-time of $\tau_{\SI{0.4}{nm}} = \SI{9}{ns}$. The radiative lifetime increases with increasing barrier thickness to values of $\tau_{\SI{1.5}{nm}} = \SI{12}{ns}$, $\tau_{\SI{3.5}{nm}} = \SI{16}{ns}$, and $\tau_{\SI{4.8}{nm}} = \SI{18}{ns}$ for the thickest barrier. Considering the scatter of the data-points in figure \[fig:fig2\] the uncertainty for all these lifetimes is $\pm \SI{1}{ns}$. This increase in decay time is related to the decrease of type-II PL intensity (c.f. figure \[fig:fig1\]). Due to the reduced overlap of electron and hole wave-functions the radiative transition probability decreases and the radiative lifetime increases, respectively. To further analyze the behavior of the type-II PL we present the spectra of the four samples in figure \[fig:fig3\].
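The $e^{-1}$ read-off used for these mono-exponential transients amounts to finding where the normalized signal crosses $1/e$. A small sketch on synthetic data (the $\SI{16}{ns}$ value mimics the $d = \SI{3.5}{nm}$ sample; the sampling grid is arbitrary) illustrates the procedure:

```python
import math

def e_fold_time(times, signal):
    """First time at which a normalized transient drops below 1/e,
    with linear interpolation between samples."""
    target = 1.0 / math.e
    for i in range(1, len(times)):
        if signal[i - 1] >= target > signal[i]:
            t0, t1 = times[i - 1], times[i]
            s0, s1 = signal[i - 1], signal[i]
            return t0 + (s0 - target) * (t1 - t0) / (s0 - s1)
    return None

# synthetic mono-exponential decay with tau = 16 ns
tau = 16.0
times = [0.1 * i for i in range(1001)]        # 0 ... 100 ns
signal = [math.exp(-t / tau) for t in times]
print(round(e_fold_time(times, signal), 1))   # recovers tau
```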
![Low-temperature spectra of the (Ga,In)As/GaAs/Ga(As,Sb) heterostructures. The spectra are normalized to the type-II emission and shifted vertically.[]{data-label="fig:fig3"}](Fig_3.eps){width="8.5cm"}
The PL spectra at low temperature are normalized again to the type-II emission. Besides the type-II PL, we detect an emission band at for the samples with the thickest internal barriers. This emission band is characteristic for (Ga,In)As.[@Goetz1983] Surprisingly, this emission only occurs at low temperatures and is not observable at room temperature. Furthermore, the (Ga,In)As emission is rather weak and only observable for the thickest interlayer barriers. We will come back to this behavior later. Additionally, the Ga(As,Sb) emission that was clearly visible at RT cannot be seen in the spectra. To understand this behavior, we investigate, as an example, the temperature dependence of the PL of the sample with $d = \SI{3.5}{nm}$. The spectra are given in figure \[fig:fig4\].
![Temperature-dependent PL spectra of the sample with $d = \SI{3.5}{nm}$. The spectra are normalized to the type-II peak and shifted vertically for clarity. In the upper right corner depicted are the band edges of our heterostructures with the growth direction being from left to right.[]{data-label="fig:fig4"}](Fig_4.eps){width="8.5cm"}
From the temperature-dependent spectra one can directly see, that the Ga(As,Sb) PL signal vanishes below temperatures of . Furthermore, the (Ga,In)As emission appears below this temperature. To understand this behavior it is helpful to take a look at the band structure of the (Ga,In)As/GaAs/Ga(As,Sb) heterostructure. It is depicted in the inset of figure \[fig:fig4\]. The growth direction is from left to right. Both, the (Ga,In)As[@Zubkov2004] and the Ga(As,Sb)[@Morozov2014] have a type-I band alignment with respect to GaAs. The band-discontinuity between (Ga,In)As and Ga(As,Sb), however, is of type-II.[@Morozov2014; @Hu1998] After relaxation, the energetically most favorable states for the electrons are in the conduction band (CB) of the (Ga,In)As well and for the holes the Ga(As,Sb) well. From this picture one would expect to see a dominant type-II luminescence between electrons in the (Ga,In)As and the holes in the Ga(As,Sb). Indeed, this transition dominates the spectrum at low temperatures (c.f. figure \[fig:fig3\]). Why do we see (Ga,In)As type-I luminescence at low temperatures? To answer this question, we need to consider the following. The excitation of our samples takes place at an energy of , which is well above the bandgap of the GaAs barriers in our heterostructure. Therefore, we create a lot of charge-carriers that can relax into both QWs. The holes in the (Ga,In)As then have two possibilities. They may recombine radiatively with the electrons in the (Ga,In)As, which yields the PL at , or they can tunnel into the energetically more favorable states in the Ga(As,Sb). At low temperatures the phonon assisted tunneling probability is reduced, which allows for an observation of the (Ga,In)As line. The fact that the (Ga,In)As PL increases with increasing interlayer thickness (c.f. figure \[fig:fig3\]) strengthens this argument as the tunneling probability is lower for a thicker barrier. 
The absence of the Ga(As,Sb) emission in the spectrum is in accordance with this model, since the electron tunneling is much faster than the hole tunneling: not only is the effective mass of the electrons smaller, but the tunneling barrier is also lower in our structure. Obviously, the Ga(As,Sb) luminescence cannot be observed at because the electrons tunnel on a timescale faster than the exciton recombination time in Ga(As,Sb). Increasing the temperature to leads to a drop in the PL intensity of the (Ga,In)As. This is due to the now very effective phonon assistance of the hole tunneling. Above the (Ga,In)As emission vanishes completely, because of the lack of holes in the QW.
Surprisingly, the Ga(As,Sb) emission starts to appear in the spectrum even though the electrons should still tunnel to the (Ga,In)As QW very fast. This is due to the fact that at higher temperatures electrons can occupy higher electronic states by thermal excitation and can return this way to the Ga(As,Sb) QW. The same process is not possible for the holes. Eventually, the type-I PL in the Ga(As,Sb) is strongest at room temperature, although the electrons are spread over both wells and the holes still reside dominantly in the Ga(As,Sb) well. The transition probability of the type-I transition is of course much larger than that of the type-II transition of the charge-transfer (CT) excitons, due to the strong overlap of the electron and hole wavefunctions.
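The thermal back-transfer argument can be made semi-quantitative with a Boltzmann factor $\exp(-\Delta E/k_BT)$. In the sketch below, the activation energy is a purely hypothetical value (the actual band offsets are not quoted in the text); it only illustrates how steeply the occupation of the higher electronic states drops towards low temperatures:

```python
import math

K_B = 8.617e-5    # Boltzmann constant in eV/K
DELTA_E = 0.10    # hypothetical activation energy in eV (illustrative only)

def boltzmann(t_kelvin):
    """Relative occupation of a state DELTA_E above the band minimum."""
    return math.exp(-DELTA_E / (K_B * t_kelvin))

# occupation drops by many orders of magnitude between 290 K and 20 K
for t in (20, 70, 140, 290):
    print(t, boltzmann(t))
```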
![Transients of the type-II PL at different temperatures for the sample with $d = \SI{3.5}{nm}$. The curves are normalized to unity and were taken at the maximum of the type-II peak. The solid curve are the fitted transients obtained by our tri-exponential model (see below for details).[]{data-label="fig:fig5"}](Fig_5.eps){width="8.5cm"}
This pronounced behavior of the charge carriers as a function of ambient temperature should also strongly influence the recombination dynamics of the excitons. We therefore performed time-resolved studies of the type-II recombination in the temperature range between and . The decay curves for the sample with $d = \SI{3.5}{nm}$ are shown in figure \[fig:fig5\]. These transients were taken at the maximum of the type-II PL. For temperatures between and the decay is still mono-exponential. Starting at $\tau_{290\,K} = (16 \pm 1)\,ns$ the decay time increases, however, to $\tau_{200\,K} = (37 \pm 5)\,ns$ and $\tau_{140\,K} = (88 \pm 5)\,ns$ for . The increasing decay time with decreasing temperature is typical for exciton recombination and is explained by reduced electron-phonon coupling and reduced nonradiative losses. This explanation should apply equally to our type-II recombination process. Interestingly, below the shape of the transients becomes distinctly different. The mono-exponential decay behavior disappears and a delayed increase can be seen. The time dependence now consists of three parts. In the beginning there is a rather fast decay of the type-II PL. This is followed by a rise of the curve. The transient reaches a maximum and declines later with a slow decay time.
A similar behavior was found by Morozov et al. investigating (Ga,In)As/Ga(As,Sb) QWs at low temperatures.[@Morozov2014] The respective lifetimes they measured were considerably shorter than ours, which might be explained by the absence of an internal barrier in their samples. The three-part decay curve was explained by considering the screening effect of the charge carriers in higher type-I states, which reduces the band bending due to the type-II excitons. In the first time regime this screening is reduced as the type-I excitons recombine, and the point where the transient reaches its local minimum is reached after the typical type-I decay time. The following incline in the transient is then caused by the reduction of the band bending due to the recombination of the type-II excitons and the resulting red-shift of the emission line. Finally, the last part of the decay curve represents the type-II decay time. This model,[@Morozov2014] however, cannot explain the transients observed here. This becomes particularly obvious by considering transients at different PL energies, detected at different positions across our PL spectrum. The respective decay curves are depicted in figure \[fig:fig7\].
![Normalized decay curves of the type-II PL for the sample with $d = \SI{3.5}{nm}$ at a temperature of . The detection was shifted across the spectrum and the transients are shifted vertically for clarity. The black decay curve is the transient previously discussed in figure \[fig:fig5\]. The solid lines are fitted to the respective curves using our tri-exponential model.[]{data-label="fig:fig7"}](Fig_7.eps){width="8.5cm"}
It can be seen that both the minimum and the maximum are shifted to later times the lower the detection energy is. The only difference between the curves is an energy relaxation process. It therefore seems necessary to develop a model including relaxation processes. The relaxation itself comprises various steps. The excitation energy is above the GaAs bandgap, so electron and hole relaxation can take place. Furthermore, the QWs are made of ternary materials, yielding a certain amount of microscopic alloy disorder. The disorder results in a potential landscape of localized states in the QWs. Such localized states result in a hopping mobility of the excitons, which is particularly important at low temperatures. We have shown in an earlier paper that hopping relaxation in disordered systems can easily reach hundreds of ns.[@Niebling2008] To adequately describe the decay behavior at and below, we develop in the following a kinetic model taking three processes into account. First, the excitons ($n_{In}$) in the (Ga,In)As can either recombine radiatively in their well ($\tau_{In}$) or form type-II excitons by tunneling of the holes to the Ga(As,Sb) with the probability $w_T=1/\tau_T$. This yields the differential equation \[eq:eq1\].
$$\frac{d n_{In}}{dt} = -\frac{n_{In}}{\tau_{In}} -w_T \cdot n_{In}
\label{eq:eq1}$$
The hole tunneling feeds higher states ($n_H$) in the Ga(As,Sb) QW. The excitons can then relax into the lowest states ($n_i$) with a characteristic time $\tau_R$. Such a single relaxation time is of course a simplification and merely a mean time for the relaxation, including hopping processes between deep localized states that have a certain distribution in energy.
The temporal evolution of the occupation of the higher type-II exciton states is described by equation \[eq:eq2\].
$$\frac{d n_H}{dt} = w_T \cdot n_{In} -\frac{n_H}{\tau_R}
\label{eq:eq2}$$
Equation \[eq:eq3\] describes then the temporal evolution of the lowest exciton states that are responsible for the type-II luminescence observed in our experiments. The corresponding lifetime is $\tau_i$.
$$\frac{d n_i}{dt} = \frac{n_H}{\tau_R} -\frac{n_i}{\tau_i}
\label{eq:eq3}$$
The solution of these three coupled differential equations yields a tri-exponential function of the form:
$$n(t) = A \cdot \exp(-\frac{t}{\tau_{eff}}) - B \cdot \exp(-\frac{t}{\tau_R}) + C \cdot \exp(-\frac{t}{\tau_i}).
\label{eq:eq4}$$
In equation \[eq:eq4\] $\tau_{eff}^{-1} = \tau_{In}^{-1} + \tau_T^{-1}$ denotes the effective decay time of excitons in the (Ga,In)As. The term with $\tau_R$ describes the exciton relaxation and hopping and is responsible for the delayed increase of our PL transients. As can be seen from the full lines in figure \[fig:fig5\], a perfect fit is possible to all the experimental transients. For the sample with $d = \SI{3.5}{nm}$ at we obtained a relaxation time $\tau_{R} = \SI{55}{ns}$ and a type-II recombination time $\tau_{i} = \SI{83}{ns}$. The times at T= are $\tau_{R} = \SI{27}{ns}$ and $\tau_{i} = \SI{49}{ns}$, respectively. The effective exciton time in (Ga,In)As reaches the experimental time resolution of our setup at . Nevertheless, we found a tendency with increasing temperature from $\tau_{eff}= \SI{20}{ns}$ at to $\tau_{eff} = \SI{4}{ns}$ for . Increasing the temperature shortens all time constants due to the enhanced electron-phonon coupling, as already mentioned.
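For illustration, the coupled rate equations \[eq:eq1\]–\[eq:eq3\] can also be integrated numerically. The sketch below uses a simple Euler scheme with $\tau_R$ and $\tau_i$ close to the fitted values for the $d = \SI{3.5}{nm}$ sample; $\tau_{In}$ and $\tau_T$ are hypothetical and merely chosen such that $\tau_{eff} \approx \SI{20}{ns}$. It reproduces the delayed rise and subsequent slow decay of the lowest type-II states:

```python
def integrate(tau_in, tau_t, tau_r, tau_i, t_max=600.0, dt=0.01):
    """Euler integration of the three coupled rate equations;
    returns (t, n_i) samples, with all times in ns."""
    n_in, n_h, n_i = 1.0, 0.0, 0.0   # all excitons start in (Ga,In)As
    w_t = 1.0 / tau_t
    out, t = [], 0.0
    while t < t_max:
        d_in = -n_in / tau_in - w_t * n_in   # (Ga,In)As excitons
        d_h = w_t * n_in - n_h / tau_r       # higher type-II states
        d_i = n_h / tau_r - n_i / tau_i      # lowest type-II states
        n_in += dt * d_in
        n_h += dt * d_h
        n_i += dt * d_i
        out.append((t, n_i))
        t += dt
    return out

# tau_r, tau_i from the fit; tau_in = tau_t = 40 ns gives tau_eff = 20 ns
trace = integrate(tau_in=40.0, tau_t=40.0, tau_r=55.0, tau_i=83.0)
t_peak, n_peak = max(trace, key=lambda s: s[1])
print(t_peak > 10.0, trace[-1][1] < 0.1 * n_peak)   # delayed maximum, then decay
```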
At first glance it seems surprising that the radiative lifetime $\tau_i = \SI{88}{ns}$ at $T = \SI{140}{K}$ is higher than the value for $T = \SI{70}{K}$. This can be explained by taking into account the potential fluctuations in our strongly disordered system. At low temperatures the CT-excitons are rather localized, with a lifetime that is the shorter the stronger the localization gets. This behavior has been found for disordered materials in general.[@Woscholski2016] Above the mobility edge the lifetime is obviously longer, but it also decreases with increasing temperature.
To prove the model, we have also performed time-resolved measurements on the sample with a thinner interlayer of $d = \SI{1.5}{nm}$. The transients of both samples are depicted in figure \[fig:fig6\]. One can directly see that both the local minimum and the maximum occur at later times for the sample with the thicker internal barrier. The black transient is the same curve as in figure \[fig:fig5\]. Our fit yields $\tau_{eff} = \SI{4}{ns}$, $\tau_{R} = \SI{55}{ns}$ and $\tau_{i} = \SI{65}{ns}$ for $d = \SI{1.5}{nm}$. By decreasing the internal barrier thickness, $\tau_{eff}$ decreases substantially, since the hole tunneling gets faster. Also the CT-exciton recombination time $\tau_{i}$ decreases strongly, as expected from the increasing dipole matrix element. The relaxation time $\tau_{R}$ remains constant, as it is not influenced by changing the barrier thickness. These results strongly support our model and make clear that the CT-exciton PL behavior is indeed strongly influenced by relaxation processes.
![Normalized decay curves of the type-II PL for the samples with $d = \SI{1.5}{nm}$ (red) and $d = \SI{3.5}{nm}$ (black). The detection was done at the respective maximum of the PL at an ambient temperature of . The fits using our tri-exponential model are given as the solid lines.[]{data-label="fig:fig6"}](Fig_6.eps){width="8.5cm"}
Finally, we want to test our model by evaluating the changes in the decay times by moving the detection across the PL spectrum. The decay curves detected at different positions across our PL spectrum are depicted in figure \[fig:fig7\]. The curves are shifted vertically and the measurement was done at . The black transient measured at a detection energy of $E_{det.} = \SI{1.088}{eV}$ is the same as the black curve in figures \[fig:fig5\] and \[fig:fig6\]. It can be seen that the minimum and the maximum of the transients are shifted to later times as the detection energy is decreased. The resulting times from the tri-exponential fit are summarized in table \[tab:tab2\]. For the transient taken at $E_{det.} = \SI{1.069}{eV}$ no unique fit was possible.
$E_{det.}$ (eV) $\tau_{eff}$ (ns) $\tau_{R}$ (ns) $\tau_{i}$ (ns)
----------------- ------------------- ----------------- -----------------
1.127 7.6 7.9 8
1.107 15 20 21
1.088 20 55 83
: Decay times for the transients given in figure \[fig:fig7\].
\[tab:tab2\]
It is interesting to note that the relaxation time and the recombination time increase with decreasing detection energy (see table \[tab:tab2\]). Both changes can be explained in the framework of our model. The higher the detection energy, the less relaxation has happened and the shorter the respective relaxation time is. The CT-exciton recombination time is shorter the higher the detection energy, because the relaxation acts as a second loss channel, i.e., the lower the energy, the longer the CT-exciton lifetime. Even $\tau_{eff}$ is slightly shorter the higher the detection energy is. At $E_{det.} = \SI{1.088}{eV}$ we found $\tau_{eff} = \SI{20}{ns}$. This value reduces to $\tau_{eff} = \SI{7.6}{ns}$ for a detection energy of $E_{det.} = \SI{1.127}{eV}$. This behavior might be caused by a similar relaxation process in the ternary (Ga,In)As. Such a relaxation process would be less severe, as the (Ga,In)As is less disordered, and would lead to shorter times compared to the hopping times induced by the Ga(As,Sb) disorder.
Conclusion
==========
In summary, we have investigated the continuous-wave and time-resolved PL of (Ga,In)As/GaAs/Ga(As,Sb) heterostructures with an intermediate barrier of variable thickness between and . At room temperature we could observe a bright luminescence from the type-II exciton recombination. Additionally, the type-I PL of the Ga(As,Sb) was observed. Decreasing the temperature below leads to a disappearance of the Ga(As,Sb) emission. This is because the electrons are no longer thermally excited to the Ga(As,Sb) but are only present in the energetically lowest states of the (Ga,In)As well. At even lower temperatures the (Ga,In)As emission appears, but is very weak for a type-I transition. The cause for this is the effective hole tunneling from the (Ga,In)As well into the Ga(As,Sb) well, which has a lower probability than the electron tunneling in the opposite direction. So the (Ga,In)As PL can be observed, while the Ga(As,Sb) peak vanishes. Additionally, the time-resolved measurements reveal an interesting behavior as well. At high temperatures above the PL decay of the type-II excitons is mono-exponential with radiative lifetimes between $\tau_{290\,K} = (16 \pm 1)\,ns$ and $\tau_{140\,K} = (88 \pm 5)\,ns$. At and below the shape of the transients changes drastically, since hopping relaxation between the low-lying energy states, with relatively low hopping probabilities and correspondingly long time constants, comes into play. This behavior has been analyzed in the framework of a rate-equation model. The temporal evolution of the type-II PL can be explained by taking carrier tunneling, relaxation and type-II recombination into account. We find that at low temperature the relaxation and hopping processes in the Ga(As,Sb) are important and determine the shape of the decay curves.
It is shown that we were able to describe not only the temperature dependence, but also the behavior of samples with differently thick inner barriers as well as the changes of the transients as a function of detection wavelength.
The work is a project of the Sonderforschungsbereich 1083 funded by the Deutsche Forschungsgemeinschaft (DFG). S.G. gratefully acknowledges financial support of the DFG in the framework of the GRK 1782.
References {#references .unnumbered}
==========
---
author:
- 'O. Dor[é]{}'
- 'F.R. Bouchet'
- 'Y. Mellier'
- 'R. Teyssier'
title: 'Cluster physics from joint weak gravitational lensing and Sunyaev-Zel’dovich data'
---
Introduction
============
Whereas clusters of galaxies, as the largest gravitationally bound structures of the universe, form natural probes of cosmology, observations, numerical simulations as well as timing arguments provide compelling evidence that most of them are young and complex systems. Interaction with large-scale structures, merging processes and the coupling of dark matter with the intra-cluster medium complicate the interpretation of observations and the modeling of each of their components. Since they are composed of dark matter (DM), galaxies and a hot dilute X-ray emitting gas (the intra-cluster medium, ICM), accounting respectively for $\sim 85\%$, $\sim 15\%$ and $\sim 5\%$ of their mass, the physics of the ICM bound in a dark matter gravitational potential plays a major role in cluster formation and evolution. This variety of components can be observed in many different ways, in particular through gravitational lensing effects (in the weak-lensing regime here, WL) [@Me00; @BaSc01], the Sunyaev-Zel’dovich (SZ) effect [@SuZe72; @Bi99] and X-ray emission (X) [@Sa88]. Whereas the former probes mostly the dark matter component, the two latter probe the baryons of the gravitationally bound ICM.\
Due to observational progress, data of increasingly high quality are being delivered, enabling multi-wavelength investigations of clusters on arcminute scales (the most recent being the spectacular progress in SZ measurements, [@ReMo20; @DeBe98]). We therefore think it is timely to explore how to perform a joint analysis of these high quality data sets and exploit their complementarity at best. This challenge has already been tackled by several groups [@ZaSq98; @GrCa99; @Re00; @ZaSq00; @Ca00; @Ho00]. Zaroubi and Reblinsky attempted a full deprojection by assuming isothermality and axial symmetry, using respectively a least-squares minimization or a Lucy-Richardson algorithm, while Grego compared the SZ-derived gas mass to the WL-derived total mass by fitting a spheroidal $\beta$ model. But whereas these methods give reasonable results, it has been illustrated by Inagaki 1995, in the context of $H_0$ measurement from SZ and X-ray observations, that both non-isothermality and asphericity can trigger systematic errors as high as $20\ \%$. We therefore aim at exploring an original approach which allows us to get rid of both the isothermality and sphericity assumptions. Based on a self-consistent use of both observables, and on a perturbative development of general physical hypotheses, this method allows us to test some very general physical hypotheses about the gas (hydrostatic equilibrium, global thermodynamic equilibrium) and also naturally provides predictions for X-ray observations.\
Observations only provide us with $2-D$ projected quantities (mass, gas pressure, …). These quantities are related by physical hypotheses which are expressed as $3-D$ equalities (hydrostatic equilibrium, equation of state). The point is that these $3-D$ equalities do not have any tractable equivalent relating the projected $2-D$ quantities: in particular, projection along the line of sight does not provide an equation of state or a projected hydrostatic equilibrium equation. Therefore, as soon as we want to compare these data (WL, SZ, X), we have to deproject the relevant physical quantities ($P_g, T_g, \rho_g$, …). This can be done only under strong assumptions, either by using parametric models (e.g. a $\beta$ model [@CaFu76]) or by assuming mere geometrical hypotheses (the former necessarily encompassing the latter) [@FaHu81; @YoSu99]. We choose the geometric approach in order to rely on physical grounds as general as possible and to avoid as many theoretical biases as possible.\
This simple choice is naturally motivated by first looking at images of observed clusters [@DeBe98; @GrCa99]. Their regularity is striking: some have an almost circular or ellipsoidal appearance, as we expect for fully relaxed systems. Since relaxed clusters are expected to be spheroidal in the favored hierarchical structure formation scenario, it is natural to try to relate the observed quasi-circularity (quasi-sphericity) of the images to a $3-D$ quasi-sphericity (quasi-spheroidality). We do this using linearly perturbed spherical (spheroidal) symmetries in a self-consistent approach.\
We proceed as follows: in section \[notation\] we define our physical hypotheses and our notations. The method is described in detail in section \[method\]. We consider both the spherical and the spheroidal cases, and obtain a predicted X-ray surface brightness map from an SZ decrement map and a WL gravitational distortion map. In section \[simulation\] a demonstration on simulated clusters is presented, before discussing the application to genuine data as well as further developments in section \[discussion\].
Hypotheses, the Sunyaev-Zel’dovich effect and weak lensing {#notation}
==========================================================
We now briefly describe our notations as well as our physical hypotheses.
General hypotheses {#hypo_gen}
------------------
Following considerations fully detailed in [@Sa88], the ICM can be regarded as a hot and dilute plasma constituted of ions and electrons, whose respective kinetic temperatures $T_p$ and $T_e$ will be considered equal, $T_{p}=T_{e} \equiv T_{g}$. This is the *global thermodynamic equilibrium hypothesis*, which is expected to hold up to $r_{virial}$ (see [@TeCh97; @ChAl98] for a precise discussion). Given the low density (from $n_e\sim 10^{-1}\ \rm{cm^{-3}}$ in the core to $\sim 10^{-5}\ \rm{cm^{-3}}$ in the outer part) and high temperature of this plasma ($\sim 10\ \rm{keV}$), it can be treated as a perfect gas satisfying the equation of state $$P_{g} = \frac{k_{B}}{\mu_{e} m_{p}}\,\rho_{g}\, T_{g} = \beta\, \rho_{g} T_{g}
\label{state}$$ with $\beta \equiv \frac{k_{B}}{\mu_{e}\ m_{p}}$. Let us then neglect the gas mass with respect to the dark matter mass, and assume *stationarity* (no gravitational potential variation on time scales smaller than the hydrodynamic time scale, no recent mergers). The gas, assumed to be in hydrostatic equilibrium in the dark matter gravitational potential, then satisfies $$\begin{aligned}
\nabla\cdot(\rho_g\mathbf{v_g}) &=& 0 \\ \nabla P_{g} &=& -\rho_g\nabla
\Phi_{DM}\: .
\label{hydrostat2}\end{aligned}$$ At this point there is no need to assume isothermality.\
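These two hypotheses lend themselves to a quick numerical sketch (the density profile, potential and constants below are illustrative assumptions, not values from the paper): integrating $\nabla P_g = -\rho_g\nabla\Phi_{DM}$ inward from a zero-pressure outer boundary and then reading the temperature off the perfect-gas law shows that no isothermality assumption is needed:

```python
import numpy as np

# Illustrative (assumed) profiles: gas density with a flat core,
# and a softened point-mass potential Phi = -GM/(r+a).
GM, a, rc, rho0, beta = 1.0, 1.0, 0.5, 1.0, 1.0
r = np.linspace(0.05, 50.0, 5000)
rho = rho0 * (1.0 + (r / rc) ** 2) ** (-1.5)
dphi_dr = GM / (r + a) ** 2

# Hydrostatic equilibrium: dP/dr = -rho dPhi/dr with P -> 0 at large r,
# so P(r) = int_r^{r_max} rho dPhi/dr dr' (trapezoid, integrated outside-in).
integrand = rho * dphi_dr
seg = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
P = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])

# Perfect-gas law P = beta rho T gives the temperature profile directly.
T = P / (beta * rho)
```

The resulting $T(r)$ is generally not constant, which is the point: isothermality is an output to be tested, not an input.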
Sunyaev-Zel’dovich effect and weak lensing
------------------------------------------
Inverse Compton scattering of cosmic microwave background (CMB) photons by the electrons in the ICM modifies the CMB spectrum [@ZeSu69; @SuZe72; @SuZe80a]. The amplitude of the SZ temperature decrement ${\Delta T_{SZ} \over T_{CMB}}$ is directly proportional to the Comptonisation parameter $y$, which is given by: $$\begin{aligned}
y & = & {\sigma_{T}\over m_{e}c^{2}} \int dl\ n_e k_B T_e \ = \ \alpha \int dl\ p_e \nonumber\\
& = & \alpha \int dl\ {k_B\over \mu_e m_p}\,\rho_g T_g \ = \ \alpha \int dl\ P_g\end{aligned}$$ where $\alpha \equiv \frac{\sigma_{T}}{m_{e}c^{2}}$, $k_B$ is Boltzmann’s constant, $\sigma_{T}$ is the Thomson scattering cross section and $dl$ is the physical line-of-sight distance. $m_e$, $n_e$, $T_e$ and $p_e$ are the mass, number density, temperature and thermal pressure of the electrons. $\rho_g$ and $T_g$ respectively denote the gas density and temperature, and $\mu_e$ is the number of electrons per proton mass. Some further corrections to this expression can be found in [@Re95; @Bi99].\
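As a hedged sketch of this line-of-sight integral (with $\alpha = 1$ and a toy pressure profile, both illustrative), $y(R)$ for $P_g(r)=P_0/(1+(r/r_c)^2)$ can be checked against the closed form $y(R)=\pi\alpha P_0 r_c^2/\sqrt{r_c^2+R^2}$:

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

alpha, P0, rc = 1.0, 1.0, 1.0  # illustrative constants

def y_of_R(R, lmax=1000.0, n=200001):
    """y(R) = alpha * int P_g dl along the line of sight at impact parameter R."""
    l = np.linspace(-lmax, lmax, n)
    r = np.sqrt(R ** 2 + l ** 2)
    P = P0 / (1.0 + (r / rc) ** 2)   # toy spherical pressure profile
    return alpha * trapz(P, l)

# Closed form for this particular profile:
y_exact = lambda R: np.pi * alpha * P0 * rc ** 2 / np.sqrt(rc ** 2 + R ** 2)
```

The numerical and analytic values agree to the truncation error of the finite integration range, and $y(R)$ decreases with impact parameter as expected.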
In parallel to these spectral distortions, the statistical determination of the shear field $\kappa$ affecting the images of background galaxies enables one, in the weak-lensing regime, to derive the dominant projected gravitational potential of the lens (the clustered dark matter), $\phi_{DM}$ under our general hypotheses (see [@Me00] for details).
Method
======
Principle
---------
We now answer the question: how should we co-analyze these various data sets? Our first aim is to develop a method which allows us to obtain maps of projected thermodynamical quantities with as few physical hypotheses as possible.\
Our method is the following. Let us suppose that for a given cluster we have a set of SZ and WL data which enables us to construct a $2-D$ map of the projected gas pressure as well as a $2-D$ map of the projected gravitational potential. Let us suppose as well that these maps exhibit an approximate circular symmetry, as is the case for a vast class of experimental observations, as in figure \[szfig\]. More precisely, let us suppose that the projected gas pressure $y$ as well as the observed projected gravitational potential $\phi_{DM}$ can be well fitted by the following type of functions: $$\begin{aligned}
y(R, \varphi) & = & y_0(R) + \varepsilon\, y_1(R)\, m(\varphi)\\
\phi_{DM}(R, \varphi) & = & \phi_{DM,0}(R) + \varepsilon\, \phi_{DM,1}(R)\, n(\varphi)\end{aligned}$$ where $\varepsilon \ll 1$, $(R,\varphi)$ denotes polar coordinates in the image plane and $m$ and $n$ are some particular functions. This description means first of all that the images we see are linear perturbations of some perfectly circularly symmetric images, and second that the perturbation can be described conveniently by the product of a radial function and an angular function. Equivalently, we can assert that to first order in $\varepsilon$ our images are circularly symmetric, but admit some corrections to second order in $\varepsilon$.\
We then assume that these observed perturbed symmetries are a consequence of an intrinsic $3-D$ spherical symmetry linearly perturbed too. This point constitutes our key hypothesis. It means that to first order in a certain parameter ($\varepsilon$) our clusters are regular objects with a strong circular symmetry but they admit some second order linear perturbations away from this symmetry. As a consequence of these assumptions we will make use of this linearly perturbed symmetry to get a map of some complementary projected thermodynamical quantities, the gas density $D_g$ and the gas temperature $\zeta_g$, successively to first and second order in $\varepsilon$.\
Formulated this way, the problem yields a natural protocol:
- Looking at some maps with this kind of symmetry, we compute a zeroth-order map ($y_0(R)$, $\phi_0(R)$) with a perfect circular symmetry by averaging over concentric annuli. A correction for the bias introduced by the perturbations is included. These zeroth-order quantities allow us to derive maps of $D_{g,0}(R)$ and $\zeta_{g,0}(R)$ with a perfect circular symmetry.
- We then take into account the first order corrections to this perfect symmetry ($y_1(R)m(\varphi)$, $\phi_1(R)m(\varphi)$) and infer from them first order correction terms to the zeroth order maps: $D_{g,1}(R,\varphi)$ and $\zeta_{g,1}(R,\varphi)$.
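The two steps of this protocol can be sketched on a synthetic map (the map, perturbation shape and bin count below are illustrative choices): the zeroth-order term is an azimuthal average over concentric annuli, and the residual map carries the non-circular correction:

```python
import numpy as np

# Synthetic SZ-like map: circular profile plus a small cos(phi) perturbation
n = 200
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
R, phi = np.hypot(X, Y), np.arctan2(Y, X)
pert = 0.1 * np.cos(phi) * np.exp(-(R - 0.5) ** 2 / 0.02)
ymap = np.exp(-R ** 2 / 0.18) * (1.0 + pert)

# Step 1: zeroth-order profile y_0(R) by averaging over concentric annuli
nb = 60
edges = np.linspace(0.0, R.max(), nb + 1)
idx = np.clip(np.digitize(R.ravel(), edges) - 1, 0, nb - 1)
prof = np.array([ymap.ravel()[idx == i].mean() for i in range(nb)])
y0_map = prof[idx].reshape(n, n)

# Step 2: the residual carries the first-order correction y_1(R)m(phi)
residual = ymap - y0_map
```

On the perturbed annulus the residual correlates with the injected $\cos\varphi$ pattern, while its azimuthal average is close to zero, as required.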
Even if for clarity’s sake we formulate our method assuming a perturbed circular symmetry, it applies equally to a perturbed elliptical symmetry, as will be shown below. In this more general case, we assume that the cluster exhibits a linearly perturbed spheroidal symmetry.\
The spherically symmetric case: from observations to predictions
-----------------------------------------------------------------
Let us now apply the method to the case where the projected gas pressure (SZ data) and the projected gravitational potential (WL data) exhibit some approximate circular symmetry. These observations lead us to suppose that the $3-D$ gas pressure, gravitational potential, gas density and gas temperature can be well described by the following equations: $$\left\{
\begin{array}{cccccc}
P_g(r,\theta,\varphi) &=& P_{g,0}(r) &+& \varepsilon & P_{g,1}(r)\,f(\theta,\varphi)\\
\Phi_{DM}(r,\theta,\varphi) &=& \Phi_{DM,0}(r) &+& \varepsilon & \Phi_{DM,1}(r)\,g(\theta,\varphi)\\
\rho_g(r,\theta,\varphi) &=& \rho_{g,0}(r) &+& \varepsilon & \rho_{g,1}(r)\,h(\theta,\varphi)\\
T_g(r,\theta,\varphi) &=& T_{g,0}(r) &+& \varepsilon & T_{g,1}(r)\,k(\theta,\varphi)
\end{array}\right.$$ where $(r,\theta,\varphi)$ are spherical coordinates centered on the cluster.
### The hydrostatic equilibrium
If we first apply the hydrostatic equilibrium equation $\nabla P_g =
-\rho_g \nabla \Phi_{DM}$ we get the following equations. To first order in $\varepsilon$ we have $$P'_{g,0}(r)\ =\ -\rho_{g,0}(r)\Phi'_{DM,0}(r)\: ,
\label{hydro1e}$$ and to second order in $\varepsilon$ : $$\left\{
\begin{array}{cccc}
P'_{g,1}(r)f(\theta,\varphi)\ &=&\ -\rho_{g,0}(r)\Phi'_{DM,1}(r)\
h(\theta,\varphi) & \\
& & - \rho_{g,1}(r)\Phi'_{DM,0}(r)\ g(\theta,\varphi) &\: (a)\\
P_{g,1}(r)\ \partial_{\theta}\ f(\theta,\varphi)\ &=&\ -\rho_{g,0}(r)\ \Phi_{DM,1}(r)\
\partial_{\theta} h(\theta,\varphi) &\: (b) \\
P_{g,1}(r)\ \partial_{\varphi} f(\theta,\varphi)\ &=&\ - \rho_{g,0}(r)\
\Phi_{DM,1}(r)\ \partial_{\varphi} h(\theta,\varphi) &\: (c)
\end{array}\right.
\label{hydro2e}$$ where $'$ denotes the derivative with respect to $r$.\
Combining equations (\[hydro2e\].b) and (\[hydro2e\].c) we get $$f(\theta,\varphi) = \lambda_1\, h(\theta,\varphi) + \lambda_2$$ where $\lambda_{1,2}$ are some constants. Then, using equation (\[hydro2e\].a), we can write $$f(\theta,\varphi) = \gamma_1\, g(\theta,\varphi) + \gamma_2$$ where $\gamma_{1,2}$ are some constants as well. At this point, we can get rid of $\lambda_2$ and $\gamma_2$ by absorbing them in the purely radial zeroth-order terms ($\rho_{g,0}(r)$ and $\Phi_{DM,0}(r)$). This means we can consider $\lambda_2=0$ and $\gamma_2=0$. Similarly, we choose to rescale $\rho_{g,1}(r)$ and $\Phi_{DM,1}(r)$ so that we can take $\gamma_1 = \lambda_1 = 1$. These simple equalities lead us to assume from now on: $$f(\theta,\varphi) = h(\theta,\varphi) = g(\theta,\varphi).
\label{fgh}$$ This is in no way a restriction, since it simply means that we absorb integration constants by redefining some terms. This is possible since the relevant part of $f$ (and thus of $h$) will be fitted on observations, as will be shown below. Taking equation (\[fgh\]) into account, equation (\[hydro2e\]) simplifies to: $$\begin{aligned}
P'_{g,0}(r) &=& -\rho_{g,0}(r)\,\Phi'_{DM,0}(r) \label{hydrosimple1}\\
P'_{g,1}(r) &=& -\rho_{g,0}(r)\,\Phi'_{DM,1}(r) - \rho_{g,1}(r)\,\Phi'_{DM,0}(r) \label{hydrosimple2}\\
P_{g,1}(r) &=& -\rho_{g,0}(r)\,\Phi_{DM,1}(r)\: . \label{hydrosimple3}\end{aligned}$$
### The equation of state
We have now identified the angular part of the first order correction to $P_g$, $\Phi_{DM}$ and $\rho_g$. We still have to link those quantities to the angular-dependent part of the temperature $T_g$, namely $k(\theta,\varphi)$. This is done naturally using the equation of state (\[state\]), which directly provides, to first and second order in $\varepsilon$: $$\begin{aligned}
P_{g,0}(r) &=& \beta\,\rho_{g,0}(r)\, T_{g,0}(r) \label{state0}\\
P_{g,1}(r)\, f(\theta,\varphi) &=& \beta\,\rho_{g,1}(r)\, T_{g,0}(r)\,f(\theta,\varphi) \nonumber\\
& + & \beta\,\rho_{g,0}(r)\, T_{g,1}(r)\,k(\theta,\varphi) \label{state1}\end{aligned}$$ This last equation leads naturally to $f(\theta,\varphi)=k(\theta,\varphi)$ if we decide once again to absorb any multiplicative factor in the radial part. In this way we see that our choice of separating the radial and angular parts is in no way a restriction. We eventually get $$\begin{aligned}
P_{g,0}(r) &=& \beta\,\rho_{g,0}(r)\, T_{g,0}(r)\\
P_{g,1}(r) &=& \beta\,\rho_{g,1}(r)\, T_{g,0}(r) + \beta\,\rho_{g,0}(r)\, T_{g,1}(r)\: .\end{aligned}$$
### The observations {#observations}
Given this description of the cluster’s hot gas, the experimental SZ and WL data, which respectively provide us with the projected quantities $y(R,\varphi)$ and $\phi_{DM}(R,\varphi)$, write $$\begin{aligned}
y(R,\varphi) & = & \alpha \int P_{g,0}(r)\, dl + \varepsilon\,\alpha \int P_{g,1}(r)\,f(\theta,\varphi)\, dl \nonumber\\
& \equiv & y_0(R) + \varepsilon\, y_1(R)\,m(\varphi)\\
\phi_{DM}(R,\varphi) & = & \int \Phi_{DM,0}(r)\, dl + \varepsilon \int \Phi_{DM,1}(r)\,f(\theta,\varphi)\, dl \nonumber\\
& \equiv & \phi_{DM,0}(R) + \varepsilon\, \phi_{DM,1}(R)\,m(\varphi)\: .\end{aligned}$$ Note that in order to obtain this set of definitions we choose the polar axis of the cluster along the line of sight, so that the same azimuthal angle $\varphi$ is used for $2-D$ and $3-D$ quantities.
Our aim is now to derive both a projected gas density map and a projected temperature map, which we define in the following way: $$\begin{aligned}
D_g(R,\varphi) & = & \int \rho_g(r,\theta,\varphi)\, dl \nonumber\\
& = & \int \rho_{g,0}(r)\, dl + \varepsilon \int \rho_{g,1}(r)\,f(\theta,\varphi)\, dl \nonumber\\
& \equiv & D_{g,0}(R) + \varepsilon\, D_{g,1}(R, \varphi) \label{dgdef}\\
\zeta_g(R,\varphi) & = & \int T_g(r,\theta,\varphi)\, dl \nonumber\\
& = & \int T_{g,0}(r)\, dl + \varepsilon \int T_{g,1}(r)\,f(\theta,\varphi)\, dl \nonumber\\
& \equiv & \zeta_{g,0}(R) + \varepsilon\, \zeta_{g,1}(R,\varphi)\: . \label{zetagdef}\end{aligned}$$
### A projected gas density map to first order…
Now that we have expressed our observables in terms of $3-D$ physical quantities, it is easy to infer a gas density map, successively to first and second order in $\varepsilon$. To first order, the hydrostatic equilibrium condition (\[hydro1e\]) states that $$P'_{g,0}(r) = -\rho_{g,0}(r)\,\Phi'_{DM,0}(r)\: .$$ In order to use it we need to deproject the relevant quantities. From the well-known spherical deprojection formula [@BiTr87] based on Abel’s transform we have: $$\begin{aligned}
\alpha\ P_{g,0}(r) & = & -{1\over \pi}\int_r^{\infty} y'_0(R){dR
\over(R^2-r^2)^{1\over2}} \\ & = & -{1\over \pi} \int_0^{\infty}\
y'_0(r\cosh u)du
\label{pg0}\end{aligned}$$ where $ R=r\cosh u$. Thus, we can write $$\begin{aligned}
\alpha\ P'_{g,0}(r) & = & -{1\over \pi} \int_0^{\infty}\ \cosh u\
y''_0(r\cosh u)\,du \\ & = & -{1\over \pi} \int_r^{\infty}\ {1\over
r}{R\over (R^2-r^2)^{1 \over 2}}\ y''_0(R)\, dR \; .
\label{pg'0}\end{aligned}$$ Similarly, $$\Phi'_{DM,0}(r) = - {1\over \pi} \int_r^{\infty}\ {1\over r}{R\over
(R^2-r^2)^{1 \over 2}}\ \phi_0'' (R) dR \; .$$ We then get for the projected gas density\
& & = -[2 ]{} \_R\^ [r dr(r\^2-R\^2)\^[12]{}]{} ( ) .
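Numerically, the $R=r\cosh u$ substitution used above is convenient because it removes the inverse-square-root singularity of the Abel inversion at $R=r$. A sketch (with $\alpha=1$ and an analytic test pair, both assumptions for illustration): $y_0(R)=2/(1+R^2)$ is the line-of-sight projection of $P_{g,0}(r)=(1+r^2)^{-3/2}$, so the inversion can be checked exactly:

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def abel_invert(dy0_dR, r, umax=12.0, n=4000):
    """P_{g,0}(r) = -(1/pi) * int_0^inf y_0'(r cosh u) du   (alpha = 1)."""
    u = np.linspace(0.0, umax, n)
    return -trapz(dy0_dR(r * np.cosh(u)), u) / np.pi

# Analytic test pair: y_0(R) = 2/(1+R^2)  <=>  P_{g,0}(r) = (1+r^2)^(-3/2)
dy0_dR = lambda R: -4.0 * R / (1.0 + R ** 2) ** 2
```

The integrand decays exponentially in $u$, so a modest grid already reproduces the analytic profile to high accuracy.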
### …and a projected gas temperature map to first order
Once we built this projected gas density map, we can recover the projected gas temperature map. If we apply the equation of state (\[state0\]) we get : $$\begin{aligned}
\zeta_{g,0}(R) & = & {1\over \beta} \int {P_{g,0}(r) \over
\rho_{g,0}(r)}dl \\ & = & -{1\over \beta } \int
{P_{g,0}(r)\over P'_{g,0}(r)}\Phi'_{DM,0}(r) dl \\ & = &
-{1\over \pi\beta} \int_R^{\infty}\ {P_{g,0}(r)\over
P'_{g,0}(r)}\Phi'_{DM,0}(r){rdr\over (r^2-R^2)^{1 \over 2}}\:
.\end{aligned}$$
Since all the required functions ($P_{g,0}$, $P'_{g,0}$, $\Phi'_{DM,0}$) have been derived in the previous section (equations (\[pg0\]) and (\[pg'0\])), we can in this way obtain a projected gas temperature map.
### Corrections from departure to spherical symmetry : a projected gas density map to second order…
We now reach the core of our method, namely deriving the quantity $D_{g,1}$ defined by (\[dgdef\]), the second order correction to the perfectly circular term: $$\begin{aligned}
D_g(R,\varphi) & = & D_{g,0}(R) + \varepsilon\, D_{g,1}(R, \varphi) \nonumber\\
& = & \int \rho_{g,0}(r)\, dl + \varepsilon \int \rho_{g,1}(r)\,f(\theta,\varphi)\, dl\: .\end{aligned}$$ If we differentiate equation (\[hydrosimple3\]) and combine it with equation (\[hydrosimple2\]), we note that $$\rho'_{g,0}(r)\, \Phi_{DM,1}(r) \ =\ \rho_{g,1}(r)\, \Phi'_{DM,0}(r)\: . \label{approxrho1}$$ Therefore we can write $$\int \rho_{g,1}(r)\,f(\theta,\varphi)\, dl = \int {\rho'_{g,0}(r)\over \Phi'_{DM,0}(r)}\,\Phi_{DM,1}(r)\,f(\theta,\varphi)\, dl\: . \label{intd1g}$$ At this point we want to express this quantity either in terms of WL data or in terms of SZ data, depending on their quality, or, even better, in terms of an optimal combination of both.\
On the one hand, WL data provide us with straightforward access to the function $\phi_1(R)m(\varphi)\ =\ \int \Phi_{DM,1}(r)f(\theta,\varphi)\, dl$; thus we choose to approximate (\[intd1g\]) by $$\begin{aligned}
\varepsilon\, D_{g,1}(R,\varphi) & \simeq & {\rho'_{g,0}(R)\over \Phi'_{DM,0}(R)}\ \varepsilon\,\phi_1(R)\,m(\varphi) \nonumber\\
& \simeq & {\rho'_{g,0}(R)\over \Phi'_{DM,0}(R)}\ \left(\phi_{DM}(R,\varphi) - \phi_0(R)\right) \label{rho1phi}\end{aligned}$$ where we used the definitions of section (\[observations\]) and where $R$ corresponds to the radius observed in the image plane, i.e. the distance between the line of sight and the center of the cluster. We will discuss this approximation in more detail in section (\[approx\]) and validate it through a practical implementation on simulations in section (\[simulation\]). But we can already make the following statements: should the line of sight follow a line of constant $r$ throughout the domain of the perturbation, this expression would be rigorously exact. Moreover, it turns out to be a good approximation because of the finite extent of the perturbation.\
On the other hand, SZ data provide us with a measurement of the function $y_1(R)m(\varphi)\ =\ \alpha \int P_{g,1}(r)f(\theta,\varphi)\, dl$; therefore we can use equations (\[hydrosimple3\]) and (\[hydrosimple1\]), which give $\rho_{g,1} = (\rho'_{g,0}/P'_{g,0})\,P_{g,1}$, to write $$\begin{aligned}
\varepsilon\, D_{g,1}(R,\varphi) & \simeq & {\rho'_{g,0}(R)\over P'_{g,0}(R)}\ \varepsilon \int P_{g,1}(r)\,f(\theta,\varphi)\, dl \nonumber\\
& \simeq & {\rho'_{g,0}(R)\over \alpha\, P'_{g,0}(R)}\ \varepsilon\, y_1(R)\,m(\varphi) \nonumber\\
& \simeq & {\rho'_{g,0}(R)\over \alpha\, P'_{g,0}(R)}\ \left(y(R,\varphi) - y_0(R)\right)\: . \label{rho1y}\end{aligned}$$ Here again we used the same notation and approximation as in equation (\[rho1phi\]). Note, however, that as soon as we assume isothermality, the ratio $\rho'_{g,0}/P'_{g,0}$ is constant, and therefore this step is exact. Were we not assuming isothermality, the departure from isothermality is expected to be weak, and thus this approximation should remain reasonable.
These last two alternative steps are crucial to our method, since these approximations link the non-spherically-symmetric components of the various quantities. They are reasonable, as will be discussed in section (\[approx\]), and will be numerically tested in section (\[simulation\]).\
Of course, only well-known quantities appear in equation (\[rho1phi\]) and (\[rho1y\]): $y$, $y_0$, $\phi_{DM}$ and $\phi_0$ are direct observational data whereas $P_{g,0}(r)$ and $\rho_{g,0}(r)$ are zeroth order quantities previously derived.
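A minimal sketch of equation (\[rho1y\]) in the isothermal limit, where $\rho'_{g,0}/P'_{g,0}=1/(\beta T_g)$ is constant and the step is exact (the map, profile and constants below are illustrative); the zeroth-order term is estimated by annular averaging, as in the protocol above:

```python
import numpy as np

alpha, beta, Tg = 1.0, 1.0, 2.0          # illustrative constants (isothermal gas)
n = 200
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
R, phi = np.hypot(X, Y), np.arctan2(Y, X)

# Truth: a small non-circular projected-density correction D_{g,1}
D1_true = 0.05 * np.cos(phi) * np.exp(-(R - 0.4) ** 2 / 0.01)
y0 = np.exp(-R ** 2 / 0.2)               # circular part of the y map
ymap = y0 + alpha * beta * Tg * D1_true  # isothermal: y = alpha*beta*T_g * D_g

# Estimate y_0(R) by azimuthal averaging, then apply eq. (rho1y):
nb = 60
edges = np.linspace(0.0, R.max(), nb + 1)
idx = np.clip(np.digitize(R.ravel(), edges) - 1, 0, nb - 1)
prof = np.array([ymap.ravel()[idx == i].mean() for i in range(nb)])
D1 = (ymap - prof[idx].reshape(n, n)) / (alpha * beta * Tg)
```

The recovered correction closely tracks the injected one; the residual error comes only from the finite annulus width used to estimate $y_0(R)$.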
### …and a projected gas temperature map to second order
The projected temperature map can be obtained in the same way as before. Using first the equation of state, we can write to first order in $\varepsilon$: $$T_g(r,\theta,\varphi) \simeq {1\over\beta}\left({P_{g,0}(r)\over\rho_{g,0}(r)} + \varepsilon\,{P_{g,1}(r)\,\rho_{g,0}(r) - P_{g,0}(r)\,\rho_{g,1}(r)\over \rho^2_{g,0}(r)}\, f(\theta,\varphi)\right)\: .$$ Hence, since $$\begin{aligned}
\zeta_g(R,\varphi) & = & \zeta_{g,0}(R) + \varepsilon\,\zeta_{g,1}(R,\varphi) \nonumber\\
& = & \int T_{g,0}(r)\, dl + \varepsilon \int T_{g,1}(r)\,f(\theta,\varphi)\, dl\end{aligned}$$ we have, using $\rho_{g,1} = (\rho'_{g,0}/P'_{g,0})\,P_{g,1}$, $$\varepsilon\,\zeta_{g,1}(R,\varphi) = {\varepsilon\over\beta} \int {\rho_{g,0}(r) - {P_{g,0}(r)\over P'_{g,0}(r)}\,\rho'_{g,0}(r)\over \rho^2_{g,0}(r)}\ P_{g,1}(r)\, f(\theta,\varphi)\, dl\: .$$ Here we choose to approximate the last integral as previously discussed, in order to make use of the observational SZ data. Therefore we rewrite this last equation as: $$\begin{aligned}
\varepsilon\,\zeta_{g,1}(R,\varphi) & \simeq & {1\over\alpha\beta}\ {\rho_{g,0}(R) - {P_{g,0}(R)\over P'_{g,0}(R)}\,\rho'_{g,0}(R)\over \rho^2_{g,0}(R)}\ \varepsilon\, y_1(R)\,m(\varphi) \nonumber\\
& \simeq & {1\over\alpha\beta}\ {\rho_{g,0}(R) - {P_{g,0}(R)\over P'_{g,0}(R)}\,\rho'_{g,0}(R)\over \rho^2_{g,0}(R)}\ \left(y(R,\varphi) - y_0(R)\right)\: .\end{aligned}$$ We obtain in this way an expression to second order for the projected temperature, in terms of either observed quantities or previously derived functions.\
### Why is the previous approximation reasonable on intuitive grounds? {#approx}
Our previous approximations can be justified on intuitive grounds, although we will also validate them numerically in section (\[simulation\]) below. They rely on the fact that perturbations have, by definition, a finite extent: the first order correction to the perfectly circular (spherical) term is non-zero only within a finite range. The typical size and amplitude of the perturbation can easily be scaled from the SZ and WL data sets. This guarantees the validity of our assumptions on observational grounds. The key point is that the perturbation itself has a kind of axial symmetry, whose axis goes through the center of the cluster and the peak of the perturbation. This is reasonable if the perturbation originates in an incoming filament, but not for a substructure. The latter would therefore have to be treated separately by superposition (see section (\[discussion\])). This leads naturally to the statement that the typical angle we observe in the image plane is equal to the one we would observe if the line of sight were perpendicular to its actual direction, i.e. the perturbation has intrinsically the same angular extent in the directions along the line of sight and perpendicular to it. This is illustrated schematically in figure (\[fig\_approx\]).
Given this description we are now in a position to discuss the validity of our approximation. It consists in approximating the line of sight integral $\int g(r)\Phi_{DM,1}(r)f(\theta,\varphi) dl
\displaystyle$ by $g(R) \int \Phi_{DM,1}(r)f(\theta,\varphi)
dl\displaystyle$ where $g$ is any radial function. This approximation would be exact if $g(r)$ were constant over the relevant domain, i.e. if the line of sight had a constant $r$. As mentioned before, this is the case in equation (\[rho1y\]) if we assume isothermality. But the functions $g(r)$ we might deal with may scale roughly as $r^2$, as does $\rho'_{g,0}(r)/P_{g,0}(r)$ in equation (\[rho1phi\]), and thus $g$ is far from being constant. The consequent error can be estimated by the quantity $\Delta r\, g'(r)$, where $\Delta r$ is the maximum discrepancy in $r$ between the value assumed, $g(R)$, and the actual value, as schematically illustrated in figure (\[fig\_approx2\]). In the worst case, $g'(r)$ scales as $r$. Then, using the notations defined in this figure, we get $$(\Delta r)_{max} = R\left(1-{1\over \sin\left(\theta - {\Delta\theta\over 2}\right)}\right)\: . \label{detar}$$ Naturally, the magnitude of this quantity is minimal for $\theta \simeq 90^o$ and diverges for $\theta \simeq 0^o$ when $\Delta \theta = 0^o$: the error is minimal when the line of sight is nearly tangential ($\theta \simeq 90^o$), and so almost radial in this domain, and maximal when it is radial ($\theta = 0^o$). In principle this is a very bad behavior, but in fact the closer $\theta$ is to $0^o$, the weaker the integrated perturbation is, since it becomes ever more degenerate along the line of sight: the integrated perturbations tend to a radial behavior and will therefore be absorbed in the $\Phi_{DM,0}(r)$ term. The extreme situation, $\theta = 0^o$, will produce a purely radial image as long as the perturbation exhibits a kind of axial symmetry. This error is impossible to alleviate, since we are dealing with a fully degenerate situation, but it does not flaw the method at all, since the integrated perturbation will be null. This approximation will be validated numerically below.
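This locality argument can be checked directly. The sketch below (the perturbation shape, widths and the choice $g(r)=r^2$ are all illustrative assumptions) compares $\int g(r)\,\Phi_{DM,1}f\,dl$ with $g(R)\int \Phi_{DM,1}f\,dl$ along the line of sight through the peak of an angularly localized perturbation:

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

r0, sig_r, sig_a = 1.0, 0.15, 0.3      # peak radius, radial and angular widths (toy values)
g = lambda r: r ** 2                   # a strongly varying radial weight

l = np.linspace(-3.0, 3.0, 20001)      # line of sight at impact parameter R = r0,
r = np.sqrt(r0 ** 2 + l ** 2)          # passing through the perturbation peak
ang = np.arccos(np.clip(r0 / r, -1.0, 1.0))   # angle from the perturbation axis
pert = np.exp(-(r - r0) ** 2 / (2 * sig_r ** 2)) * np.exp(-ang ** 2 / (2 * sig_a ** 2))

I_exact = trapz(g(r) * pert, l)
I_approx = g(r0) * trapz(pert, l)
rel_err = abs(I_exact - I_approx) / I_exact
```

Even with $g(r)\propto r^2$, the finite angular extent of the perturbation keeps the line of sight close to $r\simeq R$ over the support of the integrand, so the relative error stays modest.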
How to obtain an X-ray prediction?
------------------------------
The previously derived maps offer a further advantage that we now aim at exploiting, namely the ability to make precise X-ray predictions. Indeed, for a given X-ray spectral emissivity model, the X-ray spectral surface brightness is $$S_X(E) = {1\over 4\pi(1+z)^4}\int n_e^2\,\Lambda(E, T_e)\, dl$$ where $\Lambda$ is the spectral emissivity, $z$ is the redshift of the cluster and $E$ is the energy on which the observed band is centered. Hence we can write, assuming a satisfying knowledge of $z$ and $\Lambda$, with a bremsstrahlung-like scaling $\Lambda \propto T_e^{1/2}$: $$\begin{aligned}
S_X(E) & \propto & \int n_e^2\, T_e^{1/2}\, dl \nonumber\\
& \propto & \int \rho_g^2\, T_g^{1/2}\, dl \nonumber\\
& \propto & \int \rho_{g,0}^2\, T_{g,0}^{1/2}\, dl + 2\varepsilon \int \rho_{g,0}\, T_{g,0}^{1/2}\,\rho_{g,1}\, f(\theta,\varphi)\, dl \nonumber\\
& & +\ {\varepsilon\over 2} \int \rho_{g,0}^2\, T_{g,0}^{-1/2}\, T_{g,1}\, f(\theta,\varphi)\, dl\end{aligned}$$ where we omitted the $(r)$’s for clarity’s sake. If we now make use of the same approximation as used and discussed before, we can express this quantity directly in terms of the observations $y$ and $\phi$. We get indeed $$\begin{aligned}
S_X(E) & \propto & \int \rho_{g,0}^2\, T_{g,0}^{1/2}\, dl + 2\,\rho_{g,0}(R)\,T_{g,0}^{1/2}(R)\ \varepsilon \int \rho_{g,1}\, f(\theta,\varphi)\, dl \nonumber\\
& & +\ {1\over 2}\,\rho_{g,0}^2(R)\, T_{g,0}^{-1/2}(R)\ \varepsilon \int T_{g,1}\,f(\theta,\varphi)\, dl \nonumber\\
& \propto & \int \rho_{g,0}^2\, T_{g,0}^{1/2}\, dl + 2\,\rho_{g,0}(R)\,T_{g,0}^{1/2}(R)\ \varepsilon\, D_{g,1}(R,\varphi) \nonumber\\
& & +\ {1\over 2}\,\rho_{g,0}^2(R)\, T_{g,0}^{-1/2}(R)\ \varepsilon\, \zeta_{g,1}(R,\varphi)\: . \label{x1}\end{aligned}$$ Both the zeroth-order terms $T_{g,0}$ and $\rho_{g,0}$ and the second order corrections $D_{g,1}$ and $\zeta_{g,1}$ have been derived in the previous sections. We are thus able to generate self-consistently an X-ray luminosity map from our previously derived maps. This is a very nice feature of this method. We will further discuss the approximation and its potential bias in the next section.\
This derivation opens the possibility of comparing, on the one hand, SZ and WL observations with, on the other hand, precise X-ray measurements such as those made by XMM or CHANDRA. Note that in the instrumental bands of most X-ray satellites the $T_g$ dependence is very weak and can be neglected. This can easily be taken into account by eliminating the $T_g$ dependence in the previous formula. Even if the interest of such a new comparison is obvious, we will discuss it more carefully in the two following sections. In principle, one could also easily make predictions concerning the density-weighted X-ray temperature defined by the ratio $\int n_g^2 T_g\, dl / \int n_g^2\, dl
\displaystyle$, but since the gas pressure, and hence the SZ effect, tends to have a very weak gradient, we are not able in principle to reproduce all the interesting features of this quantity, namely the presence of shocks.
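A sketch of the isothermal version of equation (\[x1\]) along a single line of sight (the profiles, amplitudes and perturbation shape below are illustrative): the first-order prediction is compared with a direct integration of $(\rho_{g,0}+\varepsilon\rho_{g,1})^2$:

```python
import numpy as np

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

R0, eps = 0.8, 0.05                     # impact parameter and perturbation amplitude (toy)
l = np.linspace(-10.0, 10.0, 40001)
r = np.sqrt(R0 ** 2 + l ** 2)
rho0 = (1.0 + r ** 2) ** (-1.5)         # illustrative zeroth-order gas density
# A density perturbation localized along the line of sight (mimicking its
# finite radial and angular extent):
rho1 = np.exp(-(r - R0) ** 2 / (2 * 0.1 ** 2)) * np.exp(-(l / 0.25) ** 2 / 2)

EM_exact = trapz((rho0 + eps * rho1) ** 2, l)        # direct emission measure
EM0 = trapz(rho0 ** 2, l)                            # zeroth-order term
D1 = trapz(eps * rho1, l)                            # eps * D_{g,1} at this (R, phi)
EM_pred = EM0 + 2.0 * (1.0 + R0 ** 2) ** (-1.5) * D1 # eq. (x1), rho_{g,0}(R) pulled out
rel_err = abs(EM_exact - EM_pred) / EM_exact
```

For a small, localized perturbation the residual error contains only the neglected $\varepsilon^2$ term and the pull-out of $\rho_{g,0}(R)$, both of which are small here.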
Application on simulations {#simulation}
==========================
In order to demonstrate the ability of the method in a simplified context, we used some outputs of the recently developed N-body + hydrodynamics code RAMSES, simulating the evolution of a $\Lambda$-CDM universe. The RAMSES code is based on Adaptive Mesh Refinement (AMR) techniques which increase the spatial resolution locally, using a tree of recursively nested cells of smaller and smaller size. It reaches a formal resolution of $12\ \mathrm{kpc\,
h^{-1}}$ in the core of galaxy clusters (see Refregier and Teyssier 2000 and Teyssier 2001, [*in preparation*]{}, for details). We use here the structure of $2$ galaxy clusters extracted from the simulation to generate our needed observables: the X-ray emission measure, the SZ decrement and the projected density (or projected gravitational potential).
The relevant observables (projected mass density, SZ decrement and, for comparison purposes only, the X-ray emission measure) of the 2 clusters are depicted using a logarithmic scaling in figures \[cl1\] and \[cl2\] (upper panels). These clusters have been extracted from the simulation at $z = 0.0$ and thus tend to be relaxed. They are ordinary clusters of virial mass (defined by $\delta_{334}$ in our particular cosmology) $4.50\ 10^{14}\
h^{-1}\ \mathrm{M_{\sun}}$ and $4.15\ 10^{14}\ h^{-1}\
\mathrm{M_{\sun}}$. Both exhibit a rather regular shape; they have not recently undergone a major merger. The depicted boxes are respectively $3.5\ h^{-1} \mathrm{Mpc}$ and $4.0\ h^{-1} \mathrm{Mpc}$ wide. We smooth the outputs using a Gaussian of width $120\ h^{-1}\mathrm{kpc}$, thus degrading the resolution. We did not introduce any instrumental noise. These clusters are, to a good approximation, isothermal; thus, for the sake of simplicity, we will assume that $T_g$ is constant, making the discussion of $T_{g,0}$ and $T_{g,1}$ unnecessary at this point. We apply the method previously described, using a perturbed spherical symmetry. We deduce, by averaging over concentric annuli, a zeroth order circular description of the gas density, and then add to it some first order corrections. Note that, since we assume isothermality, the SZ data give us straightforwardly a projected gas density modulo a temperature coefficient $T_{g,0}$; thus we use the formulation of equation (\[rho1y\]), which is exact in this context. This constant temperature is fixed using the hydrostatic equilibrium and the WL data.\
In figures \[cl1\] and \[cl2\] (lower panels) we show the predicted X-ray emission measure to zeroth and first order, as well as a map of relative errors. Note that to first order the shape of the emission measure is very well reproduced. The cross-correlation coefficients between the predicted and simulated X-ray emission measures are $0.978$ and $0.986$. Of course, this is partly due to the assumed good quality of the SZ data, but it nonetheless demonstrates the validity of our perturbative approach as well as of our approximation. The approximation performed in equation (\[x1\]), i.e. the multiplication by the function $\rho_{g,0}(R)$, naturally tends to cut out the perturbations at high $R$. This is the reason why the outermost perturbations are slightly less well reproduced and the relative errors tend to increase with $R$. Nevertheless, since the emission falls rapidly with $R$, as visible in the lower figures (note the logarithmic scaling), the total flux is well conserved, to $0.9\ \%$ and $9\ \%$ respectively. This last number illustrates that the large extent of the perturbations in the second case may limit our method. An ellipsoidal fit could have helped decrease this value. Note, moreover, that the clump visible mainly in the X-ray emission measure of figure \[cl2\] is not reproduced. This is natural, because it does not appear through the SZ effect, since the pressure remains uniform throughout clumps. If resolved by WL, this substructure should anyway be treated separately, by considering the addition of a second, very small structure. Note that the first cluster shown exhibits a spherical core elongated in the outer region; thus it is not actually as ellipsoidal as it looks, which may explain why our perturbed spherical symmetry works well.\
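The figure of merit quoted above is the normalized cross-correlation coefficient between the predicted and simulated maps; a small self-contained sketch, with synthetic maps standing in for the simulation outputs:

```python
import numpy as np

def cross_corr(a, b):
    """Normalized cross-correlation coefficient of two maps."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
simulated = rng.normal(size=(64, 64))
predicted = simulated + 0.1 * rng.normal(size=(64, 64))  # toy prediction with 10% error
c = cross_corr(predicted, simulated)
```

A coefficient of $1$ means a perfect (up to offset and scale) reconstruction; the $10\%$-error toy prediction already sits around $0.995$, which puts the quoted values of $0.978$ and $0.986$ in perspective.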
Discussion
==========
Hypothesis …and non hypothesis
------------------------------
Our approach makes several assumptions. Some general and robust hypotheses have been introduced and discussed in section \[hypo\_gen\]. Note that we do not need to assume isothermality. Our key hypothesis consists in assuming the validity of a perturbative approach and in the choice of the nature of these perturbations, with a radial/angular separation. Theoretical predictions, observations and simulations show that relaxed clusters are regular and globally spheroidal objects, which is what initially motivated our approach; our demonstration on simulations then shows this choice to be reasonable. Such an approach cannot deal properly with sharp features such as shock waves due to infalling filaments. Assuming the validity of the angular and radial separation leads, using to first order in $\varepsilon$ the hydrostatic equilibrium and the equation of state, to the equality of these angular parts for all relevant physical quantities ($P_g$, $T_g$, $\phi_{DM}$…). If this is not satisfied in practice, then we could question either the validity of this separation or the physics of the cluster. Our experience with simulations shows that for reasonably relaxed clusters, not going through a major merger, the angular part of the perturbation is constant among observables. Thus it looks like the separation (and thus the equality of the angular perturbations) is a good hypothesis in general, and its failure is a sign of non-relaxation, i.e. of the non-validity of our general physical hypotheses.
Another important hypothesis lies in the validity of the approximation used. Note first that even if its form is general, its validity depends on the quantity that is assumed to be constant along the integral. In the case of the gas density obtained from the SZ map, it is an exact statement as soon as we assume isothermality, and since clusters in general are not too far from isothermality, this hypothesis is reasonable.
Now, some “non-hypotheses” worth remembering are isothermality and sphericity (or ellipsoidality). This might be of importance. Indeed, when evaluating the Hubble constant from joint SZ and X-ray measurements, it has been shown in [@InSu95; @RoSt97; @PuGr00] that both the asphericity and the non-isothermality of the relevant cluster can yield some important bias (up to $20 \%$). Even if this measurement is not our concern here, it is interesting to note that these hypotheses are not required here.
The equivalent spheroidal symmetry case
---------------------------------------
So far, we have worked with and discussed the perturbed spherical symmetry case. If we turn to spheroidal symmetry, the problem is very similar as long as we assume knowledge of the inclination angle $i$ between the polar axis of the system and the line of sight. This is what we recall in appendix B, which is directly inspired from [@FaRy84]: once the projection is properly parametrised, we get for the projected quantity, here the pressure,
$$y(\xi) = \frac{2 B_e}{R} \int_\xi^{\infty} \frac{P_{g,0}(t)\, t\, dt}{(t^2-\xi^2)^{1/2}}\ ,$$
$$P_{g,0}(t) = -\frac{1}{2\pi}\frac{R}{B_e} \int_t^{\infty} y'(\xi)\, \frac{d\xi}{(\xi^2-t^2)^{1/2}}\ ,$$
following the notations of appendix B. Since we are dealing with the same Abel integral, we can proceed in two steps as we did before.
Even if the inclination angle is *a priori* not accessible directly through single observations, it has been demonstrated that it is possible to evaluate it using the deprojection of an axially symmetric distribution of either X-ray/SZ maps or SZ/surface-density maps [@ZaSq98; @ZaSq00]. Our approach in this work tries to avoid making the full 3-D structure explicit rather than building it, and this is done in a simple self-consistent way; therefore we will not get into the details of this procedure, which will be discussed in a coming work (Doré 2001, [*in preparation*]{}). Note also that an axially symmetric configuration elongated along the line of sight may appear as spherical. This is a difficult bias to alleviate without any prior on the profile. In our case, our method will be biased in the sense that the deprojected profile will be wrong. Nevertheless, we might hope to reproduce properly the global quantities, like the abundance of DM or gas, and so to alleviate some well-known systematics (see previous section) in measuring the baryon fraction.
Conclusion and outlook
======================
In this paper we have presented and demonstrated the efficiency of an original method allowing us to perform in a self-consistent manner the joint analysis of SZ and WL data. Using it on noise-free simulations, we demonstrated how well it can be used to make X-ray surface brightness predictions, or equivalently emission-measure predictions. Our choice in this approach has been to somehow hide the deprojection by using some appropriate approximations. Thus we do not fully resolve the 3-D structure of clusters, but note that the work presented here is definitely a first step towards a full deprojection (Doré 2001, [*in preparation*]{}). Some further refinements of the method are in progress as well.\
When applying the method to true data, instrumental noise is an important matter of concern. Indeed, whereas the strong advantage of a parametric approach using a $\beta$-model is that it allows one to adjust the relevant parameters, $r_c$ and $\beta$, on the projected quantity (the image) itself, which is rather robust to noise, it might be delicate to determine the profiles and their derivatives by a direct deprojection. Nevertheless, our perturbative approach, as it first relies on a zeroth-order quantity found by averaging over annuli, a noise-suppressing step (at least far from the center), and then works on a mere projected perturbation, should be quite robust as well. Consequently we hope to apply it very soon to true data. Furthermore, in this context it should allow a better treatment of the systematics (asphericity, non-isothermality,…) plaguing any measurement of the baryon fraction $f_b$ or the Hubble constant $\mathrm{H}_0$ using X-ray and SZ effect [@InSu95]. These points will be discussed elsewhere (Doré 2001, [*in preparation*]{}).
Acknowledgment {#acknowledgment .unnumbered}
==============
O.D. is grateful to G. Mamon, M. Bartelmann, S. Zaroubi and especially S. Dos Santos for valuable discussions. We thank J. Carlstrom for allowing the use of some of their SZ images.
M. Bartelmann and P. Schneider, , 340, 291, 2001\
J. Binney and S. Tremaine, Princeton University Press, 1987\
M. Birkinshaw, , 310, 97, 1999\
A. Cavaliere and R. Fusco-Femiano, A&A, 49, 137, 1976\
F.J. Castander , In F. Durret, D. Gerbal, editors, [*Constructing the Universe with clusters of galaxies*]{}, 2000\
J. Chièze, J. Alimi and R. Teyssier, , 495, 630, 1998\
F.-X. Désert , [*New Astronomy*]{}, 3, 655-669, 1998\
O. Doré , In F. Durret, D. Gerbal, editors, [*Constructing the Universe with clusters of galaxies*]{}, 2000\
A.C. Fabian , , 248, 47, 1981\
D. Fabricant, G. Rybicki and P. Gorenstein, , 286, 186, 1984\
G. Holder and J. Carlstrom, In de Oliveira-Costa A., Tegmark M., editors, [*ASP Conf. Ser. 181: Microwave Foregrounds*]{}, 1999\
G. Holder , In F. Durret, D. Gerbal, editors, [*Constructing the Universe with clusters of galaxies*]{}, 2000\
L. Grego , , 539, 39, 2000\
Y. Inagaki, T. Suginohara, Y. Suto, , 47, 411, 1995\
Y. Mellier, , 37, 127, 2000\
D. Puy , astro-ph/0009114\
Y. Rephaeli, , 33, 541, 1995\
K. Reblinsky and M. Bartelmann, astro-ph/9909155\
K. Reblinsky, [*PhD thesis*]{} at Ludwig Maximilians Universität München, 2000\
E.D. Reese , , 533, 38, 2000\
A. Refregier and R. Teyssier, astro-ph/0012086, submitted to [*Phys. Rev. D*]{}\
K. Roettiger, J. Stone and R. F. Mushotzky, , 482, 588, 1997\
C.L. Sarazin, [*X-ray emission from clusters of galaxies*]{}, Cambridge University Press, 1988\
R. Sunyaev and I. Zel’dovich, [*Comments Astrophys. Space Phys.*]{}, 4, 173, 1972\
R. Sunyaev and I. Zel’dovich, , 18, 537, 1980\
R. Teyssier, R. Chièze and J. Alimi, , 480, 36, 1997\
K. Yoshikawa and Y. Suto, , 513, 549, 1999\
S. Zaroubi [*et al*]{}, , 500, L87+, 1998\
S. Zaroubi , astro-ph/0010508\
I. Zel’dovich and R. Sunyaev, [*Astrophys. Space Science*]{}, 4, 301, 1969
Annexe : Deprojection in spheroidal symmetry {#annexe-deprojection-in-spheroidal-symmetry .unnumbered}
============================================
In this appendix we recall some useful results concerning spheroid projection derived by Fabricant, Gorenstein and Rybicki [@FaRy84]. In the context of spheroidal systems, Cartesian coordinate systems are the most convenient for projection. Thus, if the observer’s coordinate system $(x,y,z)$ is chosen such that the line of sight is along the $z$ axis and such that the polar axis of the spheroidal system $z'$ lies in the $x-z$ plane at an inclination angle $i$ to the $z$-axis, then, in the Cartesian coordinate system $(x',y',z')$, the general physical quantities relevant to our problem depend only on the parameter $t$ defined by
$$t^2 = \frac{x'^2+y'^2}{B_e^2} + \frac{z'^2}{A_e^2}
     = \frac{(x\cos i + z\sin i)^2 + y^2}{B_e^2} + \frac{(z\cos i - x\sin i)^2}{A_e^2}\ .$$
If we project a physical quantity $G(t)$ on the observer’s sky plane $x-y$, then
$$I(x,y) = I(\xi) = \int_{-\infty}^{+\infty} G(t)\, dl
        = \frac{2 B_e}{R} \int_\xi^{\infty} \frac{G(t)\, t\, dt}{(t^2-\xi^2)^{1/2}}$$
where
$$\xi^2 \equiv \frac{x^2}{(R A_e)^2} + \frac{y^2}{B_e^2}\ , \qquad
  R \equiv \left[\sin^2 i + \left(\frac{B_e}{A_e}\right)^2 \cos^2 i\right]^{1/2}\ .$$
Of course this result shows that if we were to observe a spheroidal system we would map ellipses with an axial ratio equal to $\displaystyle {B \over A} = {1\over R}{B_e\over A_e}$. But the main result of this appendix is that we obtain in the end an Abel integral similar to the one obtained in the case of a spherical system, where the radius has been replaced by the parameter $t$. This simple fact justifies the very analogous treatment developed in this paper for spherical and spheroidal systems.
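As a quick numerical sanity check of the Abel-type projection recalled above, the following Python sketch (our illustration, not part of the original analysis; it assumes NumPy and SciPy are available) projects a spherical Gaussian profile $f(t)=e^{-t^2}$ along the line of sight and compares the result with the closed-form projection $I(\xi)=\sqrt{\pi}\,e^{-\xi^2}$:

```python
import numpy as np
from scipy.integrate import quad

def abel_project(f, xi):
    """Abel projection I(xi) = 2 * integral_xi^inf f(t) t dt / sqrt(t^2 - xi^2).
    The substitution s = sqrt(t^2 - xi^2) turns t dt / sqrt(t^2 - xi^2) into ds
    and removes the integrable singularity at the lower bound t = xi."""
    integrand = lambda s: 2.0 * f(np.sqrt(xi**2 + s**2))
    value, _ = quad(integrand, 0.0, np.inf)
    return value

# For f(t) = exp(-t^2) the projection is analytic: I(xi) = sqrt(pi) * exp(-xi^2)
profile = lambda t: np.exp(-t**2)
for xi in (0.0, 0.5, 1.0, 2.0):
    exact = np.sqrt(np.pi) * np.exp(-xi**2)
    assert abs(abel_project(profile, xi) - exact) < 1e-7
```

The same substitution is the standard trick when deprojecting such profiles numerically, since the inverse Abel integral carries the identical square-root singularity.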
---
bibliography:
- 'IEEEabrv.bib'
- 'mo.bib'
- 'meta.bib'
- 'frameworks.bib'
- 'appli.bib'
---
Introduction {#sec:intro}
============
Evolutionary Multiobjective Optimization (EMO) is one of the most challenging areas in the field of multicriteria decision making. Generally speaking, a Multiobjective Optimization Problem (MOP) can be defined by a vector function $f$ of $n\geq2$ objective functions $(f_1,f_2,\dots,f_n)$, a set $X$ of feasible solutions in the *decision space*, and a set $Z$ of feasible points in the *objective space*. Without loss of generality, we assume that $Z \subseteq \mathbb{R}^n$ and that all $n$ objective functions are to be minimized. To each decision vector $x \in X$ is assigned an objective vector $z \in Z$ on the basis of the vector function $f : X \rightarrow Z$ with $z = f(x)$. A dominance relation is then usually assumed, so that a partial order is induced over $X$. Numerous dominance relations exist in the literature and will be discussed later in the paper. Let us consider the well-known concept of *Pareto dominance*, for which a given objective vector $z \in Z$ is said to *dominate* another objective vector $z' \in Z$ if $\forall i \in \{1,2,\dots,n\}$, $z_i \leq z_i'$ and $\exists j \in \{1,2,\dots,n\}$ such that $z_j < z_j'$. An objective vector $z \in Z$ is said to be *nondominated* if there does not exist any other objective vector $z' \in Z$ such that $z'$ dominates $z$. By extension, we will say that a decision vector $x \in X$ *dominates* a decision vector $x' \in X$ if $f(x)$ dominates $f(x')$, and that a decision vector $x \in X$ is *nondominated* (or *efficient*, *Pareto optimal*) if $f(x)$ maps to a nondominated point. The set of all efficient solutions is called the *efficient* (or *Pareto optimal*) *set*, and its mapping in the objective space is called the *Pareto front*. In practice, different resolution scenarios exist and strongly rely on the cooperation between the search process and the decision-making process. Indeed, a distinction can be made between the following forms such a cooperation might take.
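For concreteness, the Pareto-dominance relation and the extraction of the nondominated set just defined can be sketched in a few lines of Python (illustrative code for the minimization convention used throughout the paper):

```python
def dominates(z, zp):
    """Pareto dominance for minimization: z dominates zp iff z is no worse in
    every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(z, zp)) and any(a < b for a, b in zip(z, zp))

def nondominated(points):
    """Objective vectors of `points` not dominated by any other vector."""
    return [z for z in points
            if not any(dominates(zp, z) for zp in points if zp != z)]

front = nondominated([(1, 4), (2, 2), (3, 3), (4, 1)])
assert front == [(1, 4), (2, 2), (4, 1)]  # (3, 3) is dominated by (2, 2)
```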
For instance, the Decision Maker (DM) may be interested in identifying the whole set of efficient solutions, in which case the choice of the most preferred solution is made [*a posteriori*]{}. However, when preference information can be provided [*a priori*]{}, the search may lead to the potential best compromise solution(s) over a particular preferred region of the Pareto front. A third class of methods consists of a progressive, interactive cooperation between the DM and the solver. In any case, however, the overall goal is often to identify a set of good-quality solutions. But generating such a set exactly is usually infeasible, due to the complexity of the underlying problem or to the large number of optima. Therefore, the goal becomes identifying a good approximation of it. Evolutionary algorithms are commonly used to this end, as they are particularly well-suited to find multiple efficient solutions in a single simulation run. The reader is referred to [@Deb:01; @CLV:07] for more details about EMO.
As pointed out by different authors (see [*e.g.*]{} [@CLV:07; @ZLB:04]), approximating an efficient set is itself a bi-objective problem. Indeed, the approximation to be found must have both good convergence and distribution properties, as its mapping in the objective space has to be ($i$) close to, and ($ii$) well-spread over, the (generally unknown) optimal Pareto front, or a subpart of it. As a consequence, the main difference between the design of a single-objective and of a multiobjective search method deals with these two goals. Over the last two decades, major advances, from both the algorithmic and the theoretical point of view, have been made in the EMO field, and a large number of algorithms have been proposed. Among existing approaches, one may cite VEGA [@Sch:85], MOGA [@FF:93], NSGA [@SD:94], NSGA-II [@DA+:02], NPGA [@HNG:94], SPEA [@ZT:99], SPEA2 [@ZLT:01] or PESA [@CKO:00]. All these methods are presented and described in [@CLV:07]. Note that another topic to mention while dealing with EMO relates to performance assessment. Various quality indicators have been proposed in the literature for evaluating the performance of multiobjective search methods. The reader is referred to [@ZT+:03] for a review.
In [@ZLB:04], Zitzler et al. notice that initial EMO approaches were mainly focused on moving toward the Pareto front [@Sch:85; @Fou:85]. Afterwards, diversity preservation mechanisms quickly emerged [@FF:93; @SD:94; @HNG:94]. Then, at the end of the nineties, the concept of elitism, related to the preservation of nondominated solutions, became very popular and is now employed in most recent EMO methods [@ZT:99; @ZLT:01; @KC:00]. The specific issues of *fitness assignment*, *diversity preservation* and *elitism* are commonly approved in the community and are also presented under different names in, for instance, [@CLV:07; @ZLB:04]. Based on these three main notions, several attempts have been made in the past to unify EMO algorithms. In [@LZT:00], the authors focus on elitist EMO search methods. This study was later extended in [@ZLB:04], where the algorithmic concepts of fitness assignment, diversity preservation and elitism are largely discussed. More recently, Deb proposed a robust framework for EMO [@Deb:08] based on NSGA-II (Non-dominated Sorting Genetic Algorithm) [@DA+:02]. The latter approach is decomposed into three main EMO components related to elite preservation, nondominated-solutions emphasis and diversity maintenance. However, this model is strictly focused on NSGA-II, whereas other state-of-the-art methods can be decomposed in the same way. Indeed, a lot of components are shared by many EMO algorithms, so that, in some sense, they can all be seen as variants of the same unified model, as will be highlighted in the remainder of the paper. Furthermore, some existing models have been used as a basis for the design of tools to help practitioners with MOP solving. For instance, following [@ZLB:04; @LZT:00], the authors proposed a software framework for EMO called PISA [@BL+:03].
PISA is a platform- and programming-language-independent interface for search algorithms that consists of two independent modules (the variator and the selector) communicating via text files. Note that other software frameworks dealing with the design of metaheuristics for EMO have been proposed, including jMetal [@DN+:06], the MOEA toolbox for Matlab [@TL+:01], MOMHLib++ [@MOMHLib++] and Shark [@Shark]. These packages will be discussed later in the paper.
The purpose of the present work is twofold. Firstly, a unified view of EMO is given. We describe the basic components shared by many algorithms, and we introduce a general-purpose model as well as a classification of its fine-grained components. Next, we confirm its high genericity and modularity by treating a number of state-of-the-art methods as simple instances of the model. NSGA-II [@DA+:02], SPEA2 [@ZLT:01] and IBEA [@ZK:04] are taken as examples. Afterwards, we illustrate how this general-purpose model has been used as a starting point for the design and the implementation of an open-source software framework dedicated to the reusable design of EMO algorithms, namely ParadisEO-MOEO[^1]. All the implementation choices have been strongly motivated by the unified view presented in the paper. This free C++ white-box framework has been widely experimented with and has enabled the resolution of a large diversity of MOPs from both academic and real-world applications. In comparison to the literature, we expect the proposed unified model to be more complete and to provide a more fine-grained decomposition, and the software framework to offer a more modular implementation than previous similar attempts. The remainder of the paper is organized as follows. In Sect. \[sec:model\], a concise, unified and up-to-date presentation of EMO techniques is discussed. Next, a motivated presentation of the software framework introduced in this paper is given in Sect. \[sec:paradiseo\], followed by a detailed description of the design and the implementation of EMO algorithms under ParadisEO-MOEO. Finally, the last section concludes the paper.
The Proposed Unified Model {#sec:model}
==========================
An Evolutionary Algorithm (EA) [@ES:03] is a search method that belongs to the class of metaheuristics [@Tal:09], and where a population of solutions is iteratively improved by means of some stochastic operators. Starting from an initial population, each individual is evaluated in the objective space and a selection scheme is performed to build a so-called parent population. An offspring population is then created by applying variation operators. Next, a replacement strategy determines which individuals will survive. The search process is iterated until a given stopping criterion is satisfied.
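The generic loop just described can be sketched as follows. This is a minimal illustrative Python skeleton in which every operator is a plug-in component; the operator choices in the toy instantiation are ours, not ParadisEO-MOEO components:

```python
import random

def evolutionary_algorithm(init, evaluate, select, vary, replace, stop):
    """Generic EA skeleton: initialize, then iterate selection, variation,
    evaluation and replacement until the stopping criterion holds."""
    population = [(x, evaluate(x)) for x in init()]
    while not stop(population):
        parents = select(population)
        offspring = [(x, evaluate(x)) for x in vary(parents)]
        population = replace(population, offspring)
    return population

def budget(n_iterations):
    """Stopping criterion: a fixed number of iterations."""
    state = {"left": n_iterations}
    def stop(_population):
        state["left"] -= 1
        return state["left"] < 0
    return stop

# Toy single-objective instantiation: minimize x^2 over the integers.
random.seed(1)
init_pop = [random.randint(-50, 50) for _ in range(20)]
final = evolutionary_algorithm(
    init=lambda: init_pop,
    evaluate=lambda x: x * x,
    select=lambda pop: [min(random.sample(pop, 2), key=lambda p: p[1])[0]
                        for _ in range(len(pop))],              # binary tournament
    vary=lambda parents: [x + random.choice((-1, 0, 1)) for x in parents],
    replace=lambda pop, off: sorted(pop + off, key=lambda p: p[1])[:len(pop)],
    stop=budget(100),
)
assert min(f for _, f in final) <= min(x * x for x in init_pop)  # elitist
```

The multiobjective extensions discussed next replace the scalar fitness comparisons in `select` and `replace` by dominance-, diversity- or indicator-based criteria.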
As noticed earlier in the paper, in the frame of EMO, the main extensions deal with the issues of *fitness assignment*, *diversity preservation* and *elitism*. Indeed, contrary to single-objective optimization, where the fitness value of a solution corresponds to its single objective value in most cases, a multiobjective fitness assignment scheme is here required to assess the performance of individuals, as the mapping of a solution in the objective space is now multi-dimensional. Moreover, trying to approximate the efficient set is not only a question of convergence: the final approximation also has to be well spread over the objective space, so that a diversity preservation mechanism is usually required. This fitness and diversity information is necessary to discriminate individuals at the selection and the replacement steps of the EA. Next, the main purpose of elitism is to avoid the loss of the best-found nondominated solutions during the stochastic search process. These solutions are frequently incorporated into a secondary population, the so-called *archive*. The update of the archive contents possibly appears at each EA iteration.
As a consequence, whatever the MOP to be solved, the common concepts for the design of an EMO algorithm are the following ones:
1. Design a representation.
2. Design a population initialization strategy.
3. Design a way of evaluating a solution.
4. Design suitable variation operators.
5. Decide a fitness assignment strategy.
6. Decide a diversity preservation strategy.
7. Decide a selection strategy.
8. Decide a replacement strategy.
9. Decide an archive management strategy.
10. Decide a continuation strategy.
When dealing with any kind of metaheuristic, one may distinguish problem-specific and generic components. Indeed, the first four common concepts presented above strongly depend on the MOP at hand, while the six latter ones can be considered as problem-independent, even if some problem-dependent strategies can also be envisaged in some particular cases. Note that the concepts of representation and evaluation are shared by any metaheuristic, the concepts of population initialization and stopping criterion are shared by any population-based metaheuristic, the concepts of variation operators, selection and replacement are shared by any EA, whereas the concepts of fitness, diversity and archiving are specific to EMO.
Components Description
----------------------
This section provides a description of components involved in the proposed unified model. EMO-related components are detailed in more depth.
### Representation
Solution representation is the starting point for anyone who plans to design any kind of metaheuristic. A MOP solution needs to be represented both in the decision space and in the objective space. While the representation in the objective space can be seen as problem-independent, the representation in the decision space must be relevant to the tackled problem. Successful application of metaheuristics strongly requires a proper solution representation. Various encodings may be used, such as binary variables, real-coded vectors, permutations, discrete vectors, and more complex representations. Note that the choice of a representation will considerably influence the way solutions will be initialized and evaluated in the objective space, and the way variation operators will be applied.
### Initialization
Whatever the algorithmic solution to be designed, a way to initialize a solution (or a population of solutions) is expected. While dealing with any population-based metaheuristic, one has to keep in mind that the initial population must be well diversified in order to prevent a premature convergence. This remark is even more true for MOPs where the goal is to find a well-converged and a well-spread approximation. The way to initialize a solution is closely related to the problem under consideration and to the representation at hand. In most approaches, the initial population is generated randomly or according to a given diversity function.
### Evaluation
The problem at hand is to optimize a set of objective functions simultaneously over a given search space. Then, each time a new solution enters the population, its objective vector must be evaluated, [*i.e.*]{} the value corresponding to each objective function must be set.
### Variation
The purpose of variation operators is to modify the representation of solutions in order to move them in the search space. Generally speaking, while dealing with EAs, these problem-dependent operators are stochastic. Mutation operators are unary operators acting on a single solution whereas recombination (or crossover) operators are mostly binary, and sometimes n-ary.
### Fitness Assignment
In the single-objective case, the fitness value assigned to a given solution is most often its unidimensional objective value. While dealing with MOPs, fitness assignment aims to guide the search toward Pareto optimal solutions for a better convergence. Extending [@CLV:07; @ZLB:04], we propose to classify existing fitness assignment schemes into four different families:
- *Scalar approaches*, where the MOP is reduced to a single-objective optimization problem. A popular example consists of combining the $n$ objective functions into a single one by means of a weighted-sum aggregation. Other examples are $\epsilon$-constraint or achievement function-based methods [@Mie:99].
- *Criterion-based approaches*, where each objective function is treated separately. For instance, in VEGA (Vector Evaluated GA) [@Sch:85], a parallel selection is performed where solutions are discerned according to their values on a single objective function, independently of the others. In lexicographic methods [@Fou:85], a hierarchical order is defined between objective functions.
- *Dominance-based approaches*, where a dominance relation is used to classify solutions. For instance, *dominance-rank* techniques compute the number of population items that dominate a given solution [@FF:93]. Such a strategy takes part in, [*e.g.*]{}, Fonseca and Fleming's MOGA (Multiobjective GA) [@FF:93]. In *dominance-count* techniques, the fitness value of a solution corresponds to the number of individuals that are dominated by that solution [@ZT:99]. Finally, *dominance-depth* strategies consist of classifying a set of solutions into different classes (or fronts) [@Gol:89]. Hence, a solution that belongs to a class does not dominate another one from the same class; individuals from the first front all belong to the best nondominated set, individuals from the second front all belong to the second-best nondominated set, and so on. The latter approach is used in NSGA (Non-dominated Sorting GA) [@SD:94] and NSGA-II [@DA+:02]. Note, however, that several schemes can also be combined, as is the case, for example, in [@ZT:99]. In the frame of dominance-based approaches, the most commonly used dominance relation is based on Pareto dominance as given in Sect. \[sec:intro\]. But some recent techniques are based on other dominance operators, such as $\epsilon$-dominance in [@DMM:05] or g-dominance in [@MS+:08].
- *Indicator-based approaches*, where the fitness values are computed by comparing individuals on the basis of a quality indicator $I$. The chosen indicator represents the overall goal of the search process. Generally speaking, no particular diversity preservation mechanism is usually necessary, with regard to the indicator being used. Examples of indicator-based EAs are IBEA (Indicator-Based EA) [@ZK:04] or SMS-EMOA (S-Metric Selection EMO Algorithm) [@BNE:07].
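As an illustration of the dominance-depth scheme mentioned above, the following Python sketch ranks a set of objective vectors into successive nondominated fronts. It is a naive $O(N^3)$ version; the fast nondominated sorting used in NSGA-II obtains the same fronts with additional bookkeeping:

```python
def dominates(z, zp):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(z, zp)) and any(a < b for a, b in zip(z, zp))

def nondominated_sort(points):
    """Dominance-depth ranking: front 0 is the nondominated set, front 1 is the
    nondominated set once front 0 is removed, and so on."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [z for z in remaining
                 if not any(dominates(zp, z) for zp in remaining if zp != z)]
        fronts.append(front)
        remaining = [z for z in remaining if z not in front]
    return fronts

fronts = nondominated_sort([(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)])
assert fronts == [[(1, 4), (2, 2), (4, 1)], [(3, 3)], [(4, 4)]]
```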
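Similarly, the indicator-based scheme can be illustrated with the additive $\epsilon$-indicator used by IBEA, where each individual receives a fitness of the form $F(x)=\sum_{y\neq x} -e^{-I(y,x)/\kappa}$ [@ZK:04]. The Python sketch below is our illustrative rendition, not the IBEA reference implementation:

```python
import math

def eps_indicator(a, b):
    """Additive epsilon indicator between two objective vectors (minimization):
    the smallest eps such that a, shifted by eps in every objective,
    weakly dominates b."""
    return max(ai - bi for ai, bi in zip(a, b))

def ibea_fitness(points, kappa=0.05):
    """IBEA-style fitness: a solution that is dominated by another one gets a
    strongly negative value, since the indicator against it is negative."""
    return [sum(-math.exp(-eps_indicator(other, z) / kappa)
                for other in points if other != z)
            for z in points]

fitness = ibea_fitness([(1, 4), (2, 2), (3, 3)])
assert fitness.index(min(fitness)) == 2  # the dominated point (3, 3) is worst
```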
### Diversity Assignment
As noticed in the previous section, approximating the efficient set is not only a question of convergence. The final approximation also has to be well spread over the objective space. However, classical dominance-based fitness assignment schemes often tend to produce premature convergence by privileging nondominated solutions, which does not guarantee a uniformly sampled output set. In order to prevent that issue, a diversity preservation mechanism, based on a given distance measure, is usually integrated into the algorithm to uniformly distribute the population over the trade-off surface. In the frame of EMO, a common distance measure is based on the Euclidean distance between objective vectors. But this measure can also be defined in the decision space, or can even combine both spaces. Popular examples of EMO diversity assignment techniques are sharing and crowding. The notion of *sharing* (or *fitness sharing*) was initially suggested by Goldberg and Richardson [@GR:87] to preserve diversity among the solutions of an EA population. It was first employed by Fonseca and Fleming [@FF:93] in the frame of EMO. This *kernel* method consists of estimating the distribution density of a solution using a so-called *sharing function* that is related to the sum of distances to its neighborhood solutions. A sharing-distance parameter specifies the similarity threshold, [*i.e.*]{} the size of *niches*. The distance measure between two solutions can be defined in the decision space, in the objective space, or can even combine both. Nevertheless, a distance metric partly or fully defined in the parameter space strongly depends on the tackled problem. Another diversity assignment scheme is the concept of *crowding*, first suggested by Holland [@Hol:75] and used by De Jong to prevent *genetic drift* [@Dej:75]. It is employed by Deb et al. [@DA+:02] in the frame of NSGA-II. Contrary to sharing, this scheme allows one to maintain diversity without specifying any parameter. It consists in estimating the density of solutions surrounding a particular point of the objective space.
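The crowding estimator used in NSGA-II can be sketched as follows (illustrative Python; points are assumed to be distinct objective vectors): for each objective, the front is sorted and each interior point accumulates the normalized gap between its two neighbors, while boundary points receive an infinite distance so that they are always preferred:

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a front of distinct objective vectors."""
    n_obj = len(front[0])
    dist = {z: 0.0 for z in front}
    for i in range(n_obj):
        ordered = sorted(front, key=lambda z: z[i])
        span = (ordered[-1][i] - ordered[0][i]) or 1.0  # guard degenerate spans
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")  # boundary points
        for prev, cur, nxt in zip(ordered, ordered[1:], ordered[2:]):
            dist[cur] += (nxt[i] - prev[i]) / span
    return dist

d = crowding_distance([(1.0, 5.0), (2.0, 3.0), (2.5, 2.5), (5.0, 1.0)])
assert d[(1.0, 5.0)] == d[(5.0, 1.0)] == float("inf")
assert abs(d[(2.0, 3.0)] - 1.0) < 1e-9 and abs(d[(2.5, 2.5)] - 1.25) < 1e-9
```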
### Selection {#sec:select}
The selection step is one of the main search operators of EAs. It consists of choosing some solutions that will be used to generate the offspring population. In general, the better an individual is, the higher its chance of being selected. Common strategies are deterministic or stochastic tournament, roulette-wheel selection, random selection, etc. An existing EMO-specific elitist scheme consists of including solutions from the archive in the selection process, so that nondominated solutions also contribute to the evolution engine. Such an approach has successfully been applied in various elitist EMO algorithms, including SPEA [@ZT:99], SPEA2 [@ZLT:01] and PESA [@CKO:00]. In addition, in order to prohibit the crossover of dissimilar parents, mating restriction [@Gol:89] can also be mentioned as a candidate strategy to be integrated into EMO algorithms.
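As an example of such a selection strategy, the binary tournament of NSGA-II relies on the crowded-comparison operator, which prefers a lower dominance rank and breaks ties in favour of the larger crowding distance. A minimal Python sketch (the names are ours, for illustration):

```python
import random

def crowded_compare(a, b, rank, crowd):
    """NSGA-II crowded-comparison operator: the lower dominance rank wins;
    ties are broken in favour of the larger crowding distance."""
    if rank[a] != rank[b]:
        return a if rank[a] < rank[b] else b
    return a if crowd[a] >= crowd[b] else b

def binary_tournament(population, rank, crowd, n, rng=random):
    """Select n parents by repeated binary tournaments on random pairs."""
    return [crowded_compare(rng.choice(population), rng.choice(population),
                            rank, crowd)
            for _ in range(n)]

pop = ["a", "b", "c"]
rank = {"a": 0, "b": 0, "c": 1}
crowd = {"a": float("inf"), "b": 0.4, "c": 0.9}
assert crowded_compare("a", "c", rank, crowd) == "a"  # better rank wins
assert crowded_compare("a", "b", rank, crowd) == "a"  # tie broken by crowding
random.seed(7)
parents = binary_tournament(pop, rank, crowd, 10)
assert len(parents) == 10 and all(p in pop for p in parents)
```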
### Replacement
Selection pressure is also applied at the replacement step, where survivors are selected from both the current and the offspring population. In generational replacement, the offspring population systematically replaces the parent one. An elitist strategy consists of selecting the $N$ best solutions from both populations, where $N$ stands for the appropriate population size.
### Elitism
Another essential issue in MOP solving is the notion of *elitism*. It mainly consists of maintaining an external set, the so-called *archive*, that allows one to store either all or a subset of the nondominated solutions found during the search process. This secondary population mainly aims at preventing the loss of these solutions during the stochastic optimization process. The update of the archive contents with new potential nondominated solutions is mostly based on the Pareto-dominance criterion. But other dominance criteria are found in the literature and can be used instead of the Pareto-dominance relation; examples are weak dominance, strict dominance, $\epsilon$-dominance [@HP:94], etc. When dealing with archiving, one may distinguish four different techniques, depending on the problem properties, the designed algorithm and the number of desired solutions: ($i$) *no archive*, ($ii$) an *unbounded archive*, ($iii$) a *bounded archive* or ($iv$) a *fixed-size archive*. Firstly, if the current approximation is maintained by, or contained in, the main population itself, there can be no archive at all. On the other hand, if an archive is maintained, it usually comprises the current nondominated set approximation, as dominated solutions are removed. An unbounded archive can then be used in order to save the whole set of nondominated solutions found since the beginning of the search process. However, as some continuous optimization problems may contain an infinite number of nondominated solutions, it is simply not possible to save them all. Therefore, additional operations must be used to reduce the number of stored solutions, and a common strategy is to bound the size of the archive according to some fitness and/or diversity assignment scheme(s). Finally, another archiving technique consists of a fixed-size storage capacity, where a bounding mechanism is used when there are too many nondominated solutions, and some dominated solutions are integrated into the archive if the nondominated set is too small, as is done for instance in SPEA2 [@ZLT:01]. Usually, an archive is used as an external storage only. However, archive members can also be integrated during the selection phase of an EMO algorithm [@ZT:99], see Sect. \[sec:select\].
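A minimal Pareto archive update can be sketched as follows (illustrative Python; the dominance relation is a parameter, so weak or $\epsilon$-dominance variants can be plugged in instead of strict Pareto dominance):

```python
def pareto_dominates(z, zp):
    """Minimization Pareto dominance between two objective vectors."""
    return (all(a <= b for a, b in zip(z, zp))
            and any(a < b for a, b in zip(z, zp)))

def update_archive(archive, candidate, dom=pareto_dominates):
    """Unbounded Pareto archive update: discard `candidate` if an archive member
    dominates it; otherwise insert it and drop the members it dominates."""
    if any(dom(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dom(candidate, a)] + [candidate]

archive = []
for z in [(3, 3), (1, 4), (2, 2), (5, 5), (4, 1)]:
    archive = update_archive(archive, z)
assert sorted(archive) == [(1, 4), (2, 2), (4, 1)]
```

A bounded or fixed-size archive would add a truncation step after the insertion, typically driven by one of the fitness or diversity estimators described above.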
### Stopping Criteria
Since an iterative method computes successive approximations, a practical test is required to determine when the process must stop. Popular examples are a given number of iterations, a given number of evaluations, a given run time, etc.
State-of-the-art EMO Methods as Instances of the Proposed Model {#sec:instances}
---------------------------------------------------------------
By means of the unified model proposed in this paper, we claim that a large number of state-of-the-art EMO algorithms proposed in the last two decades are based on variations of the problem-independent components presented above. In Table \[tab:instances\], three EMO approaches, namely NSGA-II [@DA+:02], SPEA2 [@ZLT:01] and IBEA [@ZK:04], are regarded as simple instances of the unified model proposed in this paper. Of course, only problem-independent components are presented. NSGA-II and SPEA2 are two of the most frequently encountered EMO algorithms of the literature, either for tackling an original MOP or to serve as references for comparison. Regarding IBEA, it is a good illustration of the new EMO trend dealing with indicator-based search that started to become popular in recent years. We can see in the table that these three state-of-the-art algorithms perfectly fit into our unified model for EMO, which strongly validates the proposed approach. But other examples can be found in the literature. For instance, the only component that differs between NSGA [@SD:94] and NSGA-II [@DA+:02] is the diversity preservation strategy, which is based on sharing in NSGA and on crowding in NSGA-II. Another example is the $\epsilon$-MOEA proposed in [@DMM:05]. This algorithm is a modified version of NSGA-II where the Pareto-dominance relation used for fitness assignment has been replaced by the $\epsilon$-dominance relation. Similarly, the g-dominance relation proposed in [@MS+:08] is experimented with by the authors on an NSGA-II-like EMO technique where the dominance relation has been modified in order to take the DM preferences into account by means of a reference point.
\[tab:instances\]
Design and Implementation under ParadisEO-MOEO {#sec:paradiseo}
==============================================
In this section, we provide a general presentation of ParadisEO, a software framework dedicated to the design of metaheuristics, and a detailed description of the ParadisEO module specifically dedicated to EMO, namely ParadisEO-MOEO. Historically, ParadisEO was especially dedicated to parallel and distributed metaheuristics and was the result of the PhD work of Sébastien Cahon, supervised by Nouredine Melab and El-Ghazali Talbi [@CMT:04]. The initial version already contained a small number of EMO-related features, mainly with regard to archiving. This work has been partially extended and presented in [@LB+:07]. But since then, the ParadisEO-MOEO module has been completely redesigned in order to provide an even more fine-grained decomposition in accordance with the unified model presented above.
Motivations
-----------
In practice, there exists a large diversity of optimization problems to be solved, giving rise to a wide range of possible models to handle in the frame of a metaheuristic solution method. Moreover, a growing number of general-purpose search methods are proposed in the literature, with increasingly complex mechanisms. From a practitioner's point of view, there is a popular demand to provide a set of ready-to-use metaheuristic implementations requiring a minimum programming effort. On the other hand, an expert generally wants to be able to design new algorithms, to integrate new components into an existing method, or even to combine different search mechanisms. As a consequence, an established approach for the development of metaheuristics is the use of frameworks. A metaheuristic software framework may be defined by a set of components based on a strong conceptual separation of the invariant part and the problem-specific part of metaheuristics. Then, each time a new optimization problem is tackled, both code and design can directly be reused so as to rewrite as little code as possible.
ParadisEO and ParadisEO-MOEO
----------------------------
ParadisEO[^2] is a white-box object-oriented software framework dedicated to the flexible design of metaheuristics for optimization problems of both continuous and combinatorial nature. Based on EO (Evolving Objects)[^3] [@KM+:01], this template-based, ANSI-C++ compliant computation library is portable across both Unix-like and Windows systems. Moreover, it aims to be used both by non-specialists and optimization experts. ParadisEO is composed of four connected modules that constitute a global framework. Each module is based on a clear conceptual separation of the solution methods from the problems they are intended to solve. This separation confers maximum code and design reuse to the user. The first module, ParadisEO-EO, provides a broad range of components for the development of population-based metaheuristics, including evolutionary algorithms or particle swarm optimization techniques. Second, ParadisEO-MO contains a set of tools for single-solution based metaheuristics, [*i.e.*]{} local search, simulated annealing, tabu search, etc. Next, ParadisEO-MOEO is specifically dedicated to the reusable design of metaheuristics for multiobjective optimization. Finally, ParadisEO-PEO provides a powerful set of classes for the design of parallel and distributed metaheuristics: parallel evaluation of solutions, parallel evaluation function, island model and cellular model. In the frame of this paper, we will exclusively focus on the module devoted to multiobjective optimization, namely ParadisEO-MOEO.
ParadisEO-MOEO provides a flexible and modular framework for the design of EMO metaheuristics. Its implementation is based on the unified model proposed in the previous section and is conceptually divided into fine-grained components. On each level of its architecture, a set of abstract classes is proposed and a wide range of instantiable classes, corresponding to different state-of-the-art strategies, are also provided. Moreover, as the framework aims to be extensible, flexible and easily adaptable, all its components are generic, so that its modular architecture allows any new scheme to be developed quickly and conveniently with minimal code writing. The underlying goal here is to follow new strategies coming from the literature and, if need be, to provide any additional components required for their implementation. ParadisEO-MOEO constantly evolves and new features might be regularly added to the framework in order to provide a wide range of efficient and modern concepts and to reflect the most recent advances of the EMO field.
Main Characteristics
--------------------
A framework is usually intended to be exploited by a large number of users. Its exploitation could only be successful if a range of user criteria are satisfied. Therefore, the main goals of the ParadisEO software framework are the following ones:
- [*Maximum design and code reuse.*]{} The framework must provide a whole architecture design for the metaheuristic approach to be used. Moreover, the programmer should have to rewrite as little code as possible. This aim requires a clear and maximal conceptual separation of the solution methods and the problem to be solved. The user might only write the minimal problem-specific code and the development process might be done in an incremental way, which will considerably simplify the implementation and reduce the development time and cost.
- [*Flexibility and adaptability.*]{} It must be possible to easily add new features or to modify existing ones without involving other components. Users must have access to source code and use inheritance or specialization concepts of object-oriented programming to derive new components from base or abstract classes. Furthermore, as existing problems evolve and new others arise, the framework components must be conveniently specialized and adapted.
- [*Utility.*]{} The framework must cover a broad range of metaheuristics, fine-grained components, problems, parallel and distributed models, hybridization mechanisms, etc.
- [*Transparent and easy access to performance and robustness.*]{} As the optimization applications are often time-consuming, the performance issue is crucial. Parallelism and distribution are two important ways to achieve high performance execution. Moreover, the execution of the algorithms must be robust in order to guarantee the reliability and the quality of the results. Hybridization mechanisms generally allow to obtain robust and better solutions.
- [*Portability.*]{} In order to satisfy a large number of users, the framework must support many material architectures (sequential, parallel, distributed) and their associated operating systems (Windows, Linux, MacOS).
- [*Usability and efficiency.*]{} The framework must be easy to use and must not add any cost in terms of time or space complexity, in order to keep the efficiency of a special-purpose implementation. On the contrary, the framework is intended to be less error-prone than a specifically developed metaheuristic.
The ParadisEO platform honors all the above-mentioned criteria and aims to be used by both non-specialists and optimization experts. Furthermore, the ParadisEO-MOEO module must cover additional goals related to EMO. Thus, in terms of design, it should for instance be straightforward to extend a single-objective optimization problem to the multiobjective case without modifying the whole metaheuristic implementation.
Existing Software Frameworks for Evolutionary Multiobjective Optimization
-------------------------------------------------------------------------
Many frameworks dedicated to the design of metaheuristics have been proposed so far. However, very few are able to handle MOPs, even if some of them provide components for a few particular EMO strategies, such as ECJ [@ECJ], JavaEVA [@SU:05] or Open BEAGLE [@GP:06]. Table \[tab:frameworks\] gives a non-exhaustive comparison between a number of existing software frameworks for EMO, including jMetal [@DN+:06], the MOEA toolbox for Matlab [@TL+:01], MOMHLib++ [@MOMHLib++], PISA [@BL+:03] and Shark [@Shark]. Note that other software packages exist for multiobjective optimization [@PVS:08], but some cannot be considered as frameworks and others do not deal with EMO. The frameworks presented in Table \[tab:frameworks\] are distinguished according to the following criteria: the kind of MOPs they are able to tackle (continuous and/or combinatorial problems), the availability of statistical tools (including performance metrics), the availability of hybridization or parallel features, the framework type (black box or white box), the programming language and the license type (free or commercial).
Firstly, let us mention that every listed software framework is free of use, except the MOEA toolbox designed for the commercial software Matlab. They can all handle continuous problems, but only a subset is able to deal with combinatorial MOPs. Moreover, some cannot be considered as white-box frameworks since their architecture is not decomposed into components. For instance, to design a new algorithm under PISA, it is necessary to implement it from scratch, as no existing element can be reused. Similarly, even if Shark can be considered as a white-box framework, its components are not as fine-grained as the ones of ParadisEO. On the contrary, ParadisEO is an open platform where anyone can contribute and add his/her own features. Finally, only a few are able to deal with hybrid and parallel metaheuristics at the same time. Hence, with regard to the taxonomy proposed in [@Tal:02], only relay hybrid metaheuristics can be easily implemented within jMetal, MOMHLib++ and Shark, whereas ParadisEO provides tools for the design of all classes of hybrid models, including teamwork hybridization. Furthermore, in opposition to jMetal and MOMHLib++, ParadisEO offers easy-to-use models for the design of parallel and distributed EMO algorithms. Therefore, ParadisEO seems to be the only existing software framework that achieves all the aforementioned goals.
Implementation
--------------
This section gives a detailed description of the base classes provided within the ParadisEO framework to design an EMO algorithm[^4]. The flexibility of the framework and its modular architecture, based on the three main multiobjective metaheuristic design issues (fitness assignment, diversity preservation and elitism), allows efficient algorithms to be implemented for solving a large diversity of MOPs. The granular decomposition of ParadisEO-MOEO is based on the unified model proposed in the previous section.
As an EMO algorithm differs from a mono-objective one only in a number of points, some ParadisEO-EO components are directly reusable in the frame of ParadisEO-MOEO. Therefore, in the following, note that the names of ParadisEO-EO classes are all prefixed by `eo` whereas the names of ParadisEO-MOEO classes are prefixed by `moeo`. ParadisEO is an object-oriented platform, so that its components will be specified by the UML standard [@UML]. Due to space limitations, only a subset of the UML diagrams is given, but the whole inheritance diagram as well as the class documentation and many examples of use are available on the ParadisEO website. Moreover, a large part of the ParadisEO components are based on the notion of *template* and are defined as class templates. This concept and many related functions are featured within the C++ programming language and allow classes to handle generic types, so that they can work with many different data types without having to be rewritten for each one.
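As a self-contained illustration of the template mechanism mentioned above (independent of the actual ParadisEO class hierarchy, with all names hypothetical), the following sketch shows how a single operator written against a generic solution type parameter works unchanged for unrelated representations:

```cpp
#include <cassert>
#include <vector>

// Hypothetical mini-example of template genericity: the same operator code
// works for any solution type exposing a fitness() accessor, without being
// rewritten per representation.
template <typename MOEOT>
const MOEOT& select_best(const std::vector<MOEOT>& pop) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < pop.size(); ++i)
        if (pop[i].fitness() > pop[best].fitness()) best = i;
    return pop[best];
}

// Two unrelated solution types, both usable with the same template.
struct BitVectorSol {
    std::vector<int> bits;
    // Fitness: number of ones (a OneMax-style toy objective).
    double fitness() const { double s = 0; for (int b : bits) s += b; return s; }
};
struct RealVectorSol {
    std::vector<double> x;
    // Fitness: negated squared norm (a sphere-style toy objective).
    double fitness() const { double s = 0; for (double v : x) s -= v * v; return s; }
};
```

The template is instantiated at compile time for each concrete representation, so genericity comes at no runtime cost, which is consistent with the efficiency goal stated earlier.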
In the following, both problem-dependent and problem-independent components are detailed. Hence, basic (representation, evaluation, initialization and stopping criteria), EMO-specific (fitness, diversity and elitism) and EA-related (variation, selection, replacement) components are outlined. Finally, the way to build a whole EMO algorithm is presented and a brief discussion concludes the section.
### Representation {#sec:representation}
A solution needs to be represented both in the decision space and in the objective space. While the representation in the objective space can be seen as problem-independent, the representation in the decision space must be relevant to the tackled problem. Using ParadisEO-MOEO, the first thing to do is to set the number of objectives for the problem under consideration and, for each one, whether it has to be minimized or maximized. Then, a class inheriting from `moeoObjectiveVector` has to be created for the representation of an objective vector, as illustrated in Fig. \[fig:uml:objectivevector\]. Besides, as a large majority of MOPs deal with real-coded objective values, a class modeling real-coded objective vectors is already provided. Note that this class can also be used for any MOP without loss of generality.
![UML diagram for the representation of a solution in the objective space.[]{data-label="fig:uml:objectivevector"}](fig/uml/objectivevector.eps)
Next, the class used to represent a solution within ParadisEO-MOEO must extend the `MOEO` class in order to be used for a specific problem. This modeling aims to be applicable to every kind of problem, with the goal of being as general as possible. Nevertheless, ParadisEO-MOEO also provides easy-to-use classes for standard vector-based representations and, in particular, implementations for vectors composed of bits, of integers or of real-coded values that can thus directly be used in a ParadisEO-MOEO-designed application. These classes are summarized in Fig. \[fig:uml:moeo\].
![UML diagram for the representation of a solution.[]{data-label="fig:uml:moeo"}](fig/uml/moeo.eps)
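To make the objective-space representation concrete, here is a hypothetical, heavily simplified sketch of the underlying idea: objective values paired with a per-objective minimize/maximize direction that drives the Pareto comparison. The actual `moeoObjectiveVector` design is richer than this, and all names below are illustrative only.

```cpp
#include <cassert>
#include <vector>

// Sketch of an objective vector: raw objective values; the optimization
// direction of each objective is held separately and shared by the problem.
struct ObjectiveVector {
    std::vector<double> values;
};

// Pareto comparison respecting each objective's sense: true in minimize[i]
// means objective i is to be minimized; maximized objectives are negated so
// that a single "smaller is better" test applies everywhere.
bool pareto_dominates(const ObjectiveVector& a, const ObjectiveVector& b,
                      const std::vector<bool>& minimize) {
    bool strict = false;
    for (std::size_t i = 0; i < a.values.size(); ++i) {
        double ai = minimize[i] ? a.values[i] : -a.values[i];
        double bi = minimize[i] ? b.values[i] : -b.values[i];
        if (ai > bi) return false;
        if (ai < bi) strict = true;
    }
    return strict;
}
```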
### Initialization
A number of initialization schemes already exist in many libraries for standard representations, which is also the case within ParadisEO. But some situations could require a combination of several operators or a specific implementation. Indeed, the framework provides a range of initializers all inheriting from `eoInit`, as well as an easy way to combine them thanks to an `eoCombinedInit` object.
### Evaluation
The way to evaluate a given solution must be ensured by components inheriting from the `eoEvalFunc` abstract class. It basically takes a `MOEO` object and sets its objective vector. Generally speaking, for real-world optimization problems, evaluating a solution in the objective space is by far the most computationally expensive step of any metaheuristic. A possible way to overcome this trouble is the use of parallel and distributed models, which can largely be simplified in the frame of ParadisEO thanks to the ParadisEO-PEO module of the software library. The reader is referred to [@CMT:04] for more information on how to parallelize the evaluation step of a metaheuristic within ParadisEO-PEO.
### Variation
All variation operators must derive from the `eoOp` base class. Four abstract classes inherit from `eoOp`, namely `eoMonOp` for mutation operators, `eoBinOp` and `eoQuadOp` for recombination operators and `eoGenOp` for other kinds of variation operators. Various operators of the same arity can also be combined using some helper classes. Note that variation mechanisms for some classical (real-coded, vector-based or permutation-based) representations are already provided in the framework. Moreover, a hybrid mechanism can easily be designed by using a mono-objective local search as a mutation operator, as they both inherit from the same class, see ParadisEO-MO [@BJT:08]. The set of all variation operators designed for a given problem must be embedded into an `eoTransform` object.
### Fitness Assignment {#sec:fitness}
Following the taxonomy introduced in Sect. \[sec:model\], the fitness assignment schemes are classified into four main categories, as illustrated in the UML diagram of Fig. \[fig:uml:fitness\]: scalar approaches, criterion-based approaches, dominance-based approaches and indicator-based approaches. Non-abstract fitness assignment schemes provided within ParadisEO-MOEO are the *achievement scalarizing functions*, the *dominance-rank*, *dominance-count* and *dominance-depth* schemes, as well as the *indicator-based* fitness assignment strategy proposed in [@ZK:04]. Moreover, a dummy fitness assignment strategy has been added in case it would be useful for some specific implementation.
![UML diagram for fitness assignment.[]{data-label="fig:uml:fitness"}](fig/uml/fitness.eps)
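Among the dominance-based schemes above, the *dominance-depth* strategy can be illustrated with a deliberately simple standalone sketch (quadratic-time front peeling, minimization assumed; this is not the framework's implementation, and all names are hypothetical):

```cpp
#include <cassert>
#include <vector>

using ObjVec = std::vector<double>;

// Pareto-dominance for minimization.
bool dominates(const ObjVec& a, const ObjVec& b) {
    bool strict = false;
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] > b[i]) return false;
        if (a[i] < b[i]) strict = true;
    }
    return strict;
}

// Dominance-depth fitness (the idea behind nondominated sorting): front 0
// holds the nondominated points; peeling it off reveals front 1, and so on.
std::vector<int> dominance_depth(const std::vector<ObjVec>& pop) {
    const std::size_t n = pop.size();
    std::vector<int> depth(n, -1);  // -1 means "not yet assigned"
    int front = 0;
    std::size_t assigned = 0;
    while (assigned < n) {
        std::vector<std::size_t> current;
        for (std::size_t i = 0; i < n; ++i) {
            if (depth[i] != -1) continue;
            bool nondom = true;
            for (std::size_t j = 0; j < n; ++j)
                if (j != i && depth[j] == -1 && dominates(pop[j], pop[i])) {
                    nondom = false;
                    break;
                }
            if (nondom) current.push_back(i);
        }
        for (std::size_t i : current) depth[i] = front;
        assigned += current.size();
        ++front;
    }
    return depth;
}
```

Dominance-rank and dominance-count would replace the peeling loop by, respectively, counting the dominators and the dominated solutions of each point.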
### Diversity Assignment
As illustrated in Fig. \[fig:uml:diversity\], the diversity preservation strategy to be used must inherit from the `moeoDiversityAssignment` class. Hence, in addition to a dummy technique, a number of diversity assignment schemes are already available, including sharing [@GR:87], crowding [@Hol:75] and a nearest neighbor scheme [@ZLT:01].
![UML diagram for diversity assignment.[]{data-label="fig:uml:diversity"}](fig/uml/diversity.eps)
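As an illustration of one such scheme, the crowding estimator popularized by NSGA-II can be sketched as follows (a standalone simplification with hypothetical names, assuming a front of real-coded objective vectors):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Crowding distance: for each objective, sort the front and accumulate the
// normalized gap between each point's two neighbors; boundary points get an
// infinite distance so that they are always preserved.
std::vector<double> crowding_distance(
        const std::vector<std::vector<double>>& front) {
    const std::size_t n = front.size();
    const std::size_t m = front.empty() ? 0 : front[0].size();
    std::vector<double> dist(n, 0.0);
    std::vector<std::size_t> idx(n);
    for (std::size_t obj = 0; obj < m; ++obj) {
        for (std::size_t i = 0; i < n; ++i) idx[i] = i;
        std::sort(idx.begin(), idx.end(), [&](std::size_t a, std::size_t b) {
            return front[a][obj] < front[b][obj];
        });
        double span = front[idx[n - 1]][obj] - front[idx[0]][obj];
        if (span == 0.0) continue;  // degenerate objective: no contribution
        dist[idx[0]] = dist[idx[n - 1]] = std::numeric_limits<double>::infinity();
        for (std::size_t k = 1; k + 1 < n; ++k)
            dist[idx[k]] += (front[idx[k + 1]][obj] - front[idx[k - 1]][obj]) / span;
    }
    return dist;
}
```

A larger crowding distance means a sparser neighborhood, so such solutions are favored when diversity is taken into account.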
### Selection {#selection}
There exists a large number of selection strategies in the frame of EMO. Four are provided within ParadisEO-MOEO. First, the *random selection* consists of selecting a parent randomly among the population members, without taking either the fitness or the diversity information into account. Second, the *deterministic tournament selection* consists of performing a tournament between $m$ randomly chosen population members and selecting the best one. Next, the *stochastic tournament selection* consists of performing a binary tournament between two randomly chosen population members and selecting the best one with a probability $p$ or the worst one with a probability $(1-p)$. Finally, the *elitist selection* consists of selecting a population member based on some selection scheme with a probability $p$, or an archive member using another selection scheme with a probability $(1-p)$. Thus, nondominated (or most-preferred) solutions also contribute to the evolution engine by being used as parents. A selection method needs to be embedded into an `eoSelect` object to be properly used. Of course, everything is done so that a new selection scheme can easily be implemented with a minimum programming effort.
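The deterministic tournament described above can be sketched as follows (a standalone illustration with hypothetical names, assuming a precomputed scalar fitness per population member, higher being better):

```cpp
#include <cassert>
#include <random>
#include <vector>

// Core of the tournament: given the indices drawn as contestants, return the
// index of the one with the best (highest) fitness.
std::size_t best_of(const std::vector<double>& fitness,
                    const std::vector<std::size_t>& contestants) {
    std::size_t best = contestants[0];
    for (std::size_t c : contestants)
        if (fitness[c] > fitness[best]) best = c;
    return best;
}

// Full operator: draw m contestants uniformly at random, then keep the best.
std::size_t tournament_select(const std::vector<double>& fitness,
                              std::size_t m, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, fitness.size() - 1);
    std::vector<std::size_t> contestants(m);
    for (auto& c : contestants) c = pick(rng);
    return best_of(fitness, contestants);
}
```

The stochastic variant would keep the tournament winner only with probability $p$, and the loser otherwise; the elitist variant would draw from the archive instead of the population with probability $(1-p)$.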
### Replacement
A large majority of replacement strategies depends on the fitness and/or the diversity value(s) and can then be seen as EMO-specific. Three replacement schemes are provided within ParadisEO-MOEO, but this list is not exhaustive as new ones can easily be implemented due to the genericity of the framework. First, the *generational replacement* consists of keeping the offspring population only, while all parents are deleted. Next, the *one-shot elitist replacement* consists of preserving the $N$ best solutions, where $N$ stands for the population size. At last, the *iterative elitist replacement* consists of repeatedly removing the worst solution until the required population size is reached. Fitness and diversity information of remaining individuals is updated each time there is a deletion.
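The one-shot elitist replacement can be sketched as follows; for brevity, each solution is represented here by its scalar fitness value only (higher is better), which is an assumption of this illustration rather than a property of the framework:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// One-shot elitist replacement: merge parents and offspring, then keep the
// n solutions with the best (highest) fitness values.
std::vector<double> elitist_replace(std::vector<double> parents,
                                    const std::vector<double>& offspring,
                                    std::size_t n) {
    parents.insert(parents.end(), offspring.begin(), offspring.end());
    std::sort(parents.begin(), parents.end(), std::greater<double>());
    parents.resize(n);  // truncate to the required population size
    return parents;
}
```

The iterative elitist variant would instead remove the worst solution one at a time, recomputing the fitness and diversity information after each deletion, which can yield a different surviving set.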
### Elitism
As shown in Fig. \[fig:uml:archive\], in terms of implementation, an archive is represented by the `moeoArchive` abstract class and is a population using a particular dominance relation to update its contents. An abstract class for fixed-size archives is given, but implementations of an unbounded archive, a general-purpose bounded archive based on fitness and/or diversity assignment scheme(s) as well as the SPEA2 archive are provided. Furthermore, as shown in Fig. \[fig:uml:comparator.objectivevector\], ParadisEO-MOEO offers the opportunity to use different dominance relations to update the archive contents by means of a `moeoObjectiveVectorComparator` object, including Pareto-dominance, weak-dominance, strict-dominance, $\epsilon$-dominance [@HP:94], and g-dominance [@MS+:08].
![UML diagram for archiving.[]{data-label="fig:uml:archive"}](fig/uml/archive.eps)
![UML diagram for the dominance relation (used for pairwise objective vector comparison).[]{data-label="fig:uml:comparator.objectivevector"}](fig/uml/comparator.objectivevector.eps)
### Stopping Criteria, Checkpointing and Statistical Tools {#sec:stat}
In the frame of ParadisEO, many stopping criteria extending `eoContinue` are provided. For instance, the algorithm can stop after a given number of iterations, a given number of evaluations, a given run time, or in an interactive way, as soon as the user decides to. Moreover, different stopping criteria can be combined, in which case the process stops once one of the embedded criteria is satisfied. In addition, many other procedures may be called at each iteration of the main algorithm. The `eoCheckPoint` class allows some systematic actions to be performed at each algorithm iteration in a transparent way by being integrated into the global `eoContinue` object. The checkpointing engine is particularly helpful for fault-tolerance mechanisms and for computing statistical tools. Indeed, some statistical tools are also provided within ParadisEO-MOEO. It is for instance possible to save the contents of the current approximation set at each iteration, so that the evolution of the current nondominated front can be observed or studied using graphical tools such as GUIMOO[^5]. Furthermore, an important issue in the EMO field relates to algorithm performance analysis and set quality metrics [@ZT+:03]. A couple of metrics are featured within ParadisEO-MOEO, including the hypervolume metric in both its unary [@ZT:99] and its binary [@ZT+:03] form, the entropy metric [@BST:02], the contribution metric [@MTR:00] as well as the additive and the multiplicative $\epsilon$-indicators [@ZT+:03]. Another interesting feature is the possibility to compare the current archive with the archive of the previous generation by means of a binary metric, and to print the progression of this measure iteration after iteration.
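As an example of such a set quality metric, the binary additive $\epsilon$-indicator admits a compact standalone sketch (minimization assumed; hypothetical names, not the framework's implementation): it returns the smallest shift $\epsilon$ such that every point of the second set is weakly dominated by some $\epsilon$-shifted point of the first.

```cpp
#include <algorithm>
#include <cassert>
#include <limits>
#include <vector>

// Binary additive epsilon-indicator I_eps+(A, B) for minimization: the
// smallest eps such that, for every b in B, some a in A satisfies
// a[i] - eps <= b[i] in every objective i.
double eps_indicator(const std::vector<std::vector<double>>& A,
                     const std::vector<std::vector<double>>& B) {
    double worst_b = -std::numeric_limits<double>::infinity();
    for (const auto& b : B) {
        double best_a = std::numeric_limits<double>::infinity();
        for (const auto& a : A) {
            // Smallest shift making this particular a weakly dominate b.
            double need = -std::numeric_limits<double>::infinity();
            for (std::size_t i = 0; i < b.size(); ++i)
                need = std::max(need, a[i] - b[i]);
            best_a = std::min(best_a, need);
        }
        worst_b = std::max(worst_b, best_a);
    }
    return worst_b;
}
```

A value of zero or less means every point of the second set is already weakly dominated by the first set, which makes the indicator convenient for comparing successive archives.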
### EMO Algorithms
Now that all the basic, EA-related and EMO-specific components are defined, an EMO algorithm can easily be designed using the fine-grained classes of ParadisEO. As the implementation is conceptually divided into components, different operators can be experimented with without engendering significant modifications in terms of code writing. As seen before, a wide range of components are already provided. But, keep in mind that this list is not exhaustive, as the framework perpetually evolves and offers all that is necessary to develop new ones with a minimum effort. Indeed, ParadisEO is a white-box framework that aims to be flexible while being as user-friendly as possible. Fig. \[fig:uml:sketch\] illustrates the use of the `moeoEasyEA` class that allows an EMO algorithm to be defined in a common fashion, by specifying all the particular components required for its implementation. All classes use a template parameter [*MOEOT (Multiobjective Evolving Object Type)*]{} that defines the representation of a solution for the problem under consideration. This representation might be implemented by inheriting from the `MOEO` class as described in Sect. \[sec:representation\]. Note that the archive-related components do not appear in the UML diagram, as we chose to leave the use of an archive optional. The archive update can easily be integrated into the EA by means of the checkpointing process. Similarly, the initialization process does not appear either, as an instance of `moeoEasyEA` starts with an already initialized population.
![A possible instantiation for the design of an EMO algorithm.[]{data-label="fig:uml:sketch"}](fig/uml/sketch.eps)
In order to satisfy both the common user and the more experienced one, ParadisEO-MOEO also provides even more easy-to-use EMO algorithms, see Fig. \[fig:uml:algo\]. These classes propose different implementations of some state-of-the-art methods by using the fine-grained components of ParadisEO. They are based on a simple combination of components, as described in Sect. \[sec:instances\]. Hence, MOGA [@FF:93], NSGA [@SD:94], NSGA-II [@DA+:02], SPEA2 [@ZLT:01], IBEA [@ZK:04] and SEEA [@LJT:08] are proposed in such a way that a minimum number of problem- or algorithm-specific parameters are required. For instance, to instantiate NSGA-II for a new continuous MOP, it is possible to use standard operators for representation, initialization and variation, so that the evaluation is the single component to be implemented. These easy-to-use algorithms also tend to be used as references for fair performance comparisons in the academic world, even if they are also well-suited for direct use in solving real-world MOPs. In the near future, other easy-to-use EMO metaheuristics will be proposed as new fine-grained components are implemented in ParadisEO-MOEO.
![UML diagram for easy-to-use EMO algorithms.[]{data-label="fig:uml:algo"}](fig/uml/algo.eps)
Discussion
----------
ParadisEO-MOEO has been used and experimented with to solve a large range of MOPs from both academic and real-world fields, which validates its high flexibility. Indeed, various academic MOPs have been tackled within ParadisEO-MOEO, including continuous test functions (like the ZDT and DTLZ function families defined in [@DT+:05]), scheduling problems (permutation flow-shop scheduling problem [@LB+:07b]), routing problems (multiobjective traveling salesman problem, bi-objective ring star problem [@LJT:08]), and so on. Moreover, it has been successfully employed to solve real-world applications in structural biology [@BJ+:08], feature selection in cancer classification [@TJ+:08], materials design in chemistry [@SJ+:08], etc. Besides, detailed documentation as well as some tutorial lessons and problem-specific implementations are freely available on the ParadisEO website[^6], and we expect the number of MOP contributions to grow considerably in the near future. Furthermore, note that the implementation of EMO algorithms is just one aspect of the features provided by ParadisEO. Hence, hybrid mechanisms can be exploited in a natural way to make metaheuristics belonging to the same or to different classes cooperate. Moreover, the three main parallel models (algorithmic-level, iteration-level and solution-level) are supported and are portable across different types of architectures. Indeed, the whole framework allows hybrid as well as parallel and distributed metaheuristics, including EMO methods, to be conveniently designed. For instance, in the frame of ParadisEO, hybrid EMO algorithms have been experimented with in [@LJT:08], a multiobjective cooperative island model has been designed in [@TCM:07], and costly evaluation functions have been parallelized in [@BJ+:08]. The reader is referred to [@CMT:04] for more information about ParadisEO hybrid and parallel models.
Concluding Remarks
==================
The current paper presents two complementary contributions: the formulation of a unified view for evolutionary multiobjective optimization and the description of a software framework for the development of such algorithms. First, we identified the common concepts shared by many evolutionary multiobjective optimization techniques, separating the problem-specific part from the invariant part involved in this class of resolution methods. We emphasized the main issues of fitness assignment, diversity preservation and elitism. Therefore, we proposed a unified conceptual model, based on a fine-grained decomposition, and we illustrated its robustness and its reliability by treating a number of state-of-the-art algorithms as simple instances of the model. Next, this unified view has been used as a starting point for the design and the implementation of a general-purpose software package called ParadisEO-MOEO. ParadisEO-MOEO is a free C++ white-box object-oriented framework dedicated to the flexible and reusable design of evolutionary multiobjective optimization algorithms. It is based on a clear conceptual separation between the resolution methods and the problem they are intended to solve, thus conferring a maximum code and design reuse. This global framework has been experimentally validated by solving a comprehensive number of both academic and real-world multiobjective optimization problems.
However, we believe that a large number of components involved in evolutionary multiobjective optimization are shared by many other search techniques. Thereafter, we plan to generalize the unified model proposed in this paper to other existing metaheuristic approaches for multiobjective optimization. Hence, multiobjective local search or scatter search methods might be interesting extensions to explore in order to investigate their ability and their modularity for providing such a flexible model as the one presented in this paper. Afterwards, the resulting general-purpose models and their particular mechanisms would be integrated into the ParadisEO-MOEO software framework.
Acknowledgment {#acknowledgment .unnumbered}
==============
The authors would like to gratefully acknowledge Thomas Legrand, Jérémie Humeau, and Abdel-Hakim Deneche for their helpful contribution on the implementation part of this work, as well as Sébastien Cahon and Nouredine Melab for their work on the preliminary version of the ParadisEO-MOEO software framework presented in this paper. This work was supported by the ANR DOCK project.
[^1]: ParadisEO-MOEO is available at <http://paradiseo.gforge.inria.fr>
[^2]: <http://paradiseo.gforge.inria.fr>
[^3]: <http://eodev.sourceforge.net>
[^4]: The classes presented in this paper are described as in version $1.2$ of ParadisEO.
[^5]: GUIMOO is a Graphical User Interface for Multiobjective Optimization available at [http://guimoo.gforge.inria.fr/]{}
[^6]: <http://paradiseo.gforge.inria.fr>
---
author:
- 'E. Pancino'
- 'D. Romano'
- 'B. Tang'
- 'G. Tautvaišienė'
- 'A. R. Casey'
- 'P. Gruyters'
- 'D. Geisler'
- 'I. San Roman'
- 'S. Randich'
- 'E. J. Alfaro'
- 'A. Bragaglia'
- 'E. Flaccomio'
- 'A. J. Korn'
- 'A. Recio-Blanco'
- 'R. Smiljanic'
- 'G. Carraro'
- 'A. Bayo'
- 'M. T. Costado'
- 'F. Damiani'
- 'P. Jofré'
- 'C. Lardo'
- 'P. de Laverny'
- 'L. Monaco'
- 'L. Morbidelli'
- 'L. Sbordone'
- 'S. G. Sousa'
- 'S. Villanova'
bibliography:
- 'MgAl.bib'
date: 'Received Month DD, YYYY; accepted Month DD, YYYY'
title: |
The Gaia-ESO Survey.\
Mg-Al anti-correlation in iDR4 globular clusters[^1]
---
Introduction
============
The phenomenon of multiple populations in globular clusters (GCs) has been intensively studied in the last 20-30 years, but we still lack a clear explanation of its origin [@gratton12]. The pattern of abundance variations pinpoints the CNO-cycle burning of hydrogen as the major source of the phenomenon, because most of the elements that are observed to vary in GCs are used as catalysts in various CNO sub-cycles, where they are depleted or accumulated depending on the particular reaction rates. However, a hot debate is still ongoing on which types of polluters convey the processed material into the GC interstellar gas reservoir, and how it is recycled to pollute a fraction of the GC stars [see @dercole08; @decressin07; @larsen12a; @renzini15; @bastian15 for references].
The Mg-Al anti-correlation is of particular importance because, unlike the C-N and Na-O ones, its extension varies significantly from one GC to another, to the point of disappearing completely in some GCs. Mg and Al are involved in the hot Mg-Al cycle, which requires high temperatures [$\sim$10$^8$ K, @denissenkov15; @renzini15], and therefore its study can place very strong constraints on the type of star responsible for the peculiar chemistry observed in GCs. Another advantage of studying Mg and Al is that they suffer much less internal mixing than C and N, or even Na and O, so the observed abundances do not depend on the evolutionary status of a star.
------------------ ----------------------- ----------------- --------------------- ----------------------- ----------------- -------------
Cluster \[Fe/H\]$_{\rm{H96}}$ RV$_{\rm{H96}}$ $\log(M/M_{\odot})$ \[Fe/H\]$_{\rm{GES}}$ RV$_{\rm{GES}}$ N$_{\star}$
(dex) (km s$^{-1}$) (dex) (dex) (km s$^{-1}$)
NGC 104 (47 Tuc) –0.72 –18.0 6.05$\pm$0.04$^a$ –0.71$\pm$0.02 –17.6$\pm$0.8 119
NGC 362 –1.36 223.5 5.53$\pm$0.04$^a$ –1.12$\pm$0.03 222.3$\pm$0.6 73
NGC 1851 –1.18 320.5 5.49$\pm$0.04$^a$ –1.07$\pm$0.04 320.2$\pm$0.5 89
NGC 1904 (M 79) –1.60 205.8 5.20$\pm$0.04$^a$ –1.51$\pm$0.03 205.2$\pm$0.5 30
NGC 2808 –1.14 101.6 5.93$\pm$0.05$^a$ –1.03$\pm$0.03 103.7$\pm$1.4 45
NGC 4833 –1.85 200.2 5.20$\pm$0.21$^b$ –1.92$\pm$0.03 200.6$\pm$1.0 28
NGC 5927 –0.49 –107.5 5.32$\pm$0.21$^b$ –0.39$\pm$0.04 –102.5$\pm$0.7 85
NGC 6752 –1.54 –26.7 5.16$\pm$0.21$^b$ –1.48$\pm$0.04 –26.3$\pm$0.7 57
NGC 7089 (M 2) –1.65 –5.3 5.84$\pm$0.05$^a$ –1.47$\pm$0.03 –1.8$\pm$1.3 46
------------------ ----------------------- ----------------- --------------------- ----------------------- ----------------- -------------
The Gaia-ESO survey [GES, @ges1; @ges2], that is being carried out at the ESO VLT with FLAMES [@flames], observed GCs as calibrators for the astrophysical parameters (AP) and abundance ratios [@pancino16 hereafter P16]. Part of the observed GCs were included in the fourth internal data release (iDR4), that is based on data gathered from December 2011 to July 2014 and from which the next GES public release will be published through the ESO archive system[^2]. The iDR4 data also include relevant archival data obtained with FLAMES in the GES setups. A particular advantage of the adopted observing setups is that they allow for an accurate measurement of the Mg and Al abundance ratios with both the UVES and GIRAFFE spectrographs, thus providing statistical samples comparable to those recently obtained by APOGEE [@meszaros15] and the FLAMES GC survey [@carretta09a; @carretta09b; @carretta11; @carretta13; @carretta14].
The paper is organized as follows: in Section \[sec:data\] we describe the data treatment and sample selection; in Section \[sec:results\] we present the results and explore their robustness; in Section \[sec:discussion\] we describe and discuss the behaviour of the Mg-Al abundance variations; in Section \[sec:conclusions\] we summarize our findings and conclusions.
Data sample and treatment {#sec:data}
=========================
The GES iDR4 data on GCs are all based on the UVES setup centred around 5800 Å and on the two GIRAFFE setups HR 10 (5339–5619 Å) and HR 21 (8484–9001 Å). The selection of calibration targets, which include GCs, was described in detail by P16. Briefly, 14 GCs were selected to adequately cover the relevant metallicity range, from \[Fe/H\]$\simeq$–2.5 to –0.3 dex, 11 of which were analyzed in iDR4. A few less studied GCs were included at the beginning of the survey, owing to pointing constraints (see P16 for more details), and in particular the sample includes NGC 5927, one of the most metal-rich GCs available. The selection of stars was focused on red giants, except in NGC 5927, where mostly red clump stars were selected because of the high differential reddening and the need to maximize the number of cluster members. Stars already having GIRAFFE archival observations in the ESO archive were prioritized, to increase the wavelength coverage by including the GES setups. Stars already observed with UVES were not repeated. A few fibers were dedicated to re-observing with UVES some GIRAFFE targets and vice-versa, to allow for cross-calibration.
------------------ --------- ---------------- ------------------------ -------- ---------------- ---------- ------------------ --------------------------- --------------------------------- --------------------------- ---------------------------------
CNAME Cluster T$_{\rm{eff}}$ $\delta$T$_{\rm{eff}}$ log$g$ $\delta$log$g$ \[Fe/H\] $\delta$\[Fe/H\] $\log \epsilon_{\rm{Al}}$ $\delta\log \epsilon_{\rm{Al}}$ $\log \epsilon_{\rm{Mg}}$ $\delta\log \epsilon_{\rm{Mg}}$
(K) (K) (dex) (dex) (dex) (dex) (dex) (dex) (dex) (dex)
12593863-7051321 NGC4833 4673 124 1.308 0.246 –1.844 0.103 5.61 0.07 5.94 0.13
13000316-7053486 NGC4833 4675 132 1.207 0.239 –1.920 0.106 5.60 0.07 5.59 0.13
12585746-7053278 NGC4833 4678 127 1.316 0.261 –2.024 0.119 5.29 0.07 5.73 0.14
12592040-7051156 NGC4833 4623 123 1.130 0.252 –1.922 0.101 5.55 0.07 5.50 0.13
12593089-7050304 NGC4833 4613 123 1.112 0.254 –1.920 0.108 5.55 0.07 5.66 0.14
12594306-7053528 NGC4833 4635 117 1.070 0.235 –1.890 0.111 5.61 0.07 5.69 0.13
------------------ --------- ---------------- ------------------------ -------- ---------------- ---------- ------------------ --------------------------- --------------------------------- --------------------------- ---------------------------------
All iDR4 data were reduced as described in detail by @sacco14 for spectra taken with UVES [@uves] at high resolution (R$=\lambda/\delta\lambda\simeq$47000) and by @jeffries14 for spectra taken with GIRAFFE [@flames] at intermediate resolution (R$\simeq$16000–20000). Briefly, the UVES pipeline [@uvespipe] was used to process UVES spectra, performing the basic reduction steps. Additional data analysis was performed for UVES with specific software developed at the Arcetri Astrophysical Observatory. GIRAFFE spectra were processed with a dedicated software developed at CASU[^3] (Cambridge Astronomy Survey Unit).
![Example of the small (compared to the errors and internal spreads) residual offsets in \[Fe/H\] between UVES and GIRAFFE in GES iDR4 data, in two of the sample GCs: NGC 2808 (top panels) and NGC 1851 (bottom panels). The left panels show \[Fe/H\] as a function of T$_{\rm{eff}}$ and the right ones of log$g$. UVES stars are plotted as cyan symbols, with their median \[Fe/H\] as a cyan line. GIRAFFE stars are plotted as magenta symbols, with their median \[Fe/H\] as a magenta line. The reference \[Fe/H\] from @harris96 [@harris10] is plotted as an orange line.[]{data-label="fig:trends"}](fig_trends.pdf){width="\columnwidth"}
Abundance analysis
------------------
The GES abundance analysis of UVES spectra was described in detail by @smiljanic14 and Casey et al. (in preparation), and the analysis of GIRAFFE spectra by Recio-Blanco et al. (in preparation). Both are carried out by many research groups, using several state-of-the-art techniques. Because of the GES complexity, the data analysis is performed iteratively in each internal data release (iDR), gradually adding not only new data in each cycle, but also new processing steps that take into account lessons learned in the previous iDRs (offsets or trends identified through early science projects), increase the number of elements measured (from molecules, or faint features), or add detail to the measurements (corrections for non-LTE, rotational velocities, veiling, and many more). This methodology allows for a better quantification of the internal and external systematics, which are evaluated in a process of homogenization of all node results, producing the final GES recommended APs and abundance ratios, as described by P16 and Hourihane et al. (in preparation).
To make the GES data analysis as uniform as possible, the analysis of F, G, and K type stars relies on a common set of atmospheric models [the MARCS grid, @marcs], a common linelist [@heiter15b], and – for those methods that require it – a common library of synthetic spectra [computed with MARCS models and based on the grid by @laverny12]. The Solar reference abundances adopted in this paper were those by @grevesse07. As mentioned, iDR4 abundances are computed in the LTE regime, and only future releases will include non-LTE corrections. Moreover, the GES homogenous analysis relies on a rich set of calibrating objects (including GCs), selected as described by P16. In particular, the external calibration of FGK stars in iDR4 relies mostly on the Gaia benchmark stars [@jofre14; @blanco14; @heiter15a; @hawkins16].
{width="\textwidth"}
When comparing the iDR4 abundances obtained from UVES and GIRAFFE, small (i.e., comparable to the internal spreads) offsets in the abundance ratios were found ($\sim$0.10–0.15 dex, depending on the GC), as shown by P16. For the present analysis, we reported the \[Fe/H\] GIRAFFE measurements to the UVES scale using the difference between the median abundance of the two samples in each GC. We observed that once the \[Fe/H\] offsets were corrected in this way, there were no significant residual offsets when comparing the UVES and GIRAFFE measurements of the other elements considered in this paper. In any case, in the GES cyclic processing the recommended values of RVs, APs, and chemical abundances generally improve from one iDR to the next (see Randich et al., in preparation, and P16). We thus expect the offsets to be considerably reduced in future GES releases. Most importantly, as Figure \[fig:trends\] shows, in iDR4 there are no significant trends of \[Fe/H\] as a function of T$_{\rm{eff}}$ or log$g$ for either UVES or GIRAFFE results.
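The per-cluster instrument alignment described above amounts to a single median shift. A minimal sketch of that step (the function name and data values below are hypothetical, for illustration only):

```python
import numpy as np

def align_giraffe_to_uves(feh_uves, feh_giraffe):
    """Shift GIRAFFE [Fe/H] onto the UVES scale for one cluster.

    The offset is the difference between the median [Fe/H] of the
    UVES and GIRAFFE samples of the same GC, as described in the text.
    """
    offset = np.median(feh_uves) - np.median(feh_giraffe)
    return np.asarray(feh_giraffe, dtype=float) + offset, offset
```

After applying the shift, the two samples share the same median \[Fe/H\] by construction; residual offsets in the other elements can then be checked against the internal spreads.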
Sample selection {#sec:sele}
----------------
We applied the same quality selection criteria of the GES public release (that will be described in the ESO release documentation) to the iDR4 recommended results. For the cool giants in GCs, these are: $\delta$T$_{\rm{eff}}/$T$_{\rm{eff}}<$5%, $\delta$log$g$<0.3 dex, and $\delta$\[Fe/H\]$<$0.2 dex. We also left out all stars that lacked AP or RV determinations.
We then selected probable GC members by using the median \[Fe/H\] and RV [as done by @lardo15] as a reference for each GC, and removing all stars that deviated more than 3$\sigma$ from it. As discussed by P16, the GES median \[Fe/H\] and RV generally agree with reference literature values [@harris96; @harris10]. The member selection was quite straightforward, because the vast majority of field stars have roughly Solar metallicity and RV of approximately 0$\pm$50 km s$^{-1}$, so the GC stars differ significantly from field stars in at least one of \[Fe/H\] or RV.
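As a rough illustration, the membership cut can be sketched as a 3$\sigma$ clip around the sample medians in both \[Fe/H\] and RV. The function name and data are hypothetical, and using the sample standard deviation as the $\sigma$ estimate is an assumption (the text does not specify how $\sigma$ is computed); the actual GES selection also applies the quality criteria listed above.

```python
import numpy as np

def select_members(feh, rv, n_sigma=3.0):
    """Return a boolean mask of probable cluster members.

    A star is kept if it lies within n_sigma standard deviations of
    the sample median in BOTH [Fe/H] and radial velocity.
    """
    feh = np.asarray(feh, dtype=float)
    rv = np.asarray(rv, dtype=float)
    mask = (np.abs(feh - np.median(feh)) <= n_sigma * np.std(feh)) & \
           (np.abs(rv - np.median(rv)) <= n_sigma * np.std(rv))
    return mask
```

Because field stars cluster around Solar \[Fe/H\] and RV$\,\simeq\,$0 km s$^{-1}$, a single pass of such a clip already separates them from the GC locus in at least one of the two quantities.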
The above selections lead to highly varying sample sizes for UVES and GIRAFFE depending on several factors like spectral quality (S/N ratio, spectral defects), observing conditions (sky, seeing), availability of previous information (photometry, membership, other archival data), and cluster (crowding, GC compactness, distance, metallicity). Of the 11 GCs included in iDR4 (see P16, for the selection criteria of calibrating objects) only 10 contained at least 5 red giants after the quality and membership selections. Of these, we excluded M 15 because the iDR4 analysis of its very metal-poor spectra did not provide satisfactory results. The final list of 9 analysed GCs is presented in Table \[tab:clusters\], along with some relevant properties. The final sample contained 510 stars (159 with UVES and 351 with GIRAFFE) in 9 GCs, that had Mg or Al measurements. The stars and their relevant properties are listed in Table \[tab:stars\].
{width="\textwidth"}
We stress again that the size and quality of the presented GC sample are comparable to the two largest GC surveys presented in the literature so far, i.e., the FLAMES GC survey and the APOGEE sample.
Results {#sec:results}
=======
A quality control test on the Na-O anti-correlation
---------------------------------------------------
We started by comparing our results for the well-studied Na-O anti-correlation with: the FLAMES GC survey by @carretta09a [@carretta09b; @carretta11; @carretta13; @carretta14]; the 47 Tuc data by @cordero14; the NGC 6752 study by @yong05; and the M 2 studies by @yong14 and @meszaros15. We restricted the comparisons to high-resolution studies (R$>$15000) of red giants. The results are plotted in Figure \[fig:NaO\], where only UVES measurements appear, because oxygen is not included in the GES GIRAFFE setups.
As can be seen, the GES measurements agree well with the literature ones, in spite of the different methods, linelist selections, models, and data sets involved. The median offsets, measured by taking the difference between the median abundances obtained by GES and in the literature for each GC[^4], were in general lower than $\simeq$0.1 dex. We note that for 47 Tuc the GES data are less spread than the literature ones in \[O/Fe\], but they do not sample the full extension of \[Na/Fe\], most probably because of the quality selection criteria described in Section \[sec:sele\], which penalize oxygen abundances derived mostly from the weak \[O I\] line at 6300 Å. We also note that the GES data for NGC 2808 show two well separated clumps of stars, while the literature data apparently display a more continuous distribution. We ascribe this to our small sample which, being randomly chosen, picked stars near the two most populated peaks of the underlying distribution, which contains five separate groups [@carretta15]. The apparently continuous distribution of literature data is mostly driven by the GIRAFFE measurements (brown dots), which are more numerous but less precise than the UVES ones (gold dots).
We present here for the first time \[Na/Fe\] and \[O/Fe\] abundance ratios for NGC 5927, one of the most metal-rich GCs studied with high-resolution spectroscopy in the literature so far. NGC 5927 displays the same stubby Na-O anti-correlation as 47 Tuc, the other metal-rich GC in the sample: while the upper \[Na/Fe\] limit is the same as in any other GC, governed by the equilibrium abundance of the hot NeNa cycle, the lowest \[Na/Fe\] abundances are slightly super-Solar rather than sub-Solar as expected for field stars at the same metallicity, as further discussed in Section \[sec:sample\].
In conclusion, the presented comparison confirms that the atmospheric parameters resulting from the GES homogenized analysis are well determined (see also P16).
Mg-Al anti-correlation
----------------------
The Mg-Al anti-correlation for the selected iDR4 stars is plotted in Figure \[fig:MgAl\], along with the available literature data. In contrast to the Na-O anti-correlation, we present both UVES and GIRAFFE measurements. Our measurements compare well with the literature, with small offsets that are $<$0.1 dex, i.e., within the quoted errors, as in the Na-O case.
For NGC 1904 there are few stars and they appear quite scattered. For the other 8 GCs, however, we clearly see that the Mg-Al anti-correlation has a variable extension. Four GCs have a well-developed and curved Mg-Al anti-correlation: NGC 2808, NGC 4833, NGC 6752, and M 2. Two GCs, NGC 362 and NGC 1851, have a stubby Mg-Al distribution, mostly displaying an \[Al/Fe\] spread and no significant \[Mg/Fe\] spread. The two most metal-rich GCs in the sample, 47 Tuc and NGC 5927, show no clear signs of an anti-correlation. This behaviour was already noted by @carretta09a, who explicitly mentioned the GC present-day mass and metallicity as the two main parameters driving the extent of the Mg-Al anti-correlation (see Section \[sec:discussion\] for more discussion on this point).
We did not detect any significant variation of the combined abundance of Mg and Al. This is consistent with no net production of these elements, but just the result of the conversion of Mg into Al during the Mg-Al cycle. Concerning the Al-Si branch of the Mg-Al cycle [see also @yong05; @carretta09a], we looked for Si variations in our sample, but unfortunately GES iDR4 contains only a few Si measurements that pass all the criteria employed to select the sample stars. Inspection of the \[Si/Fe\] ratio as a function of \[Al/Fe\] or \[Mg/Fe\] for the few stars with reliable Si measurements in iDR4 did not reveal any clear trend.
Discussion {#sec:discussion}
==========
![Run of the four main anti-correlated elements as a function of \[Fe/H\]. For MW field stars, GES iDR4 results are plotted as black dots and SAGA metal-poor stars as grey dots. Homogenized APOGEE data are plotted in green and FLAMES GC surveys data are plotted in yellow for UVES and brown for GIRAFFE. GES iDR4 measurements are plotted in cyan for UVES and magenta for GIRAFFE.[]{data-label="fig:GCfield"}](fig_GCfield.pdf){width="\columnwidth"}
To put our results in context, we combined the GES iDR4 data with the FLAMES GC survey [@carretta09a; @carretta09b] and the APOGEE survey [@meszaros15] measurements. Literature data were shifted in both \[Fe/H\] and the \[El/Fe\] abundance ratios by small amounts ($\leq$0.1 dex) to place them on the GES iDR4 scale. The shifts were computed using the median values of key elements for the GCs in common among studies[^5]. The combined sample contains $\simeq$1300 stars in 28 GCs having both Mg and Al measurements (or 2500 stars if one also counts the stars having Na or O but missing one of Mg or Al).
In the next sections, we discuss some of the Mg-Al anti-correlation properties that were apparent during a preliminary exploration of the combined sample. We leave the discussion of other elements to the following GES releases, where more stars, more GCs, and more elements will be available, and the whole GES intercalibration procedure will be more refined.
Comparison with field stars {#sec:sample}
---------------------------
We started by examining the Na-O and Mg-Al anti-correlations as a function of metallicity, and we compared the available GC measurements with the Milky Way (MW) field population. Because iDR4 contains mostly MW stars with \[Fe/H\]$\geq$–1.0 dex, we added metal-poor stars extracted from the SAGA database [@saga]. Figure \[fig:GCfield\] shows the comparisons. Oxygen measurements in iDR4 are still quite spread out, because they are often based solely on the weak \[O I\] line at 6300 Å and rely on the generally lower S/N ratio of field-star spectra compared to GC stars (see P16 for details), but the bulk measurements follow the expected trend. In spite of the heterogeneity of the sample and of our relatively simple homogenization method, the agreement among the plotted studies is remarkably good.
Two important things should be noted at this point. The first is that both GES and the FLAMES GC survey use similar instrumental setups, wavelength ranges, and S/N ratios. GES is targeting mostly MW field stars of higher metallicity, while the FLAMES GC survey was focused on the Na-O anti-correlation. As a result, neither of these surveys contains many measurements at \[Fe/H\]$\leq$–1.7 dex, and in particular, they do not contain many stars with low values of \[Al/Fe\] or \[Mg/Fe\][^6], because they mostly rely on spectral lines that become weak at those metallicities. On the contrary, APOGEE measurements are obtained with a different wavelength range and using different features and selection criteria, and therefore that sample contains many more stars with low Al or Mg, as can be seen in Figure \[fig:GCfield\]. On the other hand, GES data add NGC 5927 to the sample, extending the \[Fe/H\] coverage to \[Fe/H\]=–0.49 dex, while the two previous systematic studies considered here reached \[Fe/H\]$\simeq$–0.7 dex with 47 Tuc and M 71.
As was noted by others before, the lower boundary of the Na and Al distribution in GCs is aligned with the typical field-star value at any given metallicity. Similarly, the upper boundary of the O and Mg distribution in GCs is aligned with the typical field-star $\alpha$-enhancement at any given metallicity. This supports the idea that the main contributors to the chemistry of [*normal*]{} stars in GCs (often called [*first generation*]{} stars or [*unenriched*]{} stars) are mostly SNe II, as for the field stars at the same metallicity, with SNe Ia intervening only above \[Fe/H\]$\simeq$–1.0 dex.
The abundance of [*anomalous stars*]{} (often called [*second generation*]{} or [*enriched*]{} stars) is thought to be governed by CNO cycle processing at high temperatures [@kraft94; @gratton04]. The extent of Na variations in GC stars changes slightly with \[Fe/H\]. This is mostly governed by the lower boundary variations of \[Na/Fe\] in GC stars, which follow the field population behaviour as discussed. The upper boundary – governed by the equilibrium abundances reached in the Ne-Na cycle – shows only moderate variations in our sample, being roughly at \[Na/Fe\]$\simeq$+0.6 dex, and contained within $\pm$0.15 dex[^7]. The extent of \[Al/Fe\] variations in GC stars, instead, changes dramatically with \[Fe/H\] in both the upper and lower boundaries. While it was suggested that \[Fe/H\] is not the sole parameter governing Al variations (see also Section \[sec:ext\]), the Al spread clearly varies with metallicity, from a maximum of $\Delta$\[Al/Fe\]$\simeq$1.5 dex and more below \[Fe/H\]$\simeq$–1.0 dex, to $\Delta$\[Al/Fe\]$\leq$0.5 dex above that metallicity, where the spread becomes compatible with measurement uncertainties.
These considerations lead us to believe that the entire sample of 1300 stars should be used when studying the behaviour of the Mg-Al anti-correlation with GC properties, to increase the parameter coverage and the statistical significance of the analysis. Figure \[fig:GCfield\] is an example of the striking power of such a sample, and reveals the importance of \[Fe/H\] as a driving parameter for the presence and extent of the Mg-Al anti-correlation.
![The extent of the Mg-Al anti-correlation, measured as $\sigma$\[Al/Mg\] (upper panels) and $\Delta$\[Al/Mg\] (lower panels), based on the sample described in the text. The behaviour as a function of average \[Fe/H\] (left panels) and total log M (present-day mass, right panels) of each GC is shown. GCs in the left panels are coloured as a function of log M, where yellow corresponds to log M=4.19 dex (the lowest mass in the sample) and dark orange to log M=6.05 dex (the highest mass). In the right panels, points are coloured as a function of their metallicity, with red corresponding to \[Fe/H\]=–0.5 dex (the highest metallicity in the sample) and blue to \[Fe/H\]=–2.5 dex (the lowest metallicity). Our models in the form $a$\[Fe/H\]+$b$logM+$c$ are also plotted as lines coloured based on mass or metallicity.[]{data-label="fig:ext"}](fig_ext.pdf){width="\columnwidth"}
Mg-Al anti-correlation extension {#sec:ext}
--------------------------------
We have seen that a clear variation of the \[Al/Fe\] spread with \[Fe/H\] is apparent in Figure \[fig:GCfield\], and this is not only caused by the natural \[Al/Fe\] variations observed for field stars (the lower \[Al/Fe\] boundary). The question of which GC properties govern the extension (or presence) of the Mg-Al anti-correlation has been explored previously in the literature [see, e.g., @carretta09a; @carretta09b; @meszaros15; @cabrera16]. Both \[Fe/H\] and present-day mass were mentioned as the most important parameters in those works. However, when only \[Fe/H\] was considered [Figure 4 by @cabrera16], only weak correlations were found, with large spreads and unclear statistical significance. In that case, 25 GCs were examined, with typically 10–20 stars per GC. Here, we can profit from our combined sample of 28 GCs with $\simeq$50 stars each on average, as described in Section \[sec:sample\], and re-examine these parameters as drivers of the Mg-Al anti-correlation.
We therefore proceeded to fit the data using two different indicators of the anti-correlation extension, the standard deviation of the \[Al/Mg\] distribution and its maximum variation, i.e., the difference between the maximum and minimum values of \[Al/Mg\] for each GC. The two indicators will be expressed as $\sigma$\[Al/Mg\] and $\Delta$\[Al/Mg\] in the following[^8]. Figure \[fig:ext\] shows the results graphically, where it is apparent that the most massive GCs tend to have higher values with both indicators in the plot as a function of \[Fe/H\], and the most metal-poor ones also have a higher spread in the plot as a function of log M. If we were to fit the two parameters separately, we would obtain very high spreads and very weak relations even with our larger sample.
We therefore employed a linear fit on both parameters simultaneously and we obtained the following results: $$\sigma\rm{[Al/Mg]}=0.19(\pm0.06)~\log M - 0.20(\pm0.05)~\rm{[Fe/H]} -
0.94(\pm0.33)$$ $$\Delta\rm{[Al/Mg]}=0.67(\pm0.21)~\log M - 0.53(\pm0.17)~\rm{[Fe/H]} -
3.16(\pm1.11)$$
The fits are also reported in Figure \[fig:ext\]. The p-values of the $\sigma$\[Al/Mg\] and $\Delta$\[Al/Mg\] fits are 0.0001493 and 0.0005242, respectively, suggesting that it would be improbable to obtain the observed distribution by chance (if the chosen model[^9] were correct). The errors on the coefficients are also relatively low, suggesting that the two-parameter linear model is a reasonable description of the data. We can thus conclude that both parameters[^10] are indeed important in determining the extension of the Mg-Al anti-correlation, in the sense that we do find much smaller extensions for GCs that are metal-rich or less massive (or both). This also supports the results obtained by @carretta10 on the Na-O data of the FLAMES GC survey, and the photometric analysis carried out by @milone17.
This does not mean that the model we adopted is the best one, nor that \[Fe/H\] and logM are the only two parameters at play, especially considering that the errors on the derived coefficients are of about 30%, and that the residual distributions, although centered on zero, have relatively large spreads: $\rm{med(r.m.s.}_{\Delta\rm{[Al/Mg]}})=-0.005\pm0.768$ and $\rm{med(r.m.s.}_{\sigma\rm{[Al/Mg]}})=+0.014\pm0.258$. In the present analysis, we have not used the errors in the fit, because of the heterogeneity of the data sources and therefore of the error determinations, but even accounting for that, the relatively large spreads could point towards some extra parameter. We also tried a different model, adding a quadratic term in both \[Fe/H\] and logM, but the fit did not improve significantly. Similarly, when adding the age parameter from @marin09 or from @van13 as a third linear term, the coefficient was always low ($<$0.0001), and the quality of the fit was worse than that of the two-parameter one. A full statistical analysis of the relation between anti-correlation parameters and GC properties will be presented in a forthcoming paper, when the analysis of the whole GES sample of stars in all the observed GCs will be completed and we will also have data on the \[C/Fe\] and \[N/Fe\] ratios.
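A minimal sketch of the two-parameter linear fit, run on synthetic clusters (the 28 mass/metallicity values below are randomly generated for illustration only; the coefficients used to generate them are the best-fit values quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cluster sample: log(M/Msun) and [Fe/H] drawn over
# roughly the ranges covered by the combined 28-GC sample.
log_m = rng.uniform(4.2, 6.1, 28)
feh = rng.uniform(-2.5, -0.5, 28)

# Generate sigma[Al/Mg] from the quoted relation plus Gaussian scatter.
a_true, b_true, c_true = 0.19, -0.20, -0.94
sigma_almg = a_true * log_m + b_true * feh + c_true + rng.normal(0.0, 0.03, 28)

# Ordinary least squares on the design matrix [logM, [Fe/H], 1].
X = np.column_stack([log_m, feh, np.ones_like(log_m)])
coeffs, *_ = np.linalg.lstsq(X, sigma_almg, rcond=None)
a_fit, b_fit, c_fit = coeffs
```

With the real sample, weighting by the (heterogeneous) measurement errors and testing richer models would be warranted, as discussed in the text.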
The Mg-Al anti-correlation is a problem for the scenarios based on fast rotating massive stars [FRMS, @decressin07] or massive interacting binaries [MIB, @demink09], which activate CNO burning in their cores but require very high masses (well above 100 M$_{\odot}$) and some tweaking of the reaction rates to reproduce the Mg-Al observations. More massive stars would be required, like the super-massive stars [SMS, $\sim$1000 M$_{\odot}$, @denissenkov15], but these are not observed, and therefore their postulated physics is highly uncertain. We expect a metallicity dependency for SMS because of the strong wind mass loss [@vink11], which would lead to the formation of smaller SMS at higher metallicity. Asymptotic Giant Branch (AGB) polluters, which activate CNO burning in the shell and also hot-bottom burning at high masses, can naturally explain the Mg-Al observations, because both the depletion of Mg and the production of Al are extremely sensitive to the AGB star metallicity [@ventura16]. However, we remark that none of the scenarios presented in the literature so far is entirely free from serious shortcomings [@renzini15]. We also remark that no conclusive answer can be drawn by considering one anti-correlation only, and this work, like others, has to be considered a preliminary exploration.
The correlation of the Mg-Al extent with present-day GC mass has not been explained in detail in any of the scenarios proposed so far. It would be necessary to explore whether the observed mass variations among Galactic GCs (presently in the range 10$^4$–10$^6$ M$_{\odot}$) are sufficient to significantly change the ability of the forming GCs (with their unknown initial masses) to retain the polluters ejecta.
Low-Mg in extragalactic GCs {#sec:low}
---------------------------
It was reported by various authors [@larsen14; @colucci14; @sakari15] that integrated-light, high-resolution abundance determinations of extragalactic GCs tend to give \[Mg/Fe\] significantly below that of MW GCs, around \[Mg/Fe\]$\simeq$0 dex and lower, rather than 0.3–0.4 dex. This observational fact is difficult to explain with problems in the abundance analysis alone: the comparison by @colucci16 highlights an underestimate of \[Mg/Fe\] of $\simeq$0.2 dex with integrated-light spectroscopy for some Galactic GCs, while Larsen et al. (in preparation) find systematic effects of 0.1 dex at most. The Mg underabundance is not seen in other $\alpha$-element abundances, which are consistent with the typical $\alpha$-enhancement expected from metal-poor GCs in the respective galaxies. In other words, \[Mg/$\alpha$\] in these metal-poor, extragalactic GCs is lower than in MW GCs of similar metallicity.
Figures \[fig:MgAl\] and \[fig:GCfield\] show that some Galactic GCs – not all – contain a fraction of stars well below \[Mg/Fe\]$\simeq$0 dex. The question then is whether the fraction of low-Mg stars and the Mg spread caused by a normal Mg-Al anti-correlation would be sufficient to produce an average GC $<$\[Mg/Fe\]$>$ close to Solar or even lower, as observed in extragalactic GCs [@larsen16]. While a deeper investigation of this topic is outside the scope of the present paper, we can use the collected GES and literature samples to understand whether anti-correlations are at least a viable explanation for the observed low \[Mg/$\alpha$\] abundances in many extragalactic GCs. In practice, we averaged the \[Mg/$\alpha$\] measurements for stars in each GC, which is appropriate because they are based on relatively weak absorption lines, but can be an incomplete representation of the abundance in the whole GC and of the proportions of stars with different Mg content. Integrated-light measurements, on the other hand, represent a complete average – weighted by star brightness and cut by limiting magnitude – of a GC [see @colucci16 for a comparison between the two methods].
We collected literature data on extragalactic GCs in M 31 [@colucci09; @colucci14; @sakari15], the LMC [Large Magellanic Cloud, @mucciarelli08; @mucciarelli09; @mucciarelli10; @mucciarelli14; @johnson06; @mateluna12], the Fornax dwarf galaxy [@letarte06; @larsen12b], and WLM [Wolf-Lundmark-Melotte galaxy, @larsen14]. To illustrate the effect, we plotted the data for extragalactic GCs together with the MW field samples and the Galactic GCs from the collection described in the previous section (Figure \[fig:lowMg\]). The Figure shows the average or integrated abundance of each GC, where the $\alpha$-elements are represented by Ca and Si, which are present in all the used studies. As can be noticed, many extragalactic GCs have normal $\alpha$-enhancement but low \[Mg/Fe\], and as a result their \[Mg/$\alpha$\] ratios are below zero. The MW GCs, however, all have \[Mg/Fe\]$\simeq$0.4 dex – with very few exceptions – and show a spread compatible with the errors and the internal Mg spread of Figure \[fig:GCfield\].
![The average \[Mg/Fe\], \[$\alpha$/Fe\], and \[Mg/$\alpha$\] of our collected GES and literature sample of 28 MW GCs (green circles, see Section \[sec:discussion\]) and of the literature sample of extragalactic GCs (purple squares, see Section \[sec:low\]). The MW reference population is drawn from the GES iDR4 sample (black dots) and from the SAGA database of metal-poor stars (grey dots). We also plotted NGC 2419 as a yellow upward triangle and Ru 106 as a yellow downward triangle.[]{data-label="fig:lowMg"}](fig_lowMg.pdf){width="\columnwidth"}
To our knowledge, the only Galactic GC that contains a sufficient fraction of stars ($\simeq$50%) with a sufficiently low \[Mg/Fe\] is NGC 2419 [@mucciarelli12], reaching as low as \[Mg/Fe\]$\simeq$–1.0 dex. Based on the complicated chemistry of NGC 2419, it was suggested that it has an extragalactic origin [@mucciarelli12; @cohen12; @carretta2419; @ventura12], which would fit the observed data trend. On the other hand, Rup 106, which is known to have low \[Mg/Fe\] [@villanova13], has a perfectly normal \[Mg/$\alpha$\], because its stars are not $\alpha$-enhanced. We conclude that it is difficult to explain the low integrated \[Mg/$\alpha$\] values of many extragalactic GCs with the typical Mg-Al anti-correlation observed in Galactic ones. A more extreme Mg depletion and a larger fraction of stars with such low Mg would be required, similarly to what is observed in NGC 2419.
Apart from the extreme morphology of the Mg-Al anti-correlation observed for example in NGC 2419 (an [*internal*]{} effect), there is an additional explanation for the low average \[Mg/$\alpha$\] of some extragalactic GCs, linked to the global chemical evolution of their host galaxies (an [*external*]{} effect). It has been observed that in dwarf galaxies \[Mg/Fe\] is lower than the average $\alpha$-enhancement for stars close to the “knee" of the \[$\alpha$/Fe\] trend. This was explained considering that SNe Ia produce some amounts of Ca, Si, and Ti but not Mg, which is produced only by SNe II [@tsujimoto12]. In that case, we should observe a progressively lower \[Mg/$\alpha$\] in the field stars as \[Fe/H\] increases [as in Figure 10 by @mucciarelli12 for the LMC]. The exact distribution would be shaped by the global star formation rate of each galaxy, which sets the metallicity at which the knee occurs.
Both the [*external*]{} and [*internal*]{} explanations appear viable at the moment, and they might also operate simultaneously. Further information could be obtained: (1) by collecting large and homogeneous samples of field stars with \[Mg/$\alpha$\] and \[Fe/H\] measurements to compare with the available GC measurements on a galaxy-by-galaxy basis, and (2) by obtaining large samples of individual star abundances for the nearest extragalactic GCs.
Summary and conclusions {#sec:conclusions}
=======================
We used GES iDR4 data on calibrating globular clusters to explore the Mg-Al anti-correlation, which is well measured in the GES observing setups and varies significantly from one GC to another, and can therefore provide strong constraints on the GC properties that control the anti-correlation phenomenon.
Even if iDR4 is a preliminary and intermediate data release, it was the first one in which many different loops of the internal and external calibration were closed in the complex GES [*homogenization*]{} workflow (see P16, Hourihane et al., in preparation, and Randich et al., in preparation). As a result, the agreement between UVES and GIRAFFE is within the quoted uncertainties, with 0.10–0.15 dex median differences; there are no significant trends of abundance ratios with the APs, in particular with T$_{\rm{eff}}$ or log$g$; and there are small offsets with the high-resolution literature data of no more than 0.1 dex. We also add a new GC, NGC 5927, one of the most metal-rich GCs, which was included in GES to facilitate the internal calibration in conjunction with open clusters.
Given the excellent agreement with the literature, we assembled a homogenized database of $\simeq$1300 stars in 28 GCs with \[Fe/H\], \[Mg/Fe\], and \[Al/Fe\] measurements from GES iDR4, the FLAMES GC survey [@carretta09a; @carretta09b and other papers cited above], and the APOGEE survey [@meszaros15]. We explored two different open topics as a demonstration of the presented data quality. The first topic concerns the dependency of the Mg-Al anti-correlation extension on GC global parameters. In particular, it was suggested by @carretta09a that the extension depends on both mass and metallicity, but no formal analysis was performed in that paper owing to the limited sample. The suspicion was supported by the @meszaros15 data. However, a different analysis by @cabrera16 found a very weak relation between the Mg-Al extension and \[Fe/H\], with a large spread and low statistical significance, from a literature database of 20 GC measurements. We profited from our large homogenized sample, which includes NGC 5927, and we employed a linear fit on cluster mass and metallicity simultaneously. Our analysis removes any remaining doubt about the fact that the Mg-Al anti-correlation extension depends on [*both*]{} mass and metallicity. Adding age as a third parameter worsened the fit and we concluded that the Mg-Al anti-correlation does not change significantly with age.
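The simultaneous two-parameter fit described above can be sketched with ordinary least squares. This is a minimal illustration with synthetic placeholder data: the masses, metallicities, coefficients, and the choice of extension indicator are hypothetical, not the GES measurements or the fit results of this paper, and noise is omitted so that the coefficient recovery is exact.

```python
import numpy as np

# Placeholder data (NOT the GES sample): log10 cluster mass, [Fe/H],
# and a Mg-Al anti-correlation extension indicator (e.g. an [Al/Fe] IQR).
log_mass = np.array([5.1, 5.4, 5.9, 6.2, 5.0, 5.6, 6.0, 5.3])
feh      = np.array([-1.1, -1.5, -2.1, -1.8, -0.5, -1.3, -2.3, -0.9])
# Synthetic extension built from assumed coefficients (noise omitted):
extension = 0.4 * log_mass - 0.3 * feh - 1.5

# Simultaneous linear fit: extension = a*log10(M) + b*[Fe/H] + c
X = np.column_stack([log_mass, feh, np.ones_like(log_mass)])
(a, b, c), *_ = np.linalg.lstsq(X, extension, rcond=None)
print(round(a, 3), round(b, 3))  # a > 0 (mass term), b < 0 (metallicity term)
```

With the real sample one would of course also inspect residuals and coefficient significances before concluding that both terms are needed.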
We also explored another open topic related to the low \[Mg/$\alpha$\] measured in some extragalactic GCs [@larsen14; @colucci14; @sakari15], to see whether a highly extended Mg-Al anti-correlation could explain the observed trends. We made the reasonable hypothesis that an average of the available individual star abundances is comparable with the abundances obtained by integrated light spectroscopy [see @colucci16 and references therein]. We concluded that a normal anti-correlation, no matter how extended, would not reproduce those low \[Mg/$\alpha$\] values. A more extreme chemical composition, like that of NGC 2419 [@mucciarelli12; @cohen12; @carretta2419; @ventura12], would be required. Besides this explanation, related to the [*internal*]{} GC chemical properties, there is another [*external*]{} explanation related to the global chemical evolution properties of the host galaxy and the yields of SNe type Ia and II, but the data available so far do not allow us to discriminate between the two, which could be mutually exclusive or could coexist in different GC populations.
We conclude that the GES data are of sufficient quality to explore these and many other topics related to the chemistry of GCs, providing clear results. When the whole sample of GCs and of the observed stars is analyzed, including elements that are not completely determined in iDR4, it will be possible to statistically analyze the entire set of elements that vary in GCs.
We warmly thank: I. Cabrera-Ziri for a discussion on the biases in determining the extent of the Mg-Al anti-correlation; A. Mucciarelli for a discussion on anomalous GCs like NGC 2419; S. Larsen for a discussion on the phenomenon of low \[Mg/$\alpha$\] in extragalactic GCs; M. Gieles for a discussion on the possible polluters and their impact on the Mg-Al anti-correlation extension; and the referee of this paper, P. Ventura, who offered his insight to improve the manuscript both in its substance and form.
This research has made use of the following software, databases, and online resources: topcat [@topcat]; the CDS and Vizier databases (http://cdsportal.u-strasbg.fr/); the R project (https://www.r-project.org), and Rstudio (https://www.rstudio.com/).
Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 188.B-3002. These data products have been processed by the Cambridge Astronomy Survey Unit (CASU) at the Institute of Astronomy, University of Cambridge, and by the FLAMES/UVES reduction team at INAF/Osservatorio Astrofisico di Arcetri. These data have been obtained from the Gaia-ESO Survey Data Archive, prepared and hosted by the Wide Field Astronomy Unit, Institute for Astronomy, University of Edinburgh, which is funded by the UK Science and Technology Facilities Council.
This work was partly supported by the European Union FP7 programme through ERC grant number 320360 and by the Leverhulme Trust through grant RPG-2012-541. We acknowledge the support from INAF and Ministero dell’ Istruzione, dell’ Università e della Ricerca (MIUR) in the form of the grant “Premiale VLT 2012”. The results presented here benefit from discussions held during the Gaia-ESO workshops and conferences supported by the ESF (European Science Foundation) through the GREAT Research Network Programme.
M.T.C. acknowledges the financial support from the Spanish [*Ministerio de Economía y Competitividad*]{}, through grant AYA2013-40611-P. E.P. and D.R. benefited from the International Space Science Institute (ISSI, Bern, CH), through funding of the International Team [*“The Formation and Evolution of the Galactic Halo"*]{}. S.G.S. acknowledges the support by Fundação para a Ciência e Tecnologia (FCT) through national funds and a research grant (project ref. UID/FIS/04434/2013, and PTDC/FIS-AST/7073/2014). S.G.S. also acknowledges the support from FCT through Investigador FCT contract of reference IF/00028/2014 and POPH/FSE (EC) by FEDER funding through the program [*“Programa Operacional de Factores de Competitividade – COMPETE"*]{}. D.G., B.T., and S.V. gratefully acknowledge support from the Chilean BASAL Centro de Excelencia en Astrofísica y Tecnologías Afines (CATA) grant PFB-06/2007. L.M. acknowledges support from [*Proyecto Interno*]{} of the Universidad Andres Bello. E.J.A. was supported by Spanish MINECO under grant AYA2016-75931-C2-1-P with FEDER funds. R.S. acknowledges support from the Polish Ministry of Science and Higher Education (660/E-60/STYP/10/2015).
[^1]: Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 188.B-3002.
[^2]: http://archive.eso.org/cms.html
[^3]: http://www.ast.cam.ac.uk/$\sim$mike/casu/
[^4]: In many cases, the stars in common between GES and the literature are too few or missing, therefore we preferred to use the differences between the median of each sample.
[^5]: We had M 2 in common with the APOGEE survey and 6 GCs in common with the FLAMES GC survey (see also Figures \[fig:NaO\] and \[fig:MgAl\]). The handful of stars in common among the various studies was not sufficient to compute reliable shifts, and was removed from the sample, retaining with precedence the GES data, then APOGEE ones, and then the FLAMES GC survey ones.
[^6]: Both GES and the FLAMES GC survey contain several upper limits in the most metal-poor GCs, that are not plotted in this paper.
[^7]: We remark here that an extremely homogeneous and populous sample would be required to better quantify this important aspect.
[^8]: We remark here that both indicators are subject to measurement and statistical biases. Measurement effects (most notably outliers) tend to produce an overestimate of the Mg-Al extension, while sampling effects (small sample sizes) tend to produce an underestimate.
[^9]: Here and in the following, we use the word [*model*]{} in the statistical sense, i.e., a way of describing the data phenomenologically and not a physical model.
[^10]: It is important to stress at this point that no mass-metallicity relation is apparent in Galactic GCs.
---
abstract: 'The Angstrom Project is using a global network of 2m-class telescopes to conduct a high cadence pixel microlensing survey of the bulge of the Andromeda Galaxy (M31), with the primary aim of constraining its underlying bulge mass distribution and stellar mass function. Here we investigate the feasibility of using such a survey to detect planets in M31. We estimate the efficiency of detecting signals for events induced by planetary systems as a function of planet/star mass ratio and separation, source type and background M31 surface brightness. We find that for planets of a Jupiter-mass or above that are within the lensing zone ($\sim 1 -3$ AU) detection is possible above 3 $\sigma$, with detection efficiencies $\sim 3\%$ for events associated with giant stars, which are the typical source stars of pixel-lensing surveys. A dramatic improvement in the efficiency of $\sim 40$ – 60% is expected if follow-up observations on an 8m telescope are made possible by a real-time alert system.'
author:
- |
S.-J. Chung, D. Kim,\
The Angstrom Collaboration: M.J. Darnley, J.P. Duke, A. Gould, C. Han, Y.-B. Jeon, E. Kerins, A. Newsam and B.-G. Park
title: The possibility of detecting planets in the Andromeda Galaxy
---
Introduction
============
Various techniques are being used to search for extrasolar planets, including the radial velocity technique [@mayor95; @marcy96], transit method [@struve52], direct imaging [@angel94; @stahl95], pulsar timing [@wolszczan92], and microlensing [@mao91; @gould92]. See the reviews of @perryman00 [@perryman05]. The microlensing signal of a planetary companion to microlens stars is a short-duration perturbation to the smooth standard light curve of the primary-induced lensing event occurring on a background source star. Once the signal is detected and analyzed, it is possible to determine the planet/star mass ratio, $q$, and the projected planet-star separation, $s$ (normalized by the angular Einstein ring radius ${\theta_{\rm E}}$). Recently, two robust microlensing detections of exoplanets were reported by @bond04 and @udalski05.
The microlensing technique has various advantages over other methods. First, microlensing is sensitive to lower-mass planets than most other methods (except pulsar timing) and it is possible, in principle, to detect Earth-mass planets from ground-based observations [@gould04]. Second, the microlensing technique is most sensitive to planets located in the so-called lensing zone corresponding to the range of 0.6 – 1.6 Einstein ring radii. The typical value of the Einstein radius, ${r_{\rm E}}$, for Galactic lensing events is a couple of AU, and thus the lensing zone roughly overlaps with the habitable zone. Third, the microlensing technique is the only proposed method that can detect and characterize free-floating planets [@bennett02; @han05]. Fourth, the biases in the search technique are less severe and can be quantified easily compared to other methods [@gaudi02]. Therefore, the microlensing technique will be able to provide the best statistics of the Galactic population of planets.
In addition to the advantages mentioned above, the microlensing technique is distinguished from other techniques in the sense that the planets to which it is sensitive are much more distant than those found with other techniques. With the advent of photometry techniques like the difference imaging [@alard98] and pixel method [@melchior99], microlensing searches are not restricted to the field within the Galaxy and can be extended to unresolved star fields of nearby galaxies such as M31. Therefore, microlensing is the only feasible technique that can detect planets located in other galaxies.
Microlensing searches toward M31 have been and are being carried out by various collaborations including the POINT-AGAPE [@auriere01; @paulinhendriksson02; @paulinhendriksson03; @an04; @belokurov05], AGAPE [@ansari97; @ansari99], VATT-Colombia [@crotts96; @uglesich04], MEGA [@dejong04], and WeCAPP [@riffeser01; @riffeser03] collaborations, as well as MDM [@calchinovati03], McGraw-Hill [@calchinovati02], and Nainital [@joshi03; @joshi05] surveys. The monitoring frequencies of these experiments are typically $\sim 3$ observations per week, too low to detect planetary signals. However, with the expansion of the global telescope network, the monitoring frequency of such surveys is rapidly increasing. For example, a new M31 pixel-lensing survey, the Andromeda Galaxy Stellar Robotic Microlensing (Angstrom) project is expected to achieve a monitoring frequency of $\sim 5$ observations per 24-hour period by using a network of telescopes, including the robotic 2m Liverpool Telescope at La Palma, Faulkes Telescope North in Hawaii, 1.8m telescope at the Bohyunsan Observatory in Korea, and the 2.4m Hiltner Telescope at the MDM Observatory in Arizona [@kerins05].
The possibility of detecting planetary microlensing events caused by a lens located in M31 was discussed by @baltz01. However, the main focus of that paper was evaluating the detectability of events caused by binary lenses in general, and the comment about planetary lensing was brief, treating the planetary system as one case of a binary lens. In addition, their detection rate estimate of the M31 planetary lensing events was based only on events that exhibit caustic crossings, while a significant fraction of events with detectable planetary signals might be non-caustic-crossing events. Moreover, their work was aimed at a rough evaluation of feasibility and thus was not based on a specific observational setup or instruments. Similarly, the work of @covone00 was also based on an arbitrary observational setup.
In this paper, we explore the feasibility of detecting planets in M31 from a high-frequency pixel-lensing survey using a global network of 2m-class telescopes. The paper is organized as follows. In § 2, we briefly describe the basics of planetary lensing. In § 3, we estimate the efficiency of detecting planetary signals for events induced by planetary systems with various planet-star separations and mass ratios, associated with source stars of different types, and occurring toward fields with a range of surface brightness $\mu$. From the dependence of the detection efficiency on these parameters, we investigate possible types of detectable planets, the optimal source stars, fields, and observation strategy for M31 planet detections. In § 4, we discuss methods to further improve the planet detection efficiency. We summarize the results and conclude in § 5.
Basics of Planetary Lensing
===========================
Planetary lensing is described by the formalism of a binary lens with a very low-mass companion. Because of the very small mass ratio, the planetary lensing behavior is well described by that of a single lens of the primary star for most of the event duration. However, a short-duration perturbation can occur when the source star passes the region near a caustic, which represents the set of source positions at which the magnification of a point source becomes infinite. The caustics of binary lensing form a single or multiple closed figures, where each figure is composed of concave curves (fold caustics) that meet at cusps.
For the planetary case, there exist two sets of disconnected caustics. One ‘central caustic’ is located close to the host star. The other, the ‘planetary caustic’, is located away from the host star, and there are one or two of them depending on whether the planet lies outside ($s>1$) or inside ($s<1$) the Einstein ring. The size of the caustic, which is directly proportional to the planet detection efficiency, is maximized when the planet is located in the ‘lensing zone’, which represents the range of the star-planet separation of $0.6\lesssim s\lesssim 1.6$ [@gould92].
The planetary perturbation induced by the central caustic is of special interest for M31 pixel-lensing events. While the perturbation induced by the planetary caustic can occur at any part of the light curve of any event, even those of low magnification, the perturbation induced by the central caustic always occurs near the peak of the light curve of a high-magnification event. Then, the chance for the M31 pixel-lensing events to be perturbed by the central caustic can be high because these events tend to have high magnifications. In addition, the chance of detecting planetary signals for these events becomes even higher due to the improved photometric precision thanks to the enhanced brightness of the lensed source star during the time of perturbation.
Detection Efficiency
====================
To estimate the efficiency of detecting planetary signals of M31 events, we compute the ‘detectability’ defined as the ratio of the planetary signal, $\epsilon$, to the photometric precision, $\sigma_{\rm ph}$, i.e., $${\cal D}= {|\epsilon|\over \sigma_{\rm ph}}.
\label{eq3.1}$$ The planetary signal is the deviation of the lensing light curve from that of the single lensing event of the primary lens star, and thus it is defined as $$\epsilon = {{A-A_0}\over A_0},
\label{eq3.2}$$ where $A$ is the magnification of the planetary lensing and $A_0$ is the single lensing magnification caused by the host star alone. For an M31 pixel-lensing event, the lensing signal is the flux variation measured on the subtracted image, while the noise is dominated by the background flux. Then, the photometric precision can be approximated as $$\sigma_{\rm ph} = {\sqrt{F_{\rm B}}\over F_S(A-1)},
\label{eq3.3}$$ where $F_S$ and $F_{\rm B}$ are the baseline flux of the lensed source star and the blended background flux, respectively. Under this definition of the detectability, ${\cal D}=1$ implies that the planetary signal is equivalent to the photometric precision.
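Equations (\[eq3.1\])–(\[eq3.3\]) can be sketched directly. In the snippet below the fluxes and the 2% planetary deviation are arbitrary toy numbers, not the instrument model adopted in this paper; the single-lens magnification $A_0(u)=(u^2+2)/(u\sqrt{u^2+4})$ is the standard point-lens formula.

```python
import math

def single_lens_mag(u):
    # standard point-source point-lens magnification A0(u)
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def detectability(A, A0, F_S, F_B):
    # D = |eps| / sigma_ph with eps = (A - A0)/A0 and
    # sigma_ph = sqrt(F_B) / (F_S (A - 1))  (background-limited noise)
    eps = (A - A0) / A0
    sigma_ph = math.sqrt(F_B) / (F_S * (A - 1.0))
    return abs(eps) / sigma_ph

# toy event: a 2% planetary deviation near peak magnification A0 ~ 20
A0 = single_lens_mag(0.05)
A = 1.02 * A0
print(detectability(A, A0, F_S=1.0e4, F_B=1.0e6) > 3.0)  # True
```

Note how the same fractional deviation becomes easier to detect as the magnification grows, which is why high-magnification events are favorable.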
We estimate the detection efficiency for a representative event that is most probable under the assumption that the M31 halo is not significantly populated with MACHOs. Under this assumption, it is expected that the events to be detected toward the field in and around the M31 bulge, where the event rate is highest, are caused mostly by low-mass stars located in the bulge itself [@kerins05]. We, therefore, choose a representative event as the one caused by a lens with a primary star mass of $m=0.3\ M_\odot$ and distances to the source star and lens of $D_S=780$ kpc and $D_L=(780-1)$ kpc, respectively. Then, the corresponding physical and angular Einstein ring radii are ${r_{\rm E}}=1.56\ {\rm AU}$ and ${\theta_{\rm E}}=2.0\ \mu{\rm as}$, respectively. The assumed timescale is ${t_{\rm E}}=20$ days.
To see the dependence of the detection efficiency on the stellar type of the source star, we test several types of source stars with various absolute magnitudes and sizes. The type of the source star affects the detection efficiency in two different ways. On one side, the high luminosity of a bright source star contributes to the detection efficiency in a positive way, because the photometric precision improves with increasing source star brightness. If the source star is too bright, on the other hand, it is likely to be a giant star, for which the planetary signal $\epsilon$ might be diminished due to the finite-source effect [@bennett96]. The stellar types of the tested source stars are giant, A5, and F5 main-sequence (MS) stars with absolute magnitudes of $M_I=0.0$, 1.73, and 2.86, and stellar radii of $R_\star=10.0\ R_\odot$, $1.7\ R_\odot$, and $1.3\ R_\odot$, respectively. The corresponding source angular radii normalized by the Einstein radius are $\rho_\star=\theta_\star/{\theta_{\rm E}}=(R_\star/D_S)/{\theta_{\rm E}}=0.03$, 0.005, and 0.004, respectively.
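The representative numbers quoted above ($\theta_{\rm E}=2.0\ \mu{\rm as}$, $r_{\rm E}=1.56$ AU, and $\rho_\star=0.03$ for the giant) follow from the point-lens Einstein radius formula; a quick numerical check, using standard physical constants:

```python
import math

G, c = 6.674e-11, 2.998e8            # SI units
M_sun, R_sun = 1.989e30, 6.957e8     # kg, m
kpc, AU = 3.086e19, 1.496e11         # m

def theta_E(m_lens, D_L, D_S):
    # angular Einstein radius of a point lens, in radians
    return math.sqrt(4.0 * G * m_lens / c**2 * (D_S - D_L) / (D_L * D_S))

D_L, D_S = 779.0 * kpc, 780.0 * kpc
tE = theta_E(0.3 * M_sun, D_L, D_S)
rad_to_uas = 180.0 / math.pi * 3600.0e6   # radians -> micro-arcseconds
r_E = tE * D_L                            # physical Einstein radius at the lens
rho_giant = (10.0 * R_sun / D_S) / tE     # normalized radius of a 10 R_sun giant
print(round(tE * rad_to_uas, 1), round(r_E / AU, 2), round(rho_giant, 2))
# → 2.0 1.56 0.03
```

The tiny lens-source separation ($D_{LS}=1$ kpc) is what makes $\theta_{\rm E}$ so small for self-lensing within M31.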
Observations and photometry are assumed to be carried out as follows. Following the specification of the Liverpool Telescope, we assume that the instrument can detect 1 photon/s for an $I=24.2$ star. We also assume that the average seeing is $\theta_{\rm see}=1''\hskip-2pt .0$ and the observation is carried out such that small-exposure images are combined to make a 30 min exposure image to obtain a high signal-to-noise ratio while preventing saturation in the central bulge region. The photometry is done such that the flux variation is measured at an aperture that maximizes the signal-to-noise ratio of the measured flux variation. In the background-dominated regime such as the M31 field, the noise is proportional to the aperture radius $\theta_{\rm ap}$, i.e. $F_B\propto
\pi \theta_{\rm ap}^2$. On the other hand, assuming a gaussian PSF, the measured source flux variation scales as $F=F_S(A-1)\propto
\int_0^{\theta_{\rm ap}}(\theta/\sigma_{\rm PSF}^2)\exp(-\theta^2/2
\sigma_{\rm PSF}^2)d\theta$, where $\sigma_{\rm PSF}=0.425\theta_{\rm see}$. Therefore, the signal-to-noise ratio scales as $S/N =F/\sqrt{F_B}\propto
(1/\theta_{\rm ap})\int_0^{\theta_{\rm ap}}(\theta/\sigma_{\rm PSF}^2)
\exp\left( -{\theta^2/2\sigma_{\rm PSF}^2} \right) d\theta$. Then, the optimal aperture that maximizes the signal-to-noise ratio is $\theta_{\rm ap}=0.673 \theta_{\rm see}$. With the adoption of this aperture, the fraction of the source flux within the aperture is $F(\theta\leq \theta_{\rm ap})/F_{\rm tot}= 0.715$, where $F_{\rm tot}$ is the flux measured at $\theta_{\rm ap}\equiv \infty$.
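The quoted optimum can be verified numerically. This sketch assumes only the Gaussian-PSF model above, with $\sigma_{\rm PSF}={\rm FWHM}/(2\sqrt{2\ln 2})\approx 0.425\,\theta_{\rm see}$:

```python
import math

sigma_over_seeing = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # ~0.4247

def snr(u):
    # u = theta_ap / sigma_PSF; background noise grows like the aperture
    # radius, while the enclosed 2-D Gaussian flux is 1 - exp(-u^2/2)
    return (1.0 - math.exp(-u * u / 2.0)) / u

u_best = max((1e-4 * i for i in range(1, 30000)), key=snr)  # grid search
ap_over_seeing = u_best * sigma_over_seeing
enclosed = 1.0 - math.exp(-u_best ** 2 / 2.0)
print(round(ap_over_seeing, 3), round(enclosed, 3))  # → 0.673 0.715
```

Both quoted figures, $\theta_{\rm ap}=0.673\,\theta_{\rm see}$ and an enclosed flux fraction of 0.715, are recovered.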
In Figures \[fig:one\]–\[fig:three\], we present the contour maps of the detectability of the planetary lensing signal as a function of the source star position for events caused by planetary systems with various $s$ and $q$, and involved with source stars of various stellar types. The contours are drawn at the levels of ${\cal D}=1.0$ (white), 2.0 (yellow), and 3.0 (brown), respectively. We assume that the planetary signal is firmly detected if ${\cal D}\geq 3.0$. In the maps, we present only the region around the ‘detection zone’, which represents the region of the source star position where the magnification is higher than a threshold magnification required for the event detection, $A_{\rm th}$. The threshold magnification is defined by $(A_{\rm th}-1)F_S=3\sqrt{F_B}$, i.e., $3\sigma$ detection of the event. If we define $u_{0,{\rm th}}$ as the threshold lens-source impact parameter corresponding to $A_{\rm th}$, everything of interest is contained within the circle with the radius $u_{0,{\rm th}}$ (marked by a white dotted circle in each panel). We, therefore, use $u_{0,{\rm th}}$ as a scale length instead of the Einstein radius. However, to provide the relative size of the detection zone, we mark the absolute value of $u_{0,{\rm th}}$ in the bottom left panel of each figure. The value of the threshold impact parameter decreases as either the source star becomes fainter or the background surface brightness increases. In Figure \[fig:four\], we present the variation of the threshold impact parameter as a function of the background surface brightness for source stars of various types. The maps are constructed for a common surface brightness of $\mu=18.0\ {\rm mag}/{\rm arcsec}^2$, which is a representative value of the M31 bulge region. For the construction of the maps, we consider the attenuation of the magnification caused by the finite-source effect.
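The threshold impact parameter follows from inverting the point-lens magnification, for which a closed form exists: $u^2 = 2\,[A/\sqrt{A^2-1} - 1]$. A sketch, with placeholder fluxes rather than the Liverpool Telescope numbers of the text:

```python
import math

def u_of_A(A):
    # invert A(u) = (u^2 + 2) / (u sqrt(u^2 + 4)) for u > 0
    return math.sqrt(2.0 * (A / math.sqrt(A * A - 1.0) - 1.0))

def u0_threshold(F_S, F_B):
    # 3-sigma event detection: (A_th - 1) F_S = 3 sqrt(F_B)
    return u_of_A(1.0 + 3.0 * math.sqrt(F_B) / F_S)

# consistency check of the inversion: A(u=1) = 3/sqrt(5), so u must be 1
print(round(u_of_A(3.0 / math.sqrt(5.0)), 6))  # → 1.0
print(round(u0_threshold(F_S=1.0e3, F_B=1.0e6), 3))  # toy fluxes
```

Because $F_B$ scales with the local surface brightness, $u_{0,{\rm th}}$ shrinks for fainter sources and brighter backgrounds, exactly the trend of Figure \[fig:four\].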
From the maps, we find that although both the size of the event detection zone ($u_0<u_{0,th}$) and the region of the planetary perturbation (${\cal D}>3$) decrease as the source star becomes fainter, the rate of decrease of the planetary perturbation region is smaller than that of the event detection zone. This is because the planetary perturbation is confined to the region around the caustic, whose size does not depend on the source brightness. As a result, the perturbation region occupies a greater fraction of the event detection zone as the source star brightness decreases. However, this does not necessarily imply that the planet detection efficiency of events associated with MS stars is higher than that of events associated with brighter giant stars. This is because, despite the slower rate of decrease, the perturbation region does shrink, and thus picking up the resulting short-duration planetary signals for events involved with faint MS stars requires a higher monitoring frequency.
Once the maps of the detectability are constructed, we produce a large number of light curves of lensing events resulting from source trajectories with random orientations and impact parameters $u_0\leq u_{0,{\rm th}}$ (see example light curves in Figure \[fig:five\]). Then, we estimate the detection efficiency as the ratio of the number of events with detectable planetary signals to the total number of the tested events. We assume that on average five combined images with a 30 min exposure are obtained daily following the current Angstrom survey. By applying a conservative criterion for the detection of the planetary signal, we assume that the planet is detected if the signal with ${\cal D}\geq 3$ is detected at least five times during the event. Since the monitoring frequency is $f=5\ {\rm times}/ {\rm day}$, this implies that the planetary signal should last at least 1 day for detection.
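The efficiency estimate just described can be sketched as a Monte Carlo loop over random trajectories. The detectability map below is a crude stand-in (a single high-${\cal D}$ patch near the central caustic) rather than a real binary-lens computation, and the threshold impact parameter is a placeholder value; the cadence and the criterion of at least five epochs with ${\cal D}\geq 3$ follow the text.

```python
import math, random

random.seed(1)
t_E, f, u0_th = 20.0, 5, 0.25   # days, epochs/day, threshold (placeholder)

def detectability(x, y):
    # *** placeholder *** for the binary-lens detectability maps: a small
    # patch of D = 5 near the central caustic, D = 0 elsewhere
    return 5.0 if math.hypot(x - 0.05, y) < 0.03 else 0.0

def detected(u0, phi):
    # straight-line trajectory with impact parameter u0 and direction phi,
    # sampled f times per day over -2 t_E .. +2 t_E
    n_hit = 0
    for k in range(int(4 * t_E * f)):
        s = (-2.0 * t_E + k / f) / t_E      # position along track, in theta_E
        x = s * math.cos(phi) - u0 * math.sin(phi)
        y = s * math.sin(phi) + u0 * math.cos(phi)
        if detectability(x, y) >= 3.0:
            n_hit += 1
    return n_hit >= 5    # detection criterion: >= 5 epochs with D >= 3

trials = 2000
hits = sum(detected(random.uniform(0.0, u0_th),
                    random.uniform(0.0, 2.0 * math.pi))
           for _ in range(trials))
print(hits / trials)     # efficiency = detectable fraction of events
```

In the real computation the placeholder map is replaced by the detectability evaluated from the binary-lens magnification, but the bookkeeping is the same.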
In Figure \[fig:six\], we present the estimated detection efficiency as a function of $s$ and $q$ for events involved with various source stars. In Figure \[fig:seven\], we also present the variation of the efficiency depending on the background surface brightness, where the efficiency is estimated by varying $\mu$ but fixing the lens parameters as $s=1.2$ and $q=5\times 10^{-3}$. From the figures, we find the following results.
1. For events associated with giant source stars, it will be possible to detect planets with masses equal to or greater than that of Jupiter ($q\sim 3\times 10^{-3}$). Although the efficiency varies considerably depending on the star-planet separation, the average efficiency is $\sim 3\%$ for events caused by a lens system having a planet with a mass ratio $q=5\times 10^{-3}$ located in the lensing zone. However, it is expected that detecting planets with masses less than that of Saturn ($q\sim 10^{-3}$) would be difficult.
2. The optimal events for the detection of planetary signals are those associated with giant stars. It is expected that detecting planets for events associated with MS stars would be difficult because of the poor photometry and the resulting short durations of the planetary signals. For example, the duration of the event associated with an F-type MS star is $t_{\rm dur} \leq 2 u_{0,{\rm th}} t_{\rm E}\sim 1\ {\rm day}$. Considering that the planetary perturbation region with ${\cal D}\geq 3$ occupies a fraction of the detection zone (as shown in Fig. \[fig:three\]), the planetary signal would be too short for detection. Although planets can be detected with a non-negligible efficiency ($\sim 1\%$) for events associated with A-type MS stars, the number of planet detections from these events would be small because of the rarity of early-type MS stars projected on the M31 bulge region. MS-associated events could be detected in low surface-brightness regions as shown in Figure \[fig:seven\], but the event rate toward these fields would be low due to the low column density of lens matter along the line of sight.
3. The efficiency peaks at a certain surface brightness. In the region of very high surface brightness, planet detection is limited by the poor photometry. In the very low surface-brightness regime, on the other hand, the event detection zone is significantly larger than the size of the planetary deviation region, which is confined around caustics. As a result, the efficiency, which is proportional to the one-dimensional size ratio of the planetary deviation region to the detection zone, is low in this region. The peak efficiency occurs at $\mu\sim 20\ {\rm mag}/ {\rm arcsec}^2$ for events involved with giant source stars (see Figure \[fig:seven\]).
4. In most cases, the planetary perturbations of M31 pixel-lensing events are induced by central caustics. Therefore, an observational strategy focusing on these perturbations would maximize the number of M31 planet detections. An alert system based on real-time survey observations combined with prompt follow-up observations would make this possible.
Strategy   Source star   Efficiency
---------- ------------- -----------
I          giant         3%
           A5 MS         1%
           F5 MS         0%
II         giant         7%
           A5 MS         9%
           F5 MS         1%
III        giant         41%
           A5 MS         52%
           F5 MS         66%
Improving Planet Detection Efficiency
=====================================
Considering that planetary perturbations are, in many cases, missed because of their short durations, a significant improvement in the planet detection efficiency is expected with increased monitoring frequency. One way to achieve this is to use more telescope time or to employ more telescopes (‘strategy II’). The other way is to conduct follow-up observations for events detected in their early phase by the survey experiment (‘strategy III’). In this section, we estimate the efficiencies expected under these improved observational strategies. We designate the observational condition of the current pixel-lensing survey (with $f=5\ {\rm times}/{\rm day}$) as ‘strategy I’.
We simulate observations under strategy II by doubling the monitoring frequency of the current Angstrom experiment, i.e., $f=10\ {\rm times}/{\rm day}$. For strategy III, we assume survey-mode observations with $f=5\ {\rm times}/{\rm day}$ combined with follow-up observations using a single 8m-class telescope. Follow-up observations are assumed to begin 4 hours after the first pixel-lensing event signal is detected by the survey observations, with a monitoring frequency of $f=20\ {\rm times}/{\rm night}$. Since a single telescope is employed, follow-up observations can be carried out only during the night (8 hrs per day), allowing 24 min per combined image. Assuming 20 min per exposure (to allow $\sim 4$ min for readout), the photometric uncertainty of the follow-up observation is $\sigma_{\rm 8m}/\sigma_{\rm 2m}\sim (30\ {\rm min}/20\ {\rm min})^{1/2} (2\ {\rm m}/8\ {\rm m})\sim 31\%$ of that of the survey observation.
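This ratio can be verified with a one-line computation (a sketch assuming the photometric uncertainty scales as $t_{\rm exp}^{-1/2}D^{-1}$ with exposure time $t_{\rm exp}$ and aperture $D$):

```python
import math

# sigma ~ 1/(D * sqrt(t_exp)):
# follow-up (8 m, 20 min) relative to survey (2 m, 30 min)
ratio = math.sqrt(30.0 / 20.0) * (2.0 / 8.0)
print(f"sigma_8m / sigma_2m ~ {ratio:.2f}")  # ~0.31, i.e. about 31%
```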
In Figure \[fig:eight\], we present the efficiencies expected under the two improved observational strategies. In Table \[table1\], we also present the average efficiencies of detecting a planet with $q=5\times 10^{-3}$ located in the lensing zone under the three observational strategies. From the figure and table, we find that a significant improvement in efficiency is expected, especially from the adoption of the follow-up observation strategy. The improvement is more significant for events involving MS stars because the short-duration perturbations associated with these events become readily detectable with the increased monitoring frequency.
Conclusion
==========
We explored the feasibility of detecting planets in M31 from a high-frequency pixel-lensing survey using a global network of 2m-class telescopes. For this evaluation, we estimated the efficiency of detecting planetary signals for events induced by planetary systems with various planet/star mass ratios and star-planet separations, associated with source stars of various types, and occurring toward fields with various surface brightness. From the dependence of the detection efficiency on these parameters, we found that 3$\sigma$ detection of the signals produced by giant planets located in the lensing zone with masses equivalent to or heavier than that of Jupiter would be possible with detection efficiencies of $\sim 3\%$ for events associated with giant source stars. A dramatic improvement of the efficiency is expected if follow-up observations based on real-time survey observations become possible.
Work by C.H. was supported by the Astrophysical Research Center for the Structure and Evolution of the Cosmos (ARCSEC) of the Korea Science & Engineering Foundation (KOSEF) through the Science Research Program (SRC) program. B.-G.P. and Y.-B.J. acknowledge the support of the Korea Astronomy and Space Science Institute (KASI). E.J.K. was supported by an Advanced Fellowship from the UK Particle Physics and Astronomy Research Council (PPARC). M.J.D. and J.P.D. were supported, respectively, by a PPARC post-doctoral research assistantship and PhD studentship. A.G. was supported in part by grant AST 042758 from the NSF. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the NSF.
---
abstract: 'Color laser printers have fast printing speed and high resolution, and forgeries made with color laser printers can cause significant harm to society. A source printer identification technique can be employed as a countermeasure to such forgeries. This paper presents a color laser printer identification method based on cascaded learning of deep neural networks. The refiner network is trained by adversarial training to refine the synthetic dataset for halftone color decomposition. The halftone color decomposing ConvNet is trained with the refined dataset, and the trained knowledge is transferred to the printer identifying ConvNet to enhance the accuracy. Robustness against rotation and scaling, which is not considered in existing methods, is taken into account in the training process. Experiments are performed on eight color laser printers, and the performance is compared with that of several existing methods. The experimental results clearly show that the proposed method outperforms existing source color laser printer identification methods.'
address:
- '$^a$Graduate School of Information Security, KAIST, $^b$School of Computing, KAIST'
- 'Korea Advanced Institute of Science and Technology, Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea'
author:
- 'Do-Guk Kim$^a$'
- 'Jong-Uk Hou$^b$'
- 'Heung-Kyu Lee$^{b,}$'
title: |
Learning deep features for source color laser printer identification\
based on cascaded learning
---
Generative adversarial network, Convolutional neural network, Color laser printer, Source printer identification, Mobile camera
Introduction
============
The development of color laser printing has made printing much easier than before. However, powerful printing devices are often abused to make forged documents. Because of the high resolution of modern color laser printers, it is hard for ordinary people to distinguish forged documents from genuine ones. Moreover, a color laser printer has a fast printing speed, so such forgeries can be produced in large quantities. Banknote and document forgeries on a large scale can cause serious harm to society.
To prevent those forgeries, researchers have introduced various source laser printer identification methods. Source laser printer identification is a multimedia forensic technique that can be employed as a countermeasure to forgeries made by laser printers. Counterfeiters use a laser printer different from the source printer of the genuine documents. Even if the model of their printer is the same as that of the source printer of the genuine documents, every printing device is uniquely different from all other printers. Source laser printer identification techniques identify the exact source printing device of the target printed material. It can be utilized to help forgery investigation, and it can also be used as a part of a forgery detection system.
The typical process of source laser printer identification techniques is shown in Figure \[PrinterIdentify\]. First, features are extracted from a scanned or photographed image of the target printed material. In this process, various image processing techniques are used to extract features from the image. Then, the features are used to classify the source printer of the target printed material. In the classification process, machine learning techniques are used, and reference features extracted from the known printed materials are used to train the classifier.
![ Source laser printer identification process []{data-label="PrinterIdentify"}](PrinterIdentify.pdf){width="9cm"}
The majority of existing source laser printer identification techniques use scanned images as inputs. Scanning is a suitable method for acquiring document images in a forensic investigation. However, these techniques are difficult to distribute to the public because most scanners are not portable. Mobile cameras, in contrast, are widespread owing to the high penetration rate of smartphones. Therefore, to prevent document forgery effectively using source printer identification techniques, it is essential to support photographed images as inputs.
There are several existing methods that use photographed images as inputs. However, they have low applicability in two respects. First, their identification accuracy is too low for real forensic applications. This is mainly caused by the difficulty of halftone color channel decomposition: CMYK toners are used in the printing process, while the digital image of the printed material is photographed with RGB color channels. Existing methods use a CMY or CMYK color domain converted with a pre-defined color profile, but the CMYK toner patterns are not decomposed clearly in the converted domain. Thus, clear decomposition of the CMYK toner patterns is needed to improve identification accuracy. Second, robustness against scaling and rotation is not considered in existing methods. Unlike scanning, it is difficult to keep the photographing distance and angle identical in a photographing environment. Therefore, achieving robustness against scaling and rotation is necessary for printer identification with photographed images.
In this paper, we present a method based on cascaded learning of Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs) to identify the source color laser printer of photographed color documents. The proposed method is divided into two components: improved halftone color channel decomposition based on a GAN-alike framework, and printer identification based on a CNN. The GAN framework was used to generate training data for halftone color channel decomposition. Since there are no labels for the decomposed toner channels of photographed color images, we utilized the Simulated+Unsupervised (S+U) learning introduced in [@SimGan]. After that, a CNN was trained to decompose the CMYK halftone color channels of an input RGB image, and halftone color channel decomposition was carried out by the trained CNN. Our color decomposition method far exceeds the performance of the existing color decomposition method, which is based on pre-defined color profiles.
Weights of the Halftone Color Decomposing-CNN (HCD-CNN) were used to initialize the Printer Identifying-CNN (PI-CNN), transferring the knowledge about decomposing the color channels of photographed color documents to the PI-CNN. The PI-CNN was then trained in two phases: the first phase used the original input images, and the second phase trained for robustness against scaling and rotation. As a result of this two-step training, the PI-CNN not only showed high identification accuracy but also achieved robustness against scaling and rotation values that were not included in the training. We performed experiments to verify the performance of the proposed method, comparing it with several previous methods: Kim’s methods ([@Kim1] and [@Kim2]), Tsai’s method [@Tsai1], and Ryu’s method [@Ryu]. Thus, a total of five methods were used to compare the performance. For each method, the same printed materials were used in training and testing. The experimental results showed that the proposed method overcomes the limitations of the existing methods.
Major contributions of this paper are:
- We propose an improved halftone color channel decomposition method based on a GAN-alike framework using S+U learning.
- We propose a color laser printer identification method based on a CNN which shows the state-of-the-art performance.
- We achieve robustness against rotation and scaling, which is not considered in existing methods.
The rest of the paper is organized as follows. Section 2 describes some background on source color laser printer identification for photographed color documents. The proposed method is described in Section 3. In Section 4, experimental results are reported. Section 5 gives our conclusions and presents further research issues.
Background {#sec:2}
==========
Related work
------------
### Source laser printer identification methods
Source laser printer identification techniques can be classified into two categories: methods for text documents and methods for color documents. After Mikkilineni et al. [@Mik1] first suggested a source laser printer identification method using the banding frequency of the source printer, researchers proposed various methods that identify the source laser printer of input text [@Mik2]-[@Ferreira2]. Most of them [@Mik2][@Deng][@Zhou][@Ferreira1] are based on analyzing the texture of printed letters, using features related to the Gray-Level Co-occurrence Matrix (GLCM). Bulan et al. [@Bulan] introduced a method based on the geometric distortion of the printed halftone dots; geometric distortion features were extracted by estimating an ideal halftone dot position and subtracting it from the real halftone pattern. Recently, Ferreira et al. [@Ferreira2] proposed a data-driven source laser printer identification method that uses a CNN as the classifier. It showed a high accuracy rate on noisy text image data and outperformed other existing methods for text documents.
Unlike the methods for text documents, source color laser printer identification extracts features from color laser printed images. Choi et al. [@Choi1] proposed a source color laser printer identification method using noise features extracted from the HH band of the wavelet domain. Thirty-nine statistical features were extracted, and a support vector machine (SVM) was used as the classifier. They expanded their work into a method using the GLCM of the color noise [@Choi2]: statistical features were extracted from the GLCM of the color noise and used to train and test the SVM.
Ryu et al. [@Ryu] suggested a method using a halftone printing angle histogram as the feature vector. The histogram was extracted using the Hough transform in the CMY domain, and the source printer was identified by correlation-based detection. While Choi’s method [@Choi1] used only the HH band of the wavelet domain, Tsai et al. [@Tsai1] introduced an identification method that utilizes noise features from all bands of the wavelet domain. They later expanded this work into a hybrid identification method using both noise features from color images and GLCM features from monochrome characters [@Tsai2], adopting feature selection algorithms to find the best feature set. SVM-based classification was then used to identify the source printer.
While the printer identification methods mentioned above use scanned images, Kim and Lee [@Kim1] presented a method that uses images photographed with a mobile device as input. Kim’s method can identify a source color laser printer from photographed images; however, it requires an additional close-up lens to acquire useful input images. Therefore, they subsequently suggested a method using a halftone texture fingerprint extracted from photographed images [@Kim2], which does not require an additional close-up lens.
### Simulated+Unsupervised learning
The GAN framework is composed of two competing networks: a generator generates a synthetic image similar to a real image, whereas a discriminator tries to classify whether its input is real or not. Goodfellow et al. [@Gan] first introduced the GAN framework, and many improvements and applications (such as EBGAN [@ebgan], BEGAN [@began], DCGAN [@dcgan], and SRGAN [@srgan]) have since been presented.
Shrivastava et al. [@SimGan] suggested S+U learning based on the SimGAN framework. S+U learning means training a model to improve the realism of simulated synthetic images using unlabeled real data while preserving the annotation information of the synthetic images. SimGAN is composed of a refiner network and a discriminator network. The refiner is similar to the generator of the traditional GAN framework, and simulated synthetic images are refined using the trained refiner network. In the training process, the refiner and the discriminator compete to reduce their respective losses. After the losses stabilize, the trained refiner can refine a synthetic image to be similar to a real image while preserving its annotation information.
Halftone color channel decomposition
------------------------------------
The color laser printing process and the importance of halftone color channel decomposition are described in Figure \[PrintProcess\]. CMYK toners are printed by the rolling of the Optical PhotoConductor (OPC) drum in the toner cartridge, and OPC drum fingerprints are imprinted in the printed toner pattern. An OPC drum fingerprint is the unique halftone pattern printed by the OPC drum. Each OPC drum has a unique halftone pattern because even OPC drums of the same cartridge model differ in geometric distortion and noise. Therefore, the unique halftone pattern of the OPC drum, or features extracted from it, can be used as a fingerprint of the OPC drum.
Since a digital image is photographed or scanned in the RGB color domain, the CMYK toners are mixed and present in all RGB channels, as shown in Figure \[PrintProcess\]. If halftone color channel decomposition worked perfectly, we could extract OPC drum fingerprints from the decomposed CMYK channels. However, existing color domain conversion methods cannot clearly decompose each CMYK toner pattern into a separate channel, mainly because of the image processing performed inside the digital camera or scanner.
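For illustration, a commonly used profile-free RGB-to-CMYK conversion is sketched below (an illustrative formula, not the paper's code; values are normalized to $[0,1]$). Fixed transforms of this kind are exactly what fails to separate overlapping toner dots cleanly:

```python
def rgb_to_cmyk(r, g, b):
    """Naive, profile-free RGB -> CMYK conversion (all values in [0, 1]).

    Fixed transforms like this cannot model the camera/scanner processing,
    which is why overlapping halftone toner dots do not separate cleanly.
    """
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)  # undercolor removal: pull the common part into black
    if k >= 1.0:
        return 0.0, 0.0, 0.0, 1.0  # pure black
    return (c - k) / (1.0 - k), (m - k) / (1.0 - k), (y - k) / (1.0 - k), k
```

For example, a pure red pixel maps to $(C,M,Y,K)=(0,1,1,0)$, regardless of how the camera actually rendered the overlapping magenta and yellow dots.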
One possible way to decompose the CMYK toner patterns is a machine-learning-based method. If we had photographed color halftone images whose CMYK toner patterns were known, we could use supervised learning. However, the CMYK toner pattern used for printing is hard to acquire because it is processed internally in the printer driver or firmware. On the other hand, generating synthetic color halftone images is easy in photo editing software such as Photoshop. Since we then have synthetic images with annotation information and unlabeled real images, we can adopt S+U learning to create datasets for supervised learning. Therefore, we adopted S+U learning to create datasets for halftone color channel decomposition.
Differences between photographing environment and scanning environment
----------------------------------------------------------------------
Most existing source color laser printer identification techniques use scanned images as input. These methods cannot identify the source printer of input images photographed with mobile devices because of the differences between the scanning and photographing environments. While the intensity of the illumination is uniform in scanned images, it is not in photographed images. Moreover, blurring caused by defocusing can occur in photographed images, whereas it does not with scanned images.
Comparison results of various image acquisition methods are shown in Figure \[ImageAcquisition\]. Figure \[HR\] is an image scanned at 1200 dpi, Figure \[LR\] is an image scanned at 400 dpi, Figure \[Closeup\] is an image photographed with a mobile device equipped with an additional close-up lens, and Figure \[Normal\] is an image photographed with a normal mobile device. All images were acquired from the same printed image. Halftone dots appear clearly in high resolution images (Figure \[HR\] and Figure \[Closeup\]) while they do not in low resolution images (Figure \[LR\] and Figure \[Normal\]). The illumination is uniform in the scanned images (Figure \[HR\] and Figure \[LR\]) while it is not in the photographed images (Figure \[Closeup\] and Figure \[Normal\]).
Source color laser printer identification techniques that use scanned images as input could not identify the source printer of photographed input images, whether close-up or normally photographed. To overcome this limitation, Kim and Lee [@Kim1] presented a method that uses halftone texture features of photographed images. They extracted texture features from close-up photographed halftone images and used them to train and test an SVM. Adaptive thresholding was adopted to extract halftone patterns under the non-uniform illumination of the photographed images.
In [@Kim1], they used three sorts of halftone texture features: printing angle, printing resolution, and detail texture. However, it is difficult to extract the detail texture feature from normally photographed images. The method presented in [@Kim1] was not able to analyze normally photographed halftone images taken from mobile devices. Therefore, they extracted the halftone texture fingerprint from normally photographed images [@Kim2], and they used it in source color laser printer identification. Halftone texture fingerprints were extracted in the discrete curvelet transform domain. Noise components were removed in the extraction process, and the extracted halftone texture fingerprint included printing angle features and printing resolution features.
Although the halftone texture fingerprint can be used in color laser printer identification, it has an inherent limitation in detection accuracy. Without detail texture features, it is hard to distinguish color printers made by the same manufacturer, since their printing angles are similar. Therefore, we adopted a CNN to identify the source color laser printer of normally photographed images. All three sorts of halftone texture features are considered in the feature learning process of the CNN. As a result, the CNN was able to identify source color laser printers that had similar halftone patterns by analyzing these features.
![Overall process of the proposed method []{data-label="Overall"}](overall.pdf){width="9cm"}
Printer identification method
=============================
Cascaded learning framework
---------------------------
As described in Section 2.2, the OPC drum fingerprint is a key feature for source color laser printer identification. However, it is difficult to extract the OPC drum fingerprint using existing color decomposition methods. Although machine learning can be used for color decomposition, it is hard to acquire labeled datasets. To resolve these issues, we propose a cascaded learning framework for source color laser printer identification.
The overall process of the proposed method is described in Figure \[Overall\]. The first step is HCD-CNN training. We generate the dataset by S+U learning to resolve the difficulty of acquiring labeled data: the refiner is trained in the GAN framework, and the dataset is generated by refining synthetic images. The HCD-CNN is then trained with this dataset to resolve the difficulty of color decomposition; it decomposes the CMYK color channels of a given input RGB image. The trained weights of the HCD-CNN are used to initialize the weights of the PI-CNN. After that, the PI-CNN is trained with photographed color image blocks. The PI-CNN is designed to decompose halftone components, extract features, and classify in a single network; its detailed framework is described in Section 3.3. Finally, printer identification is carried out with the trained PI-CNN.
Halftone color decomposing-CNN
------------------------------
### Halftone image refiner training
The halftone image refiner takes a synthesized halftone image as input and refines it to be similar to real photographed halftone images. The refined image should have an appearance similar to a real image while preserving the CMYK color channel components of the original synthesized image. The objective of halftone image refiner training is minimizing the following loss: $$\mathcal{L}_R(\theta)=\sum_{i}\ell_{real}(\theta;x_i,Y)+\lambda\ell_{reg}(\theta;x_i),$$ where $x_i$ is the $i^{th}$ synthetic image, $Y$ is the real image set, $\theta$ denotes the parameters of the refiner, and $\lambda$ is a scaling factor; $\lambda=10^{-5}$ is used in the proposed method. The realism loss $\ell_{real}$ penalizes a lack of realism in the refined output, and the self-regularization loss $\ell_{reg}$ penalizes a loss of the annotation information about the CMYK color channel components.
Following the S+U learning presented in [@SimGan], the refiner $R_\theta$ is trained alternately with the discriminator $D_\phi$, where $\phi$ are the parameters of the discriminator network. The objective of the discriminator network is minimizing the loss: $$\begin{split}
\mathcal{L}_D(\phi)=-\sum_it_{Fake}\cdot\log(f(D_{\phi}(R_\theta(x_i))))\\-\sum_jt_{Real}\cdot\log(f(D_{\phi}(y_j))),
\end{split}$$ where $t$ denotes the one-hot label for the fake or real class, and $f$ denotes the softmax function. This loss is the cross-entropy error for a two-class classification whose label is expressed as a one-hot vector of size 2. The discriminator $D_\phi$ is implemented as a CNN with two output neurons representing whether the input is fake or real. The architecture of the discriminator is: (1) Conv3$\times$3, stride=1, feature maps=64, (2) Conv3$\times$3, stride=2, feature maps=64, (3) Conv3$\times$3, stride=1, feature maps=128, (4) Conv3$\times$3, stride=2, feature maps=128, (5) Conv3$\times$3, stride=1, feature maps=256, (6) Conv3$\times$3, stride=2, feature maps=256, (7) FC2. The input is a 64$\times$64 RGB image. The Leaky-ReLU is used as the non-linearity function.
Regarding the refiner, we define the losses $\ell_{real}$ and $\ell_{reg}$ as follows: $$\ell_{real}(\theta;x_i,Y)=-t_{Real}\cdot\log(f(D_{\phi}(R_\theta(x_i)))),$$ $$\ell_{reg}(\theta;x_i)=\parallel R_\theta(x_i)-x_i\parallel_2,$$ where $\parallel\cdot\parallel_2$ denotes the L2 norm. The refiner is trained to refine the input image realistically by minimizing the $\ell_{real}$ loss, while the $\ell_{reg}$ term is needed to preserve the annotation information of the synthetic images. The refiner $R_\theta$ is implemented as a fully convolutional neural net (FCN). We adopted the refined-image history buffer suggested in [@SimGan] to stabilize the refiner training. The architecture of the refiner is: (1) Conv3$\times$3, stride=1, feature maps=64, (2) Conv3$\times$3, stride=1, feature maps=64, (3) Conv3$\times$3, stride=1, feature maps=64, (4) Conv3$\times$3, stride=2, feature maps=64, (5) Conv3$\times$3, stride=1, feature maps=64, (6) Conv3$\times$3, stride=2, feature maps=64, (7) Conv3$\times$3, stride=2, feature maps=16, (8) Conv3$\times$3, stride=2, feature maps=4. The input is a 64$\times$64 RGB image. The ReLU is used as the non-linearity function, except for the last layer, which uses Tanh. The detailed refiner training process is described in Algorithm 1.
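The discriminator and refiner loss terms above can be checked numerically with a small sketch (pure Python with illustrative names, not the paper's code; treating index 1 of the two-class output as the "Real" class is our assumption):

```python
import math

LAMBDA = 1e-5  # scaling factor lambda from the refiner loss

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def realism_loss(disc_logits):
    # l_real: -log of the probability the discriminator assigns to "Real";
    # index 1 as the "Real" class is our assumption.
    return -math.log(softmax(disc_logits)[1])

def self_reg_loss(refined, synthetic):
    # l_reg: L2 distance between refined and synthetic images (flattened
    # lists), which preserves the CMYK annotation of the synthetic input.
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(refined, synthetic)))

def refiner_loss(disc_logits, refined, synthetic):
    # One summand of the total refiner loss L_R for a single image x_i.
    return realism_loss(disc_logits) + LAMBDA * self_reg_loss(refined, synthetic)
```

An undecided discriminator (equal logits) gives a realism loss of $\log 2$, and the tiny $\lambda$ keeps the self-regularization term from dominating the update.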
Algorithm 1: Refiner training.

Input: image buffer $B$, mini-batch size $b$, sets of synthetic images $x_i \in X$ and real images $y_j \in Y$, max iteration number $T$.
Output: FCN model $R_\theta$.

1. Set $B$ as an empty buffer.
2. For each of the $T$ iterations:
    1. Sample a mini-batch of synthetic images $x_i$.
    2. If $B$ is not full, append $R_\theta(x_i)$ to $B$; otherwise, replace $b/2$ images in $B$ with $R_\theta(x_i)$.
    3. Update $\theta$ by taking an SGD step on $\mathcal{L}_R(\theta)$ calculated from the mini-batch.
    4. Sample a mini-batch of $x_i$ and $y_j$.
    5. Sample $b/2$ images from $B$ and $b/2$ images $R_\theta(x_i)$ with the current $\theta$, and merge them to create the refined input of the discriminator.
    6. Update $\phi$ by taking an SGD step on $\mathcal{L}_D(\phi)$ calculated from the mini-batch.
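The refined-image history buffer used in Algorithm 1 can be sketched as follows (a minimal illustration following the idea in [@SimGan]; the exact replacement policy is our assumption):

```python
import random

class HistoryBuffer:
    """Keeps past refiner outputs; once full, half of each discriminator
    fake batch comes from history and half is freshly refined."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.images = []

    def push_and_sample(self, refined_batch):
        b = len(refined_batch)
        if len(self.images) < self.capacity:
            self.images.extend(refined_batch)       # append R_theta(x_i) to B
            return list(refined_batch)
        half = b // 2
        idx = random.sample(range(len(self.images)), half)
        history_half = [self.images[i] for i in idx]
        for i, img in zip(idx, refined_batch[:half]):
            self.images[i] = img                    # replace b/2 images in B
        return refined_batch[half:] + history_half
```

Feeding the discriminator partly from history prevents it from overfitting to the most recent refiner outputs, which is what stabilizes the adversarial training.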
### Halftone color decomposing-CNN training
After the refiner training is completed, we refine all synthetic images to prepare the dataset for HCD-CNN training. The HCD-CNN is then trained with the refined dataset. The objective of the HCD-CNN is to decompose the CMYK toner information that is mixed in the RGB color domain. The trained HCD-CNN is used for transferring the knowledge about halftone color decomposition to the PI-CNN.
The architecture of the HCD-CNN is: (1) Conv3$\times$3, stride=1, feature maps=64, (2) Conv3$\times$3, stride=1, feature maps=64, (3) Conv3$\times$3, stride=1, feature maps=64, (4) Conv3$\times$3, stride=1, feature maps=64, (5) Conv3$\times$3, stride=1, feature maps=64, (6) Conv3$\times$3, stride=1, feature maps=64, (7) Conv3$\times$3, stride=1, feature maps=64, (8) Conv3$\times$3, stride=1, feature maps=4, (9) Euclidean loss. The input is a 64$\times$64 RGB image. The ReLU is used as the non-linearity function, and batch normalization is applied to every convolutional layer. Each feature map of the output corresponds to one of the decomposed C, M, Y, and K color channels, respectively.
![Parameter transferring between the HCD-CNN and the PI-CNN []{data-label="Transfer"}](Transfer.pdf){width="9cm"}
![PI-CNN training process []{data-label="Robust"}](Robust.pdf){width="8.5cm"}
Printer identifying-CNN
-----------------------
The objective of the PI-CNN is to identify the source color laser printer of an input RGB image block. To utilize the knowledge about halftone color decomposition in the HCD-CNN, part of the weights of the PI-CNN are initialized with the trained weights of the HCD-CNN, as presented in Figure \[Transfer\]. As shown in Figure \[Transfer\], the PI-CNN is composed of three parts that carry out halftone component decomposition, feature extraction, and classification, respectively. The halftone component decomposition part is initialized with the weights of the HCD-CNN, and the other parts are initialized with Xavier initialization [@xavier]. Then, all layers of the PI-CNN are trained in the gradient descent process.
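This initialization scheme can be sketched as follows (an illustrative helper, not the paper's code; the layer count and shapes are stand-ins):

```python
import math
import random

def xavier_uniform(fan_in, fan_out, rng=random):
    """Xavier (Glorot) uniform initialization: draw weights from
    U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out))."""
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

# Transfer: the halftone-decomposition layers of the PI-CNN reuse the
# trained HCD-CNN weights; remaining layers are freshly Xavier-initialized.
hcd_weights = [xavier_uniform(9, 9) for _ in range(8)]  # stand-in for trained weights
pi_decomposition_part = hcd_weights                     # copied, then fine-tuned
```

Reusing the HCD-CNN weights gives the PI-CNN a decomposition-aware starting point, while Xavier initialization keeps the activations of the fresh layers well-scaled at the start of training.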
The training process is composed of two phases, as shown in Figure \[Robust\], to achieve robustness to input scaling and rotation. In the first phase, the PI-CNN is trained with photographed color document inputs $I_n$, which are neither scaled nor rotated. Next, the weights of the trained PI-CNN are transferred and fine-tuned to achieve robustness to scaling and rotation. Each input image $I_{(n,s,\theta)}$ has a scaling factor $s$ and a rotation factor $\theta$ that are randomly selected as $s\in\{0.8,1.0,1.2\},\theta\in\{-10^\circ,0^\circ,+10^\circ\}$.
The architecture of the PI-CNN used in the proposed method is as follows: (1) Conv3$\times$3, stride=1, feature maps=64, (2) Conv3$\times$3, stride=1, feature maps=64, (3) Conv3$\times$3, stride=1, feature maps=64, (4) Conv3$\times$3, stride=1, feature maps=64, (5) Conv3$\times$3, stride=1, feature maps=64, (6) Conv3$\times$3, stride=1, feature maps=64, (7) Conv3$\times$3, stride=1, feature maps=64, (8) Conv3$\times$3, stride=1, feature maps=64, (9) Conv3$\times$3, stride=1, feature maps=64, (10) MaxPool2$\times$2, stride=2, (11) Conv3$\times$3, stride=1, feature maps=128, (12) Conv3$\times$3, stride=1, feature maps=128, (13) MaxPool2$\times$2, stride=2, (14) Conv3$\times$3, stride=1, feature maps=256, (15) Conv3$\times$3, stride=1, feature maps=256, (16) MaxPool2$\times$2, stride=2, (17) FC4096, (18) FC4096, (19) FC8, (20) Softmax. The input is a 64$\times$64 RGB image, and the output is a vector of eight neuron values, matching the number of source printers used in the experiment. If the number of candidate source printers changes, the number of neurons in the last fully-connected layer can be changed accordingly. ReLU is used as the non-linearity, and batch normalization is applied to every convolutional layer.
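As a sanity check on this layer listing, the feature-map size can be traced through the network. Assuming 'same' padding (our reading; the padding scheme is not stated explicitly), the 3$\times$3 stride-1 convolutions preserve height and width, so only the three max-pools change the spatial size:

```python
def trace_spatial_size(input_hw=64, num_pools=3):
    """Trace the feature-map height/width through the PI-CNN: with
    'same'-padded 3x3 stride-1 convolutions H and W are unchanged,
    and each 2x2 stride-2 max-pool halves them."""
    sizes = [input_hw]
    hw = input_hw
    for _ in range(num_pools):
        hw //= 2
        sizes.append(hw)
    return sizes

# 64 -> 32 -> 16 -> 8, so the flattened input to FC4096 is 8*8*256 = 16384
```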
![Example images used in experiments []{data-label="ExpImgs"}](ExpImgs.pdf){width="8.5cm"}
Source color laser printer identification
-----------------------------------------
The trained PI-CNN takes image blocks as inputs. A photographed color document image is much bigger than the PI-CNN input size. Therefore, we divide the image into blocks and merge the softmax results obtained by feed-forwarding every block through the PI-CNN. The source color laser printer is identified based on the average of the softmax outputs as in the following equation:
$$p_s = \operatorname*{argmax}_{i=1\sim n} m^{-1}\sum^{m}_{j=1}f_j(z_i),$$
where $p_s$ denotes the source printer, $n$ is the number of candidate source printers, $m$ is the number of input blocks, and $f_j(z_i)$ denotes the softmax output for the $i$-th candidate printer when the input is the $j$-th block.
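A minimal sketch of this decision rule (the function name is ours):

```python
import numpy as np

def identify_printer(block_softmax):
    """block_softmax: (m, n) array of softmax outputs, one row per image
    block, one column per candidate printer.  Returns the index of the
    printer whose softmax output, averaged over all blocks, is highest."""
    mean_scores = block_softmax.mean(axis=0)   # (1/m) * sum_j f_j(z_i)
    return int(np.argmax(mean_scores))
```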
{width="12cm"}
{width="10cm"}
Experimental results and discussion
===================================
Experimental environment
------------------------
### Refiner and HCD-CNN
Example images used in our experiments are shown in Figure \[ExpImgs\]. The same images were used in all steps of the proposed method. In the refiner training, images halftoned with Adobe Photoshop CS 6 were used as synthetic images. The ratio between halftone dot size and image size was set equal to the ratio between toner dot size and printed image size. The images printed from eight color laser printers and photographed with a Galaxy Note 3 (Samsung) smartphone were used as unlabeled real images. The source color laser printers are listed in Table \[printers\]. The same images were also used for printer identification.
In all neural-network training processes, we used the Adam optimizer. The refiner was trained with a 10$^{-5}$ learning rate and a batch size of 32. The HCD-CNN was trained with a 10$^{-4}$ learning rate and a batch size of 32. Training was stopped when the validation loss converged. All neural networks were implemented using the TensorFlow [@tf] library.
**Label** **Brand** **Model**
----------- ---------------- -------------------------
H1 HP HP 4650
H2 HP HP CM3530
X1 Xerox 700 Digital Color Press
X2 Xerox Docu Centre C450
X3 Xerox Docu Centre C6500
K1 Konica Minolta Bizhub Press C280
K2 Konica Minolta Bizhub Press C280
K3 Konica Minolta Bizhub Press C8000
: A list of printers used in experiments
\[printers\]
### Printer identification
A total of eight printers from three brands, listed in Table \[printers\], were used for the experiments. The proposed method, Kim's method using halftone texture fingerprints [@Kim2], Kim's method using close-up photography [@Kim1], Tsai's method [@Tsai1], and Ryu's method [@Ryu] were compared in the experiment. As Tsai's hybrid method [@Tsai2] used not only printed images but also printed texts, it was excluded from the comparison. Choi's methods [@Choi1][@Choi2] were also excluded because Tsai's method [@Tsai1] was based on Choi's methods and achieved slightly better performance.
The images used in the refiner and HCD-CNN training were also used for these experiments. The size of the original images photographed with the smartphone was 2322$\times$4128. The photographing distance and angle were kept equal for all input images. We cropped the original images, and a total of 768 images from each color laser printer were used for the experiment. The size of the input images was 512$\times$512. 2-fold cross-validation was adopted for the test of the existing methods. In the experiment of the proposed method, 49,152 image blocks for each color laser printer (extracted from the same input images used for the existing methods) were used for training. 2-fold cross-validation was also adopted for the test of the proposed method, in a modified format: to use early stopping, we divided the validation set of the cross-validation into two sets, and each set was used alternately as validation set and test set. Thus, there were four test results for the proposed method. Data augmentation was not used in the experiment. The PI-CNN was trained with a 2$\times10^{-5}$ learning rate and a batch size of 32. The training was stopped when the validation accuracy had not increased for ten epochs.
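The modified 2-fold cross-validation described above can be sketched as follows (function and variable names are ours):

```python
def modified_two_fold_splits(fold_a, fold_b):
    """Modified 2-fold cross-validation: train on one fold, split the
    held-out fold into two halves used alternately as validation and
    test sets -> four (train, validation, test) runs in total."""
    runs = []
    for train, held_out in ((fold_a, fold_b), (fold_b, fold_a)):
        half = len(held_out) // 2
        first, second = held_out[:half], held_out[half:]
        runs.append((train, first, second))   # first half validates
        runs.append((train, second, first))   # halves swapped
    return runs
```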
  **Channel**   **PSNR, HCD-CNN**   **PSNR, Profile**   **SSIM, HCD-CNN**   **SSIM, Profile**
  ------------- ------------------- ------------------- ------------------- -------------------
  Cyan          **17.8953**         5.7045              **0.8412**          0.0093
  Magenta       **17.8108**         5.8335              **0.8172**          0.0094
  Yellow        **18.5375**         6.6622              **0.7537**          0.0245
  Black         26.5491             **29.5980**         **0.7725**          0.5192

  : Halftone color decomposition performance (bold marks the better result)

\[hcd\_table\]
![Halftone color decomposition of refined image []{data-label="HCD_refine"}](HCD_CNN_refine.pdf){width="8.8cm"}
Refiner training results
------------------------
Figure \[Refined\] shows examples of synthetic, real, and refined color halftone images. As shown in the example images, synthetic images show clear halftone dots, while real photographed halftone images show blurred halftone dots and mixed patterns. Refined images show blurred halftone dots and patterns similar to those of real images. The trained refiner successfully refines synthetic images to look real while preserving annotation information.
Example outputs of the refiner at various training epochs are shown in Figure \[RefineEpoch\]. The refiner produced unrealistic artifacts when it had been trained for just one epoch. During training, the refiner learned to make synthetic images look real, and it produced realistic images after ten epochs of training. We used the refiner trained for 18 epochs, since the validation loss had converged by that point.
![Halftone color decomposition of real image []{data-label="HCD_real"}](HCD_CNN_real.pdf){width="8.8cm"}
HCD-CNN training results
------------------------
Performance comparison results between the HCD-CNN and the existing halftone color decomposition using a pre-defined color profile are presented in Figure \[HCD\_refine\] and Table \[hcd\_table\]. Figure \[HCD\_refine\] (a) shows the original CMYK channels of a refined halftone image not used for training, Figure \[HCD\_refine\] (b) shows the CMYK channels decomposed by the HCD-CNN, and Figure \[HCD\_refine\] (c) shows the CMYK channels decomposed by the existing method using a color profile; the U.S. Sheetfed Coated profile was used to convert the color domain. Table \[hcd\_table\] presents the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) between the decomposed color channel and the original color channel for 12,288 refined halftone images not used for training. Higher PSNR and SSIM mean that the decomposed color channel is more similar to the original pattern.
As shown in Figure \[HCD\_refine\], the HCD-CNN decomposed each of the CMYK channels from blurred and mixed halftone patterns, whereas the existing method could not separate mixed halftone dots, so its decomposed channels are blurred. The HCD-CNN shows better performance in all measures of Figure \[HCD\_refine\]. In Table \[hcd\_table\], all measures of the HCD-CNN are better than those of the existing method except for the PSNR of the black channel. Notably, the performance gap in the SSIM of the CMY channels is much larger than in the other measures.
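For reference, the PSNR reported in Table \[hcd\_table\] can be computed as below. This is the generic definition; the peak value of 255 for 8-bit channels is our assumption, and SSIM is omitted for brevity.

```python
import numpy as np

def psnr(original, decomposed, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between an original halftone
    color channel and its decomposed estimate."""
    err = original.astype(np.float64) - decomposed.astype(np.float64)
    mse = np.mean(err ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```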
![Printer identification results graph []{data-label="Result"}](Result.pdf){width="8.8cm"}
[ l |\*[9]{}[c|]{}]{} &\
& H1 & H2 & X1 & X2 & X3 & K1 & K2 & K3 & Avg.$\pm$ Std.Dev\
& 59.51 & 67.58 & 82.94 & 70.44 & 74.87 & 72.40 & 27.34 & 51.69 & 63.35$\pm$1.60\
& 52.08 & 39.32 & 55.60 & 44.92 & 58.33 & 20.05 & 23.57 & 33.33 & 40.90$\pm$1.42\
& 12.89 & 27.47 & 67.06 & 37.11 & 11.85 & 15.63 & 36.07 & 40.23 & 31.04$\pm$0.44\
& 31.38 & 22.27 & 47.66 & 22.66 & 28.78 & 20.05 & 41.54 & 22.40 & 29.59$\pm$1.50\
& **95.67** & **99.64** & **96.29** & **94.40** & **96.94** & **96.19** & **95.61** & **93.95** & **96.09$\pm$2.37**\
\[result\_table\]
In Figure \[HCD\_real\], the results of halftone color decomposition of a real image are shown. In comparison with the existing method, the HCD-CNN recovered the halftone dots of each CMYK channel, while the existing method decomposed all CMYK channels into the same pattern with different local intensities. The knowledge about decomposing halftone color channels was transferred from the HCD-CNN to the PI-CNN, and the PI-CNN showed overwhelming source printer identification accuracy compared to the existing methods, which rely on color-profile-based halftone color decomposition.
PI-CNN training results
-----------------------
The validation accuracy during the PI-CNN training is presented in Fig. \[Phase1\_2\], which shows one of the four cross-validation test results. The validation accuracy during training phase 1 and training phase 2 is described in Fig. \[Phase1\] and Fig. \[Phase2\], respectively. As shown in Fig. \[Phase1\], the validation accuracy did not increase for ten epochs after epoch 35. Therefore, the weights of the network at epoch 35 were used in training phase 2. In training phase 2, the weights of the network at epoch 45 were used for the test, based on the early-stopping rule. The average identification accuracy of the trained networks in the blockwise test was 88.37%, and the standard deviation was 1.25.
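The early-stopping rule used in both training phases (keep the best epoch's weights, stop once the validation accuracy has not increased for ten epochs) can be sketched as:

```python
def best_epoch_with_patience(val_accuracy, patience=10):
    """Return the index of the epoch whose weights would be kept:
    training stops once the validation accuracy has not increased
    for `patience` consecutive epochs."""
    best_epoch = 0
    best_acc = float("-inf")
    for epoch, acc in enumerate(val_accuracy):
        if acc > best_acc:
            best_epoch, best_acc = epoch, acc
        elif epoch - best_epoch >= patience:
            break        # no improvement for `patience` epochs
    return best_epoch
```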
Printer identification results
------------------------------
### Identification accuracy evaluation
The printer identification results are summarized in Fig. \[Result\] and Table \[result\_table\]. The average accuracy of the proposed method was 96.09%, the highest among all five tested methods. Kim's method using halftone fingerprints performed better than the other three existing methods, but its average accuracy was about 33% lower than that of the proposed method. It was hard for the other methods to identify the source color laser printers because the input images were photographed images, while those methods were designed to identify the source printer of close-up photographed images [@Kim1] or scanned images [@Ryu][@Tsai1].
[ l |\*[5]{}[c|]{}]{} &\
& $-$10 & $-$5 & 0 & $+$5 & $+$10\
& 16.63$\pm$0.65 & 28.71$\pm$0.62 & 63.35$\pm$1.60 & 31.22$\pm$0.36 & 14.55$\pm$0.23\
& 13.64$\pm$0.85 & 13.83$\pm$1.17 & 40.90$\pm$1.42 & 13.67$\pm$1.04 & 13.49$\pm$0.76\
& 17.92$\pm$1.97 & 18.60$\pm$1.17 & 31.04$\pm$0.44 & 21.74$\pm$0.65 & 22.04$\pm$1.14\
& 11.62$\pm$0.68 & 19.01$\pm$0.03 & 29.59$\pm$1.50 & 13.79$\pm$0.24 & 10.68$\pm$0.20\
& **95.00$\pm$2.49** & **93.01$\pm$2.98** & **96.09$\pm$2.37** & **92.80$\pm$2.97** & **94.46$\pm$2.47**\
\[Rotation\_table\]
[ l |\*[5]{}[c|]{}]{} &\
& 0.8 & 0.9 & 1.0 & 1.1 & 1.2\
& 11.85$\pm$0.52 & 10.69$\pm$0.86 & 63.35$\pm$1.60 & 16.65$\pm$1.64 & 14.29$\pm$0.36\
& 21.97$\pm$1.24 & 23.91$\pm$2.46 & 40.90$\pm$1.42 & 19.45$\pm$0.73 & 17.37$\pm$0.50\
& 15.92$\pm$0.00 & 22.69$\pm$0.13 & 31.04$\pm$0.44 & 30.58$\pm$0.67 & 26.20$\pm$1.07\
& 26.03$\pm$0.18 & 28.06$\pm$0.94 & 29.59$\pm$1.50 & 29.20$\pm$1.30 & 29.33$\pm$0.46\
& **95.81$\pm$2.51** & **89.44$\pm$3.93** & **96.09$\pm$2.37** & **92.98$\pm$2.13** & **95.25$\pm$2.71**\
\[Scaling\_table\]
The graphical confusion matrices of the identification accuracy evaluations of all the methods are presented in Fig. \[Confusion\]. As shown in Fig. \[Kim2Conf\] and Table \[result\_table\], Kim's method using halftone fingerprints had irregular identification accuracy. This was caused by similar halftone printing angles between H1 and K3, and between X3 and K2. Kim's method has a limitation in that it cannot identify source printers that have the same printing angles. The proposed method, in contrast, showed consistent identification accuracy for all tested source printers. This means that the proposed method can differentiate, with high reliability, source color laser printers that have similar halftone printing angles.
![Robustness evaluation results graph for rotation []{data-label="Rotation"}](Rotation.pdf){width="8.5cm"}
![Robustness evaluation results graph for scaling []{data-label="Scaling"}](Scaling.pdf){width="8.5cm"}
### Robustness evaluation
Robustness evaluation results are presented in Fig. \[Rotation\]-\[Scaling\] and Tables \[Rotation\_table\]-\[Scaling\_table\]. The proposed method showed stable identification accuracy in all tests, even though images rotated by $-5$ and $+5$ degrees and images scaled by factors of 0.9 and 1.1 were not included in the training set. This means the proposed method achieved robustness to rotation and scaling not only for the trained values but also for values within the trained intervals.
The other methods showed performance degradation for transformed input images. Severe degradation occurred in all comparison methods for rotated input images. In the scaling robustness test, Kim's method [@Kim1] and Ryu's method [@Ryu] showed stable identification accuracy because they identify the source printer mainly based on printing-angle features, and scaling does not affect the printing angle. The printing resolution feature is crucial for Kim's method [@Kim2] and Tsai's method [@Tsai1]; thus, they could not identify the source printer in scaled images.
Discussion
----------
We adopted a deep-learning-based approach to identify the source color laser printer, and a typical limitation of deep learning is that it requires a large dataset to train the network. In our experiment, only four pages of printed images were used for each source printer: 768 images were photographed from these printed pages, and 49,152 halftone image blocks were extracted from them. If the source printer or several pages of printed material are available, the proposed method can be applied. Therefore, the proposed method can be utilized in real forensic situations despite the requirement of a large dataset.
The main limitation of the proposed method is that it cannot determine whether the source color laser printer of the input image is one of the candidates. In a real identification case, the source color laser printer of the input image might be none of the candidates; however, in the proposed method, one of the trained source printers must be selected for any input image. To overcome this limitation, adding one more output neuron for other printers is possible. To train a network with an "other printers" class, data including images printed from various additional source printers will be necessary.
Conclusion
==========
In this paper, we proposed a source color laser printer identification method based on cascaded learning of neural networks. Firstly, the refiner is trained to refine synthetic halftone images. Next, the HCD-CNN is trained to decompose CMYK color channels of photographed color halftone images. Based on the knowledge of the HCD-CNN, the PI-CNN is trained to identify the source printer of the input. The trained PI-CNN was used in the identification process, and the source color laser printer was selected based on the result of the PI-CNN.
Our experimental results demonstrated that the proposed method overcame the limitations of the existing methods. The proposed method achieved a state-of-the-art performance for identifying the source color laser printer of photographed input images. Since input images were taken with a smartphone with no additional close-up lens, the proposed method can be utilized to identify the source printer in a mobile environment.
For future work, we will work on reducing the computation cost. The proposed method achieved high identification accuracy and robustness to rotation and scaling; however, its computational cost is too high to operate on mobile devices. Therefore, we will test various techniques that reduce the computation cost of deep neural networks and work on adapting them to our source printer identification framework.
Acknowledgement {#acknowledgement .unnumbered}
===============
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2016R1A2B2009595).
References {#references .unnumbered}
==========
A.K. Mikkilineni, G.N. Ali, P-J. Chiang, G.T. Chiu, J.P. Allebach, E.J. Delp, Signature embedding in printed documents for security and forensic applications. Proc. of the SPIE Int. Conf. on Security, Steganography, and Watermarking of Multimedia Contents, (2004) pp. 455-466.

A.K. Mikkilineni, P-J. Chiang, G.N. Ali, G.T. Chiu, J.P. Allebach, E.J. Delp, Printer identification based on graylevel co-occurrence features for security and forensic applications. Proc. of the SPIE Int. Conf. on Security, Steganography, and Watermarking of Multimedia Contents, (2005) pp. 430-440.

W. Deng, Q. Chen, F. Yuan, Y. Yan, Printer identification based on distance transform. Proc. of the ICINIS, (2008) pp. 565-568.

O. Bulan, J. Mao, G. Sharma, Geometric distortion signatures for printer identification. Proc. of the ICASSP, (2009) pp. 1401-1404.

Q. Zhou, Y. Yan, T. Fang, X. Luo, Q. Chen, Text-independent printer identification based on texture synthesis. Multimedia Tools and Applications, (2015) doi:10.1007/s11042-015-2525-5.

A. Ferreira, L.C. Navarro, G. Pinheiro, J.A. dos Santos, A. Rocha, Laser printer attribution: Exploring new features and beyond. Forensic Sci. Int., vol. 247, (2015) pp. 105-125.

A. Ferreira, L. Bondi, L. Baroffio, P. Bestagini, J. Huang, J.A. dos Santos, S. Tubaro, A. Rocha, Data-driven feature characterization techniques for laser printer attribution. IEEE Trans. on Information Forensics and Security, vol. 12, no. 8, (2017) pp. 1860-1873.

J.H. Choi, D.H. Im, H.Y. Lee, H.K. Lee, Color laser printer identification by analyzing statistical features on discrete wavelet transform. Proc. of the ICIP, (2009) pp. 1505-1508.

J.H. Choi, H.Y. Lee, H.K. Lee, Color laser printer forensic based on noisy feature and support vector machine classifier. Multimedia Tools and Applications, (2011) doi:10.1007/s11042-011-0835-9.

S.J. Ryu, H.Y. Lee, D.H. Im, J.H. Choi, H.K. Lee, Electrophotographic printer identification by halftone texture analysis. Proc. of the ICASSP, (2010) pp. 1846-1849.

M.J. Tsai, J. Liu, C.S. Wang, C.H. Chuang, Source color laser printer identification using discrete wavelet transform and feature selection algorithms. Proc. of the ISCAS, (2011) pp. 2633-2636.

M.J. Tsai, J. Liu, Digital forensics for printed source identification. Proc. of the ISCAS, (2013) pp. 2347-2350.

D.G. Kim, H.K. Lee, Color laser printer identification using photographed halftone images. Proc. of the EUSIPCO, (2014) pp. 795-799.

D.G. Kim, H.K. Lee, Colour laser printer identification using halftone texture fingerprint. Electronics Letters 51(13), (2015) pp. 981-983.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets. Proc. of NIPS, (2014).

A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, R. Webb, Learning from simulated and unsupervised images through adversarial training. arXiv preprint arXiv:1612.07828, (2017).

J. Zhao, M. Mathieu, Y. LeCun, Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, (2016).

D. Berthelot, T. Schumm, L. Metz, BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, (2017).

A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, (2015).

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, W. Shi, Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, (2016).

M. Abadi, et al., TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, (2016).

X. Glorot, Y. Bengio, Understanding the difficulty of training deep feedforward neural networks. Proc. of the Thirteenth International Conference on Artificial Intelligence and Statistics, (2010) pp. 249-256.
---
abstract: 'We study concentrated colloidal suspensions, a model system which has a glass transition. Samples in the glassy state show aging, in that the motion of the colloidal particles slows as the sample ages from an initial state. We study the relationship between the static structure and the slowing dynamics, using confocal microscopy to follow the three-dimensional motion of the particles. The structure is quantified by considering tetrahedra formed by quadruplets of neighboring particles. We find that while the sample clearly slows down during aging, the static properties as measured by tetrahedral quantities do not vary. However, a weak correlation between tetrahedron shape and mobility is observed, suggesting that the structure facilitates the motion responsible for the sample aging.'
address: 'Department of Physics, Mail Stop 1131/002/1AB, Emory University, Atlanta, GA 30322, U.S.A.'
author:
- 'Gianguido C. Cianci, Rachel E. Courtland and Eric R. Weeks'
title: Correlations of Structure and Dynamics in an Aging Colloidal Glass
---
A. Disordered systems ,D. Order-disorder effects ,D. Phase transitions. 05.70.Ln ,61.43.Fs ,64.70.Pf ,82.70.Dd
Introduction
============
When some liquids undergo a rapid temperature quench they can form glasses. This occurs at a glass transition temperature $T_g$ which often depends on the cooling rate. As the system is cooled, the approaching glass transition is marked by a dramatic increase in the macroscopic viscosity of the liquid, and a corresponding increase in the microscopic time scales for motion [@review1; @review2; @review3; @review4]. Both the viscosity and the microscopic relaxation time can change by many orders of magnitude as the temperature decreases by merely 10%. Once in the glass state, another phenomenon is noted, that of aging: the dependence of the properties of the system on the time elapsed since reaching $T_g$. When such behavior is observed the system is said to be out of equilibrium, a fact that could be anticipated by noting that the dependence of $T_g$ itself on cooling rate implies the glass transition is not an equilibrium phenomenon. Aging most prominently manifests itself in the dynamics: the microscopic relaxation time scale depends on the age of the sample.
Attempts to explain these phenomena try to link the microscopic structure to the microscopic dynamics. For example, one might postulate that the increase in viscosity is caused by the growth of domains whose dynamics are correlated [@ediger00]. However, no experiment has seen a structural length scale characterizing such domains that grows or diverges at $T_g$ [@review1; @menon94; @blaaderen95]. Likewise, aging might be due to some coarsening of structure; as domains of glassy structure grow, motion is slowed, which in turn slows the further growth of these domains. However, these domains have not been identified, and currently no structural features have been identified that explain aging dynamics [@vanmegen98; @kob00].
Recently, some interesting developments in the study of aging non-equilibrium systems have been brought about by the adoption of dense colloidal suspensions as model systems for liquids, glasses and gels [@blaaderen95; @vanmegen98; @pusey86; @cipelletti00; @cipelletti03; @courtland]. Colloidal suspensions consist of solid particles in a liquid, and the motion of the particles is analogous to that of atoms or molecules in a more traditional material [@pusey86; @kegel00; @weeks00]. In these systems, the particle interactions can easily be tuned from repulsive to attractive. A common case is when particles interact simply as hard spheres with no interactions, attractive or repulsive, other than when they are in direct contact [@pusey86; @snook91]. In all cases, a major control parameter is the packing fraction $\phi$. For hard spheres this is the only control parameter and when $\phi$ is raised above a value of $\phi_{\rm g}\approx 0.58$ the system becomes glassy and the aging process begins.
While structural changes in an aging system remain unclear, two experiments studying colloids have characterized the dynamics. Cipelletti and co-workers [@cipelletti03] studied aging in a colloidal gel using novel light scattering techniques and showed that the dynamics in such a non-equilibrium system present striking temporal heterogeneities. Aging has also been studied in a colloidal glass using confocal microscopy [@courtland]. In that study both temporal and spatial heterogeneities were seen. However, despite the ease with which a colloidal glass can be formed and observed a detailed understanding of the structural changes that accompany aging and the slowing of dynamics has not yet been seen.
In this paper we study the structure of an aging colloidal glass by considering how colloids pack together. Entropy can be maximized by optimizing packing in a dense suspension. Consider the intriguing case of the crystallization of hard spheres. When spheres arrange into a crystalline lattice, they lose configurational entropy. However, they each have more local room to move close to their lattice site, and thus the vibrational entropy is larger. This increase in vibrational entropy outweighs the loss of configurational entropy due to crystallization [@hoover68]. In practice, this argument holds true for systems with volume fractions above $\phi_{\rm freeze} = 0.494$, the point at which the system begins to nucleate crystals; below $\phi_{\rm freeze}$, the configurational entropy dominates and the system prefers an amorphous, liquid configuration [@pusey86].
For glasses, we consider a different sort of packing. The most efficient way to pack four spheres of diameter $d$ in three dimensions is to place them at the vertices of a regular tetrahedron with edge length $d$. In this configuration, the effective volume fraction for the four spheres reaches a surprising $0.78$. In other words, for a given volume fraction $\phi$, locally four particles can maximize their entropy by arranging into a regular tetrahedron consistent with the global volume fraction $\phi$, thus giving them additional room to move. However, regular tetrahedra do not tile 3D space and therefore the most efficient macroscopic packing is that of a hexagonally packed crystal at $\phi_{\rm hcp}\approx 0.74$. Thus in a glass there is a [*frustration*]{} between the drive to locally pack in tetrahedra to maximize the local volume available to vibrations, and the inability to tile 3D space with such structures. This has been suggested as a possible origin for the glass transition in simple liquids [@nelson84; @nelson02; @stillinger88; @kivelson94].
We take advantage of the insight afforded to us by fast laser scanning confocal microscopy [@blaaderen95; @kegel00; @weeks00; @dinsmore01] and study an aging colloidal glass in terms of tetrahedral packing. We focus on geometrical properties of the tetrahedra formed by the colloids and look for correlations between these [*static*]{} quantities and the conspicuous slowing of the [*dynamics*]{} as measure by the average tetrahedral mobility. We find that while the distribution of these static quantities does not age, they correlate weakly with mobility, suggesting that the structure facilitates the aging process.
Experimental Methods
====================
We suspend poly(methyl methacrylate) (PMMA) colloids of diameter $d=2.36\mu m$ in a mixture of $15\%$ decalin and $85\%$ cyclohexylbromide by weight. The mixture closely matches the density and refractive index of the particles, thus greatly reducing sedimentation and scattering effects. The size polydispersity of the colloids ($\approx 5\%$) prevents crystallization. The particles are sterically stabilized against van der Waals attractions by a thin layer of poly-12-hydroxystearic acid [@antl]. We dye the colloids with rhodamine 6G [@dinsmore01]. The particles also carry a slight charge due to the dye. In this paper, we measure all lengths in terms of the diameter $d$ and all times in terms of $\tau_{\rm diff}$, the time a particle would take to diffuse its own diameter in the [*dilute*]{} limit. Given the solvent viscosity $\eta=2.25$ mPa$\cdot$s at $T=295$ K, this time is $\frac{d^{2}}{6D}=11.4$ s where $D=\frac{k_{\rm B}T}{3\pi
\eta d}$.
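The quoted value of $\tau_{\rm diff}$ follows directly from the Stokes-Einstein relation; as a quick numerical check (SI units):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 295.0               # temperature, K
eta = 2.25e-3           # solvent viscosity, Pa*s
d = 2.36e-6             # particle diameter, m

# Stokes-Einstein diffusion coefficient and dilute-limit diffusion time
D = k_B * T / (3.0 * math.pi * eta * d)   # m^2/s
tau_diff = d ** 2 / (6.0 * D)             # seconds

print(round(tau_diff, 1))  # -> 11.4, the value quoted in the text
```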
We acquire three dimensional images by fast laser scanning confocal microscopy at a rate of 1 every 26 s. The observation volume measures 26$d$ $\times$ 25$d$ $\times$ 4.2$d$. At these high densities ($\phi>0.58$) the colloids move slowly and can easily be tracked using established analysis techniques [@weeks00; @dinsmore01; @crocker96]. The 3D positions of $\sim$2500 particles are measured virtually instantaneously with an accuracy of 0.013$d$ in the x-y plane and 0.021$d$ along the optical axis. We acquire data at least 25$d$ away from the closest wall, to avoid boundary effects [@kose76; @gast86]. We do not observe any crystals in the bulk even after several weeks.
The phase behavior of this quasi-hard sphere system is controlled by varying the packing fraction $\phi$. The system undergoes a glass transition when $\phi>\phi_{g}\approx
0.58$ in agreement with what is seen in hard sphere systems [@pusey86; @weeks00]. Here we present data from a sample at $\phi\approx 0.62$ though we see qualitatively similar results for all $\phi > \phi_{g}$.
Proper sample initialization is paramount when studying aging and is ensured here by a vigorous, macroscopic stirring. This shear melting effectively rejuvenates the glass and yields reproducible dynamics that depend exclusively on $t_{\rm w}$, the time elapsed since initialization. Data acquisition starts immediately after rejuvenation. Transient macroscopic flows are observable for the first 25 min$\approx 140 \tau_{\rm diff}$ and we set $t_{\rm w}=0$, or age zero, when they subside. The results below are insensitive to small variations in this choice.
Results
=======
We observe our sample for $\sim 700\tau_{\rm diff}$ without disturbing it. We then split the data in three time windows as follows: $[0-100\tau_{\rm diff}]$, $[100-300\tau_{\rm diff}]$ and $[300-700\tau_{\rm diff}]$. This corresponds to doing three experiments with samples aged $t_{\rm w}=0$, $100$ and $300\tau_{\rm diff}$ respectively. The dynamics slow as the sample ages, as shown in Fig. \[msd\], where we plot the mean square displacement for the three data portions averaged over all particles and over all initial times within a given time window. At short and medium times ($\frac{\Delta t}{\tau_{\rm
diff}}<10$), particle motions are subdiffusive as indicated by a slope less than unity on the log-log plot. At longer times the slope tends to one; the time scale for this upturn changes dramatically for different values of age $t_{\rm w}$ clearly indicating that the sample is out of equilibrium. It is this slowing down of dynamics that we wish to analyze in terms of tetrahedral structure.
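The window-averaged mean square displacement plotted in Fig. \[msd\] is straightforward to compute from the tracked trajectories. The sketch below (Python/NumPy; the $(T,N,3)$ array layout is an assumption of this illustration, not a description of the authors' code) averages over all particles and over all initial times within one age window, as described above.

```python
import numpy as np

def msd(pos, lags):
    """Time- and particle-averaged mean square displacement.

    pos  : (T, N, 3) array of particle positions (in units of d)
           for the T frames of one age window.
    lags : positive integer frame lags, each smaller than T.
    Returns <|r(t + lag) - r(t)|^2> for each lag.
    """
    pos = np.asarray(pos, dtype=float)
    out = []
    for lag in lags:
        disp = pos[lag:] - pos[:-lag]            # (T - lag, N, 3)
        out.append(np.mean(np.sum(disp**2, axis=-1)))
    return np.array(out)
```

Plotting the result against lag on log-log axes and reading off the slope reproduces the subdiffusive-to-diffusive crossover discussed above.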
![Aging mean squared displacement for a colloidal glass at $\phi\approx0.62$. The three curves represent three different ages of the sample. $\triangle:
t_{\rm w}=0\tau_{\rm diff}$, $\times:t_{\rm w}=100\tau_{\rm diff}$ and $\bigcirc:t_{\rm w}=300\tau_{\rm diff}$. The dashed line has a slope of 1 and represents diffusive behavior, not seen in this glassy sample.[]{data-label="msd"}](weeks-fig1.eps){height="5.5cm"}
We start our structural analysis by calculating the pair correlation function $g(r)$ and plot the result in Fig. \[gr\]. This function does not vary with age and thus is calculated by averaging over all times. The first peak of $g(r)$ is at $r=1.04d$ which deviates somewhat from the ideal hard-sphere position ($r=d$). This can be explained by the slight charging mentioned above and perhaps also by the uncertainty in the value of the particle diameter which we deem to be at most 2%. Figure \[gr\] also shows the characteristic double second peak found in many glassy systems.
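For reference, $g(r)$ can be estimated from a single configuration by histogramming pairwise separations and normalizing by the ideal-gas expectation. The sketch below is a naive $O(N^2)$ illustration, not the authors' analysis code; it ignores edge effects, so the largest bin edge should stay well below the box size.

```python
import numpy as np

def pair_correlation(pos, box_volume, r_edges):
    """Naive O(N^2) estimate of g(r) for a bulk (non-periodic) sample.

    pos        : (N, 3) particle positions (in units of d)
    box_volume : volume of the observation region
    r_edges    : bin edges for r
    """
    pos = np.asarray(pos, dtype=float)
    r_edges = np.asarray(r_edges, dtype=float)
    n = len(pos)
    # all pairwise distances, counted once (i < j)
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.sqrt((diff**2).sum(-1))[np.triu_indices(n, k=1)]
    counts, _ = np.histogram(dist, bins=r_edges)
    rho = n / box_volume                          # mean number density
    shell = 4.0 / 3.0 * np.pi * np.diff(r_edges**3)
    ideal = 0.5 * n * rho * shell                 # expected pair count per bin
    return counts / ideal
```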
![Pair correlation function $g(r)$. The shaded area indicates the range of interparticle distances used to define nearest neighbors.[]{data-label="gr"}](weeks-fig2.eps){height="5.5cm"}
In order to study the tetrahedral packing in our sample we begin by labeling as nearest neighbors every pair of colloids whose separation is within the first peak of $g(r)$, namely $0.74d<r<1.38d$, as is shown by the shaded area in Fig. \[gr\]. The lower limit is chosen to eliminate artificially close pairs which arise from the occasional error in particle identification, while the upper limit corresponds to the first minimum of $g(r)$. Note that a completely coplanar arrangement of four spheres in a square is excluded as a tetrahedron, as the diagonal would have length $\sqrt{2}d$ which is excluded by our upper limit. The results presented here are insensitive to small variations in these parameters and match those obtained using Delaunay triangulation as a nearest neighbor finding algorithm.
A tetrahedron is then defined as a quadruplet of mutually nearest neighbor colloids. To characterize each tetrahedron, we calculate several geometrical characteristics. The first is “looseness” $b$, defined as the average of the lengths of the 6 edges, or “bond lengths”, $b_{\rm i}$. An “irregularity” $\sigma_{b}$ is defined as the standard deviation of the $b_{\rm i}$. The looseness and irregularity appear to be the most important geometric parameters to characterize a tetrahedron shape, as will be discussed below. The nondimensional volume $V/d^3$ and nondimensional surface area $A/d^2$ are also measured. To quantify an effective aspect ratio of each tetrahedron, we calculate the height of the tetrahedron as measured from each of the four faces, and consider the largest height $H$ and shortest height $h$. We form aspect ratios from these two heights by dividing by the areas ($A_{H}$ and $A_{h}$ respectively) of the tetrahedron face they are perpendicular to. Conceptually, large values of $H^2/A_H$ correspond to thin pointy tetrahedra, and small values of $h^2/A_h$ correspond to flat pancake-like tetrahedra. We thus term these two quantities “sharpness” and “flatness” respectively.
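Operationally, the neighbor cutoff and the mutual-nearest-neighbor condition reduce tetrahedron detection to enumerating 4-cliques of the neighbor graph. A brute-force sketch follows (Python/NumPy; the function names and the $O(N^2)$ distance matrix are illustrative choices, with the cutoffs taken from the text):

```python
import numpy as np
from itertools import combinations

def find_tetrahedra(pos, r_min=0.74, r_max=1.38):
    """Enumerate quadruplets of mutually nearest-neighbor particles.

    pos is an (N, 3) array in units of the particle diameter d.
    Two particles are neighbors if r_min < |r_ij| < r_max.
    Returns 4-tuples of particle indices, i.e. the 4-cliques of the
    neighbor graph. Brute force; adequate for a few thousand particles.
    """
    pos = np.asarray(pos, dtype=float)
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.sqrt((diff**2).sum(-1))
    adj = (dist > r_min) & (dist < r_max)
    neighbors = [set(np.flatnonzero(adj[i])) for i in range(n)]
    tets = []
    for i, j in zip(*np.triu_indices(n, k=1)):
        if not adj[i, j]:
            continue
        common = sorted(neighbors[i] & neighbors[j])
        for k, l in combinations(common, 2):
            if k > j and adj[k, l]:              # canonical order i<j<k<l
                tets.append((i, j, k, l))
    return tets

def looseness_irregularity(pos, tet):
    """Mean (looseness b) and std (irregularity sigma_b) of the 6 edges."""
    p = np.asarray(pos, dtype=float)[list(tet)]
    edges = [np.linalg.norm(p[a] - p[b]) for a, b in combinations(range(4), 2)]
    return np.mean(edges), np.std(edges)
```

The remaining characteristics ($V$, $A$, $H^2/A_H$, $h^2/A_h$) follow similarly from the four vertex positions.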
In addition to these structural characteristics, we also consider the dynamics by the tetrahedral mobility $\mu$, which is calculated by averaging the distances moved by the four colloids over a time $\Delta
t=50\tau_{\rm diff}$: $$\mu(t) = {1 \over 4} \sum_{i=1}^4 | \Delta \vec{r_i}(t,\Delta t) |$$ The results that follow do not depend sensitively on the choice of $\Delta t$.
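Given the particle positions at $t$ and $t+\Delta t$, the mobility defined above is a one-line average; the sketch assumes the same $(N,3)$ position arrays as before.

```python
import numpy as np

def tetra_mobility(pos_t, pos_t_dt, tet):
    """Mean displacement magnitude of a tetrahedron's four colloids
    between configurations pos_t and pos_t_dt (separated by Delta t)."""
    idx = list(tet)
    dr = np.asarray(pos_t_dt, dtype=float)[idx] - np.asarray(pos_t, dtype=float)[idx]
    return np.mean(np.linalg.norm(dr, axis=1))
```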
To assess the value of these structural and dynamical characteristics, we calculate the correlation coefficients between $\mu$ and the other tetrahedral characteristics and show them in Table \[cor\]. This is done in the standard way of defining correlation coefficients, $$C_{pq} = {1 \over N} \sum_{i=1}^N {(p_i - \bar{p}) \over
\sigma_p} {(q_i - \bar{q}) \over \sigma_q},$$ where $p$ and $q$ are any two variables with averages $\bar{p}$ and $\bar{q}$, and standard deviations $\sigma_p$ and $\sigma_q$. In our case the sum runs over all tetrahedra and all times. In Table \[cor\], a value of one would signify perfect correlation, a value of -1 would represent perfect anti-correlation while a value of zero would indicate completely uncorrelated data.
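The coefficient above is the standard Pearson correlation with population standard deviations. A direct transcription (equivalent to `numpy.corrcoef` up to rounding error):

```python
import numpy as np

def corr_coeff(p, q):
    """C_pq = (1/N) sum_i [(p_i - pbar)/sigma_p] [(q_i - qbar)/sigma_q]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.mean((p - p.mean()) / p.std() * (q - q.mean()) / q.std())
```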
$\mu/d$ $b/d$ $\sigma_{b}/d$ $V/d^{3}$ $A/d^{2}$ $\frac{H^{2}}{A_{H}}$ $\frac{h^{2}}{A_{h}}$
----------------------------------- --------- ------- ---------------- ----------- ----------- ----------------------- -----------------------
mobility $\mu/d$ 1 0.045 0.068 0.018 0.036 0.016 -0.068
“looseness” $b/d$ - 1 0.37 0.82 0.98 -0.30 -0.50
“irregularity” $\sigma_{b}/d$ - - 1 -0.10 0.18 0.17 -0.82
volume $V/d^{3}$ - - - 1 0.91 -0.10 -0.11
surface area $A/d^{2}$ - - - - 1 -0.29 -0.37
“sharpness” $\frac{H^{2}}{A_{H}}$ - - - - - 1 -0.090
“flatness” $\frac{h^{2}}{A_{h}}$ - - - - - - 1
: Correlation matrix for some geometrical characteristics of tetrahedra and mobility. The matrix is symmetric with respect to the diagonal so the lower half is not repeated. $b$ is the average length of the tetrahedra edges (“bonds”) and $\sigma_b$ is the standard deviation of these lengths. See text for details of the other characteristics.[]{data-label="cor"}
Given that we are trying to understand the slowing of the dynamics seen in Fig. \[msd\], we focus on the correlation between mobility $\mu/d$ and the structural characteristics. While all the coefficients are quite small, those that relate mobility to looseness $b/d$ and irregularity $\sigma_b/d$ are relatively large. Mobility is also noticeably anticorrelated with the flatness $h^2/A_h$. Some insight into these correlations is gained by considering the correlations between the different structural characteristics, as shown by the other entries in Table \[cor\]. The flatness $h^2/A_h$ is strongly anticorrelated with irregularity $\sigma_b$, and given the more intuitive nature of $\sigma_b$ and its simpler mathematical definition, in what follows we focus on $\sigma_b$ rather than $h^2/A_h$. The volume and area parameters, $V/d^3$ and $A/d^2$, are strongly correlated with the looseness, which is sensible given that they all measure the size of a tetrahedron.
We therefore choose to study the looseness and irregularity as being both relatively well-correlated with mobility, both easily defined in terms of the six tetrahedron edge lengths, and weakly correlated with each other (as seen in Table \[cor\]). The last point suggests that they capture two distinct properties of tetrahedron structure which are both important for mobility.
Figure \[count\] shows the distribution of tetrahedra in the $b, \sigma_{b}$-plane. The closed curves represent the levels of abundance of tetrahedra with a given value of $b$ and $\sigma_{b}$ with respect to the abundance of the most probable tetrahedron at $b\approx 1.11d$ and $\sigma_b \approx
0.12d$. Somewhat surprisingly, the distribution does not age [@japan05] and we therefore take an average over all times. Figure \[count\] shows a broad variability of both looseness and irregularity. However, these curves do outline a major axis along which many of the tetrahedra lie. This axis suggests that the looser the tetrahedron, the more irregular it is bound to be. This is reflected in the correlation coefficient of $b$ and $\sigma_{b}$ in Table \[cor\], although its relatively small value (0.37) highlights the breadth of the overall distribution.
![Contour plot showing the abundance of tetrahedra with a given looseness and irregularity. The iso-curves are labeled relative to the peak tetrahedral abundance at {$b/d=1.11; \sigma_{b}/d=0.12$}.[]{data-label="count"}](weeks-fig3.eps){height="5cm"}
![Plot of tetrahedral mobility $\langle
\mu(\Delta t=50\tau_{\rm diff})\rangle$, averaged over all ages, versus looseness $b/d$ and irregularity $\sigma_{b}/d$. The darker the color the more mobile the tetrahedron. The contour lines are the same as in Fig. \[count\] and represent the abundance of tetrahedra with a given value of $b/d$ and $\sigma_{b}/d$.[]{data-label="2dmob"}](weeks-fig4.eps){height="5cm"}
We present a qualitative picture of the relationship between the above two [*static*]{} geometrical quantities and the [*dynamic*]{} quantity of mobility in Fig. \[2dmob\] which shows the average value of mobility as a shade of grey. The more mobile combinations of $b$ and $\sigma_{b}$ are darker and the less mobile combinations are lighter. This rendering gives a clear qualitative view of the correlations between these quantities. Specifically, we note that mobility increases with both looseness and irregularity of the tetrahedron. This makes intuitive sense: a larger value of $b$ suggests a smaller local volume fraction and thus more room to move, and a larger value of $\sigma_b$ likewise suggests a poorly-packed structure which may have more room to move. This also agrees with previous results seen in supercooled colloidal fluids [@weeks02; @harrowell99; @conrad05]. Overplotted on the intensity plot of Fig. \[2dmob\] are the same abundance contours as seen in Fig. \[count\], showing us that the most probable tetrahedra are a medium shade of grey: they are neither the fastest nor the slowest tetrahedra.
![Average tetrahedral mobility as a function of looseness $b/d$. The three curves represent three different ages of the sample. $\triangle:
t_{\rm w}=0\tau_{\rm diff}$, $\times:t_{\rm w}=100\tau_{\rm diff}$ and $\bigcirc:t_{\rm w}=300\tau_{\rm diff}$. The dashed curve represents the distribution $P(b/d)$ and is shown here to highlight the lack of statistics at low ($b/d<0.98$) and high ($b/d>1.2$) values of $b/d$.[]{data-label="mob_vs_b"}](weeks-fig5.eps){height="5.5cm"}
We thus have two results, an overall slowing of dynamics seen in Fig. \[msd\], and a relationship between structure and dynamics seen in Fig. \[2dmob\]. This suggests a possible hypothesis for aging, that the slowing of the dynamics is an accumulation of structure corresponding to slower dynamics: a buildup of tetrahedra with small values for looseness and irregularity. As mentioned earlier, though, the overall distribution of tetrahedra structural properties (Fig. \[count\]) does not depend on the age of the sample [@japan05]. To reconcile this, we consider in more detail the connection between structure, dynamics, and the age of the sample.
We show the influence of looseness on the tetrahedron mobility in Fig. \[mob\_vs\_b\] where we plot the average mobility of tetrahedra as a function of looseness. We do so averaging over the three sample ages separately. If we consider each curve separately, we note a reproducible trend: the least mobile tetrahedra are those with $b\approx d$ indicating that those tetrahedra are very tightly packed. At very low values of $b$ mobility increases somewhat, although as mentioned above, tetrahedra with $b/d<1$ may be erroneous. Furthermore, note that there are extremely few tetrahedra with $b/d<1$ as shown by the dashed curve representing $P(b/d)$, the probability of finding a tetrahedron with a given looseness averaged over all ages. We therefore cannot put too much weight on the values of $\mu$ for $b<d$. (The same can be said for tetrahedra with $b>1.17d$.) In the intermediate range there is a clear trend that looser tetrahedra are more mobile. Thus the structure in some way facilitates the aging, in that looser regions are more free to rearrange. However, it is also important to note that each symbol is an average over a broad distribution of mobilities associated with the given value of $b/d$. In particular the standard deviation of the distribution is almost comparable with the average value. This simply means that the correlation between $b$ and $\mu$ is a weak, average effect and not, for example, a usefully predictive relationship [@conrad05].
As expected with any plot involving the mobility of this system, aging is clearly visible in Fig. \[mob\_vs\_b\] as the three curves are shifted down to lower mobilities as $t_{\rm w}$ increases. The overall shape of the curves, however, does not depend on the age of the system. In other words, we are not witnessing a relative shift in the mobility of tetrahedra with varying looseness but merely an overall slowing down of all tetrahedra.
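Curves like those in Figs. \[mob\_vs\_b\] and \[mob\_vs\_sigma\] are conditional averages of $\mu$ in bins of a structural variable. A generic sketch (Python/NumPy; returning NaN for empty bins is an arbitrary choice of this illustration):

```python
import numpy as np

def binned_mean(x, y, edges):
    """Mean of y conditioned on bins of x, as for mobility-vs-looseness.

    Returns (bin centers, per-bin means); empty bins yield NaN.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    edges = np.asarray(edges, dtype=float)
    idx = np.digitize(x, edges) - 1              # bin index of each sample
    nb = len(edges) - 1
    means = np.full(nb, np.nan)
    for b in range(nb):
        sel = idx == b
        if sel.any():
            means[b] = y[sel].mean()
    return 0.5 * (edges[:-1] + edges[1:]), means
```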
![Average tetrahedral mobility as a function of irregularity $\sigma_b/d$. The three curves represent three different ages of the sample. $\triangle: t_{\rm
w}=0\tau_{\rm diff}$, $\times:t_{\rm w}=100\tau_{\rm diff}$ and $\bigcirc:t_{\rm w}=300\tau_{\rm diff}$. The dashed curve represents the distribution $P(\sigma_{b}/d)$ and is shown here to highlight the lack of statistics at low ($\sigma_{b}/d<0.01$) and high ($\sigma_{b}/d>0.2$) values of $\sigma_{b}/d$.[]{data-label="mob_vs_sigma"}](weeks-fig6.eps){height="5.5cm"}
A similar analysis on the relation between tetrahedral mobility and irregularity is shown in Fig. \[mob\_vs\_sigma\]. As a reference we plot $P(\sigma_{b}/d)$, the probability of finding a tetrahedron with a given irregularity. This distribution does not age and is therefore averaged over all times. Again, we look at the dependence of $\mu$ on $\sigma_b$ for the three ages in our experiments and find that there is a positive correlation, as previously indicated in Table \[cor\]. Just as in the case of $b$, this positive correlation is an average effect and again, the distribution that leads to each of the symbols on the figure is quite broad. Nevertheless, a reproducible difference of $\sim 10\%$ in the mobility differentiates very regular tetrahedra from very irregular ones. Again, just as above, aging is evident in the data and again it has no strong effect on the shape of the curves but rather uniformly slows down tetrahedra with all values of irregularity.
Conclusion
==========
We observe colloidal glasses and find clear signs of aging in the mean squared displacement of the particles (Fig. \[msd\]). We analyze the static structure of the aging sample in terms of tetrahedral packing of colloidal particles. We find a broad distribution of tetrahedra as measured by the distributions of tetrahedral “looseness” and “irregularity”, corresponding to the tetrahedron’s mean edge length and the standard deviation of edge lengths, respectively (Fig. \[count\]). These two quantities are slightly correlated; on average, the looser a tetrahedron is the more irregular it will be. More importantly, we find that tetrahedral shape and mobility are somewhat correlated: the looser and the more irregular a tetrahedron is the higher its mobility (Fig. \[2dmob\]). This suggests that aging might be due to an increase in tight, regular tetrahedral structure, but surprisingly the distribution of geometrical quantities is age-independent. Instead we find that aging indiscriminately affects tetrahedra with all values of looseness and irregularity by uniformly decreasing their mobility.
In conclusion, we find that static structure as measured by tetrahedral quantities does not indicate the age of a glass. None of the distributions of the geometrical quantities considered in Table \[cor\] show any aging. However, at any instant in time the age of our sample must somehow be encoded in the positions of the colloids and, for example, analyzing the spatial correlations between tetrahedra, while beyond the scope of this paper, might provide more insight into this matter. Finally, since aging ought to result in subtle configuration changes, and since the looser and more irregular tetrahedra allow for the most motion to happen, we can infer that the local structure does indeed facilitate the aging process. While it is worth noting that it is not established that the most mobile particles are the most important ones for aging, the connection between structure and mobility holds true for less mobile particles as well. Table \[cor\] suggests that in this respect, tetrahedral irregularity is the most significant quantity whose positive correlation with mobility lends support to our original motivating idea that perfect tetrahedra are an important structural element in a glass.
We thank T. Brzinski, P. Harrowell, and C. Nugent for useful discussions. This work was supported by NASA microgravity fluid physics grant NAG3-2728.
[00]{}
C. A. Angell, Science 267 (1995) 1924.
F. H. Stillinger, Science 267 (1995) 1935.
M. D. Ediger, C. A. Angell, S. R. Nagel, J. Phys. Chem. 100 (1996) 13200.
C. A. Angell, J. Phys. Cond. Mat. 12 (2000) 6463
M. D. Ediger, Annu. Rev. Phys. Chem. 51 (2000) 99.
N. Menon, S. R. Nagel, Phys. Rev. Lett. 73 (1994) 963.
A. van Blaaderen, P. Wiltzius, Science 270 (1995) 1177.
W. van Megen, T. C. Mortensen, S. R. Williams, J. Müller, Phys. Rev. E 58 (1998) 6073.
W. Kob, J. L. Barrat, F. Sciortino, P. Tartaglia, J. Phys. Cond. Mat. 12 (2000) 6385.
P. N. Pusey, W. van Megen, Nature 320 (1986) 340.
L. Cipelletti, S. Manley, R. C. Ball, D. A. Weitz, Phys. Rev. Lett. 84 (2000) 2275.
L. Cipelletti, H. Bissig, V. Trappe, P. Ballesta, S. Mazoyer, J. Phys.: Condens. Matter 15 (2003) S257.
R. E. Courtland, E. R. Weeks, J. Phys.: Condens. Matter 15 (2003) S359.
W. K. Kegel, A. van Blaaderen, Science 387 (2000) 290.
E. R. Weeks, J. C. Crocker, A. C. Levitt, A. Schofield, D. A. Weitz, Science 287 (2000) 627.
I. Snook, W. van Megen, P. N. Pusey, Phys. Rev. A 43 (1991) 6900.
W. G. Hoover, F. H. Ree, J. Chem. Phys 49 (1968) 3609.
D. R. Nelson, M. Widom, Nucl. Phys. B 240 (1984) 113.
D. R. Nelson, Defects and Geometry in Condensed Matter Physics, Cambridge University Press, 2002.
F. H. Stillinger, J. Chem. Phys. 89 (1988) 6461.
S. A. Kivelson, X. Zhao, D. Kivelson, T. M. Fischer, C. M. Knobler, J. Chem. Phys. 101 (1994) 2391.
A. D. Dinsmore, E. R. Weeks, V. Prasad, A. C. Levitt, D. A. Weitz, App. Optics 40 (2001) 4152.
L. Antl et. al., Colloids and Surfaces 17 (1986) 67.
J. C. Crocker, D. G. Grier, J. Colloid Interface Sci. 179 (1996) 298.
A. Kose, S. Hachisu, J. Colloid Interface Sci. 55 (1976) 487.
A. Gast, W. Russel, C. Hall, J. Colloid Interface Sci. 55 (1986) 161.
G. C. Cianci, R. E. Courtland, E. R. Weeks, submitted to Proceedings for the 3rd Workshop on Complex Systems (cond-mat/0511301).
E. R. Weeks, D. A. Weitz, Phys. Rev. Lett. 89 (2002) 095704.
D. N. Perera, P. Harrowell, J. Chem. Phys. 111 (1999) 5441.

J. C. Conrad, F. W. Starr, D. A. Weitz, J. Phys. Chem. B 109 (2005) 21235.
---
abstract: 'The ITU-T Recommendation P.808 provides a crowdsourcing approach for conducting a subjective assessment of speech quality using the Absolute Category Rating (ACR) method. We provide an open-source implementation of the ITU-T Rec. P.808 that runs on the Amazon Mechanical Turk platform. We extended our implementation to include the Degradation Category Rating (DCR) and Comparison Category Rating (CCR) test methods. We also significantly speed up the test process, compared to a two-stage qualification-and-rating solution, by integrating the participant qualification step into the main rating task. We provide program scripts for creating and executing the subjective test, and for cleansing and analyzing the collected answers, which helps avoid operational errors. To validate the implementation, we compare the Mean Opinion Scores (MOS) collected through our implementation with MOS values from a standard laboratory experiment conducted based on the ITU-T Rec. P.800. We also evaluate the reproducibility of the result of the subjective speech quality assessment through crowdsourcing using our implementation. Finally, we quantify the impact of parts of the system designed to improve the reliability: environmental tests, gold and trapping questions, rating patterns, and a headset usage test.'
address: |
$^1$Quality and Usability Lab, Technische Universität Berlin\
$^2$Microsoft Corp.
bibliography:
- 'mybib.bib'
title: 'An Open source Implementation of ITU-T Recommendation P.808 with Validation'
---
**Index Terms**: perceptual speech quality, crowdsourcing, subjective quality assessment, ACR, P.808
---
abstract: 'A computationally efficient method for solving three-dimensional, viscous, incompressible flows on unbounded domains is presented. The method formally discretizes the incompressible Navier-Stokes equations on an unbounded staggered Cartesian grid. Operations are limited to a finite computational domain through a lattice Green’s function technique. This technique obtains solutions to inhomogeneous *difference* equations through the discrete convolution of source terms with the fundamental solutions of the *discrete* operators. The differential algebraic equations describing the temporal evolution of the discrete momentum equation and incompressibility constraint are numerically solved by combining an integrating factor technique for the viscous term and a half-explicit Runge-Kutta scheme for the convective term. A projection method that exploits the mimetic and commutativity properties of the discrete operators is used to efficiently solve the system of equations that arises in each stage of the time integration scheme. Linear complexity, fast computation rates, and parallel scalability are achieved using recently developed fast multipole methods for difference equations. The accuracy and physical fidelity of solutions is verified through numerical simulations of vortex rings.'
address: 'Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA'
author:
- Sebastian Liska
- Tim Colonius
bibliography:
- 'jcp\_nslgf.bib'
title: 'A fast lattice Green’s function method for solving viscous incompressible flows on unbounded domains'
---
Incompressible viscous flow ,Unbounded domain ,Lattice Green’s function ,Projection method ,Integrating factor ,Half-explicit Runge-Kutta ,Elliptic solver
Introduction {#sec:intro}
============
Numerical simulations of viscous, incompressible flows on unbounded fluid domains require numerical techniques that can accurately approximate unbounded computational domains using only a finite number of operations. Spatial truncation and artificial boundary conditions have been developed for this purpose but they can adversely affect the accuracy of the solution and even change the dynamics of the flow [@tsynkov1998; @colonius2004; @pradeep2004; @dong2014]. Furthermore, minimizing the error due to artificial boundaries by employing large computational domains increases the number of computational elements and often requires the use of solvers that are less efficient than those used on regular grids (e.g. FFT techniques, multigrid, etc.).
Recently, fast multipole methods (FMMs) for solving constant coefficient elliptic *difference* equations on unbounded regular grids have been developed for 2D [@gillman2010; @gillman2014] and 3D [@liska2014] problems. These methods obtain solutions to inhomogeneous *difference* equations by using fast summation techniques to evaluate the discrete convolution of source terms with the fundamental solutions of the *discrete* operators. The fundamental solutions of discrete operators on unbounded regular grids, or lattices, are also referred to as lattice Green’s functions (LGFs).
Similar to particle and vortex methods, e.g. [@leonard1980; @greengard1987; @winckelmans1993; @warren1993; @cheng1999; @ploumhans2000; @cottet2000; @ying2004; @winckelmans2004; @cocle2008; @chatelain2010; @rasmussen2011; @hejlesen2013] and references therein, the LGF techniques discussed in [@gillman2010; @gillman2014; @liska2014] have efficient nodal distributions and automatically enforce free-space boundary conditions. As a result, needlessly large computational domains and artificial boundary conditions can be avoided when solving flows on unbounded regular grids by using LGF techniques to compute the action of solution operators. A significant advantage of recently developed particle and vortex methods is their ability to efficiently solve large scale problems relevant to 3D incompressible flows using fast, parallel methods based on techniques such as tree-codes, FMMs, dynamic error estimators, hybrid Eulerian-Lagrangian formulations, hierarchical grids, FFT methods, and domain decomposition techniques [@warren1993; @cheng1999; @ploumhans2000; @ying2004; @cocle2008; @chatelain2010; @rasmussen2011; @hejlesen2013]. It is demonstrated in [@liska2014] that LGF FMMs can achieve computational rates and parallel scaling for 3D discrete (7-pt Laplacian) Poisson problems comparable to existing fast 3D Poisson solvers.
The present formulation numerically solves the incompressible Navier-Stokes equations expressed in the non-dimensional form given by
$$\begin{aligned}
\frac{ \partial \mathbf{u} }{ \partial t }
+ \mathbf{u} \cdot \nabla \mathbf{u}
&= - \nabla p + \frac{1}{\text{Re}} \nabla^{2} \mathbf{u}, \label{eq:ns-mom} \\
\nabla \cdot \mathbf{u} &=0,\label{eq:ns-cont}
\end{aligned}$$
\[eq:ns\]
where $\mathbf{u}$, $p$, and $\text{Re}$ correspond to the velocity, the pressure, and the Reynolds number, respectively. The equations are defined on an unbounded domain in all directions, and are subject to the boundary conditions $$\mathbf{u} \left( \mathbf{x}, t \right) \rightarrow \mathbf{u}_\infty\left( t \right) \,\, \text{as} \,\, \left|\mathbf{x}\right| \rightarrow \infty,
\label{eq:ns_bcs}$$ where $\mathbf{u}_\infty$ is a known time-dependent function. We limit our attention to flows in which the vorticity, $\boldsymbol{\omega} = \nabla \times \mathbf{u}$, decay exponentially fast as $\left|\mathbf{x}\right| \rightarrow \infty$.
The present formulation is simplified by considering the evolution of the velocity perturbation, $\mathbf{u}^\prime\left( \mathbf{x}, t \right) = \mathbf{u}\left( \mathbf{x}, t \right) - \mathbf{u}_\infty\left( t \right)$, and pressure perturbation, $p^\prime\left( \mathbf{x}, t \right) = p\left( \mathbf{x}, t \right) - p_\infty\left( \mathbf{x}, t \right)$. The freestream pressure, $p_\infty$, is given by $$p_\infty\left(\mathbf{x},t\right) = -\frac{d \mathbf{u}_\infty } { dt } \cdot \mathbf{x},
\label{eq:press_infty}$$ where we have taken the arbitrary time-dependent constant to be zero; with this choice the uniform stream alone satisfies the momentum balance, $\partial \mathbf{u}_\infty / \partial t = -\nabla p_\infty$. Subtracting the uniform freestream equations from Eq. yields $$\begin{split}
\frac{ \partial \mathbf{u}^\prime }{ \partial t } \
+ \left( \mathbf{u}^\prime + \mathbf{u}_\infty \right) \cdot \nabla \mathbf{u}^\prime \
= - \nabla p^\prime + \frac{1}{\text{Re}} \nabla^{2} \mathbf{u}^\prime,
\end{split} \quad
\begin{split}
\nabla \cdot \mathbf{u}^\prime = 0,
\end{split}
\label{eq:pns}$$ subject to the boundary conditions $\mathbf{u}^\prime \left( \mathbf{x}, t \right) \rightarrow 0$ as $\left|\mathbf{x}\right| \rightarrow \infty$. The boundary conditions on $\mathbf{u}^\prime$ and the irrotational nature of the flow at large distances imply that $p^\prime$ is subject to the compatibility condition[^1] $$p^\prime \left( \mathbf{x}, t \right) \rightarrow 0\,\,
\text{as}\,\, \left|\mathbf{x}\right| \rightarrow \infty.
\label{eq:press_compat}$$
The remainder of the paper is organized as follows. In Section \[sec:spatial\], we describe the spatial discretization of the governing equations on formally unbounded staggered Cartesian grids and discuss LGF techniques that can be used to obtain fast solutions to the associated discrete elliptic problems. Additionally, we present an integrating factor technique that facilitates the implementation of efficient, robust time integration schemes. In Section \[sec:temporal\], the system of differential algebraic equations (DAEs) resulting from the spatial discretization and integrating factor techniques is numerically solved using a half-explicit Runge-Kutta method. We show that the linear systems of equations that arise at each stage of the time integration scheme can be efficiently solved, without splitting errors or additional stability constraints, by a fast projection method based on LGF techniques and the properties of the discrete operators. In Section \[sec:truncation\], we demonstrate that an adaptive block-structured grid padded with appropriately sized buffer regions can be used to efficiently compute numerical solutions to a prescribed tolerance. In Section \[sec:algorithm\], we summarize the algorithm and discuss a few practical considerations including computational costs and performance optimization. Finally, in Section \[sec:verif\], we perform numerical experiments on vortex rings to verify the present formulation.
Spatial discretization {#sec:spatial}
======================
Unbounded staggered Cartesian grids {#sec:spatial_discrete}
-----------------------------------
![ Unit cell of the staggered Cartesian grid. The vertex enclosed by the circle corresponds to the $(i,j,k)$ vertex. The $(i,j,k)$ cell, faces, and edges correspond to the depicted elements intersecting the $(i,j,k)$ vertex. There are three faces and edges per vertex. The superscript “$(q)$” is used to denote faces (edges) normal (parallel) to $x_q$ axis. \[fig:grid-cell\] ](fig_1.pdf){width="75.00000%"}
In this section we describe the discretization of Eq. on a formally unbounded staggered Cartesian grid. Figure \[fig:grid-cell\] depicts our staggered grid, which consists of cells ($\mathcal{C}$) and vertices ($\mathcal{V}$) that house scalar quantities, and faces ($\mathcal{F}$) and edges ($\mathcal{E}$) that house vector quantities. The notation $\mathbb{R}^{\mathcal{Q}}$ denotes the set of real-valued grid functions with values defined on $\mathcal{Q}\in\{\mathcal{C},\mathcal{F},\mathcal{E},\mathcal{V}\}$. The value of a grid function $\mathsf{q}$ evaluated at $\mathbf{n}=(i,j,k)\in\mathbb{Z}^3$ is given by $\mathsf{q}(\mathbf{n})$ and $\mathsf{q}_{i,j,k}$. For the case of a vector-valued grid function $\mathsf{q}$, i.e. $\mathsf{q}\in\mathbb{R}^{\mathcal{F}}$ or $\mathsf{q}\in\mathbb{R}^{\mathcal{E}}$, $\mathsf{q}^{(k)}(\mathbf{n})$ denotes the component of $\mathsf{q}(\mathbf{n})$ in the $k$-th direction.
The spatial discretization of Eq. is performed using the techniques of @nicolaides1997, and @zhang2002. The resulting discrete operators are similar or equivalent to those obtained from standard second-order finite-volume or finite-difference schemes, e.g. [@harlow1965]. Yet we refer to the more general techniques of [@nicolaides1997] and [@zhang2002] since their discussions emphasize many of the algebraic properties of the discrete operators used by the present formulation. For convenience, point-operator representations of the discrete operators are included in \[app:oprs\].
The semi-discrete system of equations obtained from the spatial discretization of Eq. is $$\begin{split}
\frac{ d \mathsf{u} }{ d t } \
+ \mathsf{N}( \mathsf{u} + \mathsf{u}_\infty ) \
= -\mathsf{G} \mathsf{p} + \frac{1}{\text{Re}} \mathsf{L}_{\mathcal{F}} \mathsf{u},
\end{split} \quad
\begin{split}
\overline{\mathsf{D}} \mathsf{u} = 0,
\end{split}
\label{eq:dns}$$ where $\mathsf{u}\in\mathbb{R}^\mathcal{F}\times\mathbb{R}$ and $\mathsf{p}\in\mathbb{R}^\mathcal{C}\times\mathbb{R}$ denote the time-dependent grid functions associated with the discrete velocity and pressure perturbation fields, respectively.[^2] The time-dependent grid function $\mathsf{u}_\infty\in\mathbb{R}^\mathcal{F}\times\mathbb{R}$ is constant in space with values given by $\mathsf{u}_\infty(\mathbf{n},t) = \mathbf{u}_\infty(t)$. Discrete operators $\mathsf{G} : \mathbb{R}^\mathcal{C} \mapsto \mathbb{R}^\mathcal{F}$, $\overline{\mathsf{D}} : \mathbb{R}^\mathcal{F} \mapsto \mathbb{R}^\mathcal{C}$, and $\mathsf{L}_{\mathcal{F}} : \mathbb{R}^\mathcal{F} \mapsto \mathbb{R}^\mathcal{F}$ correspond to the discretizations of the gradient, divergence, and vector Laplacian operators, respectively. Finally, $\mathsf{N}:\mathbb{R}^\mathcal{F} \mapsto \mathbb{R}^\mathcal{F}$ denotes the discrete nonlinear operator approximating the convective term, i.e. $\mathsf{N}(\mathsf{u}+\mathsf{u}_\infty) \approx \left( \mathbf{u}^\prime + \mathbf{u}_\infty \right) \cdot \nabla \left( \mathbf{u}^\prime + \mathbf{u}_\infty \right) = \left( \mathbf{u}^\prime + \mathbf{u}_\infty \right) \cdot \nabla \mathbf{u}^\prime$.[^3]
In addition to the aforementioned discrete operators, the subsequent discussion makes use of the discrete gradient operator $\overline{\mathsf{G}}:\mathbb{R}^\mathcal{V} \mapsto \mathbb{R}^\mathcal{E}$, the discrete curl operators $\mathsf{C}:\mathbb{R}^\mathcal{F} \mapsto \mathbb{R}^\mathcal{E}$ and $\overline{\mathsf{C}}:\mathbb{R}^\mathcal{E} \mapsto \mathbb{R}^\mathcal{F}$, and the discrete Laplacian operators $\mathsf{L}_\mathcal{Q}:\mathbb{R}^\mathcal{Q} \mapsto \mathbb{R}^\mathcal{Q}$, where $\mathcal{Q}\in\{\mathcal{C},\mathcal{E},\mathcal{V}\}$. A summary of all the discrete vector operators and their definitions is also provided in \[app:oprs\].
The choice of discretization technique yields a numerical scheme with the following properties:
*Second-order accuracy*: all discrete operators are second-order accurate in space.
*Conservation properties*: using appropriate discretizations of the nonlinear convective term leads to a scheme that conserves momentum, kinetic energy, and circulation in the absence of time-differencing errors and viscosity [@lilly1965; @morinishi1998; @zhang2002]. The benefits of discrete conservation properties related to numerical stability and physical fidelity are discussed in the review by @perot2011 and references therein.
*Mimetic properties*: discrete operators and their corresponding vector calculus operators satisfy similar symmetry and orthogonality properties in addition to similar integration by parts formulas [@nicolaides1991; @nicolaides1997; @perot2000; @zhang2002]. Specific properties pertinent to the discussion of the present method are:
$$\begin{gathered}
\overline{\mathsf{D}} = -\mathsf{G}^\dagger, \quad
\overline{\mathsf{C}} = \mathsf{C}^\dagger, \quad
\overline{\mathsf{G}} = -\mathsf{D}^\dagger,
\\
\text{Im}( \mathsf{G} ) = \text{Null}( \mathsf{C} ), \quad
\text{Im}( \mathsf{C} ) = \text{Null}( \mathsf{D} ),
\\
\mathsf{L}_\mathcal{C} = - \mathsf{G}^\dagger \mathsf{G}, \quad
\mathsf{L}_\mathcal{F} = - \mathsf{G} \mathsf{G}^\dagger - \mathsf{C}^\dagger \mathsf{C}, \quad
\mathsf{L}_\mathcal{E} = - \mathsf{D}^\dagger \mathsf{D} - \mathsf{C} \mathsf{C}^\dagger, \quad
\mathsf{L}_\mathcal{V} = - \mathsf{D} \mathsf{D}^\dagger.
\end{gathered}$$
Many of the mimetic properties of discrete operators are closely related to the conservation properties [@nicolaides1997; @zhang2002].
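The mimetic identities listed above can be checked numerically. The following sketch (ours, not part of the original formulation) builds the 2D staggered operators as Kronecker products of one-dimensional periodic difference matrices; periodicity stands in for the unbounded grid, on which the same circulant identities hold.

```python
import numpy as np

n, dx = 6, 0.5
I = np.eye(n)
# Periodic forward/backward differences: (D1 x)[i] = (x[i+1] - x[i]) / dx, B1 = -D1^T
D1 = (np.roll(np.eye(n), 1, axis=1) - np.eye(n)) / dx
B1 = -D1.T

# 2D staggered operators (pressure at cell centers, velocity on faces,
# vorticity at nodes), assembled as Kronecker products of the 1D differences
G = np.vstack([np.kron(D1, I), np.kron(I, D1)])      # gradient:   C -> F
Dbar = np.hstack([np.kron(B1, I), np.kron(I, B1)])   # divergence: F -> C
C = np.hstack([-np.kron(I, D1), np.kron(D1, I)])     # curl:       F -> E

S1 = B1 @ D1                                          # 1D second difference
L5 = np.kron(S1, I) + np.kron(I, S1)                  # 5-point scalar Laplacian

assert np.allclose(Dbar, -G.T)                        # Dbar = -G^dagger
assert np.linalg.norm(C @ G) < 1e-12                  # Im(G) inside Null(C)
assert np.allclose(-G.T @ G, L5)                      # L_C = -G^dagger G
Z = np.zeros_like(L5)
assert np.allclose(-G @ G.T - C.T @ C,                # L_F = -G G^dagger - C^dagger C
                   np.block([[L5, Z], [Z, L5]]))
```

The last assertion also shows that the face Laplacian is component-wise the standard 5-point stencil, consistent with the point-operator representations of the appendix.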
*Commutativity properties*: on unbounded staggered grids, discrete Laplacians and integrating factors (to be introduced in Section \[sec:spatial\_intfact\]) commute with other operators in the sense $\mathsf{A} \mathsf{T}_\mathcal{X} = \mathsf{T}_\mathcal{Y} \mathsf{A}$, where $\mathsf{A}:\mathbb{R}^\mathcal{X}\mapsto\mathbb{R}^\mathcal{Y}$ is any of the previously mentioned linear operators, and $\mathsf{T}_\mathcal{X}$ ($\mathsf{T}_\mathcal{Y}$) is either the discrete Laplacian or integrating factor mapping $\mathbb{R}^\mathcal{X}$ to $\mathbb{R}^\mathcal{X}$ ($\mathbb{R}^\mathcal{Y}$ to $\mathbb{R}^\mathcal{Y}$). Similar commutativity properties hold for discretizations of periodic domains using uniform staggered grids.
In subsequent sections we discuss how the mimetic and commutativity properties facilitate the construction of fast, stable methods for numerically solving Eq. .
It is convenient to define $$\mathsf{d} = \mathsf{p} \
+ \frac{1}{2} \mathsf{P}\left( \mathsf{u} + \mathsf{u}_\infty, \mathsf{u} + \mathsf{u}_\infty \right),
\label{eq:tot_pres}$$ where $\mathsf{P} : \mathbb{R}^\mathcal{F} \times \mathbb{R}^\mathcal{F} \mapsto \mathbb{R}^\mathcal{C}$ is an arbitrary discrete approximation of the vector dot-product, i.e. $\mathsf{P}( \mathsf{u}, \mathsf{v} ) \approx \mathbf{u} \cdot \mathbf{v}$. The time-dependent grid function $\mathsf{d} \in \mathbb{R}^\mathcal{C} \times \mathbb{R}$ can be regarded as a discrete approximation of the total pressure perturbation, i.e. $\mathsf{d} \approx p^\prime + \frac{1}{2} | \mathbf{u}^\prime + \mathbf{u}_\infty |^2$. Using Eq. , we express Eq. as $$\begin{split}
\frac{ d \mathsf{u} }{ d t } \
+ \tilde{\mathsf{N}}( \mathsf{u} + \mathsf{u}_\infty ) \
= -\mathsf{G} \mathsf{d} + \frac{1}{\text{Re}} \mathsf{L}_{\mathcal{F}} \mathsf{u},
\end{split} \quad
\begin{split}
\mathsf{G}^\dagger \mathsf{u} = 0,
\end{split}
\label{eq:dnstp}$$ where $\tilde{\mathsf{N}}(\mathsf{v}) = \mathsf{N}(\mathsf{v}) - \frac{1}{2}\mathsf{G}\mathsf{P}(\mathsf{v},\mathsf{v})$. Consequently, $\tilde{\mathsf{N}}( \mathsf{u} + \mathsf{u}_\infty )$ is a discrete approximation of $\boldsymbol{\omega}\times\left(\mathbf{u}+\mathbf{u}_\infty\right)$.[^4] As will be demonstrated in Section \[sec:truncation\], an advantage of using $\tilde{\mathsf{N}}(\mathsf{\mathsf{u} + \mathsf{u}_\infty})$ instead of $\mathsf{N}(\mathsf{\mathsf{u} + \mathsf{u}_\infty})$ is that the former typically has a smaller support than that of the latter, which in turn reduces the number of operations and storage required to numerically solve the flow. We emphasize that Eq. is equivalent to Eq. , and no additional discretization errors have been introduced.
Lattice Green’s function techniques {#sec:spatial_lgfs}
-----------------------------------
The procedure for solving difference equations on unbounded regular grids using LGFs is analogous to the procedure for solving inhomogeneous PDEs on unbounded domains using the fundamental solution of continuum operators. As a representative example, we consider the (continuum) scalar Poisson equation $$[ \Delta u ] (\mathbf{x})
= f (\mathbf{x}), \quad \operatorname{supp}(f) \subseteq \Omega,
\label{eq:poisson}$$ where $\mathbf{x}\in\mathbb{R}^3$ and $\Omega$ is a bounded domain in $\mathbb{R}^3$. The solution to Eq. is given by $$u(\mathbf{x})
= [ G * f ] (\mathbf{x})
= \int_\Omega G( \mathbf{x} - \mathbf{y} ) f(\mathbf{y})\, d\mathbf{y},$$ where $G(\mathbf{x})= -1/(4 \pi |\mathbf{x}|)$ is the fundamental solution of the Laplace operator. Similarly, we consider the discrete scalar Poisson equation $$[ \mathsf{L}_{\mathcal{Q}} \mathsf{u} ] (\mathbf{n})
= \mathsf{f}(\mathbf{n}), \quad \operatorname{supp}(\mathsf{f}) \subseteq D,
\label{eq:dpoisson}$$ where $\mathsf{u},\mathsf{f}\in\mathbb{R}^\mathcal{Q}$, $D$ is a bounded region in $\mathbb{Z}^3$, and $\mathcal{Q}\in\{\mathcal{C},\mathcal{V}\}$. The solution to Eq. is given by $$\mathsf{u}(\mathbf{n})
= [ \mathsf{G}_{\mathsf{L}} * \mathsf{f} ] (\mathbf{n})
= \sum_{\mathbf{m}\in D} \mathsf{G}_{\mathsf{L}}(\mathbf{n}-\mathbf{m})
\mathsf{f}(\mathbf{m})
\label{eq:dpoisson_conv}$$ where $\mathsf{G}_{\mathsf{L}}:\mathbb{Z}^3\mapsto\mathbb{R}$ is the fundamental solution, or LGF, of the discrete scalar Laplacian [@gillman2014; @liska2014]. Subsequently, we refer to the grid functions $\mathsf{f}$ and $\mathsf{u}$ as the source field and the induced field, respectively.
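As an illustrative sketch (ours; the actual 2D and 3D kernels are given in \[app:lgfs\]), consider the one-dimensional analogue of this convolution: the LGF of the 1D second-difference Laplacian with $\Delta x = 1$ is the standard result $\mathsf{G}_{\mathsf{L}}(n) = |n|/2$, and convolving it with a compactly supported source reproduces the solution of the discrete Poisson problem.

```python
import numpy as np

def lgf_1d(n):
    # LGF of the 1D second-difference Laplacian (dx = 1): [L G_L](n) = delta(n)
    return np.abs(n) / 2.0

# Source field with bounded support D = {-2, ..., 2}
src = np.arange(-2, 3)
f = np.array([1.0, -2.0, 3.0, -2.0, 1.0])

# Induced field via the discrete convolution u(n) = sum_m G_L(n - m) f(m)
query = np.arange(-10, 11)
u = np.array([np.sum(lgf_1d(q - src) * f) for q in query])

# Applying the second difference recovers f inside D and zero outside it
Lu = u[:-2] - 2.0 * u[1:-1] + u[2:]
f_ext = np.zeros(len(query) - 2)
f_ext[np.searchsorted(query[1:-1], src)] = f
assert np.allclose(Lu, f_ext)
```

The exactness of the recovery reflects $\mathsf{L}\,\mathsf{G}_{\mathsf{L}} = \delta$; only the summation over the bounded support $D$ is required.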
It is evident from the definitions of $\mathsf{L}_\mathcal{F}$ and $\mathsf{L}_\mathcal{E}$ that each component of a discrete vector Poisson problem corresponds to a discrete scalar Poisson problem. As a result, the $q$-th component of the solution to Eq. for $\mathcal{Q}\in\{\mathcal{F},\mathcal{E}\}$ is given by Eq. with $\mathsf{u} \rightarrow \mathsf{u}^{(q)}$ and $\mathsf{f} \rightarrow \mathsf{f}^{(q)}$. Procedures for obtaining expressions for $\mathsf{G}_{\mathsf{L}}(\mathbf{n})$ are discussed in [@mccrea1940; @duffin1958; @buneman1971; @martinsson2002]. For convenience, expressions for $\mathsf{G}_{\mathsf{L}}(\mathbf{n})$ are provided in \[app:lgfs\].
Fast numerical methods for evaluating discrete convolutions involving LGFs have recently been proposed in 2D by @gillman2014 and in 3D by @liska2014. Here, the 3D lattice Green’s function fast multipole method (LGF-FMM) of [@liska2014] is used to evaluate discrete convolutions involving $\mathsf{G}_{\mathsf{L}}$. The LGF-FMM is a kernel-independent interpolation-based FMM specifically designed for solving difference equations on unbounded Cartesian grids. In addition to its asymptotic linear algorithmic complexity, it has been shown that the LGF-FMM achieves high computation rates and good parallel scaling for the case of $\mathsf{G}_{\mathsf{L}}$ [@liska2014].
As a final remark, the LGF-FMM is a direct solver that computes solutions to a prescribed tolerance $\epsilon$, $\| \mathsf{y}_\text{true} - \mathsf{y} \|_\infty / \|\mathsf{y}_\text{true}\|_\infty \le \epsilon$, where $\mathsf{y}$ is the numerical solution and $\mathsf{y}_\text{true}$ is the exact solution to the system of *difference* equations. In order to obtain accurate error bounds for the LGF-FMM, it is necessary to profile the method once for each kernel and scheme used. Error estimates for the discrete 7-pt Laplace kernel and different schemes are provided in [@liska2014]. In the present formulation, all instances of $\mathsf{E}_{\mathcal{Q}}$ and $\mathsf{L}_{\mathcal{Q}}^{-1}$ are computed using values of $\epsilon$ that are less than or equal to the prescribed value $\epsilon_\text{FMM}$.
Integrating factor techniques {#sec:spatial_intfact}
-----------------------------
In this section we describe an integrating factor technique for integrating the stiff viscous term of Eq. analytically. Analytical integration has the advantage of neither introducing discretization errors nor imposing stability constraints on the time marching scheme. Integrating factor techniques for the viscous term are widely used in Fourier pseudo-spectral methods. These methods typically compute the action of the integrating factor in Fourier-space. In contrast, the present method computes the action of the integrating factor in real-space, since the Fourier series of an arbitrary grid function on an unbounded domain is not computationally practical.
We consider integrating factors defined as the solution operators of the discrete diffusion equation of the form $$\frac{d\mathsf{h}}{dt} = \kappa \mathsf{L}_{\mathcal{Q}} \mathsf{h}, \quad
\mathsf{h}(\mathbf{n},t) \rightarrow \mathsf{h}_\infty(t)
\,\, \text{as} \,\, |\mathbf{n}| \rightarrow \infty,
\label{eq:ddiffusion}$$ where $\kappa\in\mathbb{R}_{\ge 0}$ and $\mathsf{h}\in\mathbb{R}^\mathcal{Q}$. As discussed in \[app:oprs\], the discrete Laplace operator $\mathsf{L}_\mathcal{Q}$ is diagonalized by the Fourier series operator $\mathfrak{F}_\mathcal{Q}$, $$(\Delta x)^2 \mathsf{L}_\mathcal{Q} = \mathfrak{F}^{-1}_\mathcal{Q}
\sigma^{\mathsf{L}}_\mathcal{Q} \mathfrak{F}_\mathcal{Q},$$ where $\sigma^{\mathsf{L}}_\mathcal{Q}(\boldsymbol{\xi})$ for $\boldsymbol{\xi}\in(-\pi,\pi)^3$ is the spectrum of $(\Delta x)^2 \mathsf{L}_\mathcal{Q}$. Next, we define the exponential of $\mathsf{L}_\mathcal{Q}$ as $$\mathsf{E}_{\mathcal{Q}}( \alpha ) = \mathfrak{F}^{-1}_\mathcal{Q}
\exp(\alpha \sigma^{\mathsf{L}}_\mathcal{Q} ) \mathfrak{F}_\mathcal{Q},
\label{eq:if_def}$$ where $\alpha=\kappa (t-\tau) / (\Delta x)^2$. An immediate consequence of Eq. is that $$\frac{d}{d\alpha} \mathsf{E}_{\mathcal{Q}}( \alpha )
= \mathfrak{F}^{-1}_\mathcal{Q} \sigma^{\mathsf{L}}_\mathcal{Q}
\exp(\alpha \sigma^{\mathsf{L}}_\mathcal{Q}) \mathfrak{F}_\mathcal{Q}
= \mathsf{L}_{\mathcal{Q}} \mathsf{E}_{\mathcal{Q}}(\alpha)
= \mathsf{E}_{\mathcal{Q}}(\alpha) \mathsf{L}_{\mathcal{Q}},
\label{eq:if_dt}$$ which implies that the solution to Eq. is given by $$\mathsf{h}(\mathbf{n},t) = \left[
\mathsf{E}_{\mathcal{Q}} \left(
\frac{\kappa (t-\tau) }{ (\Delta x)^2 } \right) \mathsf{h}_\tau \right]
(\mathbf{n},t),
\quad t \ge \tau, \quad \forall \mathbf{n} \in \mathbb{Z}^{3},
\label{eq:ddifussion_soln}$$ where $\mathsf{h}(\mathbf{n},\tau) = \mathsf{h}_\tau(\mathbf{n})$.
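For the one-dimensional second-difference Laplacian with $\Delta x = 1$, the kernel of $\mathsf{E}_{\mathcal{Q}}(\alpha)$ acting on a discrete delta is the classical discrete heat kernel $e^{-2\alpha} I_n(2\alpha)$, where $I_n$ is the modified Bessel function; since the lattice Laplacians here are sums of commuting 1D second differences, their exponentials factor into such 1D kernels. The following sketch (ours) checks this closed form against a matrix exponential on a truncated grid.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import ive   # ive(n, z) = exp(-|z|) * I_n(z)

alpha = 0.7                      # kappa (t - tau) / (dx)^2
N = 40                           # truncation radius; kernel is negligible there
# Truncated 1D lattice Laplacian (dx = 1)
L1 = (np.diag(-2.0 * np.ones(2 * N + 1))
      + np.diag(np.ones(2 * N), 1) + np.diag(np.ones(2 * N), -1))

# Central row of the matrix exponential = action of E(alpha) on a discrete delta
row = expm(alpha * L1)[N, :]

# Closed-form 1D kernel: exp(-2 alpha) I_n(2 alpha)
kernel = ive(np.abs(np.arange(-N, N + 1)), 2.0 * alpha)
assert np.allclose(row, kernel, atol=1e-10)
```

The agreement holds because, well inside the truncated grid, the boundary truncation error is far below machine precision for this value of $\alpha$.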
We now consider using $\mathsf{E}_{\mathcal{Q}}(\alpha)$ as an integrating factor for Eq. . Operating from the left on the semi-discrete momentum equation of Eq. with $\mathsf{E}_{\mathcal{F}}\left(\frac{t-\tau}{(\Delta x)^2\text{Re}}\right)$ and introducing the transformed variable $\mathsf{v} = \mathsf{E}_{\mathcal{F}}\left(\frac{t-\tau}{(\Delta x)^2\text{Re}}\right) \mathsf{u}$ yields the transformed system of semi-discrete equations $$\frac{ d \mathsf{v} }{ d t }
= -\mathsf{H}_{\mathcal{F}}
\tilde{\mathsf{N}} \left( \mathsf{H}_{\mathcal{F}}^{-1} \mathsf{v} + \mathsf{u}_\infty \right)
- \mathsf{H}_{\mathcal{F}}\mathsf{G} \mathsf{d},\quad
\mathsf{G}^\dagger \mathsf{H}_{\mathcal{F}}^{-1} \mathsf{v} = 0,
\label{eq:dnstp_trans_0}$$ where $\mathsf{H}_\mathcal{Q} = \mathsf{E}_{\mathcal{Q}}\left(\frac{t-\tau}{(\Delta x)^2\text{Re}}\right)$. Using the commutativity properties of integrating factors, Eq. simplifies to $$\frac{ d \mathsf{v} }{ d t }
= -\mathsf{H}_{\mathcal{F}} \tilde{\mathsf{N}} \left(
\mathsf{H}_{\mathcal{F}}^{-1} \mathsf{v} + \mathsf{u}_\infty \right)
- \mathsf{G} \mathsf{b},\quad
\mathsf{G}^\dagger \mathsf{v} = 0,
\label{eq:dnstp_trans}$$ where $\mathsf{b} = \mathsf{H}_\mathcal{F} \mathsf{d}$. We emphasize that the transformed system of equations Eq. is equivalent to the original system of equations Eq. . Furthermore, as is the case for Eq. , Eq. represents a system of DAEs of index 2.
The procedures for obtaining expressions $\mathsf{G}_{\mathsf{L}}(\mathbf{n})$ can be readily extended to the case of $[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n})$, where $\mathsf{G}_{\mathsf{E}}(\alpha)$ is the LGF of the integrating factor $\mathsf{E}_{\mathcal{Q}}(-\alpha)$. Expressions for $\mathsf{G}_{\mathsf{E}}(\mathbf{n})$ are also provided in \[app:lgfs\]. As for the case of $\mathsf{L}_\mathcal{Q}^{-1}$, fast solutions to expressions involving $\mathsf{G}_{\mathsf{E}}(\alpha)$ are computed using the LGF-FMM.
An important distinction between $\mathsf{G}_{\mathsf{L}}(\mathbf{n})$ and $[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n})$ is found in their asymptotic behavior. Whereas $|\mathsf{G}_{\mathsf{L}}(\mathbf{n})|$ decays as $1/|\mathbf{n}|$ as $|\mathbf{n}|\rightarrow \infty$, $|[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n})|$ decays faster than any exponential as $|\mathbf{n}|\rightarrow\infty$ for a fixed $\alpha$.[^5] The fast decay of $\mathsf{G}_{\mathsf{E}}$ implies that, for typical computations, the application of $\mathsf{E}_{\mathcal{Q}}$ can be considered a local operation, i.e. values computed at a particular grid location only depend on the values of a few neighboring grid cells. Consequently, the LGF-FMM requires significantly fewer operations to evaluate the action of $\mathsf{E}_{\mathcal{Q}}$ compared to the action of $\mathsf{L}_{\mathcal{Q}}^{-1}$.[^6]
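The locality claim can be illustrated with the same one-dimensional kernel $e^{-2\alpha}I_n(2\alpha)$ used above (a sketch, not the 3D kernel of the appendix): its tail is negligible beyond a few cells, and its total mass sums to one, consistent with the conservative nature of the discrete diffusion equation.

```python
import numpy as np
from scipy.special import ive

alpha = 1.0
g = ive(np.arange(0, 16), 2.0 * alpha)   # 1D analogue of |[G_E(alpha)](n)|
# Super-exponential decay: a handful of cells capture essentially the whole
# kernel, in contrast to the algebraic O(1/|n|) decay of G_L in 3D
assert g[12] / g[0] < 1e-8
# Total mass is conserved: sum_n exp(-2a) I_n(2a) = 1
mass = np.sum(ive(np.abs(np.arange(-15, 16)), 2.0 * alpha))
assert abs(1.0 - mass) < 1e-12
```

For the small values of $\alpha = \kappa\Delta t/(\Delta x)^2\mathrm{Re}$ arising in practice, the effective support is even narrower than in this example.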
Time integration {#sec:temporal}
================
Half-explicit Runge-Kutta methods {#sec:temporal_herk}
---------------------------------
Failing to properly identify the semi-discrete form of the governing equations, i.e. Eq. , as a system of differential algebraic equations (DAEs) of index 2 prior to choosing a time integration scheme can have undesirable consequences for the quality of the numerical solution [@hairer1996; @ascher1998]. Half-explicit Runge-Kutta (HERK) methods are a class of one-step time integration schemes developed for DAEs of index 2 [@hairer1989; @brasey1993; @hairer1996]. Although there are multiple HERK methods [@hairer1996], we limit our attention to the original HERK method proposed by @hairer1989.
Consider DAE systems of index 2 of the form $$\frac{dy}{dt} = f \left( y, z \right),\quad g\left( y \right) = 0,
\label{eq:dae_brasey}$$ where $f$ and $g$ are sufficiently differentiable, and $z$ is an unknown that must be computed so as to have $y$ satisfy $g(y) = 0$. Problems of this form are of index 2 if the product of partial derivatives $g_y(y) f_z(y,z)$ is non-singular in a neighborhood of the solution. The HERK method applied to Eq. is given by an algorithm similar to that of explicit Runge-Kutta (ERK) methods except that the implicit constraint equation $g\left( y \right) = 0$ is solved at each stage of the ERK scheme.
Similarly to standard RK methods, HERK methods can be described by their Butcher tableau: $$\begin{array}{c|c}
\mathbf{c} & \mathbf{A} \\ \hline
{} & \mathbf{b}^\dagger
\end{array},
\label{eq:erk_tableau}$$ where $\mathbf{A}=[a_{i,j}]$ is the Runge-Kutta matrix, $\mathbf{b}=[b_i]$ is the weight vector, and $\mathbf{c}=[c_i]$ is the node vector. In subsequent sections, it is often convenient to use the *shifted* tableau notation: $$\tilde{a}_{i,j} =
\left\{\begin{array}{cl}
a_{i+1,j} & \text{for } i = 1,2,\dots,s-1\\
b_j & \text{for } i = s
\end{array}\right.,\,\,
\tilde{c}_i =
\left\{\begin{array}{cl}
c_{i+1} & \text{for } i = 1,2,\dots,s-1\\
1 & \text{for } i = s
\end{array}\right..
\label{eq:herk_shift}$$ We refer the reader to the discussions of [@hairer1989; @brasey1993] for a detailed algorithm and a list of order-conditions for the general case of Eq. .
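The shifted-tableau construction above is mechanical; a minimal helper (ours, with hypothetical names) and a consistency check on Scheme B of the schemes used later:

```python
import numpy as np

def shifted_tableau(A, b, c):
    """Shifted HERK coefficients: rows a_{i+1,j} for i < s then b_j; c_{i+1} then 1."""
    At = np.vstack([A[1:, :], b])
    ct = np.append(c[1:], 1.0)
    return At, ct

# Scheme B tableau (three stages)
A = np.array([[0.0, 0.0, 0.0], [1/3, 0.0, 0.0], [-1.0, 2.0, 0.0]])
b = np.array([0.0, 3/4, 1/4])
c = np.array([0.0, 1/3, 1.0])
At, ct = shifted_tableau(A, b, c)

assert np.allclose(At[-1], b) and ct[-1] == 1.0
assert np.allclose(ct, At.sum(axis=1))       # row-sum (node) consistency
assert np.all(np.diag(At) != 0.0)            # stage solves require a~_{i,i} != 0
```

The last assertion reflects that the diagonal shifted coefficients multiply the stage constraint solves in the IF-HERK algorithm and must be non-zero.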
We now turn our attention to the special case of the transformed semi-discrete governing equations given by Eq. . It is convenient to express the non-autonomous system of Eq. in terms of the autonomous system of Eq. . This is achieved by letting $y=[ \mathsf{v}, t ]$ and $z = \mathsf{b}$, and by appending the trivial equation $t^\prime=1$ to Eq. . For this case, $g_y = [\,\mathsf{G}^\dagger,\, 0]$ and $f_z = g_y^\dagger$, where $g_y = [ g_\mathsf{v}, g_t ]$ and $f_z=f_\mathsf{b}$. By construction, the operator $\mathsf{G}$ is a constant, which implies that $f_z$ and $g_y$ are also constants. As a result, order-conditions for the general system of Eq. involving high-order derivatives of $f_z$ and $g_y$ are trivially satisfied for the case of Eq. . Fewer order-conditions permit a wider range of RK tableaus to be used for a given order of accuracy. This is particularly relevant for high-order HERK schemes, since the number of order-conditions is significantly larger than that of standard RK schemes [@brasey1993].
The simplifications in the order-conditions obtained for the special case of constant $f_z$ and $g_y$ are well-described in the literature of HERK methods [@hairer1989; @brasey1993; @hairer1996; @sanderse2012]. Order-conditions up to order 4 for the $y$-component reduce to those of standard RK methods [@sanderse2012]. Similarly, order-conditions of order $r \le 3$ for the $z$-component (up to fourth-order accurate $z$-component) reduce to having the shifted sub-tableau $[\tilde{a}_{i,j}]$ for $i,j=1,2,\dots s-1$ satisfy the $y$-component order-conditions up to order $r$ [@brasey1993; @sanderse2012]. It is beyond the scope of the present work to provide an extended discussion on the properties and implementation details of the HERK method for particular RK tableaus. Instead, the order of accuracy and linear stability of a few selected schemes used to perform the numerical experiments of Section \[sec:verif\] is discussed in Section \[sec:temporal\_ifherk\] and \[app:stability\], respectively.
Combined integrating factor and half-explicit Runge-Kutta method {#sec:temporal_ifherk}
----------------------------------------------------------------
In this section we present a method for obtaining numerical solutions for the (untransformed) discrete velocity and total pressure perturbation by combining the integrating factor technique of Section \[sec:spatial\_intfact\] with the HERK method of Section \[sec:temporal\_herk\]. The combined method, referred to as the IF-HERK method, integrates Eq. over $t\in[0,T]$ subject to the initial condition $\mathsf{u}(\mathbf{n},0) = \mathsf{u}_0(\mathbf{n})$.
Formally, the IF-HERK method partitions the original problem into a sequence of $n$ sub-problems, where the $k$-th sub-problem corresponds to numerical integration of Eq. from $t_k$ to $t_{k+1}$ subject to the initial condition $\mathsf{u}(\mathbf{n},t_k) = \mathsf{u}_k(\mathbf{n})$. We restrict our discussion to the case of equispaced time-steps, i.e. $t_k = t_{k-1} + \Delta t$, since the more general case of variable time-step size is readily deduced.
The $k$-th sub-problem is solved by first introducing the transformed variables $$\mathsf{v}(\mathbf{n},t) = \left[ \mathsf{E}_\mathcal{F}
\left(\textstyle\frac{\Delta t}{(\Delta x)^2\text{Re}}\right) \right]
\mathsf{u}(\mathbf{n},t), \,\,
\mathsf{b}(\mathbf{n},t) = \left[ \mathsf{E}_\mathcal{F}
\left(\textstyle\frac{\Delta t}{(\Delta x)^2\text{Re}}\right) \right]
\mathsf{q}(\mathbf{n},t), \,\,
t \in[t_k,t_{k+1}],
\label{eq:ifherk_aux}$$ and using $\mathsf{E}_\mathcal{F}\left(\textstyle\frac{\Delta t}{(\Delta x)^2\text{Re}}\right)$ as an integrating factor for Eq. . Next, the HERK method is used to integrate the transformed nonlinear equations from $t_k$ to $t_{k+1}$ in order to obtain $\mathsf{v}_{k+1}(\mathbf{n}) \approx \mathsf{v}(\mathbf{n},t_{k+1})$ and $\mathsf{b}_{k+1} \approx \mathsf{b}(\mathbf{n},t_{k+1})$. Finally, values for the discrete velocity and total pressure perturbation at $t_{k+1}$, i.e. $\mathsf{u}_{k+1}(\mathbf{n}) \approx \mathsf{u}(\mathbf{n},t_{k+1})$ and $\mathsf{d}_{k+1}(\mathbf{n}) \approx \mathsf{d}(\mathbf{n},t_{k+1})$, are obtained from $\mathsf{v}_{k+1}$ and $\mathsf{b}_{k+1}$ by using the integrating factor $\mathsf{E}_\mathcal{F}\left(\textstyle\frac{-\Delta t}{(\Delta x)^2\text{Re}}\right)$.
A computationally convenient algorithm for the $k$-th time-step of the IF-HERK method, subsequently denoted by $(\mathsf{u}_{k+1},t_{k+1},\mathsf{d}_{k+1}) \leftarrow \text{IF-HERK}(\mathsf{u}_{k},t_k)$, is given by:
1. *initialize*: copy solution values from the $k$-th time-step, $$\mathsf{u}_k^0 = \mathsf{u}_k, \quad t^{0}_{k} = t_k.$$
2. *multi-stage*: for $i=1,2,\dots,s$, solve the linear system $$\left[ \begin{array}{cc}
\left( \mathsf{H}_\mathcal{F}^{i} \right)^{-1} & \mathsf{G} \\
\mathsf{G}^\dagger & 0
\end{array} \right]
\left[ \begin{array}{c}
\mathsf{u}_k^i \\
\hat{\mathsf{d}}_k^i
\end{array} \right]
=
\left[ \begin{array}{c}
\mathsf{r}_k^i \\
0
\end{array} \right],
\label{eq:ifherk_linsys}$$ where $$\mathsf{H}_\mathcal{F}^i =
\mathsf{E}_\mathcal{F}\left(\textstyle\frac{ (\tilde{c}_i-\tilde{c}_{i-1}) \Delta t}{(\Delta x)^2\text{Re}}\right),
\quad
\mathsf{r}_k^i = \mathsf{q}_k^{i}
+ \Delta t \sum_{j=1}^{i-1} \tilde{a}_{i,j} \mathsf{w}_k^{i,j}
+ \mathsf{g}_k^{i},
\label{eq:ifherk_aux_1}$$ $$\mathsf{g}_k^i = \
- \tilde{a}_{i,i} \Delta t \
\tilde{\mathsf{N}}\left( \mathsf{u}_k^{i-1} + \mathsf{u}_\infty(t_k^{i-1})\right),
\quad
t_k^{i} = t_k + \tilde{c}_i \Delta t.
\label{eq:ifherk_g}$$ For $i>1$ and $j<i$, $\mathsf{q}_k^{i}$ and $\mathsf{w}_k^{i,j}$ are recursively computed using [^7] $$\mathsf{q}_k^{i} = \mathsf{H}_\mathcal{F}^{i-1} \mathsf{q}_k^{i-1},
\quad
\mathsf{q}_k^{1} = \mathsf{u}_k^0
\label{eq:ifherk_q}$$ $$\mathsf{w}_k^{i,j} = \mathsf{H}_\mathcal{F}^{i-1} \mathsf{w}_k^{i-1,j},
\quad
\mathsf{w}_k^{i,i} = \left( \tilde{a}_{i,i} \Delta t \right)^{-1}
\left( \mathsf{g}_k^{i} - \mathsf{G} \hat{\mathsf{d}}_k^{i} \right).$$
3. *finalize*: define the solution and constraint values of the $(k+1)$-th time-step, $$\mathsf{u}_{k+1} = \mathsf{u}_k^s,
\quad
\mathsf{d}_{k+1} = \left( \tilde{a}_{s,s} \Delta t \right)^{-1} \hat{\mathsf{d}}_k^s,
\quad
t_{k+1} = t_k^{s}.$$
The above algorithm is obtained by applying the HERK method to either Eq. or, equivalently, Eq. for the $k$-th sub-problem, and introducing the auxiliary variables $$\mathsf{u}^{i}_{k}(\mathbf{n})
= \left[ \mathsf{E}_\mathcal{F}
\left(\textstyle\frac{-\tilde{c}_i \Delta t}{(\Delta x)^2\text{Re}}\right) \right]
\mathsf{v}^{i}_{k} (\mathbf{n}),
\quad
\mathsf{d}^{i}_{k}(\mathbf{n})
= \left[ \mathsf{E}_\mathcal{F}
\left(\textstyle\frac{-\tilde{c}_i \Delta t}{(\Delta x)^2\text{Re}}\right) \right]
\mathsf{b}^{i}_{k}(\mathbf{n}),$$ for $i = 1, 2, \dots, s$. We clarify that the intermediate steps used to obtain the final form of the $\text{IF-HERK}$ algorithm make use of the commutativity properties of $\mathsf{E}_\mathcal{Q}$ and the identity $\mathsf{E}_\mathcal{Q}(\alpha_1) \mathsf{E}_\mathcal{Q}(\alpha_2) = \mathsf{E}_\mathcal{Q}(\alpha_1+\alpha_2)$.
The linear operator on the left-hand-side (LHS) of Eq. is symmetric and its null-space is spanned by the set of $[ 0, \mathsf{a} ]^\dagger$, where $\mathsf{a}\in\mathbb{R}^\mathcal{C}\times\mathbb{R}$ is any discrete linear polynomial. Consequently, the compatibility condition on the pressure field given by Eq. guarantees Eq. has a unique solution. As presented, the $\text{IF-HERK}$ algorithm is compatible with any HERK scheme since no assumptions have been made on the RK coefficients. Of course, more efficient versions of this algorithm can potentially be obtained for specific families of RK coefficients, but such details are beyond the scope of the present work.
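To illustrate the multi-stage structure in isolation, the sketch below (ours) applies the algorithm to a small linear index-2 DAE, $d\mathsf{u}/dt = \mathsf{A}\mathsf{u} - \mathsf{G}\mathsf{d}$, $\mathsf{G}^\dagger\mathsf{u} = 0$, with all integrating factors replaced by the identity and random stand-in matrices; this exercises the HERK bookkeeping, not the flow solver itself.

```python
import numpy as np

rng = np.random.default_rng(3)
m, p = 6, 2
A = 0.5 * rng.standard_normal((m, m))    # linear stand-in for the convective term
G = rng.standard_normal((m, p))          # constraint operator (full column rank)

# Shifted tableau a~ of Scheme A
r3 = np.sqrt(3.0)
At = np.array([[0.5, 0.0, 0.0],
               [r3/3, (3 - r3)/3, 0.0],
               [(3 + r3)/6, -r3/3, (3 + r3)/6]])

KKT = np.block([[np.eye(m), G], [G.T, np.zeros((p, p))]])  # all H^i = identity (toy)

def step(u, dt):
    """One IF-HERK-style step for du/dt = A u - G d, G^T u = 0 (H^i = I)."""
    ws, u_prev = [], u
    for i in range(3):
        g_i = At[i, i] * dt * (A @ u_prev)               # g^i = -a~_ii dt N~(u^{i-1})
        r = u + dt * sum(At[i, j] * ws[j] for j in range(i)) + g_i
        sol = np.linalg.solve(KKT, np.concatenate([r, np.zeros(p)]))
        u_i, d_hat = sol[:m], sol[m:]
        ws.append((g_i - G @ d_hat) / (At[i, i] * dt))   # w^{i,i}
        u_prev = u_i
    return u_i

u = rng.standard_normal(m)
u -= G @ np.linalg.solve(G.T @ G, G.T @ u)               # enforce G^T u_0 = 0
for _ in range(10):
    u = step(u, 0.05)
assert np.abs(G.T @ u).max() < 1e-10                     # constraint held each step
```

Because the constraint is enforced by the stage-wise saddle-point solves, it holds to machine precision at every step, mirroring the discrete divergence-free property of the full method.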
The IF-HERK schemes used to perform the numerical experiments of Section \[sec:verif\] are given by the following tableaus: $$\stackrel{\mbox{\small{\textsc{Scheme A}}}}{ \begin{array}{c|ccc}
0 & 0 & 0 & 0 \\
\textstyle\frac{1}{2} & \textstyle\frac{1}{2} & 0 & 0 \\
1 & \textstyle\frac{\sqrt{3}}{3} & \textstyle\frac{3-\sqrt{3}}{3} & 0 \\
\hline
{} & \textstyle\frac{3+\sqrt{3}}{6} & -\textstyle\frac{\sqrt{3}}{3} & \textstyle\frac{3+\sqrt{3}}{6}
\end{array} }, \quad
\stackrel{\mbox{\small{\textsc{Scheme B}}}}{ \begin{array}{c|ccc}
0 & 0 & 0 & 0 \\
\textstyle\frac{1}{3} & \textstyle\frac{1}{3} & 0 & 0 \\
1 & -1 & 2 & 0 \\
\hline
{} & 0 & \textstyle\frac{3}{4} & \textstyle\frac{1}{4}
\end{array} }, \quad
\stackrel{\mbox{\small{\textsc{Scheme C}}}}{ \begin{array}{c|ccc}
0 & 0 & 0 & 0 \\
\textstyle\frac{8}{15} & \textstyle\frac{8}{15} & 0 & 0 \\
\textstyle\frac{2}{3} & \textstyle\frac{1}{4} & \textstyle\frac{5}{12} & 0 \\
\hline
{} & \textstyle\frac{1}{4} & 0 & \textstyle\frac{3}{4}
\end{array} }.
\label{eq:ifherk_schemes}$$ The order of accuracy, based on the simplified order-conditions discussed in Section \[sec:temporal\_herk\], for each scheme is provided in Table \[tab:ifherk\_schemes\]. As a point of comparison, Table \[tab:ifherk\_schemes\] also provides the expected order of accuracy for general semi-explicit DAEs of index 2, i.e. Eq. .
             *$y$-Order*   *$z$-Order*   *$y$-Order*${}^*$   *$z$-Order*${}^*$
  ---------- ------------- ------------- ------------------- -------------------
Scheme A 2 2 2 2
Scheme B 3 2 3 2
Scheme C 3 1 2 1
: Order of accuracy of the solution $y$ variable (velocity perturbation) and constraint $z$ variable (pressure perturbation) based on specialized HERK order conditions. The superscript $*$ denotes values for general semi-explicit DAEs of index 2. \[tab:ifherk\_schemes\]
The tableaus for Schemes B and C were obtained from [@brasey1993] and [@sanderse2012]. As discussed in [@sanderse2012], the tableau for Scheme C corresponds to the RK coefficients of the popular three-stage fractional step method of [@le1991]. Unlike Schemes B and C, the tableau for Scheme A was specifically defined for the IF-HERK method. An advantage of Scheme A over Schemes B and C is that the RK nodes, $c_i$’s, are equally spaced. As a result, the IF-HERK method only requires a single non-trivial integrating factor.[^8] This reduction in the number of distinct LGFs reduces the number of pre-processing operations and lowers the storage requirements of the LGF-FMM. Additionally, extensions of the present method including immersed surfaces, e.g. via the treatment of immersed boundaries of [@colonius2008], can potentially enjoy similar reductions in the computational costs of pre-processing operations by only having to consider a single non-trivial integrating factor. We will report on immersed boundary methods based on the present flow solver in subsequent publications. The linear stability analysis of the IF-HERK method is provided in \[app:stability\].
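The equal node spacing of Scheme A can be verified directly (a small sketch; the stage shift $\tilde{c}_i - \tilde{c}_{i-1}$, with $\tilde{c}_0 = 0$, is the argument of the stage integrating factor $\mathsf{H}_\mathcal{F}^i$):

```python
import numpy as np

# RK nodes c of Schemes A, B, C
nodes = {"A": [0.0, 0.5, 1.0], "B": [0.0, 1/3, 1.0], "C": [0.0, 8/15, 2/3]}

def stage_shifts(c):
    """Return c~_i - c~_{i-1} with c~ = (c_2, ..., c_s, 1) and c~_0 = 0."""
    ct = np.concatenate([[0.0], c[1:], [1.0]])
    return np.diff(ct)

shifts = {k: stage_shifts(np.array(v)) for k, v in nodes.items()}

# Scheme A: the non-zero shifts coincide, so a single non-trivial integrating
# factor (plus the trivial identity) serves all stages; B and C need more.
assert len({s for s in shifts["A"] if s != 0.0}) == 1
assert len({s for s in shifts["B"] if s != 0.0}) == 2
assert len({s for s in shifts["C"] if s != 0.0}) == 3
```

For Scheme A the shifts are $(1/2, 1/2, 0)$, so only $\mathsf{E}_\mathcal{F}$ at a single argument must be tabulated.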
Projection method {#sec:temporal_projection}
-----------------
It is readily verified that the most computationally expensive operation performed by the IF-HERK method corresponds to solving Eq. for each stage. Systems of continuum or discrete equations similar to Eq. often arise in the literature of numerical methods for simulating incompressible flows. Solutions to these systems are frequently obtained through classical projection, fractional-step, or pressure Schur complement methods [@perot1993; @turek1999]. These methods can be regarded as approximate block-wise LU decompositions of the original system [@perot1993; @turek1999]. More recently, *exact* projection techniques that are free of any matrix/operator approximations have been proposed, e.g. [@chang2002; @colonius2008]. These techniques have the advantage of not introducing any “splitting errors” and do not require artificial pressure boundary conditions. The present formulation uses an exact projection method to solve Eq. , but differs from the methods of [@chang2002; @colonius2008] in that it does not use the null-space of the discrete operators to obtain solutions to the linear system.
The block-wise LU decomposition of the operator in Eq. suggests a solution procedure, expressed in the standard correction form, given by:
$$\begin{aligned}
{2}
\mathsf{u}^{*}
&= \mathsf{H}_{\mathcal{F}}^{i} \mathsf{r}_k^{i}
&& \quad\text{(compute intermediate velocity)} \\
\mathsf{S} \hat{\mathsf{d}}_k^{i}
&= \mathsf{G}^\dagger \mathsf{u}^{*}
&& \quad\text{(solve for total pressure)}
\label{eq:proj_schur} \\
\mathsf{u}_k^{i}
&= \mathsf{u}^{*} - \mathsf{H}_{\mathcal{F}}^{i} \mathsf{G} \hat{\mathsf{d}}_k^{i}
&& \quad\text{(projection step)}, \end{aligned}$$
\[eq:proj\]
where $\mathsf{S} = \mathsf{G}^{\dagger} \mathsf{H}_\mathcal{F}^{i} \mathsf{G}$ is the Schur complement of the system.[^9] By taking into account the commutativity and mimetic properties of the spatial discretization scheme, the procedure given by Eq. simplifies to: $$\hat{\mathsf{d}}_k^{i} = - \mathsf{L}_\mathcal{C}^{-1} \mathsf{G}^\dagger \mathsf{r}_k^{i},
\quad
\mathsf{u}_k^{i} = \mathsf{H}_{\mathcal{F}}^{i} \left( \mathsf{r}_k^{i} - \mathsf{G} \hat{\mathsf{d}}_k^{i} \right),
\label{eq:proj_final}$$ where $\mathsf{x} = \mathsf{L}_\mathcal{C}^{-1} \mathsf{y}$ is equivalent to solving $\mathsf{L}_\mathcal{C} \mathsf{x} = \mathsf{y}$ subject to uniform boundary conditions at infinity. In this form, one of the two integrating factors has been eliminated and the original elliptic problem $\mathsf{G}^{\dagger} \mathsf{H}_\mathcal{F}^{i} \mathsf{G} \mathsf{x} = \mathsf{y}$ has been replaced by the Poisson problem $\mathsf{L}_\mathcal{C} \mathsf{x} = \mathsf{y}$. Reducing the original discrete elliptic problem to a discrete Poisson problem is of significant practical importance since it permits the use of the LGF-FMM with known LGF expressions [@liska2014]. As will be discussed in Section \[sec:truncation\], the operation count of our overall algorithm is dominated by the cost of solving for the discrete pressure perturbation; therefore, a projection method that is compatible with fast, robust discrete elliptic solvers greatly facilitates obtaining fast flow solutions.
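The equivalence of the direct saddle-point solve and the simplified projection can be verified on a small periodic staggered grid, where the commutativity $\mathsf{H}_\mathcal{F}\mathsf{G} = \mathsf{G}\mathsf{H}_\mathcal{C}$ holds exactly (a sketch, ours, with dense matrices; pseudo-inverses handle the constant null-space of the periodic grid):

```python
import numpy as np
from scipy.linalg import expm

n = 4
I = np.eye(n)
D1 = np.roll(np.eye(n), 1, axis=1) - np.eye(n)       # periodic forward difference
G = np.vstack([np.kron(D1, I), np.kron(I, D1)])      # gradient: centers -> faces
S1 = -D1.T @ D1                                       # 1D second difference
L5 = np.kron(S1, I) + np.kron(I, S1)                  # L_C = -G^T G (5-point)

alpha = 0.3
HC = expm(alpha * L5)                                 # cell-centered integrating factor
Z = np.zeros_like(L5)
HF = np.block([[HC, Z], [Z, HC]])                     # face integrating factor
assert np.allclose(HF @ G, G @ HC)                    # commutativity H_F G = G H_C

r = np.random.default_rng(1).standard_normal(2 * n * n)

# (i) direct saddle-point solve, minimum-norm via lstsq (pressure is defined
# up to a constant on the periodic grid, so the system is singular)
KKT = np.block([[np.linalg.inv(HF), G], [G.T, np.zeros((n*n, n*n))]])
sol = np.linalg.lstsq(KKT, np.concatenate([r, np.zeros(n*n)]), rcond=None)[0]
u_direct = sol[:2*n*n]

# (ii) simplified projection: one Poisson solve, one integrating factor
d_hat = -np.linalg.pinv(L5) @ (G.T @ r)               # d = -L_C^{-1} G^T r
u_proj = HF @ (r - G @ d_hat)

assert np.allclose(u_direct, u_proj, atol=1e-8)       # same velocity
assert np.abs(G.T @ u_proj).max() < 1e-8              # divergence-free
```

The velocity component of the solution is identical in both routes, while the projection form requires only a discrete Poisson solve, consistent with the discussion above.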
Adaptive computational grid {#sec:truncation}
===========================
Restricting operations to a finite computational grid {#sec:truncation_active}
-----------------------------------------------------
Thus far we have described algorithms for discretizing and computing the incompressible Navier-Stokes equations on unbounded grids. In this section, we present a method for computing solutions, to a prescribed tolerance, using only a finite number of operations. This approximation is accomplished by limiting all operations to a finite computational grid obtained by removing grid cells of the original unbounded grid containing field values that are sufficiently small so as not to significantly affect the evolution of the flow field. As will be demonstrated in the following discussion, the ability of the present method to only track a finite region of the unbounded domain is a consequence of the exponential decay of the vorticity at large distances, which is assumed for all flows under consideration.
We first consider the error resulting from neglecting field values outside a finite region when solving the elliptic problems of the IF-HERK method.[^10] Using the notation of Section \[sec:spatial\_lgfs\], the solution to the discrete Poisson problem of Eq. is given by $$\hat{\mathsf{d}}(\mathbf{n}) = [ \mathsf{G}_\mathsf{L} * \mathsf{f} ](\mathbf{n}),
\quad \
\mathsf{f}(\mathbf{n}) = [-\mathsf{G}^\dagger \mathsf{r}_k^{i}](\mathbf{n}).
\label{eq:dpoisson_conv_d}$$ The source field $\mathsf{G}^\dagger \mathsf{r}_k^{i}$ is a discrete approximation of $\nabla \cdot \boldsymbol{\ell}$ at $t \approx k \Delta t$, where $\boldsymbol{\ell} = \boldsymbol{\omega} \times \mathbf{u}$ is the Lamb vector. It follows from the assumption that $\boldsymbol{\omega}$ is exponentially small at large distances that $\nabla \cdot \boldsymbol{\ell}$ and $\mathsf{G}^\dagger \mathsf{r}_k^{i}$ must also be exponentially small at large distances. As a result, the induced field of Eq. is computed to a prescribed tolerance by defining the finite computational domain such that it includes the region where the magnitude of $\mathsf{G}^\dagger \mathsf{r}_k^{i}$ is greater than some positive value.
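To make the convolution structure of Eq. concrete, the following sketch solves a discrete Poisson problem in one dimension, where the lattice Green's function has the known closed form $g(n)=|n|/2$ satisfying $(\mathsf{L}g)(n)=\delta_{n,0}$. This is an analogy only: the three-dimensional LGF used by the LGF-FMM has no such elementary expression and is evaluated numerically, and the variable names here are illustrative.

```python
# 1-D lattice Green's function of the discrete Laplacian:
# (L g)(n) = g(n+1) - 2*g(n) + g(n-1) = delta_{n,0}  for  g(n) = |n|/2.
def lgf_1d(n):
    return abs(n) / 2.0

# Compactly supported source field (analogue of f = -G^dagger r_k^i),
# stored only where it is non-zero.
f = {-1: 1.0, 0: -2.0, 1: 1.0}

def solve_poisson(f, n):
    # discrete convolution x = g * f, evaluated at a single point n
    return sum(lgf_1d(n - m) * fm for m, fm in f.items())

# Verify that the convolution satisfies L x = f on a window around the support.
max_residual = 0.0
for n in range(-6, 7):
    lx = (solve_poisson(f, n + 1) - 2.0 * solve_poisson(f, n)
          + solve_poisson(f, n - 1))
    max_residual = max(max_residual, abs(lx - f.get(n, 0.0)))
```

Because the source has compact support, the solution can be evaluated at any grid point without ever forming an unbounded array, which is the property the LGF-FMM exploits in three dimensions.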
The action of all operators present in the IF-HERK and projection algorithms, with the exception of $\mathsf{L}_\mathcal{C}^{-1}$, is evaluated using only a few local operations. Many of these local operators act on fields that typically decay algebraically, e.g. $\mathsf{u}$ and $\mathsf{d}$. As a result, the technique of only tracking regions with non-negligible source terms used for Eq. is impractical for most other operations required by the IF-HERK method. Unlike the action of $\mathsf{L}_\mathcal{C}^{-1}$, the action of local operators only incurs an error limited to a few cells near the boundary of a finite region if field values outside the region are ignored, i.e. taken to be zero. Furthermore, repeated applications of local operators only propagate the error into the interior of the region by a few grid cells per application. This type of error is prevented from significantly affecting the solution in the interior by padding the interior with buffer grid cells and by periodically computing (“refreshing”) $\mathsf{u}$ from the discrete vorticity, $\mathsf{w} = \mathsf{C}\mathsf{u}$, which, like $\mathsf{G}^\dagger \mathsf{r}_k^{i}$, has bounded approximate support. As a result, the approximate support of both $\mathsf{G}^\dagger \mathsf{r}^i_k$ and $\mathsf{w}$ must be contained in the finite computational domain. Bounds for the error resulting from approximating the support of these fields and estimates for the number of time steps that can elapse before the velocity needs to be refreshed will be discussed in Sections \[sec:truncation\_adaptivity\] and \[sec:truncation\_refresh\], respectively.
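The claim that local operators propagate truncation error inward by only a few cells per application can be checked with a small numerical experiment; the 3-point averaging stencil and the algebraically decaying field below are illustrative stand-ins, not the scheme's operators.

```python
# A truncated field plus a local 3-point stencil: each application of the
# stencil corrupts one additional layer of cells at the truncation boundary,
# while the interior remains exact.

N, W, q = 41, 21, 4   # reference grid size, truncated window size, applications

ref = [1.0 / (1.0 + (i - 10) ** 2) for i in range(N)]  # algebraic decay
win = ref[:W]                                          # values beyond W ignored

def step(x):
    # one application of a 3-point averaging stencil; zero outside the list
    get = lambda i: x[i] if 0 <= i < len(x) else 0.0
    return [0.25 * get(i - 1) + 0.5 * get(i) + 0.25 * get(i + 1)
            for i in range(len(x))]

a, b = ref, win
for _ in range(q):
    a, b = step(a), step(b)

errors = [abs(a[i] - b[i]) for i in range(W)]
interior_err = max(errors[:W - q])   # cells at least q away from the boundary
boundary_err = max(errors[W - q:])   # cells within q of the boundary
```

After `q` applications only the outermost `q` cells of the window differ from the untruncated reference, which is exactly why a buffer of a few cells, periodically refreshed, suffices.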
We recall that the discrete velocity perturbation $\mathsf{u}$ is subject to the constraint $\mathsf{G}^\dagger\mathsf{u}=0$ and that the null-space of $\mathsf{G}^\dagger$ is spanned by the image of $\mathsf{C}^\dagger$. As a result, it is possible to express $\mathsf{u}$ as $$\mathsf{u} = \mathsf{C}^\dagger \mathsf{a},
\label{eq:vel_stf}$$ where $\mathsf{a}\in\mathbb{R}^\mathcal{E}$ can be regarded as the discrete vector potential or streamfunction. Additionally, we require $\mathsf{D}\mathsf{a}=0$. The discrete vorticity, $\mathsf{w}$, can now be expressed in terms of $\mathsf{a}$ as $$\mathsf{w}
= \mathsf{C} \mathsf{C}^\dagger \mathsf{a}
= \left( \mathsf{C} \mathsf{C}^\dagger + \mathsf{D}^\dagger \mathsf{D}\right) \mathsf{a}
= -\mathsf{L}_\mathcal{E} \mathsf{a},
\quad \mathsf{D} \mathsf{w} = 0,
\label{eq:vor_stf}$$ Finally, Eq. and provide an expression for $\mathsf{u}$ in terms of $\mathsf{w}$, $$\mathsf{u} = -\mathsf{C}^\dagger \mathsf{L}_\mathcal{E}^{-1} \mathsf{w},
\label{eq:vel_vor}$$ where $\mathsf{L}_\mathcal{E}^{-1}$ imposes zero boundary conditions at infinity.[^11] As expected, the expressions relating $\mathsf{u}$, $\mathsf{w}$, and $\mathsf{a}$ are analogous to the continuum expressions relating the velocity, vorticity, and streamfunction fields. We emphasize that Eq. , , and were obtained through the algebraic properties of the discrete operators, as opposed to the discretization of continuum equations.
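These algebraic identities can be verified directly. The following sketch works on a two-dimensional periodic staggered grid (an analogue for illustration, not the paper's three-dimensional operators): a velocity is built from an arbitrary nodal potential $\mathsf{a}$ via backward differences, and the discrete divergence vanishes identically while the discrete curl returns $-\mathsf{L}\mathsf{a}$, mirroring $\mathsf{G}^\dagger\mathsf{C}^\dagger = 0$ and $\mathsf{C}\mathsf{C}^\dagger\mathsf{a} = -\mathsf{L}_\mathcal{E}\mathsf{a}$.

```python
import random

random.seed(1)
n = 8
a = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]

def at(f, i, j):  # periodic indexing
    return f[i % n][j % n]

# u = C^dagger a: face components from backward differences of the nodal a
u1 = [[at(a, i, j) - at(a, i, j - 1) for j in range(n)] for i in range(n)]
u2 = [[-(at(a, i, j) - at(a, i - 1, j)) for j in range(n)] for i in range(n)]

# discrete divergence at cell centers (G^dagger u)
div = [[(at(u1, i, j) - at(u1, i - 1, j)) + (at(u2, i, j) - at(u2, i, j - 1))
        for j in range(n)] for i in range(n)]

# discrete scalar curl at nodes (C u), forward differences of face values
w = [[(at(u2, i + 1, j) - at(u2, i, j)) - (at(u1, i, j + 1) - at(u1, i, j))
     for j in range(n)] for i in range(n)]

# 5-point Laplacian of a at nodes
lap = [[at(a, i + 1, j) + at(a, i - 1, j) + at(a, i, j + 1) + at(a, i, j - 1)
        - 4.0 * at(a, i, j) for j in range(n)] for i in range(n)]

max_div = max(abs(div[i][j]) for i in range(n) for j in range(n))
max_curl_err = max(abs(w[i][j] + lap[i][j]) for i in range(n) for j in range(n))
```

Both identities hold for *any* $\mathsf{a}$, not just smooth fields, which is the sense in which they are algebraic properties of the operators rather than discretizations of continuum identities.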
The present formulation can be cast into an equivalent vorticity formulation simply by taking the discrete curl of Eq. and computing $\mathsf{u}$, which is required to evaluate the non-linear term, using Eq. . This formulation is not pursued since each stage of the IF-HERK would require solving a discrete *vector* Poisson problem, as opposed to a discrete *scalar* Poisson problem, which would in turn roughly triple the cost of each stage.[^12] The vorticity formulation has the advantage of not having to periodically evaluate Eq. to refresh $\mathsf{u}$, but, as will be discussed in Sections \[sec:truncation\_refresh\], this operation occurs, at most, once per time step. Based on the stability analysis of \[app:stability\], RK schemes with a minimum of three stages are required to ensure stable solutions. As a result, the primitive variable formulation is approximately 1.5 to 3 times faster than the vorticity formulation. Differences in the errors between the two algebraically-equivalent formulations associated with the finite tolerances used to compute the LGF-FMM and the adaptive grid algorithms can be used to further distinguish each formulation. However, such differences in errors are not considered here since they are expected to be on the order of the prescribed tolerances, which, as will be discussed in Section \[sec:algorithm\], are specified to be much smaller than the discretization errors of practical flows.
Block-structured active computational grid {#sec:truncation_blocks}
------------------------------------------
We now turn our attention to the formal definition of the finite region of the unbounded computational domain tracked by our formulation, which we refer to as the *active* computational domain. Consider partitioning the unbounded staggered Cartesian grid described in Section \[sec:spatial\] into an infinite set of equally sized blocks arranged on a logically Cartesian grid. The block corresponding to the $\mathbf{n}=(i,j,k)$ location is denoted by $B(\mathbf{n})$ or, equivalently, $B_{i,j,k}$, and the union of all blocks is denoted by $D_\infty$. Each block is defined as a finite staggered Cartesian grid of $n^b_{1} \times n^b_{2} \times n^b_{3}$ cells. We limit our attention to the case in which each block contains the same number of cells in each direction, i.e. $n^b_i = n^b$, but note that the subsequent discussion readily extends to the general case. As a practical consideration, a layer of buffer or ghost grid cells surrounding each block is introduced to facilitate the implementation of the present algorithm.
![ Depiction of the finite computational domain in two-dimensions. Distant view of the three nested sub-domains $D_\text{supp} \subseteq D_\text{soln} \subset D_\text{xsoln}$ defined in the main text (*left*). Zoomed-in view illustrating the union of blocks used to define the domain (*middle*). Magnified view of an individual block (*right*). Each block is defined as a finite staggered Cartesian grid; dashed cells surrounding the interior grid correspond to buffer or ghost grid cells. \[fig:grid-blocks\] ](fig_2.pdf){width="\textwidth"}
Figure \[fig:grid-blocks\] depicts the three nested sub-domains $D_\text{supp} \subseteq D_\text{soln} \subset D_\text{xsoln} \subset D_\infty$ that constitute the active computational domain. These sub-domains are defined as:
- *Support blocks* ($D_\text{supp}$): union of blocks that defines the support of the source field of the discrete Poisson problems of Eq. and .
- *Solution blocks* ($D_\text{soln}$): union of blocks that tracks the solution fields $\mathsf{u}$ and $\mathsf{d}$. All field values defined in the blocks belonging to $D_\text{soln}$ are regarded as accurate approximations of the field values computed using an unbounded domain.
- *Expanded solution blocks* ($D_\text{xsoln}$): union of blocks given by a non-trivial neighborhood of $D_\text{soln}$. We limit our attention to neighborhoods defined by the union of blocks that are at most $N_b$ blocks away from any block contained in $D_\text{soln}$, $$D_\text{xsoln} = \left\{ B(\mathbf{m}) :
| \mathbf{n}-\mathbf{m} | \le N_b,\,\,
B(\mathbf{n}) \in D_\text{soln},\,\,
\mathbf{m},\mathbf{n} \in \mathbb{Z}^3 \right\}.
\label{eq:def_xsoln}$$
- *Buffer blocks* ($D_\text{buffer}$): union of blocks belonging to $D_\text{xsoln}$, but not belonging to $D_\text{soln}$, i.e. $D_\text{buffer}=D_\text{xsoln} \setminus D_\text{soln}$. (The domain $D_\text{buffer}$ is not one of the three primary sub-domains, but it is introduced to facilitate the subsequent discussion.)
The criteria for selecting which blocks belong to $D_\text{supp}$ and $D_\text{soln}$ are discussed in Section \[sec:truncation\_adaptivity\], and the techniques for selecting values of $N_b$ are discussed in Section \[sec:truncation\_refresh\].
We now introduce the “mask operator” $\mathsf{M}^{\gamma}_\mathcal{Q}:\mathbb{R}^\mathcal{Q}\mapsto\mathbb{R}^\mathcal{Q}$ associated with the grid space $\mathcal{Q}$ and the domain $\gamma$, which is defined by $$[ \mathsf{M}^{\gamma}_\mathcal{Q} \mathsf{q} ] (\mathbf{n}) =
\left\{
\begin{array}{cc}
\mathsf{q}(\mathbf{n}) &
\,\, \text{if} \,\, \mathbf{n} \in \text{\textsl{ind}}[B]
\,\, \text{and} \,\, B \in D_\gamma \\
0 & \text{otherwise}
\end{array}
\right.,$$ where $\mathsf{q}\in\mathbb{R}^\mathcal{Q}$, and $\text{\textsl{ind}}[B]$ denotes the set of all indices of the unbounded staggered grid associated with block $B$. Mask operators are subsequently used to formally define operations performed on finite domains. For example, the operation $\mathsf{G} \mathsf{d}$ performed over $D_\text{xsoln}$ is defined as $\mathsf{M}^{\text{xsoln}}_\mathcal{F} \mathsf{G} \mathsf{M}^{\text{xsoln}}_\mathcal{C} \mathsf{d}$. For this particular operation, the values of $\mathsf{M}^{\text{xsoln}}_\mathcal{F} \mathsf{G} \mathsf{M}^{\text{xsoln}}_\mathcal{C} \mathsf{d}$ and $\mathsf{G} \mathsf{d}$ are equivalent for grid cells in $D_\text{xsoln}$, except for a single layer of grid cells on the boundary of $D_\text{xsoln}$. Computationally efficient implementations of $\mathsf{M}^{\gamma^\prime}_{\mathcal{Q}^\prime}\mathsf{A}\mathsf{M}^{\gamma}_\mathcal{Q}$ recognize that all non-trivial numerical operations are limited to grid cells contained in either $D_\gamma$ or $D_{\gamma^\prime}$.
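A minimal data-structure sketch of the mask operator follows, assuming a dictionary-of-cells field representation and illustrative block and cell indices (an actual implementation would store contiguous per-block arrays):

```python
# Block-structured mask operator M^gamma: values are kept only on cells whose
# containing block belongs to the active set; everything else is treated as zero.

nb = 4  # cells per block per direction (n^b); illustrative choice

def block_of(i, j):
    # index of the block containing unbounded-grid cell (i, j)
    return (i // nb, j // nb)

def mask(field, active):
    # field: dict mapping cell index (i, j) -> value; active: set of block indices
    return {c: v for c, v in field.items() if block_of(*c) in active}

# Example: a field supported on three blocks, masked to an active set of two.
field = {(0, 0): 1.0, (5, 1): 2.0, (9, 9): 3.0}
active = {(0, 0), (1, 0)}   # blocks B_{0,0} and B_{1,0}
masked = mask(field, active)
```

Note that masking is idempotent, `mask(mask(f, A), A) == mask(f, A)`, consistent with $\mathsf{M}^{\gamma}_\mathcal{Q}\mathsf{M}^{\gamma}_\mathcal{Q} = \mathsf{M}^{\gamma}_\mathcal{Q}$.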
Adaptivity {#sec:truncation_adaptivity}
----------
In this section we discuss the criteria used to select the blocks belonging to $D_\text{supp}$ and $D_\text{soln}$. It follows from subsequent discussions that the field values on $D_\text{soln} \setminus D_\text{supp}$ can be computed as a post-processing step from the field values on $D_\text{supp}$; therefore, only the criteria used to define $D_\text{supp}$ affect the accuracy of the computed flow field. We allow for $D_\text{soln} \ne D_\text{supp}$ in order to emphasize that the present algorithm is able to track values of $\mathsf{u}$ and $\mathsf{d}$ over arbitrary regions of interest.
Consider a function $W$ that maps an unbounded grid of blocks, i.e. $D_\infty$, to an unbounded grid of positive real scalars. We define the support and solution regions as
$$\begin{aligned}
D_\text{supp} &= \left\{ B(\mathbf{n}) : [ W_\text{supp}(D_\infty) ] (\mathbf{n})
> \epsilon_\text{supp},
\,\, \mathbf{n} \in \mathbb{Z}^3 \right\}, \label{eq:d_supp} \\
D_\text{soln} &= \left\{ B(\mathbf{n}) : [ W_\text{soln}(D_\infty) ] (\mathbf{n})
> \epsilon_\text{soln},
\,\, \mathbf{n} \in \mathbb{Z}^3 \right\}, \label{eq:d_soln}
\end{aligned}$$
\[eq:d\_defs\]
respectively. The functions $W_\text{supp}$ and $W_\text{soln}$, and the scalars $\epsilon_\text{supp}$ and $\epsilon_\text{soln}$ are referred to as weight functions and threshold levels, respectively.
Although the weight function $W_\text{supp}$ can be defined to reflect any block selection criteria, we limit our attention to cases for which $[ W_\text{supp}(D_\infty) ] (\mathbf{n})$ reflects the magnitude of the fields $\mathsf{C}\mathsf{u}$ and $\mathsf{G}^\dagger \tilde{\mathsf{N}}(\mathsf{u}+\mathsf{u}_\infty)$ over the block $B(\mathbf{n})$. This choice of $W_\text{supp}$ facilitates establishing relationships between the threshold level $\epsilon_\text{supp}$ and the error incurred by neglecting source terms values outside $D_\text{supp}$ when solving the discrete Poisson problems of Eq. and . As a representative example, we consider the weight function $W_\text{supp}$ given by
$$[ W_\text{supp}(D_\infty) ] (\mathbf{n}) = \max\left(
{\mu(\mathbf{n})}/{\mu_\text{global}},\,
{\nu(\mathbf{n})}/{\nu_\text{global}} \right),
\label{eq:weight_fun}$$
$$\begin{aligned}
{2}
&\mu(\mathbf{n}) = \
\max_{\mathbf{m} \in \text{\textsl{ind}}[B(\mathbf{n})]}
| [\mathsf{C}\mathsf{u}](\mathbf{m})
|, \quad
&&\mu_\text{global} = \max_{\mathbf{n}\in\mathbb{Z}^3} \mu(\mathbf{n}) , \\
&\nu(\mathbf{n}) = \
\max_{\mathbf{m} \in \text{\textsl{ind}}[B(\mathbf{n})]}
| [ \mathsf{G}^\dagger \tilde{\mathsf{N}}(\mathsf{u}+\mathsf{u}_\infty) ] (\mathbf{m})
|, \quad
&&\nu_\text{global} = \max_{\mathbf{n}\in\mathbb{Z}^3} \nu(\mathbf{n}).
\end{aligned}$$
In the absence of any error associated with computing the action of $\mathsf{L}_\mathcal{Q}^{-1}$, this expression for $W_\text{supp}$ results in an upper bound of $\epsilon_\text{supp}$ for the point-wise normalized residual of the active domain approximations of Eq. and .[^13] For these cases, the point-wise normalized residual is defined as $\|\mathsf{r}\|_\infty/\|\mathsf{x}\|_\infty$, where $$\mathsf{r} = \mathsf{x} - \mathsf{M}^{\text{supp}}_\mathcal{Q} \mathsf{L}_\mathcal{Q} \mathsf{y},
\quad
\mathsf{y} = \mathsf{M}^{\text{xsoln}}_\mathcal{Q} \mathsf{L}_\mathcal{Q}^{-1}
\mathsf{M}^{\text{supp}}_\mathcal{Q} \mathsf{x},$$ and $\mathsf{x}$ is the source field of the corresponding discrete Poisson problem.
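The threshold-based selection of $D_\text{supp}$ can be sketched in one dimension for brevity: each block's weight is the maximum field magnitude over its cells, normalized by the global maximum, and a block is retained when its weight exceeds $\epsilon_\text{supp}$. The block size, decay rate, and threshold below are illustrative stand-ins for the paper's three-dimensional criterion.

```python
import math

nb = 8            # cells per block
nblocks = 10
eps_supp = 1e-3

# exponentially decaying "vorticity-like" field sampled on nb * nblocks cells
f = [math.exp(-0.2 * i) for i in range(nb * nblocks)]

# blockwise weight: max magnitude over the cells of each block (mu in Eq.)
mu = [max(abs(f[b * nb + k]) for k in range(nb)) for b in range(nblocks)]
mu_global = max(mu)

# blocks retained in D_supp
D_supp = [b for b in range(nblocks) if mu[b] / mu_global > eps_supp]
```

Because the field decays exponentially, only a small contiguous set of blocks survives the threshold, and every discarded block carries values at or below $\epsilon_\text{supp}$ relative to the global maximum.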
In general, as the solution changes over time, the domain $D_\text{supp}$, as defined by Eq. and Eq. , will also change. Significant amounts of non-negligible source terms are prevented from being advected or diffused outside $D_\text{supp}$ by recomputing and, if necessary, reinitializing the active domain at the beginning of a time-step. This operation is performed by first computing $\mathsf{w}\leftarrow\mathsf{C}\mathsf{u}$ and $\mathsf{q}\leftarrow -\mathsf{G}^\dagger \tilde{\mathsf{N}}(\mathsf{u}+\mathsf{u}_\infty)$ on $D_\text{xsoln}$. Next, values of $\mathsf{w}$ and $\mathsf{q}$ of grid cells belonging to blocks in $D_\text{buffer}$ that have been significantly contaminated by finite boundary errors are zeroed. Finally, $[ W_\text{supp}(D_\infty) ] (\mathbf{n})$ and $[ W_\text{soln}(D_\infty) ] (\mathbf{n})$ are computed using Eq. for all $\mathbf{n}\in\mathbb{Z}^3$ such that $B(\mathbf{n}) \in D_\text{xsoln}$ and are set to zero otherwise.
If either of the newly computed $D_\text{supp}$ or $D_\text{soln}$ differ from their respective previous values, then it is necessary to reinitialize the active grid and compute the discrete velocity perturbation, $\mathsf{u}$, over the new $D_\text{xsoln}$. By construction, all non-negligible values of the discrete vorticity, $\mathsf{w}$, are contained in $D_\text{supp}$; therefore, $\mathsf{u}$ over $D_\text{xsoln}$ can be computed as $$\mathsf{a} \leftarrow -\mathsf{M}^{\text{xsoln}}_\mathcal{E}
\mathsf{L}_\mathcal{E}^{-1} \mathsf{M}^{\text{supp}}_\mathcal{E} \mathsf{w},\quad
\mathsf{u} \leftarrow \mathsf{M}^{\text{xsoln}}_\mathcal{F}
\mathsf{C}^\dagger \mathsf{M}^{\text{xsoln}}_\mathcal{E} \mathsf{a}.
\label{eq:vel_refresh}$$ Subsequently, we denote the procedure implied by Eq. as $\mathsf{u} \leftarrow \text{Vor2Vel}( \mathsf{w} )$.
We emphasize that the present algorithm is also compatible with other choices of weight functions. Using weight functions that are well-suited for capturing the relevant flow physics of a particular application can potentially reduce the size of the active domain and the number of operations required to accurately simulate the flow. For example, if we are primarily interested in capturing the local physics of a flow over a particular region centered at $\mathbf{x}_0$, then a weight function $|\mathbf{n}-\mathbf{x}_0|^{-\alpha}[ W(D_\infty) ] (\mathbf{n})$ with $\alpha>0$ and $W$ given by Eq. might be an appropriate choice. Unless otherwise stated, subsequent discussions assume that $W_\text{supp}$ is defined by Eq. .
Velocity refresh {#sec:truncation_refresh}
----------------
In this section we present a set of techniques for limiting the error introduced from truncating non-compact fields that decay algebraically, e.g. $\mathsf{u}$ and $\mathsf{d}$, when computing the action of local operators. We limit the present discussion to issues that arise from evaluating expressions involving $\mathsf{E}^{\mathsf{L}}_\mathcal{Q}(\alpha)$ on the finite active domain since this operator has the largest stencil of all local operators involved in the IF-HERK and projection methods.
We recall that the action of $\mathsf{E}^{\mathsf{L}}_\mathcal{Q}(\alpha)$ on $\mathsf{q}\in\mathbb{R}^\mathcal{Q}$ is computed as $[ \mathsf{G}_{\mathsf{E}}(\alpha) * \mathsf{q}](\mathbf{n})$. Formally, $\mathsf{G}_{\mathsf{E}}(\alpha)$ has an infinite support, but, as discussed in Section \[sec:spatial\_lgfs\], $[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n})$ decays rapidly as $|\mathbf{n}| \rightarrow \infty$; therefore, it is possible to approximate $\mathsf{G}_{\mathsf{E}}(\alpha)$ to a prescribed tolerance using a finite support. Consequently, for a given $\alpha$, there exists some $n_{\mathsf{E}}\in\mathbb{Z}$ such that the field induced from an arbitrary source field can be computed at a distance $n_{\mathsf{E}} \Delta x$ from $\partial D_\text{xsoln}$ to a prescribed accuracy $\epsilon_{\mathsf{E}}$. By choosing the parameter $N_b$, used to define $D_\text{xsoln}$ in Eq. , to be equal to or greater than $\lceil n_{\mathsf{E}} / n^b \rceil$, it is possible to evaluate the action of $\mathsf{E}^{\mathsf{L}}_\mathcal{Q}(\alpha)$ on $D_\text{soln}$ to an accuracy $\epsilon_{\mathsf{E}}$. As a result, the flow inside $D_\text{soln}$ remains an accurate approximation of the flow that would have been obtained using the entire unbounded grid.
As the solution is evolved using the IF-HERK method, the operator $\mathsf{E}^{\mathsf{L}}_\mathcal{Q}(\alpha)$ is repeatedly applied to various grid functions, causing the error associated with truncated non-compact source fields to progressively propagate into the interior of $D_\text{xsoln}$. The action of $\prod_{i=1}^{n} \mathsf{M}^{\text{xsoln}}_\mathcal{Q} \mathsf{E}^{\mathsf{L}}_\mathcal{Q}(\alpha_i) \mathsf{M}^{\text{xsoln}}_\mathcal{Q}$ is well-approximated by $\mathsf{M}^{\text{xsoln}}_\mathcal{Q} \mathsf{E}^{\mathsf{L}}_\mathcal{Q}(\beta) \mathsf{M}^{\text{xsoln}}_\mathcal{Q}$, where $\beta=\sum_{i=1}^{n} \alpha_i$. Given that the physical values of the nonlinear terms in the IF-HERK algorithm are approximately zero on $D_\text{buffer}$, the minimum buffer region required to integrate $\mathsf{u}$ over $q$ time-steps is determined by the support of $\mathsf{G}_{\mathsf{E}}(q\beta)$, where $\beta=\sum_{i=1}^{s} \frac{\Delta \tilde{c}_i \Delta t}{(\Delta x)^2\text{Re}} = \frac{\Delta t}{(\Delta x)^2\text{Re}}$. A procedure for obtaining estimates for $n_{\mathsf{E}}$ from $q$ and $\beta$ is provided in \[app:iferror\]. This procedure is extended to obtain an upper bound, $q_\text{max}$, on the number of time-steps, $q$, before the error at a prescribed distance $n_{\mathsf{E}} \Delta x$ away from $\partial D_\text{xsoln}$ exceeds a prescribed value of $\epsilon_{\mathsf{E}}$. At its minimum, the depth of the buffer region is $n^b N_b \Delta x$; therefore, the present method takes $n_{\mathsf{E}}$ to be equal to $n^b N_b$.
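The relationship between the accumulated integrating-factor parameter and the required buffer depth can be illustrated in one dimension, where the kernel of $\exp(\alpha\mathsf{L})$ for the 1-D discrete Laplacian has the standard lattice heat-kernel form $g(n) = e^{-2\alpha} I_n(2\alpha)$, with $I_n$ the modified Bessel function. The values of $\alpha$ and the tolerance below are illustrative, and this is an analogue of, not the paper's procedure for, estimating $n_{\mathsf{E}}$.

```python
import math

def bessel_i(nu, x, terms=60):
    # modified Bessel function of non-negative integer order, via its power series
    return sum((x / 2.0) ** (nu + 2 * k)
               / (math.factorial(k) * math.factorial(nu + k))
               for k in range(terms))

def kernel(n, alpha):
    # 1-D kernel of exp(alpha * L): exp(-2 alpha) * I_n(2 alpha)
    return math.exp(-2.0 * alpha) * bessel_i(abs(n), 2.0 * alpha)

def buffer_depth(alpha, eps):
    # smallest n_E such that the kernel magnitude drops below eps
    n = 0
    while kernel(n, alpha) >= eps:
        n += 1
    return n

alpha = 0.5                                  # stand-in for q * beta
total = sum(kernel(n, alpha) for n in range(-30, 31))
depth_q = buffer_depth(alpha, 1e-8)          # buffer needed after q steps
depth_4q = buffer_depth(4.0 * alpha, 1e-8)   # more elapsed steps -> deeper buffer
```

The kernel sums to unity (it is a discrete diffusion kernel) and widens as $\alpha = q\beta$ grows, which is why a fixed buffer depth $n^b N_b$ admits only a bounded number of time-steps $q_\text{max}$ between velocity refreshes.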
Provided $q_\text{max}\ge1$, the solution is integrated over multiple time-steps before the error from truncating non-compact source field starts to significantly affect the accuracy of the solution on $D_\text{soln}$.[^14] In order to maintain the prescribed accuracy, after $q_\text{max}$ time-steps the discrete velocity perturbation on $D_\text{xsoln}$ is recomputed or *refreshed* from the discrete vorticity on $D_\text{supp}$ using the $\text{Vor2Vel}$ procedure.
Algorithm summary {#sec:algorithm}
=================
The present method for solving the incompressible Navier-Stokes on formally unbounded Cartesian grids using a finite number of operations and storage, referred to as the NS-LGF method, is summarized in this section. Implementation details are omitted since they are beyond the scope of the present work. Instead, we refer the reader to the parallel implementation of the LGF-FMM [@liska2014], which can be readily extended to accommodate the additional operations required by the NS-LGF method.
An outline of the steps performed by the NS-LGF algorithm at $k$-th time-step is as follows:
1. *Preliminary*: compute the discrete vorticity, $\mathsf{w}_k$, and divergence of the Lamb vector, $\mathsf{q}_k$.
$$\begin{aligned}
\mathsf{w}_k &\leftarrow \mathsf{M}^\text{xsoln}_\mathcal{E} \mathsf{C} \mathsf{M}^\text{xsoln}_\mathcal{F} \mathsf{u}_k, \\
\mathsf{q}_k &\leftarrow -\mathsf{M}^\text{xsoln}_\mathcal{C}
\mathsf{G}^\dagger \mathsf{M}^\text{xsoln}_\mathcal{F}
\tilde{\mathsf{N}}(\mathsf{M}^\text{xsoln}_\mathcal{F}(\mathsf{u}_k+\mathsf{u}_\infty(t_k))).
\end{aligned}$$
2. *Grid update*: update the computational grid based on prescribed criteria.
1. *Query*: use weight functions $W_\text{supp}$ and $W_\text{soln}$, threshold values $\epsilon_\text{supp}$ and $\epsilon_\text{soln}$, and fields $\mathsf{w}_k$ and $\mathsf{q}_k$ to determine whether $D_\text{supp}$ or $D_\text{soln}$ need to be updated.
2. *Update*: (if necessary) update $D_\text{supp}$, $D_\text{soln}$, and $D_\text{xsoln}$ by adding or removing blocks. Copy the values of the discrete vorticity from the old to the new computational grid for all $B \in D^{\text{new}}_\text{supp} \cap D^{\text{old}}_\text{supp}$, where $D^{\text{new}}_\text{supp}$ and $D^{\text{old}}_\text{supp}$ denote $D_\text{supp}$ after and before the update, respectively.
3. *Velocity refresh*: compute the discrete velocity perturbation, $\mathsf{u}_k$, from the discrete vorticity, $\mathsf{w}_k$.
1. *Query*: this operation is required if either the grid has been updated or if the number of time-steps since the last refresh is equal or greater than $q_\text{max}$.
2. *Refresh*: (if necessary) compute $\mathsf{u}_k$ using: $$\mathsf{u}_k \leftarrow \text{Vor2Vel}( \mathsf{w}_k ),$$ where the $\text{Vor2Vel}$ procedure given by Eq. .
4. *Time integration*: compute $\mathsf{u}_{k+1}$, $t_{k+1}$, and $\mathsf{p}_{k+1}$ using: $$(\mathsf{u}_{k+1},t_{k+1},\mathsf{p}_{k+1})
\leftarrow \text{xIF-HERK}(\mathsf{u}_{k},t_k),$$ where the $\text{xIF-HERK}$ algorithm is the finite computational grid version of the $\text{IF-HERK}$ algorithm.
The $\text{xIF-HERK}$ algorithm is identical to the $\text{IF-HERK}$ algorithm, except for the presence of mask operators which are used to confine all operations to the finite active domain. With the exception of a few special cases, the $\text{xIF-HERK}$ algorithm is obtained by multiplying all operators and grid functions present in the $\text{IF-HERK}$ algorithm from the left by the appropriate $\mathsf{M}^\text{xsoln}_\mathcal{Q}$, e.g. $\mathsf{A} \rightarrow \mathsf{M}^\text{xsoln}_\mathcal{Q} \mathsf{A}$ and $\mathsf{y} \rightarrow \mathsf{M}^\text{xsoln}_\mathcal{Q} \mathsf{y}$. The exceptions to this rule correspond to the expressions for $\mathsf{g}_k^i$ and $\hat{\mathsf{d}}_k^i$, which are given by
$$\mathsf{g}_k^i = \tilde{a}_{i,i} \Delta t
\mathsf{M}^\text{soln}_\mathcal{F} \tilde{\mathsf{N}}
\left( \mathsf{M}^\text{xsoln}_\mathcal{F} ( \mathsf{u}_k^{i-1}
+ \mathsf{u}_\infty(t_k^{i-1}) ) \right),
\label{eq:rhs_final}$$
$$\hat{\mathsf{d}}_k^{i} = - \mathsf{M}^\text{xsoln}_\mathcal{C}
\mathsf{L}_\mathcal{C}^{-1} \mathsf{M}^\text{supp}_\mathcal{C}
\mathsf{G}^\dagger \mathsf{M}^\text{xsoln}_\mathcal{F} \mathsf{r}_k^{i}.
\label{eq:dpoisson_final}$$
Both Eq. and reflect the fact that, by construction, the non-negligible physical values of $\mathsf{w}_k$ and $\mathsf{q}_k$ are contained in $D_\text{supp}$.
The operation count for the $k$-th time-step of the NS-LGF method, denoted by $N^{\text{NS}}_k$, is dominated by the number of operations required to evaluate the actions of $\mathsf{L}^{-1}_\mathcal{Q}$ and $\mathsf{E}^{\mathsf{L}}_\mathcal{Q}$. As a result, an estimate for $N^{\text{NS}}_k$ is given by: $$N^{\text{NS}}_k
\approx s N^{\mathsf{L}}_{k}
+ 3 C(s) N^{\mathsf{E}}_{k}
+ \lceil 3 N^{\mathsf{L}}_{k} \rfloor_k,
\label{eq:op_count}$$ where $s$ is the number of stages of the HERK scheme. $N^{\mathsf{L}}_{k}$ and $N^{\mathsf{E}}_{k}$ denote the number of operations required to compute the action of $\mathsf{M}^\text{xsoln}_\mathcal{Q} \mathsf{L}^{-1}_\mathcal{Q} \mathsf{M}^\text{supp}_\mathcal{Q}$ and $\mathsf{M}^\text{xsoln}_\mathcal{Q} \mathsf{E}^{\mathsf{L}}_\mathcal{Q} \mathsf{M}^\text{xsoln}_\mathcal{Q}$, respectively, using the LGF-FMM for scalar grid spaces.[^15] Detailed estimates for the values of $N^{\mathsf{L}}_{k}$ and $N^{\mathsf{E}}_{k}$ can be obtained from the discussion of the LGF-FMM [@liska2014], but we note here that both $N^{\mathsf{L}}_{k}$ and $N^{\mathsf{E}}_{k}$ scale as $\mathcal{O}(N)$ for sufficiently large values of $N$, where $N$ is the total number of grid cells of the active domain. The notation $\lceil\,\cdot\,\rfloor_k$ is used to clarify that the cost associated with the velocity update, i.e. $3 N^{\mathsf{L}}_{k}$, should only be included if a velocity update is performed. Lastly, $C(s)$ specifies the number of integrating factors required by an $s$-stage $\text{IF-HERK}$ scheme. In general, $C(s)$ is equal to $C_0(s)$, where $$C_0(s) = s + \left[ \frac{ (s-1) s }{ 2 } \right].$$ For the special case of second-order IF-HERK schemes, $C(s)$ reduces to $C_0(s)-1$.[^16]
For convenience, a summary of the parameters used in our treatment of the active computational domain is provided by Table \[tab:run\_parameters\].
*Symbol* *Description* *Section*
------------------------ ----------------------------------------- --------------------------------
  $N_b$                    Width of $D_\text{buffer}$ (no. blocks)  \[sec:truncation\_active\]
$n^{b}$ Block size (no. cells) \[sec:truncation\_active\]
$\epsilon_\text{FMM}$ LGF-FMM tolerance \[sec:spatial\_lgfs\]
$\epsilon_\text{supp}$ Support region threshold \[sec:truncation\_adaptivity\]
$\epsilon_\mathsf{E}$ Buffer region tolerance \[sec:truncation\_refresh\]
: Parameters used in the treatment of the finite computational domain. \[tab:run\_parameters\]
Of the parameters listed in Table \[tab:run\_parameters\], only $\epsilon_\text{FMM}$, $\epsilon_\mathsf{E}$, and $\epsilon_\text{supp}$ affect the accuracy of the numerical simulation. The *solution error* of the NS-LGF method, i.e. the error associated with approximately solving the fully discretized unbounded grid equations, is approximately bounded above by the sum of these three parameters.
The field values used to compute $D_\text{supp}$ should represent field values that would be obtained using the unbounded grid in the absence of numerical errors associated with the evaluation of discrete operators. Spurious and unnecessary changes to the active domain are avoided by requiring $$\max( \epsilon_\text{FMM}, \epsilon_\mathsf{E} ) < \alpha \epsilon_\text{supp},
\label{eq:cond_dgrowth}$$ where $\alpha<1$ is a safety parameter specifying the sensitivity of the adaptive scheme to the solution errors associated with $\epsilon_\text{FMM}$ and $\epsilon_\mathsf{E}$.[^17] Furthermore, using parameters that satisfy Eq. eliminates the inclusion of blocks that only contain field values that are on the same order as the solution error.
The values for $n^b$ and $N_b$ can also significantly affect the number of numerical operations performed by the NS-LGF method. Smaller values of $n^b$ typically result in smaller active domains, but require more frequent velocity updates and often require the use of LGF-FMM schemes with less than optimal computational rates. In practice, computationally efficient schemes are obtained by setting $N_b=1$ and determining the lower bound for $n^b$, denoted by $n_0^b$, from the prescribed value of $\epsilon_\mathsf{E}$. Next, starting from $n_0^b$, progressively larger values of $n^b$ are considered until an efficient LGF-FMM scheme that achieves the prescribed $\epsilon_\text{FMM}$ tolerance is obtained. The construction and computational performance of LGF-FMM schemes are discussed in [@liska2014].
Verification examples {#sec:verif}
=====================
The behavior of the NS-LGF method is verified through numerical simulations of thin vortex rings. We consider vortex rings of ring-radius $R$ and core-radius $\delta$, with circulation $\Gamma$ and Reynolds number $\text{Re}=\frac{\Gamma}{\nu}$, where $\nu$ is the kinematic viscosity of the fluid. Unless otherwise stated, simulations are initiated with a vorticity distribution given by $$\omega_\theta(r,z) = \frac{\Gamma}{\pi \delta^2}
\exp\left( -\frac{ z^2 + (r-R)^2 } { \delta^2 } \right),
\quad
\omega_z(r,z) = 0,
\label{eq:initial_omega_gauss}$$ where $r=\sqrt{x^2+y^2}$ and $\theta=\tan^{-1}(y/x)$. As a result, the vortex ring initially translates in the positive $z$-direction due to its self-induced velocity [@saffman1992].
The numerical experiments discussed in this section are initialized by first specifying an initial discrete vorticity, $\mathsf{w}_0$, and then using Eq. to obtain an initial discrete velocity perturbation, $\mathsf{u}_0$. This procedure naturally leads to a $\mathsf{u}_0$ that is compatible with the IF-HERK method, i.e. $\mathsf{G}^\dagger \mathsf{u}_0=0$. The initial active domain is chosen such that $|\boldsymbol{\omega}|<10^{-10}$ outside $D_\text{supp}$. In order to avoid significant numerical artifacts due to the jump in the direction of the vorticity field at the ring origin, we limit our attention to vortex rings for which $|\boldsymbol{\omega}_\text{center}|<10^{-10}\max|\boldsymbol{\omega}|$, where $\boldsymbol{\omega}_\text{center}$ is the value of $\boldsymbol{\omega}$ at the center of the ring. For the case of Eq. , this condition is satisfied for $\delta/R<0.2$.
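The $\delta/R<0.2$ criterion can be checked arithmetically: for the Gaussian ring distribution, the vorticity magnitude at the ring center ($r=0$, $z=0$) relative to the peak value $\Gamma/(\pi\delta^2)$ is $\exp(-(R/\delta)^2)$. The helper below is an illustrative check, not part of the solver.

```python
import math

def center_ratio(delta_over_R):
    # |omega_center| / max|omega| = exp(-(R/delta)^2) for the Gaussian ring
    return math.exp(-1.0 / delta_over_R ** 2)

ratio_02 = center_ratio(0.2)    # exp(-25), well below the 1e-10 threshold
ratio_025 = center_ratio(0.25)  # exp(-16), above the threshold
```

At $\delta/R=0.2$ the ratio is $e^{-25}\approx1.4\times10^{-11}<10^{-10}$, while a slightly thicker core ($\delta/R=0.25$) already violates the bound, consistent with the stated limit.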
Provided a sufficiently large initial active domain, any sufficiently accurate process for computing $\mathsf{w}_0$ from $\boldsymbol{\omega}_0$ can be used to initialize the numerical simulations. Yet it is convenient to use a process that naturally leads to a $\mathsf{w}_0$ such that $\mathsf{D}\mathsf{w}_0 \approx 0$. In the absence of any numerical errors, $\tilde{\mathsf{w}}_0 = \mathsf{C}\mathsf{u}_0$ is equal to $\mathsf{w}_0$ if and only if $\mathsf{D}\mathsf{w}_0=0$. For the case of $\mathsf{D}\mathsf{w}_0 \ne 0$, the support of $\tilde{\mathsf{w}}_0$ is typically larger than the support of $\mathsf{w}_0$, which in turn leads to larger active domains and complicates initial error estimates, i.e. $|\mathsf{w}_0|<\epsilon$ in $D_\text{supp}$ does not imply $|\tilde{\mathsf{w}}_0|<\epsilon$ in $D_\text{supp}$. Provided $\nabla \cdot \boldsymbol{\omega}=0$, it is possible to construct $\mathsf{w}_0$ such that the magnitude of $\mathsf{D}\mathsf{w}_0$ is less than a prescribed tolerance by computing approximate values of the vorticity flux over the faces of the dual grid and applying the Divergence theorem to each dual cell.[^18] For all test cases, a high-order quadrature scheme is used to integrate the initial vorticity distribution over the faces of the dual grid such that the resulting $\mathsf{w}_0$ satisfies $\|\mathsf{D}\mathsf{w}_0\|_\infty \approx 10^{-10}$.
Test cases are performed using $n^b=16$ and $N_b = 1$. This choice of parameters leads to $\epsilon_\text{FMM}<10^{-8}$ for all values of $\Delta x$, $\Delta t$, and $\text{Re}$ considered. The values of $\epsilon_\text{supp}$ and $\epsilon_\mathsf{E}$ are taken to be $\epsilon_\text{supp} = 0.1 \epsilon^*$ and $\epsilon_\mathsf{E} = \epsilon^*$. The value of $\epsilon^*$ is varied across different sets of simulations, but is always such that $10^{-8} \le \epsilon^* \le 10^{-2}$. The support domain $D_\text{supp}$ is computed using Eq. and Eq. , and the solution domain $D_\text{soln}$ is set to be equal to $D_\text{supp}$. It follows from our choice of parameters that the overall solution error is always bounded above by $\epsilon^*$.[^19]
With the exception of a few test cases discussed in Section \[sec:verif-conv\], all numerical experiments are performed using the IF-HERK scheme denoted as “Scheme A” in Section \[sec:temporal\_ifherk\]. The time-step size, $\Delta t$, is held fixed during each simulation and chosen such that the $\text{CFL}$, based on the maximum point-wise velocity magnitude, does not exceed 0.75. Unless otherwise stated, the freestream velocity, $\mathsf{u}_\infty$, is set to be zero.
Discretization error {#sec:verif-conv}
--------------------
The order of accuracy of the discretization techniques is verified using spatial and temporal refinement studies on the early evolution of a vortex ring at $\text{Re}_0=\numprint{1000}$ with an initial vorticity distribution given by $$\omega_\theta(r,z) = \left\{ \begin{array}{cl}
\alpha \frac{\Gamma}{R^2} \exp\left( -4s^2/(R^2-s^2) \right) & \text{if}\,\,s \le R \\
0 & \text{otherwise}
\end{array} \right. , \quad
\omega_z(r,z) = 0,
\label{eq:initial_omega_bump}$$ where $s^2 = z^2 + (r-R)^2$ and $\alpha$ is chosen such that $\omega_\theta$ integrates to $\Gamma$, i.e. $\alpha \simeq 0.54857674$.[^20] Test cases are performed using fixed grids that are sufficiently large such that at any time-step of the simulation the active domain corresponds to a value of $\epsilon^*$ less than $10^{-8}$.
We use $\varepsilon_\mathbf{u} = \| \mathsf{u} - \mathsf{T}_\mathcal{F} \mathsf{u}^* \|_\infty / \| \mathsf{u}^* \|_\infty$ and $\varepsilon_\mathbf{p} = \| \mathsf{p} - \mathsf{T}_\mathcal{C} \mathsf{p}^* \|_\infty / \| \mathsf{p}^* \|_\infty$ to approximate the error at time $T$ of the velocity field, $\mathsf{u}$, and the pressure field, $\mathsf{p}$, respectively. The superscript ${}^*$ is used to denote grid functions obtained from the test case with the highest resolution, i.e. smallest $\Delta x$ or $\Delta t$, included in the corresponding refinement study. Point-wise comparisons between grid functions at different refinement levels are made possible through the use of the coarsening operators $\mathsf{T}_\mathcal{F}$ and $\mathsf{T}_\mathcal{C}$. Finally, we define $\|\mathsf{x}\|_\infty$ as the maximum value of $|\mathsf{x}(\mathbf{n})|$ for all $\mathbf{n}$ associated with grid locations in $D_\text{soln}$.
![ Velocity error, $\varepsilon_\mathbf{u}$, and pressure error, $\varepsilon_\mathbf{p}$, for test cases. Spatial refinement study verifies second-order accuracy of the spatial discretization technique (*left*). Temporal refinement studies verify the expected order of accuracy of the three time integration schemes defined in Section \[sec:temporal\_ifherk\] (*right*). \[fig:verif-conv-xt\] ](fig_3.pdf){width="\textwidth"}
The spatial refinement study consists of seven test cases corresponding to $\Delta x / \Delta x_0 = 2^{0}, 2^{-1},\dots,2^{-6}$. Test cases are performed using the same $\Delta t$, and $\varepsilon_\mathbf{u}$ and $\varepsilon_\mathbf{p}$ are evaluated at $T=10\Delta t$. The computational grids are constructed such that the locations of the vertices of coarser grids always coincide with the locations of the vertices of finer grids. This enables the coarsened solution fields $\mathsf{T}_\mathcal{C}\mathsf{p}^*$ and $\mathsf{T}_\mathcal{F}\mathsf{u}^*$ to be computed by recursively averaging the values of the 8 (4) fine grid cells (faces) occupying the same physical region as the corresponding coarse grid cell (face). The slope of the error curves depicted in the left plot of Figure \[fig:verif-conv-xt\] verifies that the solutions are second-order accurate in $\Delta x$.
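Assuming cell and face data are stored as dense arrays, one level of this coarsening can be sketched as follows (function names are ours; $\mathsf{T}_\mathcal{C}$ averages the $2\times2\times2$ fine cells per coarse cell, while each component of $\mathsf{T}_\mathcal{F}$ averages the $2\times2$ fine faces tiling a coarse face):

```python
import numpy as np

def coarsen_cells(p):
    # T_C: average the 2x2x2 fine cells occupying each coarse cell
    n = np.array(p.shape) // 2
    return p.reshape(n[0], 2, n[1], 2, n[2], 2).mean(axis=(1, 3, 5))

def coarsen_faces_x(fx):
    # T_F, x-component: fine x-face planes 0, 2, 4, ... coincide with the
    # coarse face planes; average the 2x2 fine faces on each coarse face
    n = fx.shape[1] // 2
    sub = fx[::2]
    return sub.reshape(n + 1, n, 2, n, 2).mean(axis=(2, 4))

# linear fields are reproduced exactly at the coarse cell/face locations
p = np.broadcast_to((np.arange(4) + 0.5)[:, None, None], (4, 4, 4)).copy()
fx = np.broadcast_to(np.arange(5.0)[:, None, None], (5, 4, 4)).copy()
pc, fc = coarsen_cells(p), coarsen_faces_x(fx)
print(pc[:, 0, 0], fc[:, 0, 0])
```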
Temporal refinement studies are performed using the three IF-HERK schemes, Scheme A–C, included in Section \[sec:temporal\_ifherk\]. For each scheme, a series of eight test cases is performed using $\Delta t / \Delta t_0 = 2^{0}, 2^{-1},\dots,2^{-7}$. All test cases employ the same computational grid, and $\varepsilon_\mathbf{u}$ and $\varepsilon_\mathbf{p}$ are evaluated at $T=10\Delta t_0$. Consequently, $\mathsf{T}_\mathcal{F}$ and $\mathsf{T}_\mathcal{C}$ are taken to be identity operators. The slopes of the error curves depicted in the right plot of Figure \[fig:verif-conv-xt\] verify that the accuracy with respect to $\Delta t$ of each scheme is the same as the order of accuracy expected from the IF-HERK order-conditions.[^21]
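The reported slopes correspond to observed orders of accuracy between successive refinement levels; for factor-of-two refinement these follow directly from the error ratios, as in the sketch below (demonstrated with synthetic second-order errors, not the paper's data):

```python
import numpy as np

def observed_orders(errors):
    # order between successive levels, assuming the grid (or time-step)
    # parameter is halved from one level to the next
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

h = 2.0 ** -np.arange(5)          # h, h/2, h/4, ...
errs = 3.0 * h**2                 # errors of a second-order method
p = observed_orders(errs)
print(p)
```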
Quality metrics for thin vortex rings {#sec:verif-conv-int}
-------------------------------------
In this section we consider the laminar evolution of a thin vortex ring at $\text{Re}_0=\numprint{7500}$ initiated with $\delta_0 / R_0 = 0.2$. Six test cases for different values of $\Delta x$ and $\Delta t$ are performed. The ratio $\Delta t / \Delta x = 0.5734 R_0/\Gamma_0$ is held constant across all test cases. Unlike the numerical experiments of Section \[sec:verif-conv\], the grid is allowed to freely adapt as the solution evolves. For all test cases, $\epsilon^*$ is taken to be $10^{-6}$, which is significantly smaller than the discretization error inferred from the discussion of Section \[sec:verif-conv\].
The evolution of isolated vortex rings is often characterized by the time-history of a few fundamental volume integrals. Quantities considered in the following numerical experiments include the hydrodynamic impulse $\boldsymbol{\mathcal{I}}$, the kinetic energy $\mathcal{K}$, enstrophy $\mathcal{E}$, the helicity $\mathcal{J}$, the Saffman-centroid $\boldsymbol{\mathcal{X}}$, and the ring-velocity $\boldsymbol{\mathcal{U}}$. Expressions for these quantities for unbounded fluid domains and exponentially decaying $\boldsymbol{\omega}$ fields are given by [@saffman1992]: $$\begin{aligned}
\begin{split}
\boldsymbol{\mathcal{I}}(t) &= \frac{1}{2} \int_{\mathbb{R}^3}
\mathbf{x} \times \boldsymbol{\omega}\, d\mathbf{x}, \\
\mathcal{K}(t) &= \int_{\mathbb{R}^3} \mathbf{u} \cdot
\left( \mathbf{x} \times \boldsymbol{\omega} \right) \, d\mathbf{x}, \\
\mathcal{E}(t) &= \frac{1}{2} \int_{\mathbb{R}^3}
\left| \boldsymbol{\omega} \right|^2 \, d\mathbf{x}, \\
\end{split} \quad
\begin{split}
\mathcal{J}(t) &= \int_{\mathbb{R}^3} \mathbf{u} \cdot \boldsymbol{\omega} \, d\mathbf{x}, \\
\boldsymbol{\mathcal{X}}(t) &= \frac{1}{2} \int_{\mathbb{R}^3}
\frac{ \left( \mathbf{x} \times \boldsymbol{\omega} \right) \cdot \boldsymbol{\mathcal{I}} }
{ |\boldsymbol{\mathcal{I}}|^2 } \mathbf{x}\, d\mathbf{x}
- \int_0^t \mathbf{u}_\infty(t^\prime)\,dt^\prime \\
\boldsymbol{\mathcal{U}}(t) &= \frac{d\boldsymbol{\mathcal{X}}}{dt}.
\end{split}
\label{eq:int_quantities}\end{aligned}$$ The hydrodynamic impulse, $\boldsymbol{\mathcal{I}}$, is a conserved quantity in the absence of non-conservative forces [@saffman1992]. As a result, $\boldsymbol{\mathcal{I}}$ provides a useful metric for assessing the accuracy and physical fidelity of numerical solutions. The time rate of change of $\mathcal{K}$ is related to $\mathcal{E}$ by the relationship $\frac{d}{dt}\mathcal{K} = - 2 \nu \mathcal{E}$. Differences in the time history of $\frac{d}{dt}\mathcal{K}$ between different numerical simulations of the same flow are commonly used to characterize the accuracy of solutions of unsteady flows [@stanaway1988; @archer2008; @cheng2015]. In the absence of viscosity, the helicity, $\mathcal{J}$, is an invariant of the flow and provides a measure for the degree of linkage of the vortex lines of the flow [@moffatt1992]. Although the present simulations consider viscous flows, differences in $\mathcal{J}$ between test cases of the same flow are used as part of our quality metrics. Our definitions for the vortex ring centroid, $\boldsymbol{\mathcal{X}}$, and propagation velocity, $\boldsymbol{\mathcal{U}}$, are equivalent to those used by Saffman [@saffman1970; @saffman1992]. Although all the integrals of Eq. are formally over $\mathbb{R}^3$, they can be accurately computed for solutions obtained by the NS-LGF method since the support of the integrands is approximately contained in $D_\text{soln}$.[^22]
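Given collocated point samples of the fields, the integrals above can be approximated with a midpoint rule, as sketched below. This is a schematic only: the actual staggered solver would first interpolate $\mathbf{u}$ and $\boldsymbol{\omega}$ to a common set of points, and the sample field here is an arbitrary analytic choice with $\boldsymbol{\omega} = \nabla\times\mathbf{u}$, for which $\boldsymbol{\mathcal{I}} = (0, 0, \pi^{3/2})$ and $\mathcal{K} = \mathcal{E} = (\pi/2)^{3/2}$ analytically.

```python
import numpy as np

def ring_diagnostics(x, u, w, dV):
    # Midpoint-rule approximations of the ring diagnostics; x, u, w are
    # (N, 3) arrays of position, velocity, and vorticity point samples.
    xw = np.cross(x, w)
    I = 0.5 * dV * xw.sum(axis=0)                       # hydrodynamic impulse
    K = dV * np.einsum('ij,ij->', u, xw)                # kinetic energy
    E = 0.5 * dV * np.einsum('ij,ij->', w, w)           # enstrophy
    J = dV * np.einsum('ij,ij->', u, w)                 # helicity
    X = 0.5 * dV * ((xw @ I)[:, None] * x).sum(axis=0) / (I @ I)  # Saffman centroid
    return I, K, E, J, X

# sample data: u = (0, 0, exp(-r^2)) and its curl as the vorticity
dx = 0.25
c = -5.0 + dx * (np.arange(40) + 0.5)
X3, Y3, Z3 = np.meshgrid(c, c, c, indexing='ij')
x = np.stack([X3, Y3, Z3], axis=-1).reshape(-1, 3)
g = np.exp(-(x**2).sum(axis=1))
u = np.stack([0 * g, 0 * g, g], axis=-1)
w = np.stack([-2 * x[:, 1] * g, 2 * x[:, 0] * g, 0 * g], axis=-1)

I, K, E, J, Xc = ring_diagnostics(x, u, w, dx**3)
print(I, K, E, J, Xc)
```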
![ Time histories of $\mathcal{E}$, $\mathcal{K}$, $\mathcal{I}_z$, and $\mathcal{U}_z$ (*respectively, left to right*) for a vortex ring at $\text{Re}_0=\numprint{7500}$ initiated with $\delta_0 / R_0 = 0.2$. Numerical experiments are performed using different values of $\delta_0 / \Delta x$ while holding $\Delta t / \Delta x$ constant. \[fig:verif-conv-int\] ](fig_4.pdf){width="\textwidth"}
$\delta_0 / \Delta x$ $\mathcal{E}$ $\mathcal{K}$ $\mathcal{I}_z$ $\mathcal{U}_z$
----------------------- --------------------- --------------------- --------------------- ---------------------
$4$ $1.8\times10^{-2 }$ $1.5\times10^{-2 }$ $7.5\times10^{-6 }$ $4.9\times10^{-3 }$
$8$ $4.0\times10^{-3 }$ $3.5\times10^{-3 }$ $6.6\times10^{-6 }$ $4.8\times10^{-4 }$
$12$ $1.5\times10^{-3 }$ $1.3\times10^{-3 }$ $4.8\times10^{-6 }$ $1.7\times10^{-4 }$
$16$ $6.0\times10^{-4 }$ $5.3\times10^{-4 }$ $4.4\times10^{-6 }$ $7.3\times10^{-5 }$
$20$ $2.0\times10^{-4 }$ $2.2\times10^{-4 }$ $2.3\times10^{-6 }$ $2.7\times10^{-5 }$
: Maximum difference in $\mathcal{E}$, $\mathcal{K}$, $\mathcal{I}_z$, and $\mathcal{U}_z$ during $t \Gamma_0 / R_0^2 \in [ 0, 40 ]$ between test cases with $\delta_0 / \Delta x < 24$ and the test case with $\delta_0 / \Delta x = 24$. Reported differences have been normalized by the maximum value of the respective quantity during $t \Gamma_0 / R_0^2 \in [ 0, 40 ]$. \[tab:verif-conv-int\]
The time histories of the values of $\mathcal{E}$, $\mathcal{K}$, $\mathcal{I}_z$, and $\mathcal{U}_z$, where the subscript “$q$” denotes the component of a vector quantity in the $q$-th direction, are shown in Figure \[fig:verif-conv-int\]. The values for $\mathcal{J}$ and the components of $\boldsymbol{\mathcal{I}}$ and $\boldsymbol{\mathcal{U}}$ in the $x$- and $y$-directions were also computed, but are not depicted since the magnitude of these values remained less than $10^{-8}$, which is significantly smaller than $\epsilon^*$, for all test cases. Visual inspection of the curves included in Figure \[fig:verif-conv-int\] suggests good agreement between all test cases. This is quantified by Table \[tab:verif-conv-int\], which lists the maximum difference between test cases with $\delta_0 / \Delta x < 24$ and the test case with $\delta_0 / \Delta x = 24$.
Figure \[fig:verif-conv-int\] demonstrates that $\mathcal{E}$, $\mathcal{K}$, and $\mathcal{U}_z$ are most sensitive to changes in the resolution at early times, $t \Gamma_0 / R_0^2 \in [ 0, 15 ]$. We attribute this to the rapid changes in the vorticity distribution observed shortly after the ring is initiated. For cases initiated with finite values of $\delta/R$, it is well-known that the flow undergoes an “equilibration” phase shortly after being initiated [@stanaway1988; @shariff1994; @archer2008].[^23] During this phase, vorticity starts to be shed into the wake and, over time, the core region of the ring assumes a more relaxed axisymmetric vorticity distribution in which $\omega_\theta$ is no longer symmetric, but instead skewed so as to concentrate the vorticity away from the ring center. After the equilibration phase, i.e. approximately after $t \Gamma_0 / R_0^2 > 15$ for the test cases under consideration, the ring assumes a quasi-steady distribution that persists until the growth of linear instabilities causes the ring to transition into turbulence. This transition does not occur during the simulation time of the present study, but will be investigated in Section \[sec:verif-trunc\].
For each test case, the value of $\mathcal{I}$ remained nearly constant throughout the simulation time, only exhibiting deviations on the same order as $\epsilon^*$ (taken to be $10^{-6}$ for all test cases). Interestingly, the value of $\mathcal{I}$ appears to be insensitive to changes in $\Delta x$, at least when maintaining $\Delta t/\Delta x$ constant, as demonstrated by Table \[tab:verif-conv-int\]. We refrain from speculating on whether the present method results in additional conservation properties beyond those mentioned in Section \[sec:spatial\_discrete\], since such investigations are beyond the scope of the present work. Instead, we simply note that $\mathcal{I}$ appears to be conserved approximately up to the solution error, i.e. $\epsilon^*$, which further verifies the physical fidelity of solutions obtained using the NS-LGF method.
The difference between the LHS and RHS of $\frac{d}{dt}\mathcal{K} = -2 \nu \mathcal{E}$ is often used as a metric for the spatial discretization error. The maximum value of $\left| \frac{d}{dt}\mathcal{K} - (-2 \nu \mathcal{E}) \right| / \left( 2 \nu \mathcal{E} \right)$ for $t \Gamma_0 / R_0^2 \in [ 0, 40 ]$ is $6.8 \times 10^{-2}$, $2.1 \times 10^{-2}$, $9.6 \times 10^{-3}$, $5.3\times 10^{-3}$, $3.4\times 10^{-3}$, and $2.3\times 10^{-3}$ for the test cases considered, sorted in ascending order of $\delta_0/\Delta x$. Values for $\frac{d\mathcal{K}}{dt}$ and $2 \nu \mathcal{E}$ were computed at each half-time step using standard second-order differencing and averaging, respectively.
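The half-step differencing and averaging used for this check can be sketched as follows (function name ours; the manufactured history has constant enstrophy, for which the discrete balance is satisfied exactly):

```python
import numpy as np

def energy_balance_residual(K, E, nu, dt):
    # relative residual of dK/dt = -2*nu*E, evaluated at half-steps with
    # second-order differencing and averaging
    dKdt = (K[1:] - K[:-1]) / dt
    E_half = 0.5 * (E[1:] + E[:-1])
    return np.abs(dKdt + 2.0 * nu * E_half) / (2.0 * nu * E_half)

# manufactured history: constant enstrophy E0, so K(t) = K0 - 2*nu*E0*t
nu, E0, K0, dt = 1e-3, 2.0, 5.0, 0.1
t = dt * np.arange(50)
K = K0 - 2.0 * nu * E0 * t
E = np.full_like(t, E0)
res = energy_balance_residual(K, E, nu, dt)
print(res.max())
```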
Propagation speed of thin vortex rings {#sec:verif-selfindc}
--------------------------------------
The results of this section verify that the solutions obtained using the NS-LGF method are indeed physical solutions to the incompressible Navier-Stokes equations. The translational speed of laminar vortex rings has been extensively studied through experimental, numerical, and theoretical investigations [@saffman1992; @stanaway1988; @akhmetov2009; @sullivan2008; @fukumoto2010]. @saffman1970 showed that the propagation speed of a viscous vortex ring with a vorticity distribution given by Eq. , in the limit of $\delta / R \rightarrow 0$, is $$U_\text{Saffman} = \frac{\Gamma_0}{4 \pi R_0} \left[
\log \left( \frac{8}{\varepsilon} \right)
- \beta_0
+ \mathcal{O} \left( \varepsilon \log \varepsilon \right) \right],
\label{eq:u_saffman}$$ where $\varepsilon = \delta / R$, $\beta_0 = \frac{1}{2} \left(1 - \gamma + \log 2 \right) \simeq 0.557966$, and $\gamma \simeq 0.577216$ is Euler’s constant. Subsequent numerical [@stanaway1988] and theoretical [@fukumoto2000] investigations have shown that the error term is actually smaller, and is given by $\mathcal{O} \left( \varepsilon^2 \log \varepsilon \right)$.
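Evaluating Eq.  with the remainder term dropped is straightforward; a minimal helper (the values of $\Gamma$, $R$, and $\varepsilon$ below are placeholders):

```python
import math

def u_saffman(Gamma, R, eps):
    # Saffman's asymptotic propagation speed of a thin viscous vortex ring,
    # with the O(eps*log(eps)) remainder dropped
    gamma_e = 0.5772156649015329            # Euler's constant
    beta0 = 0.5 * (1.0 - gamma_e + math.log(2.0))
    return Gamma / (4.0 * math.pi * R) * (math.log(8.0 / eps) - beta0)

for eps in (0.2, 0.1, 0.05, 0.025, 0.0125):
    print(eps, u_saffman(1.0, 1.0, eps))
```

Note that the speed grows only logarithmically as the core thins, which is why the test cases below span nearly a decade in $\varepsilon$.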
![ Propagation speed of a thin vortex ring at $\text{Re}_0=\numprint{7500}$ for the different values of $\varepsilon=\delta_0/R_0$ (*left*). Difference between the computed value, $\mathcal{U}_z$, and the theoretical estimate, $U_\text{Saffman}$, for the propagation speed of a vortex ring at $\text{Re}_0=\numprint{7500}$ (*middle*). Time history of the propagation speed of a vortex ring initiated with $\delta_0/R_0=0.1$ at different $\text{Re}$ (*right*). \[fig:verif-saff\] ](fig_5.pdf){width="\textwidth"}
The initial propagation speed of a vortex ring, taken to be $\mathcal{U}_z$ as defined in Eq. , is computed for test cases at $\text{Re}_0=\numprint{7500}$ that have been initiated with $\varepsilon = 0.2,\, 0.1,\, 0.05,\, 0.025$, and $0.0125$. For all test cases, $\delta_0/{\Delta x}=20$, $\Delta t \Gamma_0 / R_0^2 = 10^{-6}$ and $\epsilon^*=10^{-6}$. Values of $\mathcal{U}_z$ are computed by central differencing of the values of $\boldsymbol{\mathcal{X}}$ at adjacent time-steps. The value of $\mathcal{U}_z$ at $t^* = \Delta t /2$ for each test case is shown in the left plot of Figure \[fig:verif-saff\]. Visual inspection indicates good agreement between $\mathcal{U}_z$ and $U_\text{Saffman}$, which in turn verifies that numerical solutions obtained by the NS-LGF method approximate actual physical solutions.
We further verify the present formulation by confirming the form of the error term of $U_\text{Saffman}$, i.e. $\mathcal{O} \left( \varepsilon^2 \log \varepsilon \right)$. Theoretical estimates for the effective ring and core radii at early times[^24] indicate that, at time $t^*$, the ring and core size have not deviated enough from their initial values to change $U_\text{Saffman}$ significantly enough to hinder the present comparison. The middle plot of Figure \[fig:verif-saff\] shows the difference in the ring propagation speed between the numerical experiments, $\mathcal{U}_z$, and theoretical estimates, $U_\text{Saffman}$. For large values of $\varepsilon$, i.e. $\varepsilon>0.05$, the rate of change of $\Delta \tilde{U}_z = \left( U_\text{Saffman} - \mathcal{U}_z \right) R_0/\Gamma_0$ with respect to $\varepsilon$ is consistent with the theoretical $\mathcal{O} \left( \varepsilon^2 \log \varepsilon \right)$ error estimate. On the other hand, for $\varepsilon<0.05$ the rate of change of $\Delta \tilde{U}_z$ with respect to $\varepsilon$ is slightly faster than $\mathcal{O} \left( \varepsilon^2 \log \varepsilon \right)$. We refrain from attributing any physical meaning to the difference in the behavior of the error at smaller values of $\varepsilon$ since we have not thoroughly determined the numerical error for such test cases.[^25]
We further verify the present implementation by comparing the time and Reynolds number dependence of $\mathcal{U}_z$ with previously reported theoretical [@fukumoto2010] and numerical [@stanaway1988] results. To facilitate the comparisons, it is convenient to define $$t_\Gamma = \frac{\delta_0^2}{4\nu} + t.$$ The discussion of [@fukumoto2010] provides theoretical bounds on $\mathcal{U}_z$ based on the low and high $\text{Re}$ limits of a vortex ring initiated with $\delta/R \rightarrow 0$,
$$\begin{aligned}
{2}
U_\text{Fukumoto,0} &= \frac{\Gamma_0}{4 \pi R_0} \left[
\log \left( \frac{4R_0}{\sqrt{\nu t_\Gamma}} \right)
- \beta_0 - \frac{9}{5}\left(
\log \left( \frac{4R_0}{\sqrt{\nu t_\Gamma}} \right)
-\beta_1 \right) \frac{\nu t_\Gamma}{R_0^2} \right]
&& (\text{low-Re}), \\
U_\text{Fukumoto,1} &= \frac{\Gamma_0}{4 \pi R_0} \left[
\log \left( \frac{4R_0}{\sqrt{\nu t_\Gamma}} \right)
- \beta_0 - \beta_2 \frac{\nu t_\Gamma}{R_0^2} \right]
&& (\text{high-Re}),
\end{aligned}$$
where $\beta_0$ is the same as in Eq. , $\beta_1 \simeq 1.057967$, and $\beta_2 \simeq 3.671591$. For all test cases, $\delta_0 / \Delta x= 15$ and $\Delta t$ is determined by requiring the initial CFL to be $0.5$. Test cases correspond to a vortex ring at $\text{Re}_0 = 100,\, 200,\, \text{and}\,\, 400$ that are initiated with $\delta_0/R_0=0.1$. The right plot of Figure \[fig:verif-saff\] demonstrates that, for all test cases, $\mathcal{U}_z$ remains bounded between $U_\text{Fukumoto,0}$ and $U_\text{Fukumoto,1}$, except at early times for the case of $\text{Re}_0=400$, where the numerical $\mathcal{U}_z$ slightly exceeds $U_\text{Fukumoto,1}$. This discrepancy is not surprising since the theory of @fukumoto2010 assumes that the vortex ring is initiated with $\delta/R \rightarrow 0$, and, as a result, does not properly account for the changes in the vorticity distribution that occur during the equilibration phase of a vortex ring initiated with a finite $\delta/R$ value. Although not shown in Figure \[fig:verif-saff\], the time history of $\mathcal{U}_z$ for all test cases has been compared to the numerical results of [@stanaway1988], and found to be in good agreement (overlaying the curves of both investigations reveals nearly identical results).
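The two bounds can be evaluated as in the sketch below (helper and parameter values are ours). Note that at $t=0$, $t_\Gamma = \delta_0^2/(4\nu)$ gives $\sqrt{\nu t_\Gamma} = \delta_0/2$, so the leading logarithm reduces to Saffman's $\log(8R_0/\delta_0)$:

```python
import math

def u_fukumoto(Gamma, R, nu, t_gamma, regime):
    # low- and high-Re bounds on the propagation speed of a decaying ring
    beta0, beta1, beta2 = 0.557966, 1.057967, 3.671591
    L = math.log(4.0 * R / math.sqrt(nu * t_gamma))
    s = nu * t_gamma / R**2
    pref = Gamma / (4.0 * math.pi * R)
    if regime == 'low':
        return pref * (L - beta0 - (9.0 / 5.0) * (L - beta1) * s)
    return pref * (L - beta0 - beta2 * s)

# evaluate both bounds at t = 0 for Re_0 = 100 (units are placeholders)
Gamma, R, delta0, nu = 1.0, 1.0, 0.1, 1.0 / 100.0
tg0 = delta0**2 / (4.0 * nu)
lo_b = u_fukumoto(Gamma, R, nu, tg0, 'low')
hi_b = u_fukumoto(Gamma, R, nu, tg0, 'high')
print(lo_b, hi_b)
```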
Finite active computational domain error {#sec:verif-trunc}
----------------------------------------
In this section, we investigate the effect that our adaptive grid technique has on the numerical solutions by considering the evolution of a thin vortex ring computed using different values of $\epsilon^*$. These test cases are used to verify that the solutions converge as $\epsilon^*$ tends to zero and to verify, via comparisons with numerical investigations of other authors, the physical fidelity of the solutions.
For all test cases, the vortex ring is initiated with $\delta_0/R_0=0.2$ and a constant uniform flow, $\mathsf{u}_\infty = \left[ 0, 0, u^{(z)}_\infty \right]$, is superimposed to partially oppose the translational motion of the vortex ring. The value of $u^{(z)}_\infty R_0 / \Gamma_0$ is taken to be $-0.18686$, which reduces the initial speed of the vortex ring by approximately 75%. Solutions are computed using $\delta_0/{\Delta x}=10$ and $\Delta t \Gamma_0 / R_0^2 \approx 0.01721$. The error estimates of Section \[sec:verif-conv-int\] indicate that, for all test cases, the discretization error is on the order of $10^{-3}$.
![ Time histories of $\mathcal{E}$, $\mathcal{K}$, $\mathcal{I}_z$, and $\mathcal{U}_z$ (*respectively, left to right*) for a vortex ring at $\text{Re}_0=500$ initiated with $\delta_0 / R_0 = 0.2$. All parameters, with the exception of $\epsilon^*$, are held constant across all test cases. \[fig:verif-conv-lowre\] ](fig_6.pdf){width="\textwidth"}
Figure \[fig:verif-conv-lowre\] depicts the time histories of $\mathcal{E}$, $\mathcal{K}$, $\mathcal{I}_z$, and $\mathcal{U}_z$ for a vortex ring at $\text{Re}_0=500$ computed using $\epsilon^* = 10^{-2},\, 10^{-3},\, 10^{-4},\, 10^{-5},\, \text{and}\,\, 10^{-6}$. The smooth decay of $\mathcal{E}$ and $\mathcal{K}$ indicates that the vortex ring remains laminar throughout the entire simulation time. This follows from the fact that a pronounced peak in $\mathcal{E}$ is observed during the transition to the early stages of turbulence resulting from a significant increase in the stretching of vortex filaments [@archer2008]. Figure \[fig:verif-conv-lowre\] verifies that, for laminar flows, numerical solutions converge as $\epsilon^*$ tends to zero. For all test cases with $\epsilon^*<10^{-2}$, the error[^26] in the computed values of $\mathcal{E}$, $\mathcal{K}$ and $\mathcal{I}_z$ is approximately proportional to $\epsilon^*$ for $t \Gamma_0/R_0^2\in[10,80]$. The large oscillations in $\mathcal{U}_z$ are due to shifts in $\boldsymbol{\mathcal{X}}$ resulting from the addition or removal of a single layer of blocks in the $z$-direction. For times at which all test cases exhibit an approximate local minimum in $\mathcal{U}_z$, e.g. $t \Gamma_0/R_0^2 \approx 70.5$, the error in $\mathcal{U}_z$ is also approximately proportional to $\epsilon^*$.
Next, we consider the effect $\epsilon^*$ has on solutions of unsteady flows that are sensitive to small perturbations. The numerical investigations of [@bergdorf2007; @archer2008] on thin vortex rings with Gaussian vorticity distributions at $\text{Re}_0=\numprint{7500}$ have shown that small sinusoidal perturbations to the vortex ring centerline result in the growth of azimuthal instabilities, which in turn facilitate the laminar to turbulent transition of the flow. Here, we consider the evolution of a vortex ring at $\text{Re}_0=\numprint{7500}$ computed using values of $\epsilon^* = 10^{-2},\, 10^{-3},\, 10^{-4},\, 10^{-5},\, \text{and}\,\, 10^{-6}$. Unlike the numerical experiments of [@bergdorf2007; @archer2008], the vortex ring is initiated without imposing any perturbations beyond those implied by the numerical scheme.
![ Time history of $\mathcal{E}$ for a vortex ring at $\text{Re}_0=\numprint{7500}$ initiated with $\delta_0 / R_0 = 0.2$ (*left*). Data points labeled as “Archer” correspond values reported in @archer2008. All parameters, with the exception of $\epsilon^*$, are held constant across all test cases. Vorticity iso-surfaces at $t \Gamma_0/R_0^2=137.6$ for test case $\epsilon^*=10^{-4}$ (*right*). \[fig:verif-conv-thres\] ](fig_7.pdf){width="\textwidth"}
The time history of $\mathcal{E}$ for all test cases is shown in the left plot of Figure \[fig:verif-conv-thres\]. The transition into the early stages of turbulence, characterized by a peak in $\mathcal{E}$ resulting from an increase in the stretching of vortex filaments, is observed for all test cases. The growth of azimuthal instabilities and the development of secondary or “halo” vortices occurring at the beginning of the transition phase [@bergdorf2007; @archer2008] are depicted in the right plot of Figure \[fig:verif-conv-thres\].
As expected from the previous test cases for $\text{Re}_0=500$, the values of $\mathcal{E}$ during the laminar regime for all test cases converge as $\epsilon^*$ tends to zero. Also included in Figure \[fig:verif-conv-thres\] are the values of $\mathcal{E}$ reported in the numerical investigations of @archer2008 for the same vortex ring, which are nearly identical to the values obtained from our test cases during the laminar regime.[^27] Additionally, the vorticity iso-surfaces shown in the right plot of Figure \[fig:verif-conv-thres\] are qualitatively similar to the vorticity iso-surfaces provided by @archer2008 depicting the nonlinear growth of instabilities. In particular, the iso-surfaces of both investigations demonstrate the noticeable presence of the $n=1$ azimuthal Fourier mode and the presence of halo vortices (iso-surfaces of $\omega_z$ in Figure \[fig:verif-conv-thres\]) of similar magnitudes but alternating sign wedged between the approximately sinusoidally displaced inner-core (iso-surfaces of $\omega_\theta$ in Figure \[fig:verif-conv-thres\]).
![ Vorticity magnitude on the $y$-$z$ plane at $x=0$ for test cases of $\epsilon^* = 10^{-2},\, 10^{-4},\, \text{and}\,\, 10^{-6}$ at different times, $\tilde{t}=t \Gamma_0/R_0^2$. Contours correspond to values of $|\omega|R_0^2/\Gamma_0 = 4 \times \left( \frac{1}{2} \right)^i$ for $i=8,7,\dots,0$. Contours have been shifted in the $z$-direction to account for the constant freestream velocity, $\tilde{z} = z - u^{(z)}_\infty t$. Thick lines depict the boundary of $D_\text{xsoln}$. \[fig:verif-contours\] ](fig_8.pdf){width="\textwidth"}
The time histories of $\mathcal{E}$ shown in Figure \[fig:verif-conv-thres\] indicate that the time at which $\mathcal{E}$ starts to increase prior to reaching its peak value, i.e. the time at which the flow starts to transition, increases as $\epsilon^*$ decreases, but converges as $\epsilon^*$ tends to zero. This trend is an expected consequence of the present adaptive grid technique since the flow field is slightly perturbed each time a block is removed, i.e. vorticity is implicitly set to zero outside $D_\text{supp}$. The magnitude of these perturbations is correlated with the value of $\epsilon^*$ used to compute the numerical solution. Over time, the perturbations introduced by the adaptive grid lead to changes in the flow field that break the axial symmetry of the solution, which in turn promotes the growth of instabilities. Figure \[fig:verif-contours\] provides vorticity contours at different times that depict the breakdown of axial symmetry and the subsequent laminar to turbulent transition for a few test cases.
Figure \[fig:verif-contours\] also depicts the computational domains that result from using different values of $\epsilon^*$. As expected, higher values of $\epsilon^*$ result in tighter domains, but can fail to capture flow features that are potentially relevant to specific applications. For example, Figure \[fig:verif-contours\] indicates that using $\epsilon^* = 10^{-2}$ is sufficient to accurately track the laminar evolution of the vortex core, but does not adequately capture the large wake that develops behind the vortex ring.[^28] We recall that the computational domain is determined by the particular choice of $W_\text{supp}$ and $\epsilon_\text{supp}$, both of which can be readily modified to accurately and efficiently capture the relevant physics of specific applications.
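The role of $\epsilon_\text{supp}$ in selecting active blocks can be caricatured as below. This is a schematic only: the actual criterion applies the weight functions $W_\text{supp}$ before thresholding, the one-layer padding is merely a stand-in for how the support is allowed to grow between adaptations, and `np.roll` wraps around array boundaries, which a real implementation would avoid.

```python
import numpy as np

def active_mask(block_max, eps_supp):
    # Keep blocks whose source magnitude exceeds the threshold, padded by
    # one layer of face-adjacent neighbor blocks.
    mask = block_max > eps_supp
    padded = mask.copy()
    for axis in range(mask.ndim):
        padded |= np.roll(mask, 1, axis=axis) | np.roll(mask, -1, axis=axis)
    return padded

bm = np.zeros((7, 7, 7))
bm[3, 3, 3] = 1.0                  # one block with significant vorticity
m = active_mask(bm, 1e-4)
print(m.sum())
```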
![ Translucent iso-surfaces of the vorticity magnitude for the test case of $\epsilon^*=10^{-4}$ at different times. Iso-surfaces correspond to values of $|\omega|R_0^2/\Gamma_0 = 0.03125,\, 0.125,\, 0.5,\, \text{and}\,\, 2$. \[fig:verif-isos\] ](fig_9.pdf){width="\textwidth"}
Figure \[fig:verif-isos\] depicts vorticity iso-surfaces during the transition phase ($t \Gamma_0 / R_0^2 = 137.6$) and early turbulent regime ($t \Gamma_0 / R_0^2 = 206.4\,\, \text{and}\,\, 275.2$) for the test case of $\epsilon^*=10^{-4}$. At $t \Gamma_0 / R_0^2 = 206.4$ and $275.2$, the presence of multiple thin vortex filaments and the absence of a coherent core indicate that the vortex ring is in its early turbulent regime [@bergdorf2007; @archer2008]. A comparison of the vorticity iso-surfaces at $t \Gamma_0 / R_0^2 = 206.4$ and at $t \Gamma_0 / R_0^2 = 275.2$ demonstrates that interwoven vorticity filaments near the core region are gradually pushed into the wake. As some of these structures are convected into the wake, they form hairpin vortices which persist for some time in the wake region. The periodic shedding of hairpin vortices into the wake is consistent with the numerical investigations of [@bergdorf2007; @archer2008], which in turn further verifies the physical fidelity of our solutions.
Conclusions
===========
We have reported on a new fast, parallel solver for 3D, viscous, incompressible flows on unbounded domains based on LGFs. In this method, the incompressible Navier-Stokes equations are formally discretized on an unbounded staggered Cartesian grid using a second-order finite-volume scheme. This discretization technique has the advantage of enforcing discrete conservation laws and producing discrete operators with mimetic and commutativity properties that facilitate the implementation of fast, robust solvers. The system of DAEs resulting from the spatial discretization of the momentum equation and the incompressibility constraint is integrated in time by using an integrating factor technique for the viscous terms and a HERK scheme for the convective term and the incompressibility constraint. Computationally efficient expressions for the integrating factors are obtained via Fourier analysis on unbounded Cartesian grids. A projection method that takes advantage of the mimetic and commutativity properties of the discrete operators is used to efficiently solve the linear system of equations arising at each stage of the time integration scheme. This projection technique has the advantage of being equivalent to the LU decomposition of the system of equations, and, as a result, does not introduce any splitting-error and does not change the stability of the discretized equations.
In our formulation, solutions to the discrete Poisson problems and integrating factors required to advance the flow are obtained through LGF techniques. These techniques express the solutions to inhomogeneous difference equations as the discrete convolution between source terms and the fundamental solutions of the discrete operators on unbounded regular grids. Fast, parallel solutions to the expressions resulting from the application of LGF techniques to discrete Poisson problems and integrating factors are obtained using the FMM for LGFs of [@liska2014].
As a result of our LGF formulation, the flow is solved using only information contained in the grid region where the vorticity and the divergence of the Lamb vector have non-negligible values. An adaptive block-structured grid and a velocity refresh technique are used to limit operations to a small finite computational domain. In order to efficiently compute solutions to a prescribed tolerance, weight functions and threshold values are used to determine the behavior of the adaptive grid.
The order of accuracy of the discretization and solution techniques is verified through refinement studies. The physical fidelity of the method is demonstrated in comparisons between computed and theoretical values for the propagation speed of a thin vortex ring. Additionally, results for the evolution of a thin vortex ring at $\text{Re}_0=\numprint{7500}$ from the laminar to the early turbulent regime are shown to be in good agreement with investigations of other authors.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was partially supported by the United States Air Force Office of Scientific Research (FA950–09–1–0189) and the Caltech Field Laboratory for Optimized Wind Energy with Prof. John Dabiri as PI under the support of the Gordon and Betty Moore Foundation.
Discrete operators {#app:oprs}
==================
In this appendix we provide point-operator and Fourier representations for the discrete operators of the present formulation. For operators that map onto $\mathbb{R}^\mathcal{F}$ or $\mathbb{R}^\mathcal{E}$, expressions for only one component of the resulting vector fields are provided since expressions for the other components are readily deduced. In the following discussion $\mathsf{c}\in\mathbb{R}^\mathcal{C}$, $\mathsf{f}\in\mathbb{R}^\mathcal{F}$, $\mathsf{e}\in\mathbb{R}^\mathcal{E}$, and $\mathsf{v}\in\mathbb{R}^\mathcal{V}$ are arbitrary grid functions.
Point-operator representation based on the indexing convention depicted in Figure \[fig:grid-cell\] are as follows:
- *Discrete gradient operators*: $\mathsf{G} : \mathbb{R}^\mathcal{C} \mapsto \mathbb{R}^\mathcal{F}$ and $\overline{\mathsf{G}}=-\mathsf{D}^\dagger : \mathbb{R}^\mathcal{V} \mapsto \mathbb{R}^\mathcal{E}$, where
$$\begin{aligned}
\Delta x [ \mathsf{G} \mathsf{c} ]_{i,j,k}^{(1)}
&= \mathsf{c}_{i+1,j,k} - \mathsf{c}_{i,j,k},
\\
\Delta x [ \overline{\mathsf{G}} \mathsf{v} ]_{i,j,k}^{(1)}
&= \mathsf{v}_{i,j,k}^{(1)}
- \mathsf{v}_{i-1,j,k}^{(1)}.
\end{aligned}$$
- *Discrete curl operators*: $\mathsf{C} : \mathbb{R}^\mathcal{F} \mapsto \mathbb{R}^\mathcal{E}$ and $\overline{\mathsf{C}}=\mathsf{C}^\dagger : \mathbb{R}^\mathcal{E} \mapsto \mathbb{R}^\mathcal{F}$, where
$$\begin{aligned}
\Delta x [ \mathsf{C} \mathsf{f} ]_{i,j,k}^{(1)}
&= \mathsf{f}_{i,j,k}^{(2)} - \mathsf{f}_{i,j,k+1}^{(2)}
+ \mathsf{f}_{i,j+1,k}^{(3)} - \mathsf{f}_{i,j,k}^{(3)},
\\
\Delta x [ \overline{\mathsf{C}} \mathsf{e} ]_{i,j,k}^{(1)}
&= \mathsf{e}_{i,j,k-1}^{(2)} - \mathsf{e}_{i,j,k}^{(2)}
+ \mathsf{e}_{i,j,k}^{(3)} - \mathsf{e}_{i,j-1,k}^{(3)}.
\end{aligned}$$
- *Discrete divergence operators*: $\mathsf{D} : \mathbb{R}^\mathcal{E} \mapsto \mathbb{R}^\mathcal{V}$ and $\overline{\mathsf{D}}=-\mathsf{G}^\dagger : \mathbb{R}^\mathcal{F} \mapsto \mathbb{R}^\mathcal{C}$, where
$$\begin{aligned}
\Delta x [ \mathsf{D} \mathsf{e} ]_{i,j,k}
&=
\mathsf{e}_{i+1,j,k}^{(1)}
+ \mathsf{e}_{i,j+1,k}^{(2)}
+ \mathsf{e}_{i,j,k+1}^{(3)}
- \sum_{q=1}^{3} \mathsf{e}_{i,j,k}^{(q)},
\\
\Delta x [ \overline{\mathsf{D}} \mathsf{f} ]_{i,j,k}
&=
- \mathsf{f}_{i-1,j,k}^{(1)}
- \mathsf{f}_{i,j-1,k}^{(2)}
- \mathsf{f}_{i,j,k-1}^{(3)}
+ \sum_{q=1}^{3} \mathsf{f}_{i,j,k}^{(q)}.
\end{aligned}$$
- *Discrete Laplace operators*: $\mathsf{L}_\mathcal{Q}:\mathbb{R}^\mathcal{Q} \mapsto \mathbb{R}^\mathcal{Q}$ for all $\mathcal{Q}$ in $\{\mathcal{C},\mathcal{F},\mathcal{E},\mathcal{V}\}$, where $$\mathsf{L}_\mathcal{C} = -\mathsf{G}^\dagger \mathsf{G},
\quad
\mathsf{L}_\mathcal{V} = -\mathsf{D} \mathsf{D}^\dagger,
\quad
\mathsf{L}_\mathcal{F} = -\mathsf{G} \mathsf{G}^\dagger - \mathsf{C}^\dagger \mathsf{C},
\quad
\mathsf{L}_\mathcal{E} = -\mathsf{D}^\dagger \mathsf{D} - \mathsf{C} \mathsf{C}^\dagger,$$ and $[\mathsf{L}_\mathcal{C} \mathsf{c}]$, $[\mathsf{L}_\mathcal{V} \mathsf{v}]$, $[\mathsf{L}_\mathcal{F} \mathsf{f}]^{(\ell)}$, and $[\mathsf{L}_\mathcal{E} \mathsf{e}]^{(\ell)}$ can be computed as $$(\Delta x)^2 [ \mathsf{L} \mathsf{a} ]_{i,j,k}
= - 6 \mathsf{a}_{i,j,k}
+ \sum_{q\in\{-1,1\}} \left(
\mathsf{a}_{i+q,j,k} + \mathsf{a}_{i,j+q,k} + \mathsf{a}_{i,j,k+q} \right).$$
- *Discrete nonlinear operator*: $\tilde{\mathsf{N}} : \mathbb{R}^\mathcal{F} \mapsto \mathbb{R}^\mathcal{F}$, where $$[\tilde{\mathsf{N}}(\mathsf{f})]_{i,j,k}^{(1)} =
\frac{1}{4} \sum_{q\in\{-1,0\}} \left[
\mathsf{e}_{i,j,k+q}^{(2)} \left(
\mathsf{f}_{i,j,k+q}^{(3)} + \mathsf{f}_{i+1,j,k+q}^{(3)} \right)
- \mathsf{e}_{i,j+q,k}^{(3)} \left(
\mathsf{f}_{i,j+q,k}^{(2)} + \mathsf{f}_{i+1,j+q,k}^{(2)} \right) \right],
\label{eq:opr_def_nonlin}$$ and $\mathsf{e} = \mathsf{C} \mathsf{f}$.[^29]
- *Linearized discrete nonlinear operator*: the linearized form of $\tilde{\mathsf{N}}(\mathsf{f})$ about a constant uniform base flow, $\mathsf{f}_\text{base}(\mathbf{n},t)=\mathbf{f}_\text{base}$, is given by $\mathsf{M}\mathsf{f}^\prime = [\mathsf{K}(\mathbf{f}_\text{base})] \mathsf{C} \mathsf{f}^\prime$, where $\mathsf{f}^\prime = \mathsf{f} - \mathsf{f}_\text{base}$ and $$\mathsf{K}(\mathbf{f}_\text{base}) : \mathbb{R}^{\mathcal{E}} \mapsto \mathbb{R}^{\mathcal{F}}, \,
[ [ \mathsf{K}(\mathbf{f}_\text{base}) ] \mathsf{e}^\prime ]_{i,j,k}^{(1)}
= \frac{1}{2} \sum_{q\in\{-1,0\}} \left(
f_\text{base}^{(3)} \mathsf{e}_{i,j,k+q}^{(2)}
- f_\text{base}^{(2)} \mathsf{e}_{i,j+q,k}^{(3)} \right).
\label{eq:lin_nonlin}$$
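As a quick consistency check, the stencils above can be implemented with index shifts; the sketch below uses a periodic grid (a simplifying assumption made only for this demonstration, the formulation itself works on unbounded grids) and verifies the discrete vector-calculus identities $\mathsf{C}\mathsf{G}=0$ and $\mathsf{D}\mathsf{C}=0$, which hold exactly for these forward-difference operators:

```python
import numpy as np

N, dx = 8, 1.0
rng = np.random.default_rng(0)

def shift(a, axis):             # value at index n+1 along `axis` (periodic grid)
    return np.roll(a, -1, axis=axis)

def grad(c):                    # G : cells -> faces, forward differences
    return [(shift(c, q) - c) / dx for q in range(3)]

def curl(f):                    # C : faces -> edges, cyclic components
    e = []
    for q in range(3):
        a, b = (q + 1) % 3, (q + 2) % 3
        # component q: delta_a f_b - delta_b f_a
        e.append(((shift(f[b], a) - f[b]) - (shift(f[a], b) - f[a])) / dx)
    return e

def div(e):                     # D : edges -> vertices
    return sum((shift(e[q], q) - e[q]) / dx for q in range(3))

c = rng.standard_normal((N, N, N))
f = [rng.standard_normal((N, N, N)) for _ in range(3)]

# Discrete mimetic identities: curl(grad c) = 0 and div(curl f) = 0.
assert max(np.abs(x).max() for x in curl(grad(c))) < 1e-12
assert np.abs(div(curl(f))).max() < 1e-12
```

The identities follow because the forward-difference shifts commute, exactly mirroring $\nabla\times\nabla\phi = 0$ and $\nabla\cdot(\nabla\times\mathbf{f}) = 0$.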
Discussions regarding the properties of discrete operators are often facilitated by using a block vector/matrix notation to describe the grid functions and linear operators. Consider the grid spaces $\mathcal{X}$ and $\mathcal{Y}$ corresponding to either $\mathcal{F}$ or $\mathcal{E}$. Using block vector notation, a vector-valued grid function $\mathsf{x}\in\mathbb{R}^\mathcal{X}$ is expressed as $$\mathsf{x} = \mathbb{S}_\mathcal{X}
[ \bar{\mathsf{x}}_1, \bar{\mathsf{x}}_2, \bar{\mathsf{x}}_3 ]^\dagger,$$ where the $q$-th block, $\bar{\mathsf{x}}_q$, corresponds to the values of the $q$-th component of $\mathsf{x}$. Each $\bar{\mathsf{x}}_q$ is a scalar real-valued grid function defined on an infinite Cartesian reference grid, which we denote by $\mathbb{R}^\Lambda$.[^30] The shift operator $\mathbb{S}_\mathcal{X}:\mathbb{R}^\Lambda\mapsto\mathbb{R}^\mathcal{X}$ is used to transfer, or “shift”, the values of grid functions defined on $\mathbb{R}^\Lambda$ to $\mathbb{R}^\mathcal{X}$ such that $[\mathsf{x}]^{(q)}(\mathbf{n}) = \bar{\mathsf{x}}_q(\mathbf{n})$. Similarly, the transpose of $\mathbb{S}_\mathcal{X}$, denoted by $\mathbb{S}_\mathcal{X}^\dagger$, transfers values of grid functions defined on $\mathbb{R}^\mathcal{X}$ to $\mathbb{R}^\Lambda$. The block vector notation and shift operators readily extend to the case of linear operators. Using block matrix notation, a discrete linear operator $\mathsf{T}:\mathbb{R}^\mathcal{X}\mapsto\mathbb{R}^\mathcal{Y}$ is expressed as $$\mathsf{T} = \mathbb{S}_\mathcal{Y} [ \overline{\mathsf{T}}_{i,j} ] \mathbb{S}_\mathcal{X}^\dagger,\quad i,j = 1, 2, 3,$$ where $\overline{\mathsf{T}}_{i,j} : \mathbb{R}^{\Lambda} \mapsto \mathbb{R}^{\Lambda}$.
We now turn our attention to the Fourier representations of grid functions and discrete linear operators. Consider the Fourier series, $\mathfrak{F}$, and the inverse Fourier transform, $\mathfrak{F}^{-1}$, given by: $$[ \mathfrak{F} \bar{\mathsf{u}} ](\boldsymbol{\xi})
= \sum_{\mathbf{m}\in\mathbb{Z}^3} e^{i \mathbf{m} \cdot \boldsymbol{\xi}}
\bar{\mathsf{u}}(\mathbf{m}), \quad
[ \mathfrak{F}^{-1} \hat{\mathsf{u}} ](\mathbf{m})
= \frac{1}{(2\pi)^3} \int_{\boldsymbol{\xi}\in\Pi}
e^{-i \boldsymbol{\xi} \cdot \mathbf{m}} \hat{\mathsf{u}}( \boldsymbol{\xi} )\,
d\boldsymbol{\xi},$$ respectively, where $\Pi = (-\pi,\pi)^3$, $\bar{\mathsf{u}}:\mathbb{Z}^3\mapsto\mathbb{R}$, and $\hat{\mathsf{u}}:\Pi\mapsto\mathbb{C}$. Using block matrix notation, we extend $\mathfrak{F}$ and $\mathfrak{F}^{-1}$ to the case of grid functions in $\mathbb{R}^\mathcal{X}$ by defining: $$\mathfrak{F}_\mathcal{X} =
\text{diag}(\mathfrak{F},\mathfrak{F},\mathfrak{F})\, \mathbb{S}_\mathcal{X}^\dagger, \quad
\mathfrak{F}^{-1}_\mathcal{X} =
\mathbb{S}_\mathcal{X}\, \text{diag}(\mathfrak{F}^{-1},\mathfrak{F}^{-1},\mathfrak{F}^{-1}).
\label{eq:grid_fourier}$$
Next, let $\Xi$ denote the set of all linear operators $\overline{\mathsf{Q}} : \mathbb{R}^{\Lambda} \mapsto \mathbb{R}^{\Lambda}$ such that the action of $\overline{\mathsf{Q}}$ on an arbitrary grid function $\bar{\mathsf{u}}\in\mathbb{R}^\Lambda$ is given by $$[ \overline{\mathsf{Q}} \bar{\mathsf{u}} ] (\mathbf{n})
= [ \overline{\mathsf{K}}_\mathsf{Q} * \bar{\mathsf{u}} ] (\mathbf{n})
= \sum_{\mathbf{m}\in\mathbb{Z}^3} \overline{\mathsf{K}}_\mathsf{Q}(\mathbf{m}-\mathbf{n})
\bar{\mathsf{u}}(\mathbf{m}),
\label{eq:kernel_q}$$ where $\overline{\mathsf{K}}_\mathsf{Q} : \mathbb{Z}^3 \mapsto \mathbb{R}$ is a well-behaved discrete kernel function. Any operator belonging to $\Xi$ is diagonalized using $\mathfrak{F}$ and $\mathfrak{F}^{-1}$, $$[ \overline{\mathsf{Q}} \bar{\mathsf{u}} ] (\mathbf{n})
= [ \overline{\mathsf{K}}_\mathsf{Q} * \bar{\mathsf{u}} ] (\mathbf{n})
= [ \mathfrak{F}^{-1} (\hat{\mathsf{K}}_\mathsf{Q} \hat{\mathsf{u}}) ] (\mathbf{n}),$$ where $\hat{\mathsf{K}}_\mathsf{Q} = \mathfrak{F} \overline{\mathsf{K}}_\mathsf{Q}$ and $\hat{\mathsf{u}} = \mathfrak{F} \bar{\mathsf{u}}$. The block operators of all linear operators used in the present method belong to $\Xi$.
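This diagonalization is easy to illustrate numerically. On a periodic $N^3$ grid (standing in for the infinite grid, an assumption made only for this demonstration) the DFT plays the role of $\mathfrak{F}$, and applying the 7-point Laplacian stencil is equivalent to multiplying by its symbol $2\cos\xi_1 + 2\cos\xi_2 + 2\cos\xi_3 - 6$ in Fourier space:

```python
import numpy as np

N = 16
rng = np.random.default_rng(1)
c = rng.standard_normal((N, N, N))

# 7-point Laplacian stencil: (dx)^2 L c = -6 c + sum of the six neighbors.
lap = -6.0 * c
for q in range(3):
    lap += np.roll(c, 1, axis=q) + np.roll(c, -1, axis=q)

# Symbol of the stencil evaluated at the discrete frequencies xi_q = 2*pi*k_q/N.
xi = 2.0 * np.pi * np.fft.fftfreq(N)
s1, s2, s3 = np.meshgrid(xi, xi, xi, indexing="ij")
sigma = 2*np.cos(s1) + 2*np.cos(s2) + 2*np.cos(s3) - 6

# Apply the operator as a pointwise multiplication in Fourier space.
lap_fft = np.fft.ifftn(sigma * np.fft.fftn(c)).real
assert np.allclose(lap, lap_fft, atol=1e-10)
```

The same symbol, restricted to the continuum of frequencies $\Pi$, is what appears in the LGF integrals of the next appendix.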
Lattice Green’s functions representations {#app:lgfs}
=========================================
The NS-LGF method uses the LGFs $\mathsf{G}_{\mathsf{L}}$ and $\mathsf{G}_{\mathsf{E}}(\alpha)$ to compute the action of $\mathsf{L}^{-1}_\mathcal{Q}$ and $\mathsf{E}_\mathcal{Q}(\alpha)$, respectively. Fourier and Bessel integrals for $\mathsf{G}_{\mathsf{L}}$ and $\mathsf{G}_{\mathsf{E}}$ are given by
$$\begin{aligned}
(\Delta x)^2 \mathsf{G}_{\mathsf{L}}(\mathbf{n})
&= \frac{1}{8\pi^3}
\int_{\Pi} \frac{ \exp\left( -i \mathbf{n}\cdot\boldsymbol{\xi} \right) }
{ \sigma(\boldsymbol{\xi}) }\,d\boldsymbol{\xi}
= - \int_{0}^{\infty} e^{-6t} I_{n_1}(2t) I_{n_2}(2t) I_{n_3}(2t)\,dt \\
[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n})
&= \frac{1}{8\pi^3}
\int_{\Pi} \exp\left( -i \mathbf{n}\cdot\boldsymbol{\xi}
+ \alpha\, \sigma(\boldsymbol{\xi}) \right) \,d\boldsymbol{\xi}
= e^{-6\alpha} I_{n_1}(2\alpha) I_{n_2}(2\alpha) I_{n_3}(2\alpha)
\label{eq:lgf_intfac}
\end{aligned}$$
where $\sigma(\boldsymbol{\xi}) = 2\cos(\xi_1) + 2\cos(\xi_2) + 2\cos(\xi_3) - 6$, $\Pi=(-\pi,\pi)^3$, and $I_n(z)$ is the modified Bessel function of the first kind of order $n$.
Insights into the approximate behavior of $\mathsf{G}_{\mathsf{L}}(\mathbf{n})$ can be obtained by considering the case of $|\mathbf{n}|\rightarrow\infty$. Asymptotic expansions in terms of unique rational functions for $\mathsf{G}_{\mathsf{L}}(\mathbf{n})$ are provided in [@martinsson2002]. For example, $$(\Delta x)^2 \mathsf{G}_{\mathsf{L}}(\mathbf{n}) =
- \frac{ 1 } { 4 \pi |\mathbf{n}| }
- \frac{ n_1^4 + n_2^4 + n_3^4
- 3 n_1^2 n_2^2 - 3 n_1^2 n_3^2 - 3 n_2^2 n_3^2 }
{ 16 \pi |\mathbf{n}|^7 } + \mathcal{O}\left(|\mathbf{n}|^{-5}\right),$$ as $|\mathbf{n}|\rightarrow\infty$. As expected, the leading order term corresponds to the fundamental solution of the Laplace operator.
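The Bessel forms above are straightforward to evaluate. The sketch below (purely illustrative, not part of the LGF-FMM machinery) uses SciPy's exponentially scaled Bessel function `ive`, where $\mathrm{ive}(n,z)=e^{-|z|}I_n(z)$, so that $e^{-6t}\prod_q I_{n_q}(2t) = \prod_q \mathrm{ive}(n_q,2t)$ is numerically well behaved. It checks $\mathsf{G}_{\mathsf{L}}$ against its leading asymptotic term and against the known value of Watson's integral at the origin, and verifies that the integrating-factor kernel $\mathsf{G}_{\mathsf{E}}(\alpha)$ sums to one over the grid, as required of a discrete heat kernel:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive   # ive(n, z) = exp(-|z|) * I_n(z)

def gl(n1, n2, n3):
    """(dx)^2 G_L(n) via the Bessel integral form."""
    f = lambda t: ive(n1, 2*t) * ive(n2, 2*t) * ive(n3, 2*t)
    val, _ = quad(f, 0.0, np.inf, limit=200)
    return -val

# At the origin, (dx)^2 G_L(0) = -W/2 with Watson's integral W = 0.505462...
assert abs(gl(0, 0, 0) + 0.252731) < 1e-3

# Far from the origin, compare with the leading term -1/(4*pi*|n|);
# at n = (10, 0, 0) the next correction is only about 2e-5.
assert abs(gl(10, 0, 0) + 1.0 / (4.0 * np.pi * 10.0)) < 2e-4

# G_E(alpha)(n) = prod_q ive(n_q, 2*alpha); its sum over the grid is 1
# because sum_n I_n(z) = exp(z).
alpha, K = 0.5, 25
g1 = np.array([ive(k, 2*alpha) for k in range(-K, K + 1)])
assert abs(g1.sum()**3 - 1.0) < 1e-12
```

The rapid decay of `g1` with $|n|$ is what makes the fixed, small integrating-factor stencils of footnote 5 possible.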
Numerical procedures for efficiently evaluating $\mathsf{G}_{\mathsf{L}}(\mathbf{n})$ are provided in [@liska2014]. Values for $[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n})$ can be readily computed using its Bessel form given by Eq. . Although computing values of $\mathsf{G}_{\mathsf{L}}(\mathbf{n})$ and $[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n})$ can potentially require a non-trivial number of operations, the LGF-FMM, used to compute the action of $\mathsf{L}^{-1}_\mathcal{Q}$ and $\mathsf{E}_\mathcal{Q}(\alpha)$, employs pre-processing techniques that limit the evaluation of point-wise values of LGFs to once per simulation.
Stability analysis {#app:stability}
==================
Consider the linearization of Eq. with respect to $\mathsf{v}$ about a uniform, constant base flow, $\mathsf{v}_\text{base}(\mathbf{n},t)=\tilde{\mathbf{u}}$, for the case of $\mathsf{u}_\infty=0$, $$\frac{ d \mathsf{v}^\prime }{ d t }
= [\mathsf{K}(\tilde{\mathbf{u}})] \mathsf{C} \mathsf{v}^\prime
+ \mathsf{G} \mathsf{b}^\prime,\quad
\mathsf{G}^\dagger \mathsf{v}^\prime = 0,
\label{eq:lin_eq}$$ where $\mathsf{v} = \mathsf{v}_\text{base} + \mathsf{v}^\prime$ and $\mathsf{K}(\tilde{\mathbf{u}})$ is defined by Eq. .[^31] The stability analysis of Eq. is facilitated by using a null-space approach to transform the original DAE index 2 system to an equivalent ODE, $$\frac{ d \mathsf{q} }{ d t }
= \mathsf{C} [\mathsf{K}(\tilde{\mathbf{u}})] \mathsf{q}
\label{eq:lin_eq_trans}$$ where $\mathsf{q} = \mathsf{C} \mathsf{v}^\prime$, $\mathsf{v}^\prime = \mathsf{C}^\dagger \mathsf{s}$, and $\mathsf{L}_\mathcal{E} \mathsf{s} = \mathsf{q}$ with $\mathsf{s}\rightarrow 0$ as $|\mathbf{n}|\rightarrow \infty$. The details regarding the feasibility and equivalence of this transformation are discussed in Section \[sec:truncation\_active\]. It is readily verified that the discrete equations corresponding to the HERK method for Eq. and for Eq. are also equivalent; hence, Eq. and Eq. have the same stability region.
The ODE given by Eq. is diagonalized by the component-wise Fourier series $\mathfrak{F}_\mathcal{E}$, defined by Eq. ,
$$\frac{ d \hat{q}_k }{ d t }
= \frac{|\tilde{\mathbf{u}}|\Delta t}{\Delta x}
\sigma( \boldsymbol{\xi} ) \hat{q}_k \,\,\,
\forall k = 1,2,3,
\label{eq:lin_eq_trans_diag}$$
$$\sigma( \boldsymbol{\xi} ) = - i \sum_{j=1}^3 \frac{ \tilde{u}_j }
{ |\tilde{\mathbf{u}}| } \sin \xi_j,
\label{eq:lin_eq_eig}$$
\[eq:lin\_eq\_trans\_diag\_full\]
where $\boldsymbol{\xi}\in\Pi=(-\pi,\pi)^3$.[^32] It follows from Eq. that $\Re(\sigma( \boldsymbol{\xi} ))=0$ and $|\Im(\sigma( \boldsymbol{\xi} ))| \le \sqrt{3}$ for all $\boldsymbol{\xi}\in\Pi$. As a result, the linear stability of Eq. is determined by the stability of the scalar ODEs: $$\frac{dy}{dt} = i \mu y \quad \forall\mu\in (-\gamma,\gamma),\quad
\gamma=\sqrt{3}\frac{|\tilde{\mathbf{u}}|\Delta t}{\Delta x}.
\label{eq:scalar_ode}$$
Consider integrating the ODE given by Eq. using the HERK method. In the absence of algebraic constraints, an HERK scheme reduces to a standard ERK scheme with the same tableau. Consequently, the region of absolute stability for the ODE of Eq. is given by $$\Omega = \left\{ \mu \in \mathbb{R} : |R(i\mu)| < 1 \right\},
\quad
R(z) = 1 + z \mathbf{b}^\dagger \left(\mathbf{I}-z\mathbf{A}\right)^{-1}\mathbf{e},
\label{eq:stability_region}$$ where $\mathbf{b}$ and $\mathbf{A}$ are defined by Eq. , and $\mathbf{e}=[\,1,\,1,\dots,\,1\,]$ [@hairer1996]. Eq. implies that the IF-HERK method is linearly stable if the following CFL condition is satisfied: $$\text{CFL} = \frac{|\tilde{\mathbf{u}}| \Delta t}{\Delta x} < \text{CFL}_{\text{max}},
\quad
\text{CFL}_\text{max}=\frac{\mu^*}{\gamma}
\label{eq:cfl_cond}$$ where $\mu^* = \text{sup}\left(\Omega\right)$ depends on the RK coefficients of the scheme. For all the IF-HERK schemes defined in Eq. , the value of $\text{CFL}_\text{max}$ is unity.
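The CFL bound is easy to reproduce numerically for a concrete scheme. The sketch below uses the classical three-stage, third-order ERK tableau as a stand-in (an assumption for illustration; the actual IF-HERK coefficients are given elsewhere in the paper) and recovers $\mu^* = \sup(\Omega) = \sqrt{3}$, hence $\text{CFL}_\text{max} = \mu^*/\sqrt{3} = 1$:

```python
import numpy as np

# Classical RK3 tableau (assumed here purely for illustration).
A = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [-1.0, 2.0, 0.0]])
b = np.array([1/6, 2/3, 1/6])
e = np.ones(3)

def R(z):
    """Stability function R(z) = 1 + z b^T (I - z A)^{-1} e."""
    return 1.0 + z * b @ np.linalg.solve(np.eye(3) - z * A, e)

# Scan the imaginary axis for the largest mu with |R(i*mu)| <= 1.
mu = np.arange(0.0, 2.5, 1e-3)
stable = np.array([abs(R(1j * m)) <= 1.0 + 1e-12 for m in mu])
mu_star = mu[stable].max()

assert abs(mu_star - np.sqrt(3)) < 5e-3        # sup(Omega) = sqrt(3)
assert abs(mu_star / np.sqrt(3) - 1.0) < 5e-3  # hence CFL_max = 1
```

For any three-stage, third-order ERK tableau, $R(z) = 1 + z + z^2/2 + z^3/6$, so $|R(i\mu)|^2 = 1 + \mu^4(\mu^2-3)/36$, which is below one exactly for $0 < \mu < \sqrt{3}$.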
Error estimates for integrating factors operating on truncated source fields {#app:iferror}
============================================================================
In this appendix we provide estimates for the difference between $\mathsf{E}_\mathcal{Q}(\alpha)$ and $\mathsf{M}^{\gamma}_\mathcal{Q} \mathsf{E}_\mathcal{Q}(\alpha)\mathsf{M}^{\gamma}_\mathcal{Q}$ inside $D_\gamma$, which is pertinent to the discussion of Section \[sec:truncation\_refresh\]. Consider the constant uniform scalar field $\mathsf{u}\in\mathbb{R}^\mathcal{Q}$ and the domain $D_\gamma$, where $D_\gamma$ is infinite in the $x$- and $y$-directions and semi-infinite in the $z$-direction. For this simplified case, it is sufficient to consider the 1D problem of computing $$\mathsf{y} = \left[ \mathsf{E}^\prime(\alpha)
- \mathsf{M}^\prime \mathsf{E}^\prime(\alpha) \mathsf{M}^\prime \right] \mathsf{u}
= \left[ \mathsf{I} - \mathsf{M}^\prime \mathsf{E}^\prime (\alpha) \mathsf{M}^\prime \right] \mathsf{u},$$ where $\mathsf{I}$ is the identity operator, $$\mathsf{E}^\prime(\alpha) \mathsf{u} = \mathsf{G}_\mathsf{E}^\prime(\alpha) * \mathsf{u}, \quad
\mathsf{G}^\prime_\mathsf{E}(n) = e^{-2\alpha} I_n(2\alpha),$$ and $$[ \mathsf{M}^\prime \mathsf{u} ](k) = \left\{
\begin{array}{cc}
\mathsf{u}(k) & \text{if}\,\,k>0\\
0 & \text{otherwise}
\end{array} \right. .$$ As a result, the magnitude of the normalized difference, $\mathsf{d}$, at $k>0$ is given by $$\mathsf{d}(k) = \frac{\mathsf{y}(k)}{|u|} = \sum_{j=0}^{\infty} e^{-2\alpha} I_{k-j+1}(2\alpha),
\label{eq:if_finite_error}$$ where $|u|$ is the magnitude of the uniform field $\mathsf{u}$. Numerical approximations for $\mathsf{d}(k)$ are obtained by truncating the infinite sum of Eq. to a finite number of terms, $N$, such that $I_{k-N+1}(2\alpha)/I_{k+1}(2\alpha)$ is less than a prescribed value.[^33]
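As a sanity check, the normalized difference can be evaluated directly in 1D: apply $[\mathsf{I} - \mathsf{M}^\prime \mathsf{E}^\prime(\alpha) \mathsf{M}^\prime]$ to a constant field by truncated convolution, and compare against the Bessel tail series $\sum_{j \ge 0} e^{-2\alpha} I_{k+j}(2\alpha)$, which follows from the kernel normalization $e^{-2\alpha}\sum_{n\in\mathbb{Z}} I_n(2\alpha) = 1$. A sketch using SciPy's scaled Bessel function `ive`:

```python
from scipy.special import ive   # ive(n, z) = exp(-|z|) * I_n(z)

alpha = 0.5
M = 400   # truncation of the (formally infinite) 1D sums

def d_direct(k):
    """d(k) = ([I - M' E'(alpha) M'] u)(k) / |u| for constant u kept on k > 0."""
    # [M' E' M' u](k) = sum_{j >= 1} exp(-2a) I_{k-j}(2a); note I_{-n} = I_n.
    return 1.0 - sum(ive(k - j, 2*alpha) for j in range(1, M + 1))

def d_series(k):
    """Tail series, using exp(-2a) * sum over all n of I_n(2a) = 1."""
    return sum(ive(k + j, 2*alpha) for j in range(0, M + 1))

for k in range(1, 8):
    assert abs(d_direct(k) - d_series(k)) < 1e-12

# The difference decays rapidly away from the truncation boundary.
assert d_series(6) < 1e-3 * d_series(1)
```

The fast decay of `d_series(k)` with $k$ is what permits the conservative but inexpensive error estimate described below.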
As discussed in Section \[sec:truncation\_refresh\], the current implementation of the NS-LGF method uses Eq. to estimate the error associated with approximating $\mathsf{E}_\mathcal{Q}(\alpha)\mathsf{u}$ by $\mathsf{M}^{\text{xsoln}}_\mathcal{Q} \mathsf{E}_\mathcal{Q}(\alpha)\mathsf{M}^{\text{xsoln}}_\mathcal{Q}\mathsf{u}$, where $\mathsf{u}$ is the velocity perturbation field. For this case, $|u|$ in Eq. is set to be the maximum value of any component of $\mathsf{u}$ in $D_\text{soln}$. Numerical experiments of flows similar to those considered in Section \[sec:verif\] demonstrate that this technique leads to fairly conservative error estimates; in all experiments the actual error was less than 10% of the estimated error. Tighter error bounds that account for the domain shape and the distribution of $\mathsf{u}$ can potentially be obtained, but are not explored in the present work.
[^1]: In the absence of sources and sinks, the velocity of an irrotational flow subject to zero boundary conditions at infinity is given by $\mathbf{v} = \nabla \phi$, where the leading order term of $\phi$ is $-\mathbf{M}\cdot\mathbf{x}/r^3$ [@saffman1992]. Consequently, $p = -\left( \frac{ \partial \phi }{ \partial t } + \frac{1}{2} | \nabla \phi |^2 \right) \rightarrow 0$ as $r \rightarrow \infty$, where we have taken the arbitrary time-dependent constant to be zero.
[^2]: In order to avoid a cumbersome notation, the prime symbols, ${}^\prime$, are omitted from variables denoting grid functions associated with the perturbations of the discrete velocity and pressure fields.
[^3]: No particular form (e.g. convection, rotational, divergence, skew-symmetric) or discretization scheme for the convection term is assumed by Eq. .
[^4]: The discretization of Eq. naturally assumes the form given by Eq. if the convection term is discretized in its rotational form, $\left(\nabla \times \mathbf{v} \right) \times \mathbf{v}+\frac{1}{2}\nabla \mathbf{v}^2$, with the gradient term approximated by $\frac{1}{2}\mathsf{G}\mathsf{P}(\mathsf{v},\mathsf{v})$.
[^5]: Consider $[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n})$ for the case $\mathbf{n}=(n,0,0)$. As $n\rightarrow\infty$, $[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n}) \sim \alpha^n/n!$. For $\alpha=0.1$ and $\alpha=1.0$, the value of $[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{n})/[\mathsf{G}_{\mathsf{E}}(\alpha)](\mathbf{0})$ is less than $10^{-10}$ at $n=7$ and $n=13$, respectively. The numerical simulations of Section \[sec:verif\] make use of integrating factors with $\alpha<1$, but larger values of $\alpha$ are allowed.
[^6]: For the run parameters of the numerical experiments of Section \[sec:verif\], the action of $\mathsf{E}_\mathcal{Q}$ only requires approximately 10% of the total number of operations required to compute $\mathsf{L}_\mathcal{Q}^{-1}$.
[^7]: An efficient implementation of the IF-HERK algorithm recognizes that the application of $s-1$ integrating factors can be avoided during the final, $i=s$, stage by computing $\mathsf{r}_k^s = \mathsf{H}_\mathcal{F}^{s-1} \left( \mathsf{q}_k^{s-1} + \Delta t \sum_{j=1}^{s-1} \tilde{a}_{s,j} \mathsf{w}_k^{s-1,j}\right) + \mathsf{g}_k^{s}$, as opposed to Eq. . This modification avoids having to explicitly compute $\mathsf{q}_k^{s}$ and $\mathsf{w}_k^{s,j}$ for $j=1,2,\dots,s-1$.
[^8]: One additional integrating factor is required during the last stage of the IF-HERK algorithm, but for the case of $c_s=1$ this additional integrating factor reduces to the identity operator.
[^9]: Without additional information the (scaled) total pressure perturbation, $\hat{\mathsf{d}}_k^{i}$, obtained from Eq. is unique up to a discrete linear polynomial. Yet, a unique $\hat{\mathsf{d}}_k^{i}$ is obtained by taking into account the compatibility condition $\mathsf{p}(\mathbf{n},t) \rightarrow 0$ as $|\mathbf{n}| \rightarrow \infty$, i.e. $\hat{\mathsf{d}}_k^{i}(\mathbf{n}) \rightarrow \mathsf{c}^i_k$ as $|\mathbf{n}| \rightarrow \infty$ where $\mathsf{c}^i_k = \frac{1}{2} |\mathsf{u}_\infty(t^i_k)|^2$, discussed in Section \[sec:spatial\_discrete\].
[^10]: Field values outside the finite region being tracked are treated as zero.
[^11]: Without further considerations Eq. implies that $\mathsf{a}$ is unique up to a discrete linear polynomial. Given that $\mathsf{w}$ is exponentially small at large distances and that $\mathsf{u}$ tends to zero at infinity, it follows that $\mathsf{a}$ is unique up to an arbitrary constant taken to be zero.
[^12]: For the test case of the extremely thin $\delta/R = 0.0125$ vortex ring discussed in Section \[sec:verif-selfindc\], the wall-time ratio of a vector to a scalar discrete Poisson solve is approximately 2.8, which is slightly less than the expected ratio of 3 based on operation count estimates of the LGF-FMM due to the larger parallel communication costs per problem unknown for the scalar case.
[^13]: Formally, $\epsilon_\text{supp}$ is only an approximate upper bound for the active domain case of Eq. since the source field for this problem is not exactly equal to $-\mathsf{G}^\dagger \tilde{\mathsf{N}}(\mathsf{u}+\mathsf{u}_\infty)$. Yet, for the present error estimates, numerical experiments of representative flows indicate that $-\mathsf{G}^\dagger\tilde{\mathsf{N}}(\mathsf{u}+\mathsf{u}_\infty)$ at $t=t_k$ is a good approximation to $\mathsf{G}^\dagger \mathsf{r}_k^{i}$ of each stage of the $k$-th time-step.
[^14]: Combinations of $n^b$, $N_b$, and $\beta$ resulting in $q_\text{max}=0$ are not allowed. For a given $\beta$, the value of $q_\text{max}$ can always be increased by using larger values of $n^b$ or $N_b$.
[^15]: The factor of 3 that appears in the second and third terms of Eq. accounts for the additional operations required to solve vector Poisson problems and vector integrating factors.
[^16]: The expression $c_s=1$ is one of the HERK order-conditions associated with second-order accurate constraints. For the case of $c_s=\tilde{c}_{s-1}=1$, the integrating factor $\mathsf{H}_\mathcal{F}^s$, defined by Eq. , simplifies to the identity operator.
[^17]: Numerical experiments of representative flows have shown that $\alpha \approx 0.1$ is sufficiently small as to avoid most spurious and unnecessary changes to the computational grid.
[^18]: The dual grid corresponds to a copy of the original staggered grid that has been shifted by half a grid cell in each direction. Cells, faces, edges, and vertices of the original grid can be regarded as vertices, edges, faces, and cells, respectively, of the dual grid.
[^19]: The *solution error*, as defined in Section \[sec:algorithm\], should not be confused with the error of the solution.
[^20]: The computational cost of the spatial convergence tests are reduced by using “fat” vortex rings such as those given by Eq. , which, unlike similar “fat” rings given by Eq. , are continuous and differentiable at the origin.
[^21]: We note that the spatial discretization error associated with the computational grid is significantly larger than the temporal discretization error for some test cases. This does not affect the present refinement studies since the spatial discretization error is the same for all test cases and our error estimates are computed as the difference of two numerical solutions.
[^22]: Numerical solutions set the vorticity outside the computational domain to be zero. As a result, the only error involved in evaluating the integrals of Eq. is the error resulting from their discretization.
[^23]: A vortex ring initiated with a vorticity distribution given by Eq. is a solution to the Navier-Stokes equations only in the limit of $\delta/R \rightarrow 0$.
[^24]: The radius of the core and the vorticity centroid in the radial direction are approximately $2 \sqrt{vt}$ and $R_0 + 3 vt /R_0$ at $\sqrt{vt} \ll R_0$ [@fukumoto2010].
[^25]: Extrapolating from the results of Table \[tab:verif-conv-int\] to the present tests cases, we estimate that the error of $\mathcal{U}_z$ to be between $10^{-5}$ and $10^{-4}$. As a result, the assumption that $\mathcal{U}_z$ is more accurate than $U_\text{Saffman}$ might need to be revisited for test cases resulting in values of $\Delta \tilde{U}_z<10^{-4}$.
[^26]: The error is estimated by assuming that the test case corresponding to $\epsilon^*=10^{-6}$ is the true solution.
[^27]: In the discussion of @archer2008, the test case corresponding to a vortex ring at $\text{Re}_0=\numprint{7500}$ initiated with $\delta_0/R_0=0.2$ is denoted as case “B3”. Unlike the present test cases, the initial vorticity distribution for case B3 of @archer2008 was slightly perturbed to promote an early transition.
[^28]: The maximum length, in terms of $R_0$, of the computational domain in the $z$-direction is approximately $10$, $26$, $34$, $46$, $46$ for the test cases with $\epsilon^*$ equal to $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, and $10^{-6}$, respectively.
[^29]: The discrete nonlinear operator presented here is based on the discretization of the convective term in its rotational form, i.e. $\boldsymbol{\omega}\times\mathbf{u} - \frac{1}{2} \nabla (\mathbf{u}\cdot\mathbf{u})$, following the technique described in @zhang2002. As discussed in Section \[sec:spatial\_discrete\], $\tilde{\mathsf{N}}(\mathsf{f})$ is an approximation of $(\nabla \times \mathbf{f}) \times \mathbf{f}$.
[^30]: Grid functions in $\mathbb{R}^\Lambda$ can also be regarded as functions mapping $\mathbb{Z}^{3}$ to $\mathbb{R}$.
[^31]: It is not necessary to linearize the integrating factors present in Eq. , since they can be commuted and made to cancel out after the linearization of $\tilde{\mathsf{N}}$.
[^32]: In order to simplify the expression for $\sigma(\boldsymbol{\xi})$ to the form given by Eq. it is necessary to account for $\mathsf{D} \mathsf{q} = 0$.
[^33]: For a fixed $z>0$, $I_{n}(z)$ decreases as $n$ increases, and decays faster than any exponential as $n\rightarrow\infty$.
---
abstract: |
Motivated by problems in percolation theory, we study the following 2-player positional game. Let $\Lambda_{m \times n}$ be a rectangular grid-graph with $m$ vertices in each row and $n$ vertices in each column. Two players, Maker and Breaker, play in alternating turns. On each of her turns, Maker claims $p$ (as-yet unclaimed) edges of the board $\Lambda_{m \times n}$, while on each of his turns Breaker claims $q$ (as-yet unclaimed) edges of the board and destroys them. Maker wins the game if she manages to claim all the edges of a crossing path joining the left-hand side of the board to its right-hand side, otherwise Breaker wins. We call this game the $(p,q)$-crossing game on $\Lambda_{m \times n}$.
Given $m,n\in \mathbb{N}$, for which pairs $(p,q)$ does Maker have a winning strategy for the $(p,q)$-crossing game on $\Lambda_{m \times n}$? The $(1,1)$-case corresponds exactly to the popular game of Bridg-it, which is well understood due to it being a special case of the older Shannon switching game. In this paper, we study the general $(p,q)$-case. Our main result is to establish the following transition:
- if $p\geqslant 2q$, then Maker wins the game on arbitrarily long versions of the narrowest board possible, i.e. Maker has a winning strategy for the $(2q, q)$-crossing game on $\Lambda_{m \times(q+1)}$ for any $m\in \mathbb{N}$;
- if $p\leqslant 2q-1$, then for every width $n$ of the board, Breaker has a winning strategy for the $(p,q)$-crossing game on $\Lambda_{m \times n}$ for all sufficiently large board-lengths $m$.
Our winning strategies in both cases adapt more generally to other grids and crossing games. In addition we pose many new questions and problems.
author:
- 'A. Nicholas Day[^1]'
- 'Victor Falgas–Ravry'
bibliography:
- 'makerbreakerbiblio.bib'
title: 'Maker-Breaker Percolation Games I: Crossing Grids'
---
Introduction {#section: introduction}
============
Results and organisation of the paper {#subsection: results}
-------------------------------------
Biased Maker–Breaker games are a central area of research on positional games, in particular due to their intriguing and deep connections to resilience phenomena in discrete random structures. Much of the research on Maker–Breaker games has focussed on the case where the “board” is a complete hypergraph, or an arithmetically-defined hypergraph corresponding to all the solutions to a system of equations in some finite integer interval. Typically the “winning sets” that Maker seeks to claim in these games all have the same size.
In this paper we focus on boards and winning sets with rather different properties: we consider rectangular grid graphs, and our winning sets consist of *crossing paths*, whose sizes can vary wildly.
Explicitly, we define the $(p,q)$-crossing game as follows. Let $\Lambda_{m \times n}$ be the rectangular grid-graph with $m$ vertices in each row and $n$ vertices in each column; our convention is to call $m$ the *length* and $n$ the *width* of the board. Two players, Maker and Breaker, play in alternating turns, with Maker playing first. On each of her turns, Maker claims $p$ (as-yet unclaimed) edges of the board $\Lambda_{m \times n}$, while on each of his turns Breaker claims $q$ (as-yet unclaimed) edges of the board and destroys them. The game ends if either Maker manages to claim all the edges of a crossing path joining the left-hand side of the board to its right-hand side, in which case we declare her the winner, or if the board reaches a state where it is no longer possible for Maker to ever claim such a left-right crossing path, in which case we declare Breaker the winner. A natural question to ask is: given positive integers $m,n,p,q$, which player has a winning strategy for the $(p,q)$-crossing game on $\Lambda_{m \times n}$?
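For concreteness, the win condition (Maker owning every edge of some left-right crossing path) can be tested with a breadth-first search over Maker's claimed edges. The following sketch is purely illustrative, not part of any strategy discussed in the paper:

```python
from collections import deque

def maker_wins(m, n, claimed):
    """Check whether the edge set `claimed` on the grid Lambda_{m x n}
    (m vertices per row, n per column; vertices are (x, y) with
    0 <= x < m, 0 <= y < n) contains a left-right crossing path.
    Edges are stored as frozensets of their two endpoints."""
    adj = {}
    for edge in claimed:
        u, v = tuple(edge)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    # BFS from every vertex on the left-hand side of the board.
    seen = {(0, y) for y in range(n)}
    queue = deque(seen)
    while queue:
        u = queue.popleft()
        if u[0] == m - 1:      # reached the right-hand side
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# A path along the bottom row crosses the 5 x 3 board ...
path = {frozenset({(x, 0), (x + 1, 0)}) for x in range(4)}
assert maker_wins(5, 3, path)
# ... but not once Breaker has destroyed one of its edges.
assert not maker_wins(5, 3, path - {frozenset({(2, 0), (3, 0)})})
```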
Our main results are the following two theorems, which show that the game undergoes a sharp transition at $p=2q$:
\[theorem: (2q,q)-game\] Let $m,n,p$ and $q$ be natural numbers. If $p \geqslant 2q$ and $n \geqslant q+1$, then Maker has a winning strategy for the $(p,q)$-crossing game on $\Lambda_{m \times n}$.
\[theorem: (2q-1,q)-game\] Let $m,n,p$ and $q$ be natural numbers. There exists a natural number $m_{0} = m_{0}(n,q)$ such that if $p \leqslant 2q-1$ and $m \geqslant m_{0}$, then Breaker has a winning strategy for the $(p,q)$-crossing game on $\Lambda_{m \times n}$.
In other words if Maker has at least twice the power of Breaker and the board is wide enough that Breaker cannot win in a single turn, then Maker wins the game no matter how long the board is. On the other hand, if Maker has strictly less than twice Breaker’s power, then Breaker has a winning strategy on all boards that are sufficiently long (with respect to the board width $n$ and Breaker’s power $q$). The proofs of Theorems \[theorem: (2q,q)-game\] and \[theorem: (2q-1,q)-game\] can be found in Sections \[section: (2q,q)-game\] and \[section: (2q-1,q)-game\] respectively. As we remark in Section \[section: other graphs and other games\], our strategies for these two games adapt to a number of other games and grids; see in particular Theorem \[theorem: general strip theorem\] for a generalisation of Theorem \[theorem: (2q-1,q)-game\].
The rest of this paper is organised as follows: in Section \[subsection: background and motivation\] we give some background and motivation for our problem. In Section \[section: preliminaries\] we go over some basic definitions and prove some elementary results on crossing games. For completeness, we also record a winning strategy for the $(1,1)$-crossing game on $\Lambda_{(n+1) \times n}$ which allows Maker to play any edge on her first move (this might be folklore — that Maker has a winning strategy is well-known, but we could not find a reference to the fact that any first move will do). We end this paper in Section \[section: concluding remarks\] with a number of questions and open problems, including a discussion of connections to the study of fugacity in statistical physics and some enumeration problems in analytic combinatorics.
Background and motivation {#subsection: background and motivation}
-------------------------
Maker–Breaker games are a class of positional games which have attracted considerable attention from researchers in combinatorics and discrete probability. The set-up is simple: we have a finite board (a set) $X$, and a collection $\mathcal{W}$ of subsets of $X$ called *winning sets*. Two players, Maker and Breaker, take turns to claim as-yet unclaimed elements of $X$. Maker (typically) plays first, and claims $a$ elements in each of her turns, while Breaker claims $b$ elements on each of his. Maker’s aim is to claim all the elements of a winning set $W\in \mathcal{W}$, while Breaker’s aim is to thwart her, i.e. to claim at least one element from each winning set. Since the board is finite, no draws are allowed, and the main question is to determine who has a winning strategy.
Maker–Breaker games on graphs have been extensively studied since an influential paper of Chvátal and Erd[ő]{}s [@ChvatalErdos78] in the late 1970s. Important examples of such games include the connectivity game, the $k$-clique game and the Hamiltonicity game, where the board $X$ consists of the edges of a complete graph on $n$ vertices and the winning sets are spanning trees, $k$-cliques and Hamiltonian cycles respectively.
In their paper Chvátal and Erd[ő]{}s proved that, for a variety of such games, if $n$ is sufficiently large, then Maker has a winning strategy in the case where $a=b=1$. In each case they then asked how large a bias $b=b(n)$ was required for the $(1,b)$ versions of these games to turn into Breaker’s win and provided a surprising and influential *random graph heuristic* for determining the value of these *threshold biases*. Namely, according to this heuristic the threshold bias $b_{\star}$ at which Breaker has a winning strategy should lie close to the threshold $b$ at which a set of $\frac{1}{b+1}\binom{n}{2}$ edges chosen uniformly at random fails, with high probability, to contain any winning set. Almost twenty years after this heuristic was formulated, Bednarska and [Ł]{}uczak [@BednarskaLuczak00] were able to rigorously establish its correctness in the case of $H$-games, where the winning sets are the copies of a fixed finite graph $H$ containing at least three non-isolated vertices. Their argument involved the construction of random Maker strategies, which they showed could win with high probability against any Breaker strategy — implying the existence of deterministic (albeit non-explicit) Maker strategies that win against all Breaker strategies.
In a different direction, Stojakovi[ć]{} and Szab[ó]{} [@StojakovicSzabo05] considered playing these Maker–Breaker games on random boards, by having $X$ consist of the edges of an Erd[ő]{}s–Rényi random graph $G_{n,p}$. As having fewer edges cannot help Maker, the natural question in this setting is: what is the threshold $p_{\star}$ such that if $p\gg p_{\star}$, then with probability $1-o(1)$ Maker has a winning strategy for the $(1,1)$ game in question on $G_{n,p}$, while if $p\ll p_{\star}$, then with probability $1-o(1)$ Breaker has a winning strategy. Stojakovi[ć]{} and Szab[ó]{} showed that for some games, such as the connectivity game, $1/b_{\star}$ and $p_{\star}$ are of the same order, but that for others, such as the triangle game, no such relationship holds.
The intriguing connections between Maker–Breaker games and deep phenomena in discrete probability (in addition to their obvious combinatorial appeal) have led to an abiding interest in Maker–Breaker games. In addition to the graph-theoretic setting mentioned above, Maker–Breaker games have also been studied in arithmetic settings, where the board $X$ corresponds to some integer interval, and the winning sets are $r$-tuples of integers that are solutions to systems of linear equations in $r$ variables. We refer the reader to the 2008 monograph of Beck [@Beck08] for a summary and exposition of some of the many results in the area known up to that point, and to the preprint of Kusch, Rué, Spiegel and Szabó [@KuschRueSpiegelSzabo17] for some recent progress on hypergraph and arithmetic Maker–Breaker games, in particular establishing the tightness of the Bednarska–[Ł]{}uczak strategy for a very general class of games.
In this paper, we investigate $(p,q)$–crossing games on rectangular grid-graphs. These differ from previous Maker-Breaker games on graphs in a number of ways: grid-graphs are far sparser than previously considered boards; the ‘winning sets’, consisting of crossing paths, vary wildly in size, whereas in the previously studied examples they tended to all have the same size. Finally, we let both the aspect ratios ($m:n$) for our rectangular grids and the powers of both Maker and Breaker (the parameters $p$ and $q$) vary, whereas in previous games on graphs only Breaker’s power varied, and a notion of aspect ratio was absent.
Our motivation for investigating crossing games comes from percolation theory. Percolation theory is a branch of probability theory concerned, broadly speaking, with the study of random subgraphs of infinite lattices, and in particular the emergence of infinite connected components. Since its inception in Oxford in the late 1950s, it has blossomed into a beautiful and rich area of research. One of the most celebrated results in percolation theory is, without a doubt, the Harris–Kesten Theorem [@Harris60; @Kesten80] which we state below.
Let $\Lambda$ denote the square integer lattice, that is, the graph on $\mathbb{Z}^2$ whose edges consist of pairs of vertices $\mathbf{v}, \mathbf{w}\in \mathbb{Z}^2$ lying at Euclidean distance $\| \mathbf{v}-\mathbf{w}\|=1$ from each other. The $p$-random measure $\mu_p$ is, informally, the probability measure on subsets of $E(\Lambda)$ that includes each edge with probability $p$, independently of all the others. (We eschew some measure-theoretic subtleties here; for a rigorous definition of $\mu_p$ using cylinder events, see Bollobás and Riordan [@BollobasRiordan06 Chapter 1].)
Let $\Lambda_p$ denote a $\mu_p$-random subgraph of $\Lambda$. Then
- if $p\leqslant \frac{1}{2}$, then almost surely $\Lambda_p$ does not contain an infinite component;
- if $p>\frac{1}{2}$, then almost surely $\Lambda_p$ contains an infinite component.
We began investigating *Maker–Breaker percolation games*, where Maker tries to ensure the origin is contained in an infinite component, to see if some analogue of the Chvátal–Erd[ő]{}s probabilistic intuition could hold in this setting also, despite the presence of an infinite probability space. One of the key tools in modern proofs of the Harris–Kesten theorem are the so-called *Russo–Seymour–Welsh* lemmas giving bounds on the probability of crossing rectangles of various aspect ratios at $p=\frac{1}{2}$. Unsurprisingly, crossing games turned out to play an important role in our arguments when studying percolation games. In particular, the results we establish in this paper are key ingredients in the proofs of our main results on Maker–Breaker percolation games that we establish in the sequel [@DayFalgasRavry18b] to the present paper.
Besides the motivation from percolation theory, we should like to stress also that crossing games are paradigmatic representatives of an important class of positional games. Indeed they are related to the older and much-studied game of Hex, and the $(1,1)$-crossing game we study here is in fact the commercially available game of Bridg-it. Which of the players wins Bridg-it under perfect play has been known since the late 1960s, thanks to Lehman’s resolution of the more general Shannon switching game [@Lehman64]. The relationship between our work in the present paper and these older games is discussed in greater detail in Sections \[subsection: (1,1)-crossing\] and \[section: other graphs and other games\].
Preliminaries {#section: preliminaries}
=============
Basic definitions and notation {#subsection: basic def and notation}
------------------------------
A graph is a pair $G=(V,E)$, where $V=V(G)$ is a set of vertices and $E=E(G)$ is a set of pairs from $V$ which form the edges of $G$. A subgraph $H$ of $G$ is a graph with $V(H)\subseteq V(G)$ and $E(H)\subseteq E(G)$. Given $n\in \mathbb{N}$, let $[n]=\{1,2, \ldots n\}$. In this paper, we often identify a graph with its edge-set when the underlying vertex-set is clear from context. For the remainder of this paper, unless stated otherwise, the variables $m,n,p,q,x$ and $y$ will always be natural numbers.
Let $\Lambda$ denote the square integer lattice, that is, the graph on $\mathbb{Z}^2$ whose edges consist of pairs of vertices $\mathbf{v}, \mathbf{w}\in \mathbb{Z}^2$ lying at Euclidean distance $\| \mathbf{v}-\mathbf{w}\|=1$ from each other. Given $m$ and $n$, let $\Lambda_{m \times n}$ be the finite subgraph of $\Lambda$ induced by the vertex set $\{(x,y):x \in [m], y \in [n]\}$. If $e$ is a horizontal edge in $\Lambda_{m \times n}$, that is $e = \{(x,y),(x+1,y)\}$ for some $x,y$, then we identify $e$ with its midpoint and write $e = (x+0.5,y)$. Similarly, if $e$ is a vertical edge in $\Lambda_{m \times n}$, that is $e = \{(x,y),(x,y+1)\}$ for some $x,y$, then we denote $e$ by its midpoint and write $e = (x,y+0.5)$. Let $S_{m \times n}$ be the graph obtained by taking $\Lambda_{m \times n}$ and removing all the edges from the set $$\label{equation: deleted edges}
\Big\{(1,y+0.5): y \in [n-1]\Big\} \bigcup \Big\{(m,y+0.5): y \in [n-1]\Big\},\notag$$ that is, all the leftmost and rightmost vertical edges in $\Lambda_{m \times n}$. We say a path in $S_{m \times n}$ or $\Lambda_{m \times n}$ is a *left-right crossing path* if it joins some vertex $(1,y)$ on the left-hand side of the board to some vertex $(m, y')$ on the right-hand side of the board, where $y,y' \in [n]$.
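The midpoint encoding above lends itself to a direct implementation. The following Python sketch (ours, not part of the paper; half-integer coordinates are safe here since $0.5$ is exactly representable as a float) builds the edge set of $S_{m \times n}$ and tests a set of claimed edges for a left-right crossing path:

```python
# Sketch: edges of S_{m x n}, identified with their midpoints as in the text.
from collections import deque

def edges_S(m, n):
    """All edges of Lambda_{m x n} except the leftmost and rightmost
    vertical edges."""
    horiz = {(x + 0.5, y) for x in range(1, m) for y in range(1, n + 1)}
    vert = {(x, y + 0.5) for x in range(2, m) for y in range(1, n)}
    return horiz | vert

def endpoints(e):
    a, b = e
    if a != int(a):                       # horizontal edge (x+0.5, y)
        return (a - 0.5, b), (a + 0.5, b)
    return (a, b - 0.5), (a, b + 0.5)     # vertical edge (x, y+0.5)

def has_left_right_crossing(claimed, m, n):
    """Does `claimed` (a set of midpoints) contain a path from column 1
    to column m?  Breadth-first search from the left-hand side."""
    adj = {}
    for e in claimed:
        u, v = endpoints(e)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    starts = [(1, y) for y in range(1, n + 1) if (1, y) in adj]
    seen, queue = set(starts), deque(starts)
    while queue:
        u = queue.popleft()
        if u[0] == m:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```

On $S_{4 \times 3}$, for instance, any single complete row of horizontal edges already forms a left-right crossing path.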
We define the $(p,q)$-crossing game on $S_{m \times n}$ (respectively $\Lambda_{m\times n}$) as follows. Two players, Maker and Breaker, play in alternating turns. Maker plays first and on each of her turns claims $p$ (as-yet-unclaimed) edges of the board $S_{m \times n}$ (respectively $\Lambda_{m\times n}$); Breaker on each of his turns answers by claiming $q$ (as-yet-unclaimed) edges of the board. The game ends if either Maker manages to claim all the edges of a left-right crossing path, in which case we declare her the winner, or if the board reaches a state where it is no longer possible for Maker to ever claim such a left-right crossing path, in which case we declare Breaker the winner.
We shall work with crossing games on the board $S_{m \times n}$ rather than $\Lambda_{m \times n}$ for technical reasons, but for all practical purposes the two games are the same — it can never be in a player’s interest to claim an edge in $E(\Lambda_{m\times n})\setminus E(S_{m\times n})$ (so as far as winning strategies the two games are identical) and the removal of these edges makes it easier to define a dual board, as we shall shortly do below.
As a convention, we only consider the outcomes of the games under perfect play. If Maker has a winning strategy for given values $m,n,p,q$, we say that the corresponding game is a *Maker win*, otherwise we say it is a *Breaker win*. Further, we follow the convention that edges claimed by Maker are coloured blue, while edges (and their dual) claimed by Breaker are coloured red.
We now define *duality* for our boards. The dual $\Lambda^{*}$ of $\Lambda$ is the graph obtained from $\Lambda$ by shifting its vertex set by $(0.5, 0.5)$, i.e. the graph with vertex-set $\{(x + 0.5,y + 0.5): x,y \in \mathbb{Z} \} = \mathbb{Z}^{2} + (0.5,0.5)$ and edge-set consisting of all pairs of vertices lying at Euclidean distance $1$ from each other. We refer to the vertices and edges of $\Lambda^{*}$ as *dual vertices* and *dual edges* respectively. Just as for $\Lambda$, we identify each dual edge with its midpoint. Given an edge $e\in E(\Lambda)$, its *dual* is defined to be the dual edge $e^{*}\in E(\Lambda^{*})$ such that $e$ and $e^{*}$ have the same midpoint. So for example the dual of the horizontal edge $e = (x + 0.5,y)\in E(\Lambda)$ is the vertical dual edge $e^{*} = (x + 0.5,y)^{*}$ that lies between the dual vertices $(x + 0.5,y-0.5)$ and $(x + 0.5,y+0.5)$, and the dual of the vertical edge $e = (x ,y+ 0.5)\in E(\Lambda)$ is the horizontal dual edge $e^{*} = (x ,y+ 0.5)^{*}$ that lies between the dual vertices $(x - 0.5,y+0.5)$ and $(x + 0.5,y+0.5)$.
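Since $e$ and $e^{*}$ share a midpoint, the dual map is simply an orientation flip in this encoding. A minimal sketch (our helper, not the paper's notation):

```python
# Sketch: edges and dual edges are both identified with their midpoints,
# so the dual map fixes the midpoint and swaps the orientation.
def dual_endpoints(e):
    """Endpoints of the dual edge e* sharing the midpoint of e."""
    a, b = e
    if a != int(a):  # e = (x+0.5, y) is horizontal, so e* is vertical
        return (a, b - 0.5), (a, b + 0.5)
    # e = (x, y+0.5) is vertical, so e* is horizontal
    return (a - 0.5, b), (a + 0.5, b)
```

The two worked examples in the text, $e = (x+0.5,y)$ and $e = (x,y+0.5)$, are recovered exactly.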
Given a set of edges $E$ in $\Lambda$, let $E^{*} = \{e^{*} : e \in E\}$. Given a subgraph $\Gamma = (V,E)$ of $\Lambda$ (finite or infinite), we define its dual $\Gamma^{*}$ to be the graph with edge-set $E^{*}$, and vertex-set consisting of all dual vertices incident to some dual edge $e^{*} \in E^{*}$. For example, $S_{m \times n}^{*}$ is a rotated and translated copy of the graph $S_{(n+1) \times (m-1)}$. In particular, $S_{(n+1) \times n}$ is *self-dual*, being isomorphic to its dual graph. Similarly, the square-integer lattice $\Lambda$ is self-dual.
In the context of Maker-Breaker crossing games, duality is important as it shows Breaker can be viewed as a “dual Maker" aiming to build a vertical crossing dual path.
\[lemma: duality\] Suppose that Maker and Breaker play the $(p,q)$-crossing game on $S_{m \times n}$. At the end of the game, let $M$ be the set of edges claimed by Maker and $B$ be the set of edges claimed by Breaker. Then precisely one of the two following statements holds:
- Maker has won the game and $M$ contains a left-right crossing path in $S_{m \times n}$,
- Breaker has won the game and $B^{*}$ contains a top-bottom crossing dual path of $S_{m \times n}^{*}$.
In particular, our original game is equivalent to the game where Maker’s aim is to build a left-right crossing path of $S_{m \times n}$ and Breaker’s aim is to build a top-down dual crossing path of $S^{*}_{m \times n}$.
While this lemma is intuitively obvious, writing down a formal proof is not trivial. It follows, however, almost directly from [@BollobasRiordan06 Lemma 1, Chapter 3], which states that for any bipartition of $E(S_{m \times n})$ into disjoint sets $E_{1}$ and $E_{2}$, precisely one of the two following statements holds:
- $E_{1}$ contains a left-right crossing path of $S_{m \times n}$,
- $E_{2}^{*}$ contains a top-bottom crossing dual path of $S_{m \times n}^{*}$.
Clearly if at the end of the game Maker has won, then $M$ contains a left-right crossing path in $S_{m \times n}$. If instead at the end of the game Breaker has won, then there is no left-right crossing path in $E(S_{m \times n}) \setminus B$ and so by the above dichotomy we have that $B^{*}$ contains a top-bottom crossing path of $S_{m \times n}^{*}$.
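The dichotomy from [@BollobasRiordan06] can also be checked by brute force on small boards. The sketch below is ours: it enumerates all $2^{|E|}$ bipartitions of $E(S_{m \times n})$ (feasible for the small boards shown) and verifies that exactly one of the two statements holds for each.

```python
# Brute-force check of the crossing dichotomy on small boards S_{m x n}.
from itertools import chain, combinations

def edges_S(m, n):
    horiz = {(x + 0.5, y) for x in range(1, m) for y in range(1, n + 1)}
    vert = {(x, y + 0.5) for x in range(2, m) for y in range(1, n)}
    return horiz | vert

def ends(e, dual):
    """Endpoints of e (dual=False) or of its dual edge e* (dual=True)."""
    a, b = e
    horizontal = a != int(a)
    if horizontal != dual:  # horizontal edge, or dual of a vertical edge
        return (a - 0.5, b), (a + 0.5, b)
    return (a, b - 0.5), (a, b + 0.5)

def crossing(edge_set, source, sink, dual):
    """Is there a path in edge_set from a source vertex to a sink vertex?"""
    adj = {}
    for e in edge_set:
        u, v = ends(e, dual)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    stack = [u for u in adj if source(u)]
    seen = set(stack)
    while stack:
        u = stack.pop()
        if sink(u):
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def check_board(m, n):
    """Exactly one of: E1 crosses left-right, or (E \\ E1)* crosses
    top-bottom in the dual, for every bipartition E1 of the edges."""
    E = sorted(edges_S(m, n))
    for E1 in chain.from_iterable(combinations(E, r) for r in range(len(E) + 1)):
        E1 = set(E1)
        left_right = crossing(E1, lambda u: u[0] == 1, lambda u: u[0] == m, False)
        top_bottom = crossing(set(E) - E1, lambda u: u[1] == n + 0.5,
                              lambda u: u[1] == 0.5, True)
        if left_right == top_bottom:
            return False
    return True
```

In the original coordinates, the dual vertices of $S_{m \times n}^{*}$ have $y$-coordinates running from $0.5$ to $n+0.5$, which is what the top-bottom test above uses.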
As Lemma \[lemma: duality\] shows, the two players in our game actually have similar aims when viewed through the prism of duality: Maker and Breaker are competing for resources (edges/dual edges) to build their winning sets (left-right crossing paths/top-bottom crossing dual paths). To reflect the symmetry of their competing aims, we will sometimes refer to Maker as the *horizontal player*, denoting her by $\mathcal{H}$, and to Breaker as the *vertical player*, denoting him by $\mathcal{V}$. Further we will often think of Breaker as playing on the dual board and claiming dual edges on each of his turns rather than the corresponding edges of the original board (as he does in the formal game definition).
With the help of duality, one can define the boundary of a connected component in $\Lambda$ or $\Lambda^*$.
For a finite connected subgraph of $\Lambda$ with vertex set $C$, there is a unique infinite connected component $C_{\infty}$ of the subgraph of $\Lambda$ induced by the vertices in $\mathbb{Z}^2\setminus C$. The *external boundary* $\partial^{\infty}C$ of $C$ is the collection of dual edges from $\Lambda^{*}$ that are dual to edges joining $C$ to $C_{\infty}$ in $\Lambda$. The external boundary for a set of dual vertices from a finite connected subgraph of $\Lambda^*$ is defined mutatis mutandis.
It can be shown (see [@BollobasRiordan06 Lemma 1, Chapter 1]) that the external boundary $\partial^{\infty}C$ of the vertex-set $C$ of a finite connected subgraph $H$ of $\Lambda$ is a dual cycle with $C$ in its interior. A key tool in our proof of Theorem \[theorem: (2q,q)-game\] will be the following bound on the size of the boundary cycle in terms of the number of edges in $H$.
\[lemma: isoperimetric lemma\] Let $k \in \mathbb{N}$. If $A$ is a set of $k$ edges in $\mathbb{Z}^{2}$ forming a connected component with vertex set $C$, then the dual boundary cycle $\partial^{\infty}C$ contains at most $2k+4$ dual edges.
We prove the lemma by induction on $k$. The dual boundary cycle of a single edge has size $6$, so our claim holds in the base case $k=1$. Now assume that we have shown our claim holds for all components consisting of at most $k$ edges, and let $A$ be a set of $k+1$ edges forming a connected component in $\Lambda$ with vertex set $C$.
If $A$ contains a cycle, then there exists some edge $e \in A$ such that $A \setminus \{e\}$ also gives a connected subgraph with vertex-set $C$, and so by our inductive hypothesis $\vert \partial^{\infty}C\vert\leqslant 2k+4 \leqslant 2(k+1)+4$. On the other hand if $A$ is acyclic, then the corresponding subgraph is a tree, and hence has at least one leaf (vertex of degree one). Thus there exists an edge $e \in A$ such that $A' = A \setminus \{e\}$ spans all but one vertex of $C$, say the vertex $v$. Let $B=\partial^{\infty}(C \setminus \{v\})$; by the inductive hypothesis we know that $\vert B\vert \leqslant 2k+4$. If $e$ is not dual to any dual edge in $B$, then $B$ is also the dual boundary cycle for $C$, and we are done. If on the other hand we have $e^{*} \in B$, let $f_{1},f_{2}$ and $f_{3}$ be the three dual edges that together with $e^{*}$ form the boundary cycle around the single vertex $v$. Since $e$ is the only edge of $A$ incident with $v$, none of $f_1,f_2,f_3$ lie in $A$. The set $$\big(B \setminus \{e^{*}\} \big) \cup \{ f_{1},f_{2},f_{3}\} \nonumber$$ contains the external boundary $\partial^{\infty}C$ of $C$, and so this external boundary has size at most $\vert B\vert + 2 \leqslant 2(k+1)+4 $, as required.
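As a sanity check (not part of the proof), the size of $\partial^{\infty}C$ can be computed by brute force: by definition it equals the number of edges joining $C$ to the infinite component of its complement, which a flood fill over a padded bounding box finds directly. A Python sketch of ours, with edges given as pairs of endpoints:

```python
# Brute-force computation of |external boundary| for a connected edge set.
from collections import deque

def external_boundary_size(A):
    """Number of grid edges joining C (the vertex set of the edge set A)
    to the infinite component of its complement; this equals the number
    of dual edges in the boundary cycle around C."""
    C = {v for e in A for v in e}
    xs = [x for x, _ in C]
    ys = [y for _, y in C]
    lo_x, hi_x = min(xs) - 2, max(xs) + 2
    lo_y, hi_y = min(ys) - 2, max(ys) + 2
    # Flood-fill the complement from a corner of a padded bounding box;
    # everything reached belongs to the infinite component C_infty.
    start = (lo_x, lo_y)
    outside, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (lo_x <= v[0] <= hi_x and lo_y <= v[1] <= hi_y
                    and v not in C and v not in outside):
                outside.add(v)
                queue.append(v)
    return sum(1 for u in C
               for v in ((u[0] + 1, u[1]), (u[0], u[1] + 1),
                         (u[0] - 1, u[1]), (u[0], u[1] - 1))
               if v in outside)
```

A horizontal path of $k$ edges attains the bound $2k+4$ exactly, while a $2 \times 2$ square of four edges has boundary of size only $8$.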
Elementary bounds on winning boards for the $(p,q)$-crossing game {#subsection: (p,q)-crossing}
-----------------------------------------------------------------
In this subsection, we make some elementary observations about winning boards for crossing games for general $(p,q)$. We begin by giving some trivial bounds on the identity of the winner in the $(p,p)$-crossing game under optimal play on various different boards.
\[proposition: (p,p)-game\]
(i) Maker has a winning strategy for the $(p,p)$-crossing game on $S_{m \times n}$ for all $m\leqslant n +1$;
(ii) Breaker has a winning strategy for the $(p,p)$-crossing game on $S_{m \times n}$ for all $m \geqslant (p+1)(n+1)$.
Part (i) is immediate by strategy stealing: it is enough to show that Maker can win on the self-dual board $S_{(n+1) \times n}$ (playing on a narrower board can only help Maker). Suppose for contradiction that Breaker, playing second, had a winning strategy. Then Maker can play $p$ arbitrary moves on her first turn, and from then on pretend to be Breaker playing on $S_{(n+1) \times n}^{*}$, using the putative winning strategy to respond to the actual Breaker’s moves (and making arbitrary moves if ever asked to claim an edge she has already claimed). Maker’s initial moves can never hurt her, and thus this is a winning strategy — contradicting our assumption that Breaker has a winning strategy, since we know this game can never end in a draw. Thus Maker must have a winning strategy.
For part (ii), it is enough to show that Breaker can win on the board $S_{(p+1)(n+1) \times n}$ (playing on a wider board can only help Breaker). We divide up this board into $p+1$ copies of $S_{(n+1) \times n}$ (plus some extra edges which we ignore). On her first move, Maker must fail to claim an edge in at least one of these copies. Thereafter Breaker plays entirely in this copy. Since $S_{(n+1) \times n}$ is self-dual and Breaker is playing first in the $(p,p)$-crossing game on this copy, he has a winning strategy. (Formally, this is not quite the $(p,p)$-crossing game: by playing on other boards, Maker could play fewer than her $p$ moves in our chosen copy of $S_{(n+1) \times n}$ in any given turn — but this can never help her.)
\[proposition: (p, p+5r)-game\] Breaker has a winning strategy for the $(p, p+5r)$-crossing game on $S_{m\times n}$ for all $n > r$ and $m \geqslant \lceil \frac{p}{r}\rceil (n+1)$.
As before, it is enough to show that Breaker can win on the board $S_{m\times n}$ with $m=\lceil \frac{p}{r}\rceil (n+1)$ (playing on a wider board can only help Breaker). Divide the board into $\lceil \frac{p}{r}\rceil$ copies of $S_{(n+1)\times n}$. By our bounds on $n$ and $m$, Maker cannot have won on her first turn (since $m>p +1$). Also by the pigeonhole principle, there is one such copy on which Maker has played at most $r$ moves on her first turn. For the remainder of the game, Breaker shall solely focus his efforts on this board, and so we may view Breaker as playing first on an $(n+1)\times n$ board where $r$ edges have been pre-emptively claimed by Maker.
Breaker shall only use his extra power of $5r$ in his first turn, to ‘neutralise’ Maker’s edges by ensuring they can never be part of a left-right crossing path, and otherwise shall follow his winning strategy for the $(p,p)$-crossing game on an $(n+1)\times n$ board when he plays first — a strategy which exists by Proposition \[proposition: (p,p)-game\] and the self-duality of $S_{(n+1)\times n}$. (For completeness: other than on his first move, he plays his extra $5r$ edges arbitrarily, and also plays arbitrarily if ever the strategy requests a previously claimed edge.) Provided his first-turn ‘neutralisation’ works, Breaker will clearly win the game.
Lemma \[lemma: isoperimetric lemma\] established that a connected subgraph of $\mathbb{Z}^2$ with $k\geqslant 1$ edges has a dual boundary cycle of size at most $2k+4$. Further, observe that if we claim all but one of the edges in the dual boundary cycle to one of Maker’s connected components $C$, then no left-right crossing path Maker makes can go through $C$, and it makes no difference to the outcome of the game if all of the edges inside $C$ had been claimed by Breaker instead. Thus to neutralise Maker’s (at most) $r$ initial edges in Breaker’s chosen subboard, Breaker claims all but one dual edge from the boundary cycles of each of the corresponding connected components. If these components contain $k_1, \ldots, k_s$ edges respectively, then by our bound from Lemma \[lemma: isoperimetric lemma\] this requires a total of at most $\sum_{i=1}^{s}(2k_i+3) \leqslant 2r + 3s \leqslant 5r$ edges (since $\sum_i k_i \leqslant r$ and $s \leqslant r$), which is exactly the extra power Breaker has.
Clearly, the bounds on $m$ and $n$ in Propositions \[proposition: (p,p)-game\] and \[proposition: (p, p+5r)-game\] are quite unsatisfactory, and we do not believe for a moment that they are tight. See Section \[section: concluding remarks\] for a number of questions and conjectures pertaining to this.
The $(1,1)$-crossing game: Bridg-it and the Shannon switching game {#subsection: (1,1)-crossing}
------------------------------------------------------------------
The $(1,1)$-crossing game played on $S_{m \times n}$ is also known as *Bridg-it* (sometimes referred to as Bridge-it), and was first invented by David Gale. Traditionally Bridg-it is played on a self-dual grid, usually $S_{6 \times 5}$ or $S_{7 \times 6}$; here, however, we relax the definition to allow play on any grid size. Bridg-it bears some similarities to the celebrated game of *Hex*, which is another positional crossing game played on the faces of a hexagonal lattice (see [@HaywardRijswijck06] for a formal definition of *Hex*), but Bridg-it is much simpler and better understood.
By Proposition \[proposition: (p,p)-game\], we know that in Bridg-it there is always a winning strategy (via strategy-stealing) for the first player, $\mathcal{H}$, when $m \leqslant n+1$. When $m > n+1$ the vertical player $\mathcal{V}$ has a winning strategy which involves mirroring $\mathcal{H}$’s moves through an appropriate reflection of the grid. These two strategies (strategy-stealing and reflection strategy) have counterparts in *Hex* (see e.g. [@HaywardRijswijck06]). However the strategy-stealing argument does not provide an explicit winning strategy for $\mathcal{H}$, but merely proves its existence, and constructing such a strategy for $(n+1)\times n$ $\mathrm{Hex}$-boards is an extremely hard computational problem even for small $n$.
By contrast, there are several different *explicit* strategies that $\mathcal{H}$ can use to win in Bridg-it whenever $m \leqslant n+1$. The first of these to be discovered was a simple but elegant edge-pairing strategy due to Gross in 1961, see [@Beck08 p. 66] for a description. A different strategy can be read out of a winning strategy due to Lehman [@Lehman64] for a different combinatorial game, known as the *Shannon switching game*. In addition to the crossing games studied in this paper, ideas related to Lehman’s winning strategy for the Shannon switching game play an important role in our study of Maker-Breaker percolation games in our companion paper [@DayFalgasRavry18b]. For these reasons and for completeness, we describe the Shannon switching game and its application to Bridg-it in detail below.
That strategies for the Shannon switching game may be used to construct winning strategies for $\mathcal{H}$ in Bridg-it is a well-known folklore result, which has been recorded in a number of places, see e.g. [@Beck08 p. 67]. We present the argument below and offer the modest improvement that, on $S_{m \times n}$ with $ m \leqslant n+1$, Lehman’s strategy allows $\mathcal{H}$ the freedom of picking any edge of the board on her first move and still win the entire game. (As far as we are aware, this observation has not appeared in the literature before.)
### The Shannon switching game {#subsection: Shannon switching}
The *Shannon switching game* is a positional game invented by Claude Shannon. The game is played on a triple $(G,a,b)$, where $G$ is a multigraph and $a$ and $b$ are two distinguished vertices of $G$. At the start of the game every edge is classified as *unsafe*. Two players, *Cut* and *Join*, play in alternating turns in which they claim unsafe edges. Cut plays first, and on each of his turns picks an unsafe edge and deletes it from $G$. Join plays second, and on each of her turns picks an unsafe edge and marks it as *safe*. The game ends when there are no unsafe edges left. Join wins if, at the end of the game, there exists a path of safe edges from $a$ to $b$; otherwise Cut wins. (Thus in our games Cut and Join correspond to Breaker and Maker respectively.)
The Shannon switching game was solved by Lehman [@Lehman64], who, for each graph, determined which of the players has a winning strategy and in addition gave an explicit description of a winning strategy in each case.
A multigraph is *$k$-positive* if it contains $k$ pairwise disjoint connected spanning subgraphs.
Lehman showed that there is a winning strategy for Join in the Shannon switching game played on $(G,a,b)$ if and only if $G$ has a $2$-positive subgraph that contains both $a$ and $b$. (In fact Lehman achieved his result by generalising the Shannon switching game to a game played on matroids and solving it in that more general setting, but we will not be concerned with matroids in this paper.)
It is the *if* direction of this statement that we will need and so we reproduce its (simple) proof here. For the interested reader, a relatively short and simple proof of the *only if* direction of the statement (in the language of graphs rather than matroids) was given by Mansfield in [@Mansfield01].
\[proposition: Join wins on 2-positive\] Suppose $a,b$ are vertices in a multigraph $G$ such that there exists a $2$-positive subgraph of $G$ containing both $a$ and $b$. Then Join has a winning strategy for the Shannon switching game played on $(G,a,b)$.
Suppose $G$ has a $2$-positive subgraph that contains both $a$ and $b$. We may pass to this subgraph and assume that $G$ is itself $2$-positive. Let $G_{1}$ and $G_{2}$ be two edge-disjoint connected spanning subgraphs of $G$. For each $t \geqslant 0$, let $C^{t}$ be the set of the first $t$ edges that Cut deletes from $G$, and let $S^{t}$ be the set of the first $t$ edges of $G$ that Join marks as safe. Moreover, for each $i = 1,2$ let $$G_{i}^{t} = \big(G_{i} \setminus C^{t} \big)\cup S^{t}. \nonumber$$ Join’s strategy will be to ensure that, for all $t \geqslant 0$, the graphs $G_{1}^{t}$ and $G_{2}^{t}$ are both connected spanning subgraphs of $G$. We use induction on $t$ to show she can achieve this; it is clear that $G_{1}^{t}$ and $G_{2}^{t}$ are both connected spanning subgraphs of $G$ when $t = 0$. Suppose that $G_{1}^{t-1}$ and $G_{2}^{t-1}$ are both connected spanning subgraphs of $G$. Without loss of generality, we may assume that the next edge that Cut deletes is an edge of $G_{1}$, say the edge $e=\{x,y\}$. If $G_{1}^{t-1} \setminus \{e\}$ is still connected, then Join may play her next move arbitrarily. If $G_{1}^{t-1} \setminus \{e\}$ is not connected, then it consists of exactly two components, one containing $x$ and the other containing $y$. As $G_{2}^{t-1}$ is spanning and connected it contains a path from $x$ to $y$, and since $x$ and $y$ lie in different components of $G_{1}^{t-1} \setminus \{e\}$, there must exist an edge $f$ of this path that lies between these two components. As $f$ lies between the two components, $f \notin S^{t-1} \cup C^{t-1}$. On her move, Join marks the edge $f$ as safe and adds it to $S^{t-1}$ to form $S^t$. This ensures $G_{1}^{t}$ is once again a connected spanning subgraph of $G$, as required. Furthermore $G_{2}^{t}$ contains $G_{2}^{t-1}$ as a subgraph, and so remains a connected spanning subgraph of $G$. This proves our inductive statement.
When the game ends, say after Join has marked $r$ edges as safe, every edge of $G$ is either safe or has been deleted by Cut, and so $G_{1}^{r} = G_{2}^{r} = S^r$, which forms a spanning connected subgraph of $G$. In particular, there is a path of safe edges from $a$ to $b$.
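Join's repair rule is easy to simulate. The sketch below is ours (an illustration of the proof, not Lehman's presentation): Cut plays uniformly at random over the unsafe edges, which is of course weaker than optimal play, and the two edge-disjoint connected spanning subgraphs are supplied as spanning trees of $K_4$.

```python
# Simulation of Join's strategy in the Shannon switching game on a
# 2-positive graph given by two edge-disjoint spanning trees T1, T2.
import random

def canon(u, v):
    return (u, v) if u <= v else (v, u)

def reach(vertices, edges, src):
    """Vertices reachable from src using the given edge set."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def bfs_path(vertices, edges, x, y):
    """Edges of some x-y path; assumes x and y are connected."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    prev, queue = {x: None}, [x]
    while queue:
        u = queue.pop(0)
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                queue.append(w)
    path = []
    while prev[y] is not None:
        path.append(canon(prev[y], y))
        y = prev[y]
    return path

def join_wins(vertices, T1, T2, a, b, rng):
    g = [{canon(*e) for e in T1}, {canon(*e) for e in T2}]
    unsafe = g[0] | g[1]
    safe = set()
    while unsafe:
        e = rng.choice(sorted(unsafe))       # Cut deletes an unsafe edge
        unsafe.remove(e)
        i = 0 if e in g[0] else 1
        g[i].remove(e)
        comp = reach(vertices, g[i], e[0])
        if e[1] not in comp:
            # g[i] fell into two components: repair with an edge of the
            # other subgraph's path between the endpoints of e.
            f = next(f for f in bfs_path(vertices, g[1 - i], e[0], e[1])
                     if (f[0] in comp) != (f[1] in comp))
        elif unsafe:
            f = next(iter(unsafe))           # any unsafe edge will do
        else:
            break
        safe.add(f)
        unsafe.discard(f)
        g[0].add(f)
        g[1].add(f)                          # safe edges belong to both G_i^t
    return b in reach(vertices, safe, a)
```

The invariant maintained is exactly that of the proof: after every round, both subgraphs remain connected and spanning, and at the end the safe edges alone form such a subgraph.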
It is easy to extend Join’s winning strategy from Proposition \[proposition: Join wins on 2-positive\] to the $(k,k)$-Shannon switching games where each player is allowed to claim $k$ edges on each of their turns. We leave the proof as an exercise for the reader.
\[proposition: Join wins (k,k)-game on k-positive\] Suppose $a,b$ are vertices in a multigraph $G$ such that there exists a $(k+1)$-positive subgraph of $G$ containing both $a$ and $b$. Then Join has a winning strategy for the $(k,k)$-Shannon switching game played on $(G,a,b)$.
### Winning strategy for Maker in Bridg-it
\[theorem: Maker winds Bridg-it with any first move\] Maker has a winning strategy for the $(1,1)$-crossing game on $S_{(n+1)\times n}$ (i.e. the game of Bridg-it) that allows her to choose any edge she wants on her first move.
We begin by $2$-colouring the edges of $S_{(n+1) \times n}$. All horizontal edges (i.e. all edges of the form $(x+0.5, y)$) are assigned the colour green, while all vertical edges (i.e. all edges of the form $(x, y+0.5)$) are coloured orange. The horizontal player $\mathcal{H}$ (Maker) then picks an arbitrary edge $e$ as her first edge and colours it blue. Based on the choice of $e$, we define a set $A$ of green edges which $\mathcal{H}$ shall recolour and use in her strategy.
If $e$ is a green edge, we let $A$ be any set of $n-1$ green edges such that no two edges in $A\cup\{e\}$ have the same $x$-coordinate, and no two edges in $A\cup\{e\}$ have the same $y$-coordinate. If instead $e$ was an orange edge, say $e = (x,y+0.5)$, then let $f_{1} = (x+0.5,y)$ and $f_{2} = (x-0.5,y+1)$. Let $A'$ be any set of $n-2$ green edges such that no two edges in $A' \cup \{f_{1},f_{2}\}$ have the same $x$ coordinate, and no two edges in $A' \cup \{f_{1},f_{2}\}$ have the same $y$ coordinate. Let $A = A' \cup \{f_{1},f_{2}\}$.
In either case, we recolour all the edges in $A$ with the colour orange. Let $G$ be the graph formed from $S_{(n+1) \times n}$ by contracting all vertices $(1,y)$ into a single vertex $a$, and contracting all vertices of the form $(n+1,y)$ into a single vertex $b$. There is a one-to-one correspondence between the edges of $S_{(n+1) \times n}$ and $G$, so we may consider the colouring that $G$ inherits from $S_{(n+1) \times n}$. Let $e'$ be the edge in $G$ that corresponds to the edge $e$ in $S_{(n+1) \times n}$, i.e. the unique blue edge in the graph. Let $G_{1}$ be the subgraph of $G$ whose edge set consists of the set of green edges together with the unique blue edge $e'$. Similarly, let $G_{2}$ be the subgraph of orange edges together with the blue edge $e'$. See Figure \[Fig2.1\] for an example of these graphs when the first edge that $\mathcal{H}$ chose was an orange edge.
![An example of the initial colouring $\mathcal{H}$ uses for her winning strategy. In the first graph, the blue edge is the first edge that $\mathcal{H}$ plays and the dark green edges form an example of a suitable set $A$ for the recolouring. The second and third diagrams show the graphs $G_{1}$ and $G_{2}$ respectively, which are the graphs that arise from the colouring inherited by $G$.[]{data-label="Fig2.1"}](1,1-Strategy)
It is easy to see that $G_{1}$ and $G_{2}$ are both connected spanning subgraphs of $G$ containing $a$ and $b$. Moreover, the only edge these two graphs share in common is the edge $e'$ that Maker has claimed on her first turn. Thus, if we consider this edge as ‘safe’, then we know by Proposition \[proposition: Join wins on 2-positive\] that Join has an explicit winning strategy on this graph when playing the Shannon switching game, where the two distinguished vertices $a$ and $b$ are the left- and right-most vertices. Thus $\mathcal{H}$’s strategy in Bridg-it is simply to lift Join’s strategy from the Shannon switching game on $G$ to the $(1,1)$-crossing game on $S_{(n+1) \times n}$. At the end of the Shannon switching game on $G$ we know that Join has constructed a path of safe edges from $a$ to $b$. When lifted back to $S_{(n+1) \times n}$ this path is a left-right crossing path of $S_{(n+1) \times n}$, as required.
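As a sanity check (our own sketch, not part of the proof), the following Python snippet carries out Maker's recolouring on a small board with $n = 4$ and a hypothetical orange first edge $e$ with midpoint $(2, 1.5)$; the particular set $A$ below is one valid choice for this $e$. It verifies that the resulting graphs $G_{1}$ and $G_{2}$ are connected spanning subgraphs of the contracted graph $G$ whose only common edge is $e'$.

```python
n = 4
# vertices of S_{(n+1) x n}
verts = [(x, y) for x in range(1, n + 2) for y in range(1, n + 1)]
# horizontal (green) edges, midpoints (x+0.5, y)
green = {((x, y), (x + 1, y)) for x in range(1, n + 1) for y in range(1, n + 1)}
# vertical (orange) edges, midpoints (x, y+0.5)
orange = {((x, y), (x, y + 1)) for x in range(1, n + 2) for y in range(1, n)}

e = ((2, 1), (2, 2))              # orange first edge, midpoint (2, 1.5)
f1 = ((2, 1), (3, 1))             # green edge with midpoint (2.5, 1)
f2 = ((1, 2), (2, 2))             # green edge with midpoint (1.5, 2)
# A = A' ∪ {f1, f2}: distinct x-coordinates and distinct y-coordinates
A = {f1, f2, ((3, 3), (4, 3)), ((4, 4), (5, 4))}
green, orange = green - A, orange | A      # recolour the edges of A orange

def contract(v):
    # identify the left column with 'a' and the right column with 'b'
    return 'a' if v[0] == 1 else 'b' if v[0] == n + 1 else v

def connected_spanning(edges):
    # union-find over the contracted vertex set
    parent = {contract(v): contract(v) for v in verts}
    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u
    for u, v in edges:
        parent[find(contract(u))] = find(contract(v))
    return len({find(u) for u in parent}) == 1

G1, G2 = green | {e}, orange | {e}
assert connected_spanning(G1) and connected_spanning(G2)
assert G1 & G2 == {e}
```

The same check can be rerun with any other admissible choice of $e$ and $A$.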
The $(2q-1,q)$-crossing game: Breaker wins on sufficiently long boards {#section: (2q-1,q)-game}
======================================================================
In this section we prove Theorem \[theorem: (2q-1,q)-game\], which states that if $m$ is sufficiently large (with respect to $q$ and $n$), then Breaker, also referred to as the vertical player $\mathcal{V}$, has a winning strategy for the $(2q-1,q)$-crossing game on $S_{m \times n}$.
Let $T$ be the number of edges in $S_{(n+1) \times n}$, that is $T = n^{2} + (n-1)^2$. Let $$m_{0} = m_{0}(q,n) = (n+1)\big((6q-2)^{T} + 2q-1\big). \nonumber$$ We split the board $S_{m_{0} \times n}$ into $(6q-2)^{T} + 2q-1$ disjoint copies of $S_{(n+1) \times n}$, which we call strips. At any point during the game, we say a strip is $k$*-valid* if it contains exactly $k$ red edges and is in a winning position for $\mathcal{V}$ in the $(1,1)$-crossing game on $S_{(n+1) \times n}$ when $\mathcal{V}$ plays second. We say a strip is $k$*-neutral* if it contains exactly $k$ red edges and is in a winning position for $\mathcal{V}$ in the $(1,1)$-crossing game when $\mathcal{V}$ gets to play first. If a strip is neither $k$-valid nor $k$-neutral for any integer $k$ we say that it is *invalid*. Note that if a strip is $k$-valid, then it is also $k$-neutral.
We know by Proposition \[proposition: (p,p)-game\] that every strip is $0$-neutral at the start of the game. The game begins with $\mathcal{H}$ playing edges in up to $2q-1$ different strips, possibly making them invalid in the process. At this point, the vertical player $\mathcal{V}$’s strategy will proceed in $T+1$ phases, with phase $0$ starting after $\mathcal{H}$’s initial turn. For each $k \in\{ 0,1,\ldots,T\}$, $\mathcal{V}$’s strategy will ensure that at the beginning of phase $k$ (i) it is $\mathcal{V}$’s turn to play, and (ii) there are at least $(6q-2)^{T - k}$ $k$-neutral strips. Note that this implies that at the start of phase $T$ there will be at least one $T$-neutral strip, which by definition must contain a path of red dual edges from the top of the strip to the bottom of the strip, and thus $\mathcal{V}$ wins the game.
Clearly (i) and (ii) both hold for $k = 0$. Let us now show that if (i) and (ii) both hold at the beginning of phase $k$, then $\mathcal{V}$ can ensure they both hold at the beginning of phase $k+1$ too. On each turn in phase $k$, the vertical player $\mathcal{V}$ will choose $q$ different $k$-neutral strips and play a single edge in each that turns these $k$-neutral strips into $(k+1)$-valid strips. The horizontal player $\mathcal{H}$ can now distribute her $2q -1$ edges among all of the strips. Each edge that $\mathcal{H}$ plays can either turn a $(k+1)$-valid strip into a $(k+1)$-neutral strip, or turn a $k$-neutral or $(k+1)$-neutral strip into an invalid one (or can be played in another kind of strip, in which case we ignore it).
For each $t \in \mathbb{Z}_{\geqslant 0}$, let $A_{t}$ be the number of $(k+1)$-valid strips after a combined total of $t$ edges have been claimed by the two players in phase $k$ of the game (where for convenience we imagine the two players play the edges on their turn in some arbitrary order). Similarly, let $B_{t}$ be the number of $(k+1)$-neutral strips after a combined total of $t$ edges have been played by the two players in phase $k$. Let $R_{t} = 2A_{t} + B_{t}$.
How does $R_t$ vary with $t$? If the next edge to be claimed is one of $\mathcal{V}$’s, then $R_{t+1} = R_{t} + 2$. On the other hand, if the next edge to be claimed is one of $\mathcal{H}$’s, then $R_{t+1} \geqslant R_{t} -1$. As the two players claim a combined total of $3q-1$ edges on each turn of the game, we have that $R_{r(3q-1)} \geqslant r$ for all $r \in \mathbb{Z}_{\geqslant 0}$, until either phase $k$ ends or $\mathcal{V}$ runs out of $k$-neutral strips.
Now $\mathcal{V}$ decides that phase $k$ ends (and phase $k+1$ begins) when $R_{r(3q-1)} \geqslant 2(6q-2)^{T-k-1}$ for some $r \in \mathbb{Z}_{\geqslant 0}$. Note that after $\mathcal{H}$ and $\mathcal{V}$ have both completed their turns, the number of $k$-neutral strips has decreased by at most $3q-1$. Thus, as the number of $k$-neutral strips at the start of phase $k$ is at least $(6q-2)^{T-k}$, we know that the number of $k$-neutral strips for $\mathcal{V}$ to play in will not run out before $R_{r(3q-1)} \geqslant 2(6q-2)^{T-k-1}$. As $R_{r(3q-1)} \geqslant 2(6q-2)^{T-k-1}$ we have that the number of $(k+1)$-neutral strips at the start of phase $k+1$ is at least $(6q-2)^{T-k-1}$ and further that it is $\mathcal{V}$’s turn to play, so that (i) and (ii) both hold as required.
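As an illustration (our own sketch, with illustrative parameter values rather than anything taken from the text), the potential argument above can be checked numerically: each of $\mathcal{V}$'s $q$ edges raises $R$ by $2$ and, in the worst case, each of $\mathcal{H}$'s $2q-1$ edges lowers it by $1$, so one full round changes $R$ by at least $2q - (2q-1) = 1$, giving $R_{r(3q-1)} \geqslant r$. The sketch also evaluates the board length $m_{0}$ directly from its definition.

```python
def m0(q, n):
    # m0(q, n) = (n+1) * ((6q-2)^T + 2q-1), with T = n^2 + (n-1)^2 as above
    T = n * n + (n - 1) ** 2
    return (n + 1) * ((6 * q - 2) ** T + 2 * q - 1)

def potential_after(q, rounds):
    # worst-case value of R_t = 2*A_t + B_t after the given number of rounds
    R = 0
    for _ in range(rounds):
        R += 2 * q        # V converts q k-neutral strips into (k+1)-valid ones
        R -= 2 * q - 1    # worst case: each of H's 2q-1 edges lowers R by 1
    return R

# the invariant R_{r(3q-1)} >= r holds for every q and every round r
for q in range(1, 6):
    assert all(potential_after(q, r) >= r for r in range(1, 50))
```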
The $(2q, q)$-crossing game: Maker wins on arbitrarily long and narrowest possible boards {#section: (2q,q)-game}
=========================================================================================
In this section we prove Theorem \[theorem: (2q,q)-game\], which states that if $n \geqslant q+1$, then Maker, also referred to as the horizontal player $\mathcal{H}$, has a winning strategy for the $(2q,q)$-crossing game on $S_{m \times n}$, for any $m\in \mathbb{Z}_{\geqslant 0}$. Note that the condition $n \geqslant q+1$ is necessary as if $n \leqslant q$, then $\mathcal{V}$ could win the game in a single turn. We in fact prove a more general result, showing $\mathcal{H}$ can win the *$q$-double-response game* (defined below) — this will not complicate the argument, and the greater generality will allow us to apply these results to the study of percolation games in the sequel to this paper [@DayFalgasRavry18b]. A key idea in our proof will be to consider a third game, the *secure game*, where $\mathcal{V}$ plays one edge at a time but is given the extra power of reclaiming some of $\mathcal{H}$’s edges. This will allow us to treat a $(2q,q)$ game like a $(2,1)$ game, which is much more amenable to analysis, and we shall show that even with $\mathcal{V}$’s extra powers, $\mathcal{H}$ still has a winning strategy.
Let $S_{\infty \times n}$ be the infinite subgraph of $\Lambda$ induced by the vertex set $\{(x,y):x \in \mathbb{Z}, y \in [n]\}$. The $q$*-double-response-game* is a game played by two players, a horizontal player $\mathcal{H}$ and a vertical player $\mathcal{V}$, on the edges of $S_{\infty \times n}$. The game begins with $\mathcal{V}$ playing first. On each turn $t$, $\mathcal{V}$ picks an integer $r_{t} \in [q]$ and then claims $r_{t}$ as-yet unclaimed edges in $S_{\infty \times n}$ for himself; then $\mathcal{H}$ answers by claiming $2r_t$ as-yet unclaimed edges in response to $\mathcal{V}$’s move. In this game, $\mathcal{V}$’s aim is to claim a set of edges corresponding to a top-bottom crossing path of dual edges, and we say $\mathcal{V}$ wins if he is able to do so. The horizontal player $\mathcal{H}$’s aim is to prevent this from ever happening, and we say $\mathcal{H}$ wins the game if she is able to do so. We remark that throughout this section we will always view the game through the lens of duality, so that $\mathcal{V}$ always claims dual edges.
We will show that if $n \geqslant q+1$, then $\mathcal{H}$ has a winning strategy for the $q$-double-response-game. Clearly this implies $\mathcal{H}$ has a winning strategy in the $(2q,q)$-crossing game on $S_{m \times n}$, playing as Maker (and even surrendering her first move). Thus Theorem \[theorem: (2q,q)-game\] is immediate from the following.
\[theorem: double-response\] If $n \geqslant q+1$, then $\mathcal{H}$ can win the $q$-double-response-game on $S_{\infty \times n}$.
Before we prove Theorem \[theorem: double-response\], let us sketch the main ideas behind the proof and give some preliminary definitions. We define an *arch* to be a path of edges that starts and ends at a bottommost vertex or starts and ends at a topmost vertex in $S_{\infty \times n}$. Similarly, we define a *dual arch* to be a path of dual edges that starts and ends at a bottommost dual vertex or starts and ends at a topmost dual vertex in $S_{\infty \times n}^{*}$.
We may assume that $\mathcal{V}$ never claims a dual edge that would create either a cycle of red dual edges or a dual arch of red dual edges. Indeed, if $\mathcal{V}$ plays such a dual edge $e$, then at any stage later on in the game, if there exists a path $P$ of red dual edges from the top of the grid to the bottom of the grid, then there still exists such a path if we remove $e$. In particular, the result of the game cannot depend on the identity of the player who claimed $e$ (or equivalently its dual $e^*$). Therefore if such a dual edge $e$ was played, we can ignore it and pretend $\mathcal{V}$ has claimed some other edge.
A key ingredient in the proof will be Lemma \[lemma: isoperimetric lemma\], which states that if $A$ is a set of $k$ edges in $\mathbb{Z}^{2}$ that form a connected component $C$, then the dual boundary cycle to $C$ contains at most $2k+4$ dual edges. While we do not use Lemma \[lemma: isoperimetric lemma\] directly, it is the “explanation” for why our proof works, and it will be helpful for the reader to keep it in mind throughout this section.
Suppose that $\mathcal{H}$ was able to ensure that, at the end of each of her turns, she has claimed every edge of every boundary cycle of every component created by $\mathcal{V}$’s red dual edges. If so, then as any top-bottom dual crossing path needs at least $n\geqslant q+1$ dual edges, $\mathcal{V}$ would be unable to win on any turn, and so $\mathcal{H}$ clearly wins the game. Unfortunately $\mathcal{H}$ cannot always claim all the edges of every boundary cycle. For example, if $\mathcal{V}$ plays $q$ pairwise disjoint and sufficiently spaced-out dual edges, then $\mathcal{H}$ would need $6q$ edges to claim all the edges in each of the boundary cycles of the $q$ components formed by these dual edges. However what $\mathcal{H}$ can hope for, given Lemma \[lemma: isoperimetric lemma\], is to claim all but at most $4$ edges in every boundary cycle of every component of red dual edges. Our strategy will show that $\mathcal{H}$ can indeed do this, and can do it in such a way that $\mathcal{V}$ will never be able to create a top-bottom dual crossing path, even by connecting up components created over many different turns.
To make this precise, we need some definitions.
For the rest of this section, whenever we refer to a *component* we mean, at a given stage in the game, a maximal set of two or more dual vertices connected by some path of red dual edges. We say a component is a *top component* if it contains at least one dual vertex from the top of the grid, while we say a component is a *bottom component* if it contains at least one dual vertex from the bottom of the grid. If a component is neither a top nor a bottom component, then we say that it is a *floating component*.
Given a closed cycle $C$ of blue edges, we define the *interior* of $C$ to be the set of dual vertices $v$ such that every dual path from $v$ to a top- or bottommost dual vertex must contain the dual of an edge in $C$. If $A$ is an arch that starts and ends at a bottommost vertex, then we define the *interior* of $A$ to be the set of dual vertices $v$ such that every dual path from $v$ to a topmost vertex contains the dual of an edge in $A$. Similarly, if $A$ is an arch that starts and ends at a topmost vertex, then we define the *interior* of $A$ to be the set of dual vertices $v$ such that every dual path from $v$ to a bottommost vertex contains the dual of some edge in $A$.
We now come to the key definition of *brackets*. Underpinning our strategy for $\mathcal{H}$ is the fact that she can ensure the $4$ edges in a component’s boundary cycle she is unable to claim have a nice form, namely that of one of the following brackets. See Figure \[Fig1\] for a picture of these different bracket types, together with their corners and interior dual vertices, as defined below.
We say the edges $$\{(x+0.5,y),(x+1.5,y),(x+2,y+0.5),(x+2,y+1.5)\} \nonumber$$ form a *bracket* of *Type $1$* if none of them are red. We call the vertices $(x,y)$ and $(x+2,y+2)$ the *corners* of the bracket, and we call the dual vertices $(x+0.5,y+0.5),(x+1.5,y+0.5)$ and $(x+1.5,y+1.5)$ the *interior* dual vertices of the bracket.

We say the edges $$\{(x+0.5,y),(x+1,y+0.5),(x+1.5,y+1),(x+2,y+1.5)\} \nonumber$$ form a *bracket* of *Type $2$* if none of them are red. We call the vertices $(x,y)$ and $(x+2,y+2)$ the *corners* of the bracket, and we call the dual vertices $(x+0.5,y+0.5)$ and $(x+1.5,y+1.5)$ the *interior* dual vertices of the bracket.

We say the edges $$\{(x,y - 0.5),(x+0.5,y-1),(x+1,y-0.5),(x+1,y+0.5)\} \nonumber$$ form a *bracket* of *Type $3^{+}$* if none of them are red. We call the vertices $(x,y)$ and $(x+1,y+1)$ the *corners* of the bracket, and we call the dual vertices $(x+0.5,y-0.5)$ and $(x+0.5,y+0.5)$ the *interior* dual vertices of the bracket.

Finally, we say the edges $$\{(x+0.5,y),(x+1.5,y),(x+2,y+0.5),(x+1.5,y+1)\} \nonumber$$ form a *bracket* of *Type $3^{-}$* if none of them are red. We call the vertices $(x,y)$ and $(x+1,y+1)$ the *corners* of the bracket, and we call the dual vertices $(x+0.5,y+0.5)$ and $(x+1.5,y+0.5)$ the *interior* dual vertices of the bracket.
![The four different bracket types, represented by the black edges in each picture. For each bracket the circled blue vertices are its corners, while the circled red dual vertices are its interior dual vertices.[]{data-label="Fig1"}](Floating-and-Lower-Brackets)
\[remark: bracket symmetry\] Any bracket of Type $1$ or Type $2$ is preserved under the reflection that switches its two corners. Moreover, if you reflect a bracket of Type $3^{+}$ through the reflection that switches its two corner vertices, then you end up with a bracket of Type $3^{-}$, and vice-versa. More generally the set of brackets is closed under reflections through lines parallel to $x+y=0$.
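The closure claim in the remark can be verified mechanically. The following sketch (ours, not part of the text) lists the four bracket types anchored, without loss of generality, at $(x,y) = (0,0)$, each as a set of edge midpoints, and applies the reflection $(u,v) \mapsto (s-v,\, s-u)$ through the line parallel to $x+y=0$ that swaps the two corners; here $s$ is the common coordinate sum of the corners.

```python
# the four bracket types at anchor (x, y) = (0, 0), as sets of edge midpoints
type1 = {(0.5, 0), (1.5, 0), (2, 0.5), (2, 1.5)}       # corners (0,0), (2,2)
type2 = {(0.5, 0), (1, 0.5), (1.5, 1), (2, 1.5)}       # corners (0,0), (2,2)
type3p = {(0, -0.5), (0.5, -1), (1, -0.5), (1, 0.5)}   # corners (0,0), (1,1)
type3m = {(0.5, 0), (1.5, 0), (2, 0.5), (1.5, 1)}      # corners (0,0), (1,1)

def reflect(edges, s):
    # reflection (u, v) -> (s - v, s - u); s = coordinate sum of either corner
    return {(s - v, s - u) for (u, v) in edges}

assert reflect(type1, 2) == type1     # Type 1 is preserved
assert reflect(type2, 2) == type2     # Type 2 is preserved
assert reflect(type3p, 1) == type3m   # Type 3+ maps to Type 3-
assert reflect(type3m, 1) == type3p   # and vice-versa
```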
We will make use of the above remark to reduce the amount of (tedious but necessary) case-checking required in the proof of Theorem \[theorem: double-response\].
At any stage of the game, we say a floating component $C$ is *secure* if
(i) there exists a bracket $B$ (of any type) and a path $P$ of blue edges such that the corners of $B$ are the end points of the path $P$;
(ii) the interior dual vertices of the bracket $B$ are in $C$;
(iii) $C$ is in the interior of the cycle formed by $P\cup B$;
(iv) for every edge $f \in P$, at least one of the dual vertices of the dual edge $f^{*}$ is in $C$.
If $C$ is a bottom component, then (as we may assume $\mathcal{V}$ never plays a dual edge that creates a dual arch of red dual edges) $C$ contains a unique bottommost dual vertex $v = (x+0.5,0.5)$. We say that $C$ is *secure* if there exists a non-red edge $e = (x',1.5)$ for some $x' \in \mathbb{Z}$ with $x' \geqslant x+1$, and a path $P$ of blue edges from the vertex $(x',2)$ to the vertex $(x,1)$, such that
(i) if $x' > x+1$, then $e$ is in fact a blue edge;
(ii) $C$ is contained in the interior of $P \cup \{e\}$;
(iii) for every edge $f \in P$, at least one of the dual vertices of the dual edge $f^{*}$ is in $C$.
We say that the edge $e$ is the bottom component $C$’s *gate*, and if this edge $e$ is blue, then we say that $C$ is *extra secure*.
If $C$ is a top component, then (as we may assume that $\mathcal{V}$ never plays a dual edge that creates a dual arch of red dual edges) $C$ contains a unique topmost dual vertex $v = (x+0.5,n+0.5)$. We say that $C$ is *secure* if there exists a non-red edge $e = (x',n-0.5)$ for some $x' \in \mathbb{Z}$ with $x' \geqslant x+1$, and a path $P$ of blue edges from the vertex $(x',n-1)$ to the vertex $(x,n)$, such that
(i) if $x' > x+1$, then $e$ is in fact a blue edge;
(ii) $C$ is contained in the interior of $P \cup \{e\}$;
(iii) for every edge $f \in P$, at least one of the dual vertices of the dual edge $f^{*}$ is in $C$.
We say that the edge $e$ is the top component $C$’s *gate*, and if this edge $e$ is blue, then we say that $C$ is *extra secure*.
We say the grid is *secure* at a given stage of the game if every component is secure. Note that if the grid is secure, then no component can simultaneously be a top component and a bottom component, i.e. there is no top-bottom red dual crossing path. See Figure \[Fig2\] for an example of a grid in a secure position.
![An example of a section of the grid $S_{\infty \times 5}$ in a secure position. The black edges are the unclaimed edges that form the brackets or gates of the various components.[]{data-label="Fig2"}](Secure-Position)
\[Lemma1\] Let $n \geqslant q+1$. If the grid is in a secure position at the start of $\mathcal{V}$’s turn in the $q$-double-response-game played on $S_{\infty \times n}$, then $\mathcal{V}$ cannot win in a single turn.
Let us suppose that the grid is in a secure position and that $\mathcal{V}$ claims $l$ dual edges and thereby creates a path of red dual edges $P$ that connects the top of the grid to the bottom of the grid. Let $\{e_{1},\ldots,e_{l}\}$ be the dual edges in $P$ that $\mathcal{V}$ claimed in the order that they appear when one travels along $P$ from the bottom of the grid to the top of the grid. We may assume that none of the dual edges in $\{e_{1},\ldots,e_{l}\}$ has both of its end-points in the same component (before $\mathcal{V}$ takes his turn), as such an edge would be superfluous with respect to creating a top-bottom dual crossing path. We will show that $l \geqslant n$, which in turn proves the lemma as $n \geqslant q+1$.
For each dual edge $e_{i}$, let $v_{i}^{-}$ and $v_{i}^{+}$ be the end dual vertices of $e_{i}$ such that, when travelling along $P$ from the bottom of the grid to the top of the grid, one traverses $v_{i}^{-}$ before $v_{i}^{+}$. For each such dual edge, let $x_{i} \in \mathbb{Z}$ and $y_{i}\in \mathbb{N}$ be such that $v_{i}^{+} = (x_{i}+0.5,y_{i} + 0.5)$. We will show by induction on $i$ that $y_{i} \leqslant i$. The statement is clear when $i = 1$: either $e_{1}$ is a dual edge that meets the bottom of the grid, and so $y_{1} = 1$, or $e_{1}$ meets a bottom component, say $C$. As $C$ is secure, we must have that $e_{1}$ lies across $C$’s gate, say the edge $f = (x,1.5)$. As such we have that $e_{1} = f^{*}$ and so $y_{1} = 1$.
Now suppose that $y_{k} \leqslant k$ and consider the dual edge $e_{k+1}$. If $e_{k}$ and $e_{k+1}$ have a dual vertex in common, then it is clear that $y_{k+1} \leqslant y_{k} + 1$ and so we are done. If $e_{k}$ and $e_{k+1}$ do not share a dual vertex, then there must be some floating component $C$ such that both $e_{k}$ and $e_{k+1}$ are adjacent to a dual vertex in $C$, and the dual edges $e_{k}$ and $e_{k+1}$ each lie across some edge from $C$’s bracket $B$. If $B$ is a bracket of Type $1$ or Type $2$ with corners $(x,y)$ and $(x+2,y+2)$, then we must have that $y_{k} \geqslant y$ and $y_{k+1} \leqslant y + 1$. Similarly, if $B$ is a bracket of Type $3^{+}$ with corner vertices $(x,y)$ and $(x+1,y+1)$, then we must have that $y_{k} \geqslant y-1$ and $y_{k+1} \leqslant y$. Finally, if $B$ is a bracket of Type $3^{-}$ with corner vertices $(x,y)$ and $(x+1,y+1)$, then we must have that $y_{k} \geqslant y$ and $y_{k+1} \leqslant y+1$. In all cases we have shown that $y_{k+1} \leqslant y_{k} + 1$ and so we have proved our inductive claim.
We now show that we have $l \geqslant n$. Suppose for contradiction that $l \leqslant n-1$, and consider the dual edge $e_{l}$. We must have that $e_{l}$ meets either the top of the grid or a top component. As we showed above that $y_{l} \leqslant l \leqslant n-1 $, we can rule out the first of these two possibilities: $e_{l}$ cannot meet the top of the grid. Thus $e_{l}$ meets a top component. Moreover, as this top component is secure, $e_{l}$ is a horizontal dual edge, $y_{l} = n-1$, and $v_{l}^{-}$ lies to the right of $v_{l}^{+}$. Note, we cannot have $v_{l}^{-} = v_{l-1}^{+}$ as this would contradict the fact that $y_{l-1} \leqslant l-1 \leqslant n-2$. Thus the vertex $v_{l}^{-}$ must be part of some floating component $C$, and the dual edge $e_{l}$ must lie across $C$’s bracket $B$. However, the only way this would be possible is if $B$ were a bracket of Type $3^{+}$ with corner vertices $(x_{l}+1, n)$ and $(x_{l}+2,n+1)$. If this were the case, we must have that the dual vertex $(x_{l} + 1.5, n + 0.5)$ is also part of $C$, as it is an interior dual vertex of the bracket $B$. However this would tell us that $C$ is a top component and not a floating component. Therefore no such component $C$ can exist, which gives the desired contradiction.
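The bracket case analysis behind the inductive step ($y_{k+1} \leqslant y_{k} + 1$) can also be checked mechanically. In the sketch below (ours, not part of the proof), each bracket type is anchored at $(x,y) = (0,0)$; the entry height is read off the bracket's interior dual vertices (the possible positions of $v_{k}^{+}$) and the exit height off the exterior endpoints of the duals of the bracket's edges (the possible positions of $v_{k+1}^{+}$), and we verify that the exit height never exceeds the entry height by more than $1$.

```python
# each entry: (bracket edge midpoints, interior dual vertices), anchored at (0, 0)
brackets = {
    "1":  ({(0.5, 0), (1.5, 0), (2, 0.5), (2, 1.5)},
           {(0.5, 0.5), (1.5, 0.5), (1.5, 1.5)}),
    "2":  ({(0.5, 0), (1, 0.5), (1.5, 1), (2, 1.5)},
           {(0.5, 0.5), (1.5, 1.5)}),
    "3+": ({(0, -0.5), (0.5, -1), (1, -0.5), (1, 0.5)},
           {(0.5, -0.5), (0.5, 0.5)}),
    "3-": ({(0.5, 0), (1.5, 0), (2, 0.5), (1.5, 1)},
           {(0.5, 0.5), (1.5, 0.5)}),
}

def dual_endpoints(edge):
    # endpoints of the dual of an edge given by its midpoint
    u, v = edge
    if u % 1 == 0.5:                      # horizontal edge -> vertical dual
        return {(u, v - 0.5), (u, v + 0.5)}
    return {(u - 0.5, v), (u + 0.5, v)}   # vertical edge -> horizontal dual

for name, (edges, interior) in brackets.items():
    # entry dual vertex lies in the component; exit dual vertex lies outside
    entry_min = min(h for (_, h) in interior) - 0.5
    exits = {w for f in edges for w in dual_endpoints(f) - interior}
    exit_max = max(h for (_, h) in exits) - 0.5
    assert exit_max <= entry_min + 1, name
```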
Lemma \[Lemma1\] tells us that if the grid is secure at the start of $\mathcal{V}$’s turn, then it is not possible for $\mathcal{V}$ to win in a single turn. We now show that if the grid is secure at the start of $\mathcal{V}$’s turn, then, after $\mathcal{V}$ has claimed $r \leqslant q$ edges, $\mathcal{H}$ can return the grid to a secure state by placing at most $2r$ blue edges. This immediately implies that $\mathcal{H}$ wins the $q$-double-response game on $S_{\infty \times n}$, whenever $n \geqslant q+1$, and thus proves Theorem \[theorem: double-response\].
To show that $\mathcal{H}$ can always return the grid to a secure state at the end of each of her turns, we introduce the *secure-game*. The main idea behind this game is that it allows $\mathcal{H}$ to respond to $\mathcal{V}$’s edges one at a time.
The secure-game is played by $\mathcal{H}$ and $\mathcal{V}$ on the graph $S_{\infty \times n}$. At any point in the game, some edges will be unclaimed, some red (claimed by $\mathcal{V}$), some blue (claimed by $\mathcal{H}$), and some will have become blue double-edges (claimed twice by $\mathcal{H}$).
On each of his turns, $\mathcal{V}$ claims an edge and colours it red. The edge he claims may be unclaimed, or may already be a blue edge or a blue double-edge, in which case $\mathcal{V}$ breaks these blue edges and replaces them by a red single edge. However $\mathcal{V}$’s choice of an edge is subject to three restrictions:
(a) $\mathcal{V}$ is not allowed to claim an edge if doing so would create a red dual cycle or a red dual arch;
(b) $\mathcal{V}$ is not allowed to claim an edge if doing so connects a top component to a bottom component;
(c) if $C$ is a floating component and $P$ is the path of blue edges that helps secure $C$, then $\mathcal{V}$ is not allowed to claim an edge from $P$ if doing so turns $C$ into either a top or a bottom component.
Once $\mathcal{V}$ has played an edge $e$, $\mathcal{H}$ responds by claiming $b+2$ edges and colouring them blue, where $b$ is the number of blue edges broken by $e$, counting multiplicity. Thus $\mathcal{H}$ may respond with $2$, $3$ or $4$ edges.
At any stage of this game, we say the grid is *secure* if two conditions are met. The first condition is that the grid is in a secure position as far as the $q$-double-response-game is concerned (treating all blue double-edges as blue simple edges for that purpose). The second condition is that if $C$ and $C'$ are distinct red components and $P$ and $P'$ are the blue paths securing them, then every edge in the intersection $P\cap P'$ is a blue double-edge.
We say $\mathcal{H}$ wins the game if she can ensure that at the end of each of her turns the game is in a secure position (i.e. the board remains secure however long we play). Otherwise, we say $\mathcal{V}$ wins.
\[Lemma2\] For any $n\geqslant 2$, the horizontal player $\mathcal{H}$ has a winning strategy for the secure-game on the grid $S_{\infty \times n}$.
We will show how $\mathcal{H}$ can win the secure-game by supposing the grid is in a secure position, and describing how $\mathcal{H}$ should respond to any dual edge that the vertical player $\mathcal{V}$ claims. Suppose that $\mathcal{V}$ has played his single red dual edge $e=\{v_{1},v_{2}\}$. We split into a number of different cases, determined by whether or not the dual vertices $v_1$ and $v_2$ are part of pre-existing components. Some of these cases are then split into further sub-cases depending on whether or not $e$ lies across an existing blue edge or a bracket.
\[Case1\] Before $e$ is played, neither of the dual vertices $v_{1}$ or $v_{2}$ are part of a component.
If one of $v_{1}$ or $v_{2}$ is a bottommost dual vertex, then, without loss of generality, we can write $v_{1} = (x+0.5,0.5)$ and $v_{2} = (x+0.5,1.5)$ for some $x \in \mathbb{Z}$. In this case we know that the three edges $(x,1.5)$, $(x+0.5,2)$ and $(x+1,1.5)$ are not red (as otherwise $v_{2}$ would be part of some component before $e$ was played). The horizontal player $\mathcal{H}$ plays the edges $(x,1.5)$ and $(x+0.5,2)$. The grid is still secure as the new component that $\mathcal{V}$ created is a bottom component and is secured by the blue path $P = \{(x,1.5),(x+0.5,2)\}$ and the gate $G = \{(x+1,1.5)\}$.
Similarly, if one of the $v_{i}$ is a topmost dual vertex, then, without loss of generality, we can write $v_{1} = (x+0.5,n + 0.5)$ and $v_{2} = (x+0.5,n - 0.5)$ for some $x\in \mathbb{Z}$. The edges $(x,n- 0.5), (x+0.5,n-1)$ and $(x+1,n - 0.5)$ are not red (as otherwise $v_{2}$ would be part of some component before $e$ was played). The horizontal player $\mathcal{H}$ plays the edges $(x,n-0.5)$ and $(x+0.5,n-1)$. The grid is still secure as the new component that $\mathcal{V}$ created is a top component and is secured by the blue path $P = \{(x,n-0.5),(x+0.5,n-1)\}$ and the gate $G = \{(x+1,n-0.5)\}$.
If neither of the dual vertices $v_{1}$ or $v_{2}$ is a bottom or topmost dual vertex, then the dual edge $e=\{v_1,v_2\}$ forms a new floating component. If $e$ is a vertical dual edge, say $e = (x+0.5,y)^{*}$, then none of the six following edges are red (as otherwise one of $v_{1}$ or $v_{2}$ would have been part of a component before $e$ was played): $(x,y+0.5)$, $(x+0.5,y+1)$, $(x+1,y+0.5)$, $(x+1,y-0.5)$, $(x+0.5,y-1)$, $(x,y-0.5)$. The horizontal player $\mathcal{H}$ now claims the edges $(x,y+0.5)$ and $(x+0.5,y+1)$ and colours them blue. The grid is now secure as the new component created by $\mathcal{V}$ is a floating component secured by the blue path $P = \{(x,y+0.5),(x+0.5,y+1)\}$ and the bracket $B$ of Type $3^{+}$ with corner vertices $(x,y)$ and $(x+1,y+1)$.
Similarly, if $e$ is a horizontal dual edge, say $e = (x,y+0.5)^{*}$, then none of the following edges are red (as otherwise one of $v_{1}$ or $v_{2}$ would have been part of a component before $e$ was played): $(x-1,y+0.5)$, $(x-0.5,y+1)$, $(x+0.5,y+1)$, $(x+1,y+0.5)$, $(x+0.5,y)$, $(x-0.5,y)$. The horizontal player now claims the edges $(x-1,y+0.5)$ and $(x-0.5,y+1)$ and colours them blue. The grid is now secure as the new component created by $\mathcal{V}$ is a floating component secured by the blue path $P = \{(x-1,y+0.5),(x-0.5,y+1)\}$ and the bracket $B$ of Type $3^{-}$ with corner vertices $(x-1,y)$ and $(x,y+1)$.
We have now dealt with all possibilities in Case \[Case1\].
\[Case2\] Before $e$ is played, the vertex $v_{1}$ is part of some component $C$ while the vertex $v_{2}$ is not part of any component.
Let $P$ be the path of blue edges that secures the component $C$. If $C$ is a floating component, let $B$ be the bracket that, together with $P$, secures $C$. If instead $C$ is a bottom or top component, let $G$ be the gate that helps secure $C$. If $v_{2}$ lies in the interior of the cycle formed by $P \cup B$, or the arch formed by $P \cup G$, then the grid is still secure after $e$ has been played, and so $\mathcal{H}$ may play her edges arbitrarily. If $v_{2}$ is not in the interior of the cycle formed by $P \cup B$ or the arch formed by $P \cup G$, then we note that the edge $e$ must lie across either $P$, $B$ or $G$, as if this were not the case, then $v_{2}$ would be a vertex that contradicts the fact that $C$ was secure before $\mathcal{V}$’s move.
We first deal with the case that $e$ crosses an edge of $P$, say the edge $f \in P$. We cannot have that $v_{2}$ is a top or bottommost dual vertex as $\mathcal{V}$ is not allowed to break a blue edge with a red edge that contains a top or bottommost vertex. As such, there exist three edges, let us call them $g_{1},g_{2}$ and $g_{3}$, such that the set $\{f,g_{1},g_{2}, g_{3}\}$ forms a closed loop around the vertex $v_{2}$. As $\mathcal{V}$ played a red edge that breaks a single blue edge we have that $\mathcal{H}$ is allowed to play $3$ edges in response. As $v_{2}$ is not part of any component, we have that the three edges $\{g_{1},g_{2},g_{3}\}$ are not red edges, and so $\mathcal{H}$ claims all three of them. These three edges, together with $P \setminus \{f\}$, form a path that, together with the bracket $B$, secures $C \cup \{v_{2}\}$.
Suppose next that $C$ is a bottom component, and that the dual edge $e$ lies across its gate $G$. Then $e$ is of the form $e = (x,1.5)^{*}$, and the edges $(x+0.5,2)$ and $(x+1,1.5)$ are not red (as otherwise $v_{2}$ would be part of some pre-existing component). Thus the horizontal player $\mathcal{H}$ can claim these two edges and $C \cup \{v_{2}\}$ is now extra secure. Similarly, if $C$ is a top component, and the dual edge $e$ lies across $G$ then, writing $e = (x,n- 0.5)^{*}$, we see that neither of the edges $(x+0.5,n-1)$ and $(x+1,n- 0.5)$ is red (as otherwise $v_{2}$ would be part of some pre-existing component). Thus the horizontal player $\mathcal{H}$ can claim these two edges and $C \cup \{v_{2}\}$ is now extra secure.
We now deal with the case where $C$ is a floating component and $e$ crosses an edge of its bracket $B$. We divide here into sub-cases, depending on the type of the bracket $B$. For each sub-case, there are some further sub-sub-cases to consider, depending on which edge of the bracket $B$ is crossed by $e$.
In all cases we will list the two edges $f_1$ and $f_2$ that constitute $\mathcal{H}$’s response, as well as a new bracket $B'$ or a new gate $G'$. The blue path $P \cup \{f_{1},f_{2}\} \setminus \{e\}$ together with $B'$ or $G'$ will then secure the new red component $C\cup\{v_2\}$. Since $v_2$ is not part of a pre-existing red component, it will follow that the two edges $f_1$ and $f_2$ are not red edges (so that $\mathcal{H}$ is free to claim them, or to turn them into blue double-edges if she had already claimed them in the past) and further that none of the edges in the new bracket $B'$ or gate $G'$ are red (so that $\mathcal{H}$’s move does indeed secure $C\cup\{v_2\}$, as claimed).
In our analysis, we will make use of Remark \[remark: bracket symmetry\] on the closure of the family of brackets under reflections swapping their corners, which will allow us to greatly reduce the number of cases we need to check. Finally, before we dive into the case analysis, we would advise the reader to look at Figures \[Fig7\] (Cases 2a, 2b), \[Fig8\] (Case 2c), \[Fig9\] (Cases 3a, 3b, 3c), and \[Fig10\] (Cases 3d, 3e, 3f) in parallel with the proof, as the pictures there may greatly aid visualising the argument.
**Case 2a.** The bracket $B$ is a bracket of Type $1$, with corner vertices $(x,y)$ and $(x+2,y+2)$ for some $x,y$.
Suppose $y = 1$. If $e$ is the dual edge $(x+0.5,1)^{*}$, then $\mathcal{H}$ plays the two edges $(x+2,1.5)$ and $(x+2,2.5)$. The new component $C \cup \{v_{2}\}$ is a bottom component that is extra secured by the path $P \cup \{(x+2,2.5)\}$ and the gate $G' = \{(x+2,1.5)\}$. If $e$ is the dual edge $(x+1.5,1)^{*}$, then $\mathcal{H}$ plays the edges $(x+2,2.5)$ and $(x+0.5,1)$. The new component $C \cup \{v_{2}\}$ is a bottom component that is secured by the path $P \cup \{(x+2,2.5),(x+0.5,1)\}$ and the gate $G' = \{(x+2,1.5)\}$.
Suppose instead $y \geqslant 2$. If $e$ is the dual edge $(x+0.5,y)^{*}$, then $\mathcal{H}$ plays the two edges $(x,y-0.5)$ and $(x+2,y+1.5)$. The new bracket $B'$ is a bracket of Type $2$ with corner vertices $(x,y-1)$ and $(x+2,y+1)$. If $e$ is the dual edge $(x+1.5,y)^{*}$, then $\mathcal{H}$ plays the two edges $(x+0.5,y)$ and $(x+2,y+1.5)$. The new bracket $B'$ is a bracket of Type $3^{+}$ with corner vertices $(x+1,y)$ and $(x+2,y+1)$. Finally, if $e$ is the dual edge $(x+2,y+0.5)^{*}$ or the dual edge $(x+2,y+1.5)^{*}$, then we consider the dual edge $e$ and the bracket $B$ under the reflection that switches the corners of $B$ and determine our response by that given in the cases $e = (x+0.5,y)^{*}$ or $(x+1.5,y)^{*}$, reflected back. By Remark \[remark: bracket symmetry\], the new bracket $B'$ thus obtained is a valid bracket.
![The horizontal player $\mathcal{H}$’s response when the dual edge $e$ lies across the edge of a bracket of Type $1$ or $2$, as described in Cases $2$a and $2$b. In each case the top picture shows the original bracket $B$, together with the dual edge $e$, while the picture below it shows the new bracket $B'$ and any newly claimed blue edges.[]{data-label="Fig7"}](Bracket-Moves)
**Case 2b.** The bracket $B$ is a bracket of Type $2$, with corner vertices $(x,y)$ and $(x+2,y+2)$ for some $x,y$.
As in Case 2a, since $B$ is fixed under the reflection that switches its corners, it is only necessary to deal with the cases $e = (x+0.5,y)^{*}$ and $e = (x+1,y+0.5)^{*}$.
Suppose first that $e$ is the dual edge $(x+0.5,y)^{*}$. In this case $\mathcal{H}$ plays the two edges $(x+1.5,y+1)$ and $(x+2,y+1.5)$. If $y = 1$, then the component $C \cup \{v_{2}\}$ is a bottom component secured by the path $P \cup \{(x+2,2.5),(x+1.5,2)\}$ and the gate $G' = \{(x+1,1.5)\}$. If $y \geqslant 2$, then the new bracket $B'$ is a bracket of Type $3^{+}$ with corner vertices $(x,y)$ and $(x+1,y+1)$.
If instead $e$ is the dual edge $(x+1,y+0.5)^{*}$, then there is in fact no need for $\mathcal{H}$ to play any edges (so she may play them arbitrarily). The new bracket $B'$ is a bracket of Type $1$ with corner vertices $(x,y)$ and $(x+2,y+2)$.
**Case 2c.** The bracket $B$ is a bracket of Type $3^{+}$, with corner vertices $(x,y)$ and $(x+1,y+1)$ for some $x,y$.
Suppose first that $v_{2}$ is a bottommost dual vertex. In this case, we have that $y = 2$ and $e = (x + 0.5,1)^{*}$. The horizontal player $\mathcal{H}$ plays the edges $(x,1.5)$ and $(x+1,y+2.5)$. The component $C \cup \{v_{2}\}$ is now a bottom component secured by the path $P \cup \{(x,1.5),(x+1,y+2.5)\}$ and the gate $G' = \{(x+1,y+1.5)\}$.
Suppose instead that $v_{2}$ is not a bottommost dual vertex. If $e$ is the dual edge $(x,y-0.5)^{*}$, then $\mathcal{H}$ plays the two edges $(x-0.5,y)$ and $(x-1,y-0.5)$. The new bracket $B'$ is a bracket of Type $1$ with corner vertices $(x-1,y-1)$ and $(x+1,y+1)$. If $e$ is the dual edge $(x+0.5,y-1)^{*}$, then $\mathcal{H}$ plays the two edges $(x,y-0.5)$ and $(x+1,y+0.5)$. The new bracket $B'$ is a bracket of Type $3^{+}$ with corner vertices $(x,y-1)$ and $(x+1,y)$. If $e$ is the dual edge $(x+1,y-0.5)^{*}$, then $\mathcal{H}$ plays the two edges $(x,y-0.5)$ and $(x+1,y+0.5)$. The new bracket $B'$ is a bracket of Type $3^{-}$ with corner vertices $(x,y-1)$ and $(x+1,y)$. Finally, if $e$ is the dual edge $(x+1,y+0.5)^{*}$, then $\mathcal{H}$ plays the two edges $(x,y-0.5)$ and $(x+1.5,y+1)$. The new bracket $B'$ is a bracket of Type $2$ with corner vertices $(x,y-1)$ and $(x+2,y+1)$.
![The horizontal player $\mathcal{H}$’s response when the dual edge $e$ lies across the edge of a bracket of Type $3^{+}$, as described in Case $2$c. In each case the top picture shows the original bracket $B$, together with the dual edge $e$, while the picture below it shows the new bracket $B'$ and any newly claimed blue edges.[]{data-label="Fig8"}](Bracket-Moves2)
**Case 2d.** The bracket $B$ is a bracket of Type $3^{-}$, with corner vertices $(x,y)$ and $(x+1,y+1)$ for some $x,y$.
Suppose first that $v_{2}$ is a bottommost dual vertex. In this case we have that $y = 1$ and $e = (x + 0.5,1)^{*}$ or $(x + 1.5,1)^{*}$. If $e = (x + 0.5,1)^{*}$, then $\mathcal{H}$ plays the edges $(x+1.5,2)$ and $(x+2,y+1.5)$. The component $C \cup \{v_{2}\}$ is now a bottom component that is extra secured by the path $P \cup \{(x+1.5,2)\}$ and the gate $G' = \{(x+2,1.5)\}$. If $e = (x + 1.5,1)^{*}$, the horizontal player $\mathcal{H}$ plays the edges $(x + 0.5,1)$ and $(x + 1.5,2)$. The component $C \cup \{v_{2}\}$ is now a bottom component that is secured by the path $P \cup \{(x + 0.5,1),(x + 1.5,2)\}$ and the gate $G' = \{(x+2,1.5)\}$.
We next suppose that $v_{2}$ is a topmost dual vertex. We must have that $y = n-1$ and $e = (x+1.5,n)^{*}$. The horizontal player $\mathcal{H}$ plays the edges $(x+0.5,n-1)$ and $(x+1.5,n-1)$. The component $C \cup \{v_{2}\}$ is now a top component secured by the path $P \cup \{(x+0.5,n-1),(x+1.5,n-1)\}$ and the gate $G' = \{(x+2,n-0.5)\}$.
Finally, suppose that $v_{2}$ is not a bottommost or topmost vertex. Then we consider the dual edge $e$ and the bracket $B$ under the reflection that switches the corners of $B$ and determine our response using Case $2$c (since $B$ is mapped to a bracket of Type $3^{+}$), reflected back. By Remark \[remark: bracket symmetry\], the new bracket $B'$ thus obtained is a valid bracket.
\[Case3\] Before $e$ is played, the vertex $v_{1}$ is part of some component $C_{1}$ while the vertex $v_{2}$ is part of some component $C_{2}$.
We first note that $C_{1}$ and $C_{2}$ cannot be the same component, as $\mathcal{H}$ cannot claim any dual edges that would create a closed cycle of red dual edges (violating restriction (a) from the secure game definition). For each $i = 1,2$, let $P_{i}$ and $B_{i}$ (or $G_{i}$) be the respective path and bracket (or gate) that makes $C_{i}$ a secure component.
We first deal with the case where $e^{*}\in P_{1}\cap P_{2}$. Observe that $C_{1}$ and $C_2$ must then both be floating components. Indeed, suppose one of the components, say $C_{1}$, were a bottom component. Then $C_{2}$ cannot be a top or a floating component (else $\mathcal{V}$’s move would violate restriction (b) or (c)) and further, $C_{2}$ cannot be a bottom component (else $\mathcal{V}$’s move would create a red dual arch, violating restriction (a)), a contradiction. Thus neither of $C_{1}$, $C_2$ can be a bottom component, and in a similar way neither of them can be a top component.
As $e^{*}\in P_{1}\cap P_{2}$ and the board was secure before $\mathcal{V}$’s turn, $e^{*}$ must have been a blue double-edge, and so $\mathcal{H}$ has $4$ edges to respond with. Thus $\mathcal{H}$ plays all the edges in $B_{1}$, of which there are at most $4$. It is easy to see that the new component $C_{1} \cup C_{2}$ is secured by some path contained in the set of edges $(P_{1} \cup P_{2}\cup B_{1}) \setminus \{ e^{*}\}$ and the bracket $B_{2}$.
We next deal with the case where $e^{*}$ is an edge that lies in both $P_{1}$ and $B_{2}$ (or $G_{2}$). By the same arguments as above (based on restrictions (a), (b) and (c)), we must have that $C_{2}$ is a floating component. The horizontal player $\mathcal{H}$ has $3$ edges to respond with, and so she plays the three edges in $B_{2} \setminus \{e^{*}\}$. Once again, it is easy to see that the new component $C_{1} \cup C_{2}$ is secured by some path contained in the set of edges $(P_{1} \cup P_{2}\cup B_{2}) \setminus \{ e^{*} \}$ and the bracket $B_{1}$ (or gate $G_{1}$).
Finally we need to deal with the case where $e^{*}$ is an edge that lies in both $B_{1}$ (or $G_{1}$) and $B_{2}$ (or $G_{2}$). We first note that it is not possible for either $C_{1}$ or $C_{2}$ to be top components. Indeed, if say $C_{1}$ was a top component, then $C_{2}$ must be a floating component (by restrictions (a) and (b)), yet there is no possible bracket for $C_{2}$ that can have an edge in common with $G_{1}$. Next, let us suppose that $C_{1}$ is a bottom component whose gate $G_{1}$ consists of the edge $(x,1.5)$. By restrictions (a) and (b), $C_{2}$ is a floating component and $B_{2}$ must be a bracket of type $3^{+}$ with corners $(x,2)$ and $(x+1,3)$ (no other bracket type is compatible with $G_{1}$). The horizontal player $\mathcal{H}$ then plays the edges $(x+1,1.5)$ and $(x+1,2.5)$. The new component $C_{1} \cup C_{2}$ is a bottom component extra-secured by a path contained in the set of edges $P_{1} \cup P_{2} \cup \{(x+1,2.5)\}$ and the gate $\{(x+1,1.5)\}$.
Finally, if $C_{1}$ and $C_{2}$ are both floating components, then we have to split into sub-cases, depending on the bracket-types of $B_{1}$ and $B_{2}$ and on which edge they share. Note to begin with that if $B_1$, $B_2$ are both of Type $1$ or $2$ and have an edge in common, then they must share an interior vertex, contradicting the fact that $C_1$ and $C_2$ are distinct components. Thus without loss of generality, we may assume that $B_{1}$ is a bracket of Type $3^{+}$ or $3^{-}$. We deal below with the case where $B_{1}$ is a bracket of Type $3^{+}$ with corner vertices $(x,y)$ and $(x+1,y+1)$. The case where $B_{1}$ is a bracket of Type $3^{-}$ will then follow by considering the reflection switching $B_{1}$’s two corner vertices and making use of Remark \[remark: bracket symmetry\].
In each of the following sub-cases we will list the set $P_3$ of two (or fewer) edges from $(B_{1}\cup B_{2})\setminus\{e^*\}$ that $\mathcal{H}$ plays and the location and type of a new bracket $B$. It is easy to check then that there is a blue path $P$ contained within the edges of $P_{1} \cup P_{2}\cup P_3$ such that $P$ and $B$ together secure the new component $C_{1} \cup C_{2}$. In the sub-cases below, we cover all ways in which a bracket of Type $3^{+}$ and another bracket could share an edge. In each sub-case (except Case $3$g, which we deal with via a reflection and Remark \[remark: bracket symmetry\]), we let $e=(x, y-0.5)^*$ be the dual edge played by $\mathcal{V}$.
![The horizontal player $\mathcal{H}$’s response when the dual edge $e$ lies across an edge of two brackets, $B_{1}$ and $B_{2}$, as described in Cases $3$a, $3$b and $3$c. In each case the top picture shows the original brackets, together with the dual edge $e$, while the picture below it shows the new bracket $B$ and any newly claimed blue edges.[]{data-label="Fig9"}](Bracket-Moves3)
**Case 3a.** The bracket $B_{2}$ is a bracket of Type $1$, with corner vertices $(x-2,y-2)$ and $(x,y)$. In this case $\mathcal{H}$ plays the edges $(x-1.5,y-2)$ and $(x+1,y+0.5)$. The new bracket $B$ is a bracket of Type $2$ with corner vertices $(x-1,y-2)$ and $(x+1,y)$.

**Case 3b.** The bracket $B_{2}$ is a bracket of Type $1$, with corner vertices $(x-2,y-1)$ and $(x,y+1)$. In this case $\mathcal{H}$ plays the edge $(x-1.5,y-1)$. The new bracket $B$ is a bracket of Type $1$ with corner vertices $(x-1,y-1)$ and $(x+1,y+1)$.

**Case 3c.** The bracket $B_{2}$ is a bracket of Type $2$, with corner vertices $(x-2,y-2)$ and $(x,y)$. In this case $\mathcal{H}$ plays the edges $(x-1.5,y-2)$ and $(x-1,y-1.5)$. The new bracket $B$ is a bracket of Type $1$ with corner vertices $(x-1,y-1)$ and $(x+1,y+1)$.
![The horizontal player $\mathcal{H}$’s response when the dual edge $e$ lies across an edge of two brackets, $B_{1}$ and $B_{2}$, as described in Cases $3$d, $3$e and $3$f. In each case the top picture shows the original brackets, together with the dual edge $e$, while the picture below it shows the new bracket $B$ and any newly claimed blue edges.[]{data-label="Fig10"}](Bracket-Moves4)
**Case 3d.** The bracket $B_{2}$ is a bracket of Type $3^{+}$, with corner vertices $(x-1,y-1)$ and $(x,y)$. In this case $\mathcal{H}$ plays the edges $(x-1,y-1.5)$ and $(x+1,y+0.5)$. The new bracket $B$ is a bracket of Type $2$ with corner vertices $(x-1,y-2)$ and $(x+1,y)$.

**Case 3e.** The bracket $B_{2}$ is a bracket of Type $3^{+}$, with corner vertices $(x-1,y)$ and $(x,y+1)$. In this case $\mathcal{H}$ plays the edge $(x-1,y-0.5)$. The new bracket $B$ is a bracket of Type $1$ with corner vertices $(x-1,y-1)$ and $(x+1,y+1)$.

**Case 3f.** The bracket $B_{2}$ is a bracket of Type $3^{-}$, with corner vertices $(x-2,y-1)$ and $(x-1,y)$. In this case $\mathcal{H}$ plays the edges $(x-1.5,y-1)$ and $(x-0.5,y)$. The new bracket $B$ is a bracket of Type $1$ with corner vertices $(x-1,y-1)$ and $(x+1,y+1)$.
**Case 3g.** The bracket $B_{2}$ is a bracket of Type $3^{-}$, with corner vertices $(x-1,y-2)$ and $(x,y-1)$, and $e$ is the dual edge $(x+0.5, y-1)^*$. This case is in fact already dealt with, as the situation is identical to the previous case up to the reflection switching the corners of $B_{2}$.
With Cases $3$a–g above, we have covered all possibilities and shown that $\mathcal{H}$ has a winning strategy for the secure-game.
With a winning strategy for the secure-game in hand, we now show that in the $q$-double response game $\mathcal{H}$ can ensure that the grid is secure at the end of each of her turns.
Suppose the grid is secure and let $D$ be the set of dual edges claimed by $\mathcal{V}$ on his turn, where $|D| = r \leqslant q$. The horizontal player $\mathcal{H}$ begins by picking a judicious ordering (to be specified later) of the elements of $D$ as $\{e_1, e_2, \ldots, e_r\}$, and then proceeds as if she were playing the secure game, pretending that $\mathcal{V}$ plays $e_1$, $e_2$, $\ldots$, $e_r$ in that order and responding to each $e_i$ in turn.
For each $i = 1,\ldots, r$, let $L_{i}$ be the set of blue edges (including blue double-edges) that $\mathcal{H}$ has claimed after $i$ of her turns have occurred in this auxiliary secure-game. We do not include in $L_{i}$ any blue edge broken by $\mathcal{V}$ in any of his first $i$ turns.
Recall that in the secure game, $\mathcal{H}$ responds to $\mathcal{V}$’s claim of the dual edge $e_i$ with $2$, $3$ or $4$ edges, depending on whether $e_i$ breaks $0$, $1$ or $2$ blue edges. As such, we have that $|L_{i}| \leqslant 2i$ for all $i = 1,\ldots,r$. Once $\mathcal{H}$ has gone through every dual edge of $D$ she has a set $L_{r}$ of at most $2r$ edges such that if $\mathcal{H}$ claims all the edges in $L_{r}$ in response to $\mathcal{V}$’s claim of $D$, then the grid is back to a secure position in the $q$-double response game. The only problem that could occur in this scenario is that during some turn of the auxiliary secure-game, say turn $i$, the dual edge $e_i$ claimed by $\mathcal{V}$ breaks one of the restrictions (a)–(c) we imposed on the secure-game. We show below that this can be avoided by picking a judicious ordering on $D$. Combined with Lemma \[Lemma2\], this will complete the proof of Theorem \[theorem: double-response\].
The first restriction (a) on $\mathcal{V}$’s moves in the secure game is that $\mathcal{V}$ is not allowed to claim a dual edge as red if doing so would create a cycle or an arch of red dual edges. As we have shown one can assume $\mathcal{V}$ never plays such an edge in the $q$-double-response game, this restriction will not be broken by any of the dual edges in $D$.
The second restriction (b) is that $\mathcal{V}$ may not claim a dual edge if doing so connects a top component to a bottom component. We know by Lemma \[Lemma1\] that if the grid is secure at the beginning of a turn of the $q$-double response game, then $\mathcal{V}$ cannot win in that turn. As such, there cannot be a dual edge in $D$ that connects a top component to a bottom component, and so this restriction is not broken either.
The third and final restriction (c) is that if $C$ is a floating component and $P$ is the path of blue edges that helps secure $C$, then $\mathcal{V}$ may not claim a red dual edge that breaks a blue edge from $P$ if claiming that dual edge would turn $C$ into either a bottom or top component. It is here that our judicious ordering of $D$ comes into play and ensures restriction (c) is respected.
We order the dual edges in $D$ as follows. Due to restrictions (a) and (b), every top (respectively bottom) component is a rooted tree whose root is a topmost (respectively bottommost) dual vertex. Let $D$ be ordered in any way such that, if $e, e' \in D$ are two dual edges that are part of the same bottom or top component $C$ and $e$ is strictly closer in graph distance in $C$ to the root of $C$ than $e'$, then $e$ appears before $e'$ in the ordering of $D$. (Such an ordering clearly exists, by proceeding component by component.) We claim that ordering $D$ in this way guarantees that $\mathcal{V}$ never breaks the third restriction when we play the dual edges one by one. Indeed, suppose there is a dual edge $e_i \in D$ such that before $e_i$ is played in the secure game, there exists a floating component $C$, secured by a path $P$ and a bracket $B$, that becomes a bottom or top component once $e_i$ has been played. Given our ordering on $D$, no other edge of $D$ meeting $C$ can have been played in the secure game before $e_i$. In particular, all the edges of $P$ were present before $\mathcal{V}$ played the dual edge-set $D$ in the $q$-double response game and $\mathcal{H}$ introduced the auxiliary secure game. Hence $e_i$ cannot break an edge of $P$ (i.e. $e_i$ must lie across $B$), and restriction (c) is respected.
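The judicious ordering used above can be computed mechanically: root each top or bottom component at its topmost or bottommost dual vertex, and sort the claimed dual edges by their distance to the root. A minimal Python sketch of this ordering for a single rooted tree component (the encoding and all names are our own illustrative choices, not from the paper):

```python
from collections import deque

def order_dual_edges(adj, root, D):
    # Breadth-first search from the root records the depth of every
    # vertex of the tree component given by adjacency lists `adj`.
    depth = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    # An edge's distance to the root is the depth of its endpoint
    # nearer the root; sorting by this key lists edges strictly closer
    # to the root first, as the proof's ordering of D requires.
    return sorted(D, key=lambda e: min(depth[u] for u in e))
```

Running this on each top or bottom component and concatenating the results (in any order of the components) yields an ordering of $D$ with the required property.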
Other graphs and other games {#section: other graphs and other games}
============================
The crossing games we study in this paper may be viewed as special cases of the following generalisation of the Shannon switching game.
\[definition: (p,q)-Shannon switching game\] A Shannon game-triple is a triple $(G,A,B)$, where $G$ is a finite multigraph (possibly with loops) and $A,B$ are sets of vertices from $G$. For $p,q\in \mathbb{N}$, the $(p,q)$-Shannon switching game on $(G,A,B)$ is played on the board $E(G)$ as follows.
Two players, Maker and Breaker, play in alternating turns. Maker plays first and in each of her turns claims $p$ (as-yet-unclaimed) edges of the board $E(G)$; Breaker in each of his turns answers by claiming $q$ (as-yet-unclaimed) edges of the board. Maker wins the game if she manages to claim all the edges of a path joining $A$ to $B$ (i.e. a path from some $a\in A$ to some $b\in B$ — we call such a path an *$A$–$B$ crossing path*). Otherwise, Breaker wins.
The $(p,q)$-crossing games we study in this paper are instances of the $(p,q)$-Shannon switching game on $(G,A,B)$, where $G=S_{m \times n}$ and $A$ and $B$ are the sets of left-hand side and right-hand side vertices of $S_{m \times n}$ respectively.
The generalised Shannon switching game satisfies some obvious monotonicity properties with regard to the board, which we record in Proposition \[proposition: monotonicity for Maker/Breaker\] below. Given a multigraph $G$ and two distinct vertices $u,v \in V(G)$, let $m_{G}(u,v)$ denote the number of edges between $u$ and $v$, and let $m_{G}(v)$ denote the number of loops at $v$. Let $G'$ be any multigraph obtained by taking $G$, deleting some vertex $v \in V(G)$, replacing it with two adjacent vertices $v_{1},v_{2}$, and then adding in edges adjacent to $v_{1}$ or $v_{2}$ until the relations $$m_{G'}(v_{1},v_{2}) + m_{G'}(v_{1}) + m_{G'}(v_{2}) = m_{G}(v) + 1, \nonumber$$ and $$m_{G'}(u,v_{1})+m_{G'}(u,v_{2}) = m_{G}(u,v), \nonumber$$ are satisfied for all $u \in V(G)\setminus \{v\}$. We refer to this process as *vertex-separation*. Vertex separation may be thought of as an inverse operation to performing an *edge-contraction* of the edge $\{v_{1},v_{2}\}$ in $G'$.
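The two multiplicity relations defining vertex-separation can be checked mechanically. A minimal sketch (the encoding of multigraphs as multisets of edges, and all names, are our own illustrative choices, not from the paper):

```python
from collections import Counter

def is_vertex_separation(G, Gp, v, v1, v2):
    # Multigraphs are Counters mapping frozensets to multiplicities:
    # frozenset({a, b}) for an edge between a and b, frozenset({a})
    # for a loop at a.
    def mult(H, a, b):
        return H[frozenset({a, b})]

    def loops(H, a):
        return H[frozenset({a})]

    # First relation: edges between v1 and v2, plus loops at v1 and at
    # v2, must total one more than the loops at v.
    ok = mult(Gp, v1, v2) + loops(Gp, v1) + loops(Gp, v2) == loops(G, v) + 1
    # Second relation: for every other vertex u, the edges from u to v
    # are split between v1 and v2.
    for u in {x for e in G for x in e} - {v}:
        ok = ok and mult(Gp, u, v1) + mult(Gp, u, v2) == mult(G, u, v)
    return ok
```

Contracting the edge $\{v_1,v_2\}$ of $G'$ recovers $G$, which is the sense in which vertex-separation inverts edge-contraction.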
\[proposition: monotonicity for Maker/Breaker\] Let $(G, A, B)$ and $(G',A',B')$ be Shannon game-triples. Suppose $(G',A',B')$ may be obtained from $(G,A,B)$ by a sequence of vertex-deletions, edge-deletions and vertex-separations.
Then the following hold:
(i) if Maker has a winning strategy for the $(p,q)$-Shannon switching game on $(G',A',B')$, then Maker also has a winning strategy for the $(p,q)$-Shannon switching game on $(G,A,B)$;
(ii) if Breaker has a winning strategy for the $(p,q)$-Shannon switching game on $(G,A,B)$, then Breaker also has a winning strategy for the $(p,q)$-Shannon switching game on $(G',A',B')$.
One can build a natural rooted tree structure on game-triples: we say that a generalised Shannon-switching game is played according to the *Breaker First rule* (BF rule) if Breaker is allowed to make the first move instead of Maker. Then starting with a game triple $(G,A,B)$, we build a tree by letting $(G,A,B)$ be the root. The children of $(G,A,B)$ are all game-triples $(G',A',B')$ with BF rule obtained from $(G,A,B)$ by contracting $p$ edges. The children of a game triple $(G', A', B')$ with BF rule are then all game-triples $(G'',A'', B'')$ obtained by deleting $q$ edges. Repeating this operation, we build a game tree, whose leaves will consist of game-triples $(G_l, A_l,B_l)$ with $A_{l}\cap B_l\neq \emptyset$ (Maker’s win) or with no $A_l$–$B_l$ paths (Breaker’s win). Ultimately, the Maker-win leaves are equivalent under optimal play to the game $(K_1, \{1\}, \{1\})$ played on the one-vertex graph $K_1$, while the Breaker-win leaves are equivalent to the game $(K_2^c, \{1\}, \{2\})$ played on the two-vertex non-edge graph $K_2^c$.
By considering Proposition \[proposition: monotonicity for Maker/Breaker\] and the game-tree described above, our results on $(p,q)$-crossing games give winning strategies for a number of related Shannon-switching games. In a slightly different direction, our winning strategy for the $(2q, q-1)$-crossing game on sufficiently long strips is easily adapted to a much more general setting.
\[theorem: general strip theorem\] Let $(G,A,B)$ be a Shannon game-triple. Assume that
(i) the game-board $E(G)$ may be split up into $m$ edge-disjoint strips $S_1, \ldots, S_m$;
(ii) on each strip $S_i$ we have a local game $(G_i, A_i,B_i)$ such that if Breaker wins that game, then Breaker wins the global game on $(G,A,B)$;
(iii) for each $i$, Breaker has a winning strategy for the $(p,q)$-Shannon switching game on $(G_i,A_i, B_i)$ under BF rules that ensures Breaker’s victory in at most $T$ turns.
Then provided $$m \geqslant (s(p+1))^{T} + l(p+1)-1 \nonumber$$ Breaker has a winning strategy for the $(l(p+1)-1 , lq)$-Shannon switching game on $(G,A,B)$, where $s = l(p+q+1)-1$.
We generalise the arguments of Theorem \[theorem: (2q-1,q)-game\] as follows. At any point in the game, for $0 \leqslant j \leqslant p$ we say a strip $S_{i}$ is $(k,j)$*-valid* if it contains exactly $kq$ red edges and is in a winning position for Breaker in the $(p,q)$-Shannon switching game on $(G_{i},A_{i},B_{i})$, with Maker getting to play any $j$ edges first before the game resumes with it being Breaker’s turn to play. If a strip is not $(k,j)$-valid for any $0 \leqslant j \leqslant p$, then we say that it is *invalid*. As Breaker has a winning strategy for the $(p,q)$-Shannon switching game on $(G_{i},A_{i},B_{i})$ under BF rules for each $i$, we have that each strip starts as $(0,0)$-valid. Note that for $j' \geqslant j$, we have that any strip that is $(k,j')$-valid is also $(k,j)$-valid. Thus, we say that a strip is *exactly* $(k,j)$-valid if it is $(k,j)$-valid but not $(k,j+1)$-valid. Note that if any strip $S_{i}$ is $(k,j)$-valid, then Breaker can play $q$ edges in $S_{i}$ and turn it into a $(k+1, p)$-valid strip. Indeed, as $S_{i}$ is $(k,j)$-valid, we know that it is also $(k,0)$-valid, and so it is in a winning position for Breaker in the $(p,q)$-Shannon switching game where it is Breaker’s turn to play. Breaker plays the $q$ edges that a winning strategy would prescribe, so that the strip is in a winning position for Breaker in the $(p,q)$-Shannon switching game, even though it is Maker’s turn to play. Thus Breaker has turned $S_{i}$ into a $(k+1,p)$-valid strip.
The game begins with Maker playing edges in up to $l(p+1)-1$ different strips, possibly making them invalid. From here we split the game into a number of different phases. We will show by induction on $k$ that for each $k = 0,1,\ldots,T$, at the start of phase $k$ it will be Breaker’s turn to play and the number of $(k,0)$-valid strips will be at least $(s(p+1))^{T-k}$. As noted above, our inductive statement is clear when $k = 0$. Suppose the statement is true for $k$. On each turn in phase $k$, Breaker will choose $l$ different $(k,0)$-valid strips and play $q$ edges in each, turning them into $(k+1,p)$-valid strips. Maker can now distribute her $l(p+1)-1$ edges among all the strips as she likes. In the worst case scenario, each edge that Maker plays can either turn a strip that is exactly $(k+1,j)$-valid into one that is exactly $(k+1,j-1)$-valid (when $j \geqslant 1$), or turn a $(k,0)$-valid or $(k+1,0)$-valid strip into an invalid one. For each $j = 0,1,\ldots,p$, let $R_{t}(j)$ be the number of exactly $(k+1,j)$-valid strips on the board after a total of $t$ combined edges in phase $k$ have been played by the two players. Moreover, let $R_{t} = \sum_{j = 0}^{p}(j+1)R_{t}(j)$.
We have that, if after $t$ edges have been played it is Breaker’s turn to play and he plays $q$ edges, then $R_{t+q} = R_{t} + p+1$. On the other hand, if after $t$ edges have been played it is Maker’s turn to play, then we have that $R_{t+1} \geqslant R_{t}-1$. As Breaker plays a total of $lq$ edges while Maker plays a total of $l(p+1)-1$ edges on their respective turns for a combined total of $s$ edges, we have that $R_{rs} \geqslant r$ for all $r \in \mathbb{Z}_{\geqslant 0}$, at least until phase $k$ ends. Breaker decides that phase $k$ has finished and phase $k+1$ has begun when $R_{rs} \geqslant (p+1)((p+1)s)^{T-k-1}$ for some $r \in \mathbb{Z}_{\geqslant 0}$. Note that after Maker and Breaker have both completed their turns, the number of $(k,0)$-valid strips has decreased by at most $s$. Thus, as the number of $(k,0)$-valid strips at the start of phase $k$ is at least $(s(p+1))^{T-k}$, we know that the number of $(k,0)$-valid strips for Breaker to play in will not run out before $R_{rs} \geqslant (p+1)((p+1)s)^{T-k-1}$. As $R_{rs} \geqslant (p+1)((p+1)s)^{T-k-1}$, we have that the number of $(k+1,0)$-valid strips at the start of phase $k+1$ is at least $((p+1)s)^{T-k-1}$ and it is Breaker’s turn to play, as required.
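The bookkeeping above reduces to a simple arithmetic identity: over one full round of $s = l(p+q+1)-1$ edges, Breaker raises the potential $R$ by $l(p+1)$ while Maker lowers it by at most $l(p+1)-1$, for a net gain of at least $1$ per round. A small numerical sanity check of this worst case (a sketch illustrating the proof's accounting, not part of the proof):

```python
def worst_case_potential(p, q, l, rounds):
    # One full round of the phase: Breaker plays q edges in each of l
    # valid strips, raising R by (p + 1) per strip; Maker then plays
    # l(p + 1) - 1 edges, each lowering R by at most 1 in the worst case.
    R = 0
    for _ in range(rounds):
        R += l * (p + 1)
        R -= l * (p + 1) - 1
    return R
```

So after $r$ full rounds, i.e. $rs$ combined edges, the worst case gives $R_{rs} \geqslant r$, matching the claim above.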
To finish the proof, we note that at the start of phase $T$, there is at least one strip $S_{i}$ that is $(T,0)$-valid, and has been obtained by Breaker following a winning strategy on this strip for the $(p,q)$-Shannon switching game under the BF rules. As Breaker can win the $(p,q)$-Shannon switching game on $S_{i}$ in at most $T$ moves, he has in fact won the local $(p,q)$-Shannon switching game on this strip $S_{i}$, and with it the global $(l(p+1)-1 , lq)$-Shannon switching game on $(G,A,B)$.
Just as we have been able to generalise our winning Breaker strategy for the $(2q-1,q)$-crossing game to other Shannon game-triples, we believe our winning Maker strategy for the $(2q,q)$-crossing game on $S_{m \times (q+1)}$, as described in the proof of Theorem \[theorem: (2q,q)-game\], can be adapted to a number of other planar lattices. The key idea here is that if $\Gamma$ is a planar lattice where an isoperimetric inequality similar to that of Lemma \[lemma: isoperimetric lemma\] holds, then a Maker strategy similar to that in the proof of Theorem \[theorem: (2q,q)-game\] should work in $\Gamma$. More precisely, suppose $\Gamma$ is a planar lattice such that there exists a constant $a$ such that for all $k \in \mathbb{N}$ and for all connected components $C$ comprised of $k$ edges, the dual boundary cycle to $C$ consists of at most $ak + (a+2)$ edges[^2]. In this case, we believe that there exists some constant $c$ such that Maker has a winning strategy for $(aq,q)$-crossing games on all arbitrarily long substrips of $\Gamma$ of “width” at least $c$. Of course, modifying our proof of Theorem \[theorem: (2q,q)-game\] to adapt it to a given planar lattice $\Gamma$ will require a careful definition of brackets and a large amount of case-checking (as is already the case in the proof of Theorem \[theorem: (2q,q)-game\] itself), and so we make no attempt to do so here.
Given our original motivation from percolation theory, it would be natural to study Shannon-switching games on strips of any of the standard $2$-dimensional lattices studied in percolation. For instance, who wins crossing games on ‘rectangular-shaped’ subgraphs of the triangular, honeycomb or Kagome lattices? More generally, this is a natural problem for any of the $11$ Archimedean lattices.
In a different direction, one could consider *site-percolation* rather than *bond-percolation*, by playing variants of our generalised Shannon switching games where the players take turns claiming vertices rather than edges. One famous example of such a game is the game of Hex, where the players take turns claiming vertices on a subset of the triangular lattice, both trying to create certain crossing paths. It is easy to prove that a vertex-analogue of Lemma \[lemma: isoperimetric lemma\] holds in this lattice — i.e. that any set of $k$ vertices inducing a connected component of the triangular lattice can be surrounded by a bounding cycle consisting of at most $2k+4$ vertices — and we guess that our Maker winning strategy for the $(2q,q)$-crossing game should carry over without excessive technicalities (but not without care and case-checking).
Concluding remarks {#section: concluding remarks}
==================
There are many questions arising from our work. Outside of the special cases $(1,1)$, $p\geqslant 2q$ and $p\leqslant \frac{q}{2}$, the problem of determining which boards are Maker wins (and, by exclusion, which are Breaker wins) is completely open, and seems to be both challenging and quite interesting.
\[problem: (p,q)-crossing game\] Given natural numbers $p,q,n\in \mathbb{N}$, determine the greatest $m \in \mathbb{N}$ such that Maker has a winning strategy for the $(p,q)$ game on $S_{m \times n}$.
As a special, easier case, one could consider the following problem of determining the optimal value of $m$ in the variant of the $(1,1)$ game where Maker gets an extra edge every $M$ turns.
\[question: extra power\] Suppose we play a variant of the $(1,1)$-crossing game where every $M$ turns Maker gets to claim an extra edge. Given $n, M\in \mathbb{N}$, what is the greatest $m$ such that Maker has a winning strategy when playing on $S_{m \times n}$?
It is not hard to show Maker can win in this variant for some $m=n+ \Omega(\log n)$, and it would be very interesting to determine whether she has a winning strategy for $m=\lfloor (1+\varepsilon)n\rfloor$ for some constant $\varepsilon=\varepsilon(M)>0$.
In a similar spirit, setting one’s sights slightly lower than Problem \[problem: (p,q)-crossing game\], one could try to prove that having extra power allows one to win on a significantly longer board.
\[conjecture: extra power means an epsilon longer board\] The following hold:
(i) for every $q\in \mathbb{N}$, there exists $\varepsilon >0$ such that for all $n$ sufficiently large, Maker wins the $(q+1, q)$ game on $S_{\lceil (1+\varepsilon)n\rceil \times n}$;
(ii) for every $p\in \mathbb{N}$, there exists $\varepsilon >0$ such that for all $m$ sufficiently large, Breaker wins the $(p, p+1)$ game on $S_{m \times \lceil (1+\varepsilon)m\rceil}$.
An even more basic problem is showing that when the powers are balanced, Breaker should win on a narrower board, overcoming Maker’s first-player advantage.
\[conjecture: at equal power Breaker wins on an epsilon -narrowerboard\] For every $\varepsilon >0$ and every $p\in \mathbb{N}$, there exists $m_0\in \mathbb{N}$ such that for all $m\geqslant m_0$, Breaker wins the $(p,p)$-crossing game on $S_{m \times \lceil (1-\varepsilon)m\rceil}$.
In a different direction, one may ask for optimal bounds on $m$ in Theorem \[theorem: (2q-1,q)-game\].
\[question: optimal m for (2q-1,q)\] Let $n,q \in \mathbb{N}$. What is the smallest $m_0=m_0(n,q)$ such that Breaker wins the $(2q-1,q)$-crossing game on $S_{m_0 \times n}$? In particular, for $q$ fixed, is $m_0(n,q)$ subexponential in $n$?
A related question, which would help improve the bounds on $m$ for the Breaker strategy we developed in the proof of Theorem \[theorem: (2q-1,q)-game\] is the following:
\[question: length of Bridg’it\] Under perfect play, how long does a game of Bridg-it last?
We make no attempt to answer this question here; however, we believe that it may be possible to shed some light on the answer through careful analysis of the Maker-win strategy recorded in Theorem \[theorem: Maker winds Bridg-it with any first move\].
In yet another direction, efforts to apply the biased Erd[ő]{}s–Selfridge [@Beck82; @ErdosSelfridge73] criterion to Problem \[problem: (p,q)-crossing game\] lead to some intriguing questions on weighted sums over crossing paths, connected to the study of fugacity in statistical physics and to problems in analytic combinatorics (see e.g. [@BousquetGuttmannJensen05]). Explicitly, let $\mathcal{H}(m,n)$ denote the collection of all left-to-right crossing paths in the rectangle $S_{m \times n}$. Given a path $\pi \in \mathcal{H}(m,n)$, let $\ell(\pi)$ denote its length. Then the biased Erd[ő]{}s–Selfridge criterion due to Beck implies that if $$\begin{aligned}
\label{inequality: biased Erdos--Selfridge criterion}
\sum_{\pi \in \mathcal{H}} (1+q)^{-\frac{\ell(\pi)}{p}} < \frac{1}{1+q}, \end{aligned}$$ then Breaker has a winning strategy for the $(p,q)$-crossing game on $S_{m \times n}$. In particular, suppose $m=\rho n$ for some $\rho>0$, and that we knew that, as $n\rightarrow \infty$, the number of crossing paths of $S_{m \times n}$ of length $\ell$ grew no faster than $(\lambda_{\rho}+o(1))^{\ell}$, for some $\rho$-dependent constant $\lambda_{\rho}$ (this would be a “crossing path” analogue of the connective constant familiar from the study of self-avoiding walks). Then (\[inequality: biased Erdos–Selfridge criterion\]) would imply that Breaker has a winning strategy whenever $\lambda_{\rho} < (1+q)^{\frac{1}{p}}$. If for some $\rho<3$ the value of $\lambda_{\rho}$ were found to be sufficiently small so that $\lambda_{\rho} < 2$, this would show that Breaker wins the $(2,3)$-crossing game on $S_{m \times n}$ for all $n$ sufficiently large, giving a non-trivial improvement on what we know about that game. Of especial interest would be the case $\rho=1$ — one would guess that Breaker’s extra power in the $(2,3)$-crossing game would allow him to win on $S_{n \times n}$, say, but we have currently no proof of even this weakening of Conjecture \[conjecture: extra power means an epsilon longer board\](ii).
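For very small boards the sum in this criterion can be evaluated by brute force. The sketch below (ours, not from the literature) enumerates self-avoiding left-to-right crossing paths on a plain vertex grid, an illustrative stand-in since the exact edge structure of $S_{m \times n}$ is not fixed here, and tests the inequality:

```python
def crossing_paths(rows, cols):
    """Yield the edge-length of every self-avoiding path from the left
    column to the right column of a plain rows x cols vertex grid,
    treating first arrival at the right side as a completed crossing."""
    def dfs(r, c, visited, length):
        if c == cols - 1:
            yield length
            return
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                visited.add((nr, nc))
                yield from dfs(nr, nc, visited, length + 1)
                visited.remove((nr, nc))
    for r in range(rows):
        yield from dfs(r, 0, {(r, 0)}, 0)

def breaker_criterion_holds(rows, cols, p, q):
    """Check the sufficient condition: the sum over crossing paths of
    (1+q)^(-l(pi)/p) lies below 1/(1+q)."""
    total = sum((1 + q) ** (-length / p) for length in crossing_paths(rows, cols))
    return total < 1 / (1 + q)
```

Already on a $2 \times 2$ grid with $p=q=1$ the sum exceeds the threshold, so the criterion gives nothing there; the interest is in how the sum behaves as the aspect ratio $\rho$ and the path counts grow.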
Finally, variants of our games on other lattices or where the players claim vertices rather than edges, as discussed in Section \[section: other graphs and other games\], are both interesting and almost completely open.
[^1]: Ume[å]{} Universitet, 901 87 Ume[å]{}, Sweden. Emails: [email protected] and [email protected]. Research supported by Swedish Research Council grant 2016-03488.
[^2]: The quantity $ak + (a+2)$ comes from considering how $\mathcal{H}$’s strategy for the secure-game, as described in the proof of Lemma \[Lemma2\], might adapt to other lattices.
---
author:
- |
Brandon Laughlin[^1]\
Ontario Tech University
- |
Christopher Collins[^2]\
Ontario Tech University
- |
Karthik Sankaranarayanan[^3]\
Ontario Tech University
- |
Khalil El-Khatib[^4]\
Ontario Tech University
bibliography:
- 'main.bib'
title: A Visual Analytics Framework for Adversarial Text Generation
---
In recent years with advances in techniques including deep neural networks and transfer learning, performance on many natural language processing (NLP) tasks has been rapidly improving [@nlpprogress2019]. The methods used in these state-of-the-art results, however, assume that the classifier input has not been manipulated [@liu2018survey]. With adversarial machine learning there is the potential for situations in which input data can be purposely generated by an adversary that wishes to manipulate the results of a classifier. By using data from outside the trained distribution, malicious users can exploit the system by changing the assigned output class without significantly changing the content.
Beginning with adversarial examples for computer vision [@szegedy2013intriguing], the increasing popularity of deep learning has brought more attention to the susceptibility of deep learning models to these attacks. While adversarial environments have been studied for some time [@Biggio2018], it is only recently with these increasingly complex classifiers that the issue has become more prevalent. While the majority of works have centered around computer vision [@akhtar2018threat], recently more works have been considering NLP [@Zhang2019]. Generating adversaries for text brings additional challenges not found in the continuous data space of computer vision. For example, the common evasion technique for computer vision is to slightly change each feature (pixel) by a small amount. Each pixel can have its colours shifted very slightly without being noticed by a human [@akhtar2018threat]. This is more challenging for NLP as the data features are discrete. One cannot simply alter a word slightly without being noticed. Either a misspelled word or an entirely new word must be swapped with the original. This means that instead of moving all words slightly one needs to choose specific words that will be modified. While words can be converted to vectors in a continuous space, a slight shift in the vector space is unlikely to land on another word [@Zhang2019].
Due to the large vocabulary space of languages, the same idea can be expressed in many ways, providing the flexibility to construct alternate phrasings. With this ability the goal can be to change a text so that the target class is changed but a human would still obtain the same meaning as the original document. When developing NLP applications, replacing a word with a word that has a very different meaning should alter the classification. However with adversarial examples, score changes can occur even when swapping semantically equivalent words [@Alzantot2018]. While this can be done entirely through an automated attack algorithm, we advocate semi-automation where a human remains in the loop. This is important for the generation of adversarial texts as semantics are largely context dependent and therefore in need of manual user review.
Trying to troubleshoot the reasons for a lack of model robustness is complicated because of poor model interpretability. Deep learning classifiers essentially act as a black box with seemingly random reasoning as to the decision of the model [@adadi2018peeking]. Due to this it can be hard to determine how a change to the input will influence the output. Recently there has been a lot of attention on creating methods of better explaining complex machine learning systems [@mittelstadt2018explaining]. It has become a popular research topic with initiatives such as Explainable Artificial Intelligence (XAI) [@DavidGunning2017] that has the objective of exploring methods for increasing transparency in AI systems.
We propose a framework that combines an attack algorithm and a visual analytics dashboard for a human centered approach to generating adversarial texts. Starting with an automated evolutionary attack process, the system builds an adversarial example over a set of generations. After the algorithm has completed, the user can use an interactive dashboard to make adjustments to the resulting text to correct the semantics. The end objective is a more automated and efficient way to craft adversarial examples while allowing the user to adjust poor changes by the attack algorithm. Since the system uses an approach that is both black box and model agnostic, it is flexible and can be transferred to other classification tasks. The provided example in this paper is classifying document sentiment.
From the perspective of an attacker, adversarial examples can be used for tasks such as spamming, phishing attacks, online harassment and the spread of misleading information. This framework could be used as a way to combat such malicious activities through several uses.
Background and Related Works
============================
Our framework involves research into the robustness and interpretability of machine learning models and the ways humans can be involved to improve results. In this section we start with a review of related works on adversarial machine learning, followed by research on how visual analytics has been used to help address these issues.
Adversarial Machine Learning
----------------------------
There are two main forms of adversarial attacks: white box and black box. White box attacks involve having access to information about a classifier including the training data, the features, the scoring function, the model weights and the hyper parameters [@Biggio2018]. Black box attacks do not require any of this information; this is the approach our framework supports. The only feedback required is the classifier score. Another distinction in attack types is the source from which an adversary can inject samples. The two options available are poisoning and evasion [@Biggio2018]. Poisoning involves the ability to place adversarial examples in the training data, which then becomes part of the data distribution the model is trained on. Evasion attacks only have access to a model after training has been completed. Our work deals with evasion as our task is to cause mistakes in classifiers that have already been trained. In instances where the classifier is updated over time on future iterations of data, some evasion samples may become part of future training, in part achieving a poisoning attack.
The underlying issue that enables adversarial examples to pose a threat to machine learning classifiers is a lack of robustness [@liu2018survey]. We define robustness as the degree to which a change in the input of the classifier results in a change to the output score. A robust model is more stable and therefore more predictable in the scores it generates. A less robust model can have a drastically different score for seemingly very similar inputs. It has been found that NLP classifiers often provide unrealistic confidence estimates that result in seemingly random inputs having high probability scores [@Feng2018]. This was tested by using input reduction that involves iteratively removing a word from a document until the classification score changes greatly [@Feng2018]. The authors found that all of the relevant words can be removed while leaving seemingly random words to a human observer. The solutions end up as several nonsensical words unrelated to the original document. This demonstrates the risks involved with model over-fitting.
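The input-reduction procedure of [@Feng2018] can be sketched as follows; the scoring function here is a stand-in for any black box classifier, and the sign of the score is used as a proxy for the assigned class:

```python
def input_reduction(words, score_fn, min_words=1):
    """Iteratively remove the least important word (the one whose removal
    changes the score the least), stopping before the predicted class,
    approximated here by the sign of the score, would flip."""
    words = list(words)
    sign = score_fn(words) > 0
    while len(words) > min_words:
        current = score_fn(words)
        # score every leave-one-out variant of the current document
        deltas = [abs(current - score_fn(words[:i] + words[i + 1:]))
                  for i in range(len(words))]
        i = min(range(len(words)), key=deltas.__getitem__)
        candidate = words[:i] + words[i + 1:]
        if (score_fn(candidate) > 0) != sign:
            break  # removing this word would flip the class: stop
        words = candidate
    return words
```

With a toy lexicon scorer, a sentence like "the movie was good" reduces to the few words actually carrying the score, mirroring the nonsensical-residue effect described above.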
{width="\textwidth"}
\[fig:framework\_overview\]
The most common adversarial attack method in NLP is the white box scenario, as this attack type is more easily able to manipulate examples [@Zhang2019]. Black box attacks are more challenging but have still found success. Attack types can be divided into two main groups: character based and word based. Character level attacks perform perturbations on individual characters within a word. Basic examples include attacks that use insertion, deletion, swapping, and substitution [@Li2019]. More advanced versions can use rare characters, such as characters from other languages that are visually similar to the original character [@Eger2019]. Word level attacks involve modifications at the level of entire words, most often replacing a word with a semantically similar one. One example is replacement using neural machine translation [@Ribeiro2018]. The examples demonstrated in our work are based on a technique that swaps words using a genetic algorithm [@Alzantot2018].
Visual Analytics
----------------
There have been many works that involve the use of interactive visualizations for the explanation of machine learning models [@liu2017towards]. However, visual analytics work on adversarial attacks is limited. One related work pertaining to visualization of adversarial attacks is Adversarial Playground [@norton2017adversarial], which specifically works with white box attacks for computer vision. Our work evaluates the use of adversarial examples in the discrete domain of NLP using black box and model agnostic approaches. To our knowledge there does not yet exist any visual analytics research exploring the generation of adversaries for NLP classification.
Visual analytics systems have been designed to help users examine machine learning classifiers and improve classifier performance through methods such as explanatory debugging [@kulesza2015principles]. Providing explanations for classifier errors has been found to improve trust in the predictions made by the model [@dzindolet2003role]. Our framework employs similar techniques by explaining behaviour with scores built from embedding distances and language models. In our task we have the specific debugging objective of making semantic corrections to the output by improving word swaps made by the attack algorithm.
Similar to our framework, other works have used visualizations to explain the behavior of classifiers. To explain model decisions, feature heat maps have been built that colour encode the words based on their importance to the classifier score [@Feng2018]. Other works have visualized partial dependence plots [@krause2016interacting] and provided instance-level explanations [@tamagnini2017interpreting]. Interpretability for a model can be defined as a global or local process [@Molnar2018]. A global explanation would provide details of how the system works as a whole to generate all of the results. Local scope explanations provide context to specific subsets or individual instances of the dataset, such as LIME [@ribeiro2016should]. Our framework provides local explanations for individual word selections. This means that the impact of word replacements are calculated at that specific location with the surrounding context taken into consideration with language models.
To help users more easily find word replacements, we provide scatterplots to explore the word embedding space. Related works that have explored embedding spaces include a tool for concept building [@park2017conceptvector] as well as comparing word relationships and embedding spaces [@heimerl2018interactive].
Framework Description
=====================
The framework is designed as a combination of a client facing web dashboard and an attack server on the backend. The server contains the attack algorithms, word embeddings and language model. The user interacts with the server through the web dashboard which translates the interactions into commands for the server. The server then attacks the target classifier using an attack configuration chosen by the user. An overview of the framework can be seen in Figure \[fig:framework\_overview\]. This architecture design was chosen to easily facilitate communication between the client and the attack process. The analytics can be done on a powerful server that can deliver the results to a web browser on any device. The web browser enables an interactive visualization dashboard to present the results of the service to the user.
In order to support uses in a wide variety of environments and use cases, flexibility was a central design goal. To support a flexible approach, the framework is model-agnostic and supports any black box attack. All parts of the architecture are delivered as a black box where the user does not need to know the underlying details of any component used as they are abstracted into the service. There are many benefits to model-agnostic systems including model flexibility and representation flexibility [@Ribeiro2016]. Model flexibility means that the system must be able to work with any machine learning algorithm and this ensures that our attack algorithm will work against any type of classifier. As an example, whether the target classifier being attacked is a rule-based system, a neural network or any other classifier, the attack will work in the same way. The only requirement from the classifier being attacked is that the assigned class needs to come with a numerical score. Representation flexibility means that the system supports many explanation methods. Having different explanations can help the user adjust to different objectives and domains. Our framework supports such flexibility by allowing the user to easily switch in different word embeddings and language models.
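The entire contract the framework assumes from a target classifier can be captured in a few lines; the lexicon classifier below is only a toy stand-in for whatever model sits behind the interface, and all names are ours:

```python
from typing import Callable, Sequence

# The whole contract: a tokenized document in, one real-valued class score out.
ScoreFn = Callable[[Sequence[str]], float]

def make_lexicon_classifier(pos: set, neg: set) -> ScoreFn:
    """A toy stand-in target model; a rule-based system, a neural network
    or any other classifier could sit behind the same interface."""
    def score(words: Sequence[str]) -> float:
        return sum((w in pos) - (w in neg) for w in words) / max(len(words), 1)
    return score

# Both the attack algorithm and the dashboard only ever call `clf(words)`.
clf = make_lexicon_classifier({"good", "great"}, {"bad", "awful"})
```

Swapping the lexicon classifier for any other scoring callable requires no change elsewhere, which is exactly the model flexibility described above.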
Attack Selection {#Attack Selection}
----------------
When generating adversarial texts there are many factors to consider that impact the quality of the resulting examples. These constraints can be described as the attacker’s action space and describe the set of constraints the adversarial examples must meet [@Gilmer2018]. They are some of the considerations needed when defining how an attack should operate. The following is a list of what we consider to be some of the most important factors to consider when defining an attack:
- **Content-preserving:** The text must preserve the content of the message. For example, if the text is about a particular named entity, it must still be about that entity.
- **Semantic-preserving:** The attacker may make any perturbation to the example they want, as long as the meaning is preserved.
- **Syntax-preserving:** The grammatical elements of the text should be the same, the structure of the writing should remain unchanged.
- **Suspicion:** To what extent the text appears to be purposely manipulated. An example would be replacing characters with alternative symbols.
- **Legibility:** The text is in a form that can be read by humans. For example, visually perturbing text such as through *captcha* techniques would degrade legibility.
- **Readability:** The text can still be easily understood by a human. For example, replacing text with words beyond the average person’s lexicon would degrade readability.
The extent to which an attack matches the above criteria is often a subjective question and therefore is likely to be placed somewhere on a spectrum for each of these aspects. For example, spelling errors or poor grammar might increase suspicion but how much is uncertain as these could be considered legitimate mistakes. This could also possibly impair syntax, semantics or readability. Depending on the importance of the various constraints, different attack strategies need to be implemented. The framework is designed to allow the user to use many attacks types so that these constraints can be considered.
With all of these different constraints there might be multiple attack options to decide between when choosing an attack strategy. An attack-agnostic framework makes comparing and switching between options easier. It may be difficult to compare the effectiveness of two attacks, such as a character-based versus a word-based attack. With our attack-agnostic system, both options can be fed into the system and given the same type of assessment. With the same representation used, a direct comparison becomes easier. Additionally, the flexibility afforded by an attack-agnostic system offers the ability to switch between attacks more easily. During the same attack a user could switch attack strategies. The demonstration in this paper assumes that the resulting text must not be suspicious, so we use a word swapping approach. Semantics, syntax and content might still be impaired; this is why the user is involved in making appropriate adjustments with the dashboard.
The attacks are implemented as a genetic algorithm which emulates the process of natural selection. This is done by iteratively building stronger generations of adversarial texts over time. The solutions evolve through crossover from parent reproduction and mutations of the text. The purpose of the mutations is to add diversity to the documents to more effectively explore the search space. Reproduction is used to increase the propagation of favourable outcomes and reduce the frequency of poor performing documents. The likelihood of each parent reproducing is determined by a fitness score. The fitness score is based on the output score generated from the target classifier we are attacking. The fitness score improves as the output score gets closer to the target class. The specific conditions for mutation and reproduction vary by attack strategy. The word swap approach demonstrated in this work is detailed in the use case section.
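A minimal sketch of this evolutionary loop might look as follows; the selection, crossover and mutation details and all parameter values are illustrative rather than the exact ones used in [@Alzantot2018]:

```python
import random

def evolve(doc, score_fn, neighbours, generations=20, pop_size=8,
           mutate_prob=0.25, rng=None):
    """Evolve word-swapped variants of `doc` toward a higher fitness,
    where fitness is the black box classifier score itself."""
    rng = rng or random.Random(0)

    def mutate(words):
        words = list(words)
        for i, w in enumerate(words):
            opts = neighbours.get(w, [])
            if opts and rng.random() < mutate_prob:
                words[i] = rng.choice(opts)  # swap in a nearest neighbour
        return words

    def crossover(a, b):
        if len(a) < 2:
            return list(a)
        cut = rng.randrange(1, len(a))       # single-point crossover
        return a[:cut] + b[cut:]

    pop = [mutate(doc) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [score_fn(ind) for ind in pop]
        elite = pop[max(range(pop_size), key=scores.__getitem__)]
        # fitness-proportional parent selection (shifted so weights are positive)
        lo = min(scores)
        weights = [s - lo + 1e-9 for s in scores]
        children = [mutate(crossover(*rng.choices(pop, weights, k=2)))
                    for _ in range(pop_size - 1)]
        pop = [elite] + children             # keep the elite each generation
    return max(pop, key=score_fn)
```

Keeping the elite guarantees the best score never regresses between generations, which is also what lets the dashboard report monotone progress.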
When an attack has been selected the user then chooses the evolutionary parameters including the number of generations, the population size and the word swap settings for how many nearest neighbours to return and a cutoff threshold based on the distance in the embedding space. With the attack chosen and parameters set the last step is to define the completion conditions. The following is a list of completion conditions that can be set:
- **Classifier Score:** In situations where the text needs to reach a specific classification score such as passing through a filtering system. When the score passes this threshold such as negative to positive (above zero) the process will stop.
- **Word Mover’s Distance [@Kusner2015]:** Can be used as a way to roughly approximate the overall extent of change between the current adversary and the original document.
- **Duration:** Once the attack has continued past a specified amount of time, end the attack after the next generation completes.
- **Performance Acceleration:** Once the rate of improvement in classifier score between generations drops past a specified threshold the attack ends.
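One way to wire these completion conditions into the attack loop is a small predicate checked after each generation; the thresholds and names below are illustrative, and the Word Mover's Distance condition is omitted for brevity:

```python
import time

def make_stop_check(score_target=None, max_seconds=None, min_improvement=None):
    """Build a predicate the attack loop calls with the elite's score after
    each generation; any combination of conditions can be armed."""
    start = time.monotonic()
    last = [None]                       # previous generation's elite score
    def should_stop(score):
        if score_target is not None and score >= score_target:
            return True                 # e.g. crossed from negative to positive
        if max_seconds is not None and time.monotonic() - start >= max_seconds:
            return True
        if (min_improvement is not None and last[0] is not None
                and score - last[0] < min_improvement):
            return True                 # improvement between generations stalled
        last[0] = score
        return False
    return should_stop
```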
{width="0.75\columnwidth"}
\[fig:Interaction\_Methods\]
Interaction Methods
-------------------
As the framework is built for black box evasion attacks, the attack process involves repeatedly sending slightly perturbed inputs until the target class is reached. For this reason an automated approach is virtually essential as it would be extremely time consuming for a human to repeatedly craft new inputs. Additionally, when human test subjects are given the task of creating adversarial texts they have difficulty coming up with examples. When automated approaches were tested they were found to be much better at this task [@Ribeiro2018].
However, while the algorithms are good at generating candidate solutions, they are unable to always make the best decisions. While humans cannot build examples easily, they have a skillset complementary to the machine: easily selecting the best text among several options [@Ribeiro2018]. Therefore, while some form of automation is needed, it is important to have the user involved in the process. The combination of human and machine can outperform the attack algorithm alone. Human intervention is also needed because of the complexities of human language. At least some user feedback is needed to guide the algorithms as context plays a large role in text analysis. Since word similarity is very dependent on the context of surrounding words we still need to rely on a human for final review. Adjustments are needed in situations where the classifier has chosen words that change the semantics of the text.
The assumption when working with word embeddings is that the nearest neighbours of the words will be the ones that are the most similar semantically. However this does not always ensure that true synonyms will be the nearest neighbours. For example, antonyms and hypernyms can be found close in the embedding space as they are used in similar contexts as the original word. In this work we prevent stop words from being swapped and filter the results through WordNet [@miller1995wordnet] to check for antonyms close in the embedding space. These additions, however, do not guarantee nearest neighbours are truly similar. Even when words are proper synonyms, other challenges such as words with multiple meanings complicate the simple word swapping approach. It is for this reason that the attack has been integrated with a human-centered visual analytics dashboard to allow the user to make changes as needed. The automation of the attack algorithm needs to be combined with the subjective insights of a human user.
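The filtering step can be sketched as below; here the antonym lookup is a plain table standing in for a WordNet query, and the stop-word list is abbreviated:

```python
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "was"}  # abbreviated

def replacement_candidates(word, nearest, antonyms_of):
    """Filter embedding nearest neighbours: stop words are never swapped,
    and neighbours flagged as antonyms (via WordNet in the paper, a plain
    lookup here) are dropped despite being close in the embedding space."""
    if word in STOP_WORDS:
        return []
    banned = set(antonyms_of(word))
    return [(w, dist) for w, dist in nearest if w not in banned]
```

As the text notes, this filtering still cannot catch hypernyms or context-dependent sense mismatches, which is why the human review stage remains necessary.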
For these reasons the framework supports three methods of interaction: using the evolutionary attack algorithm (automated), with nearest neighbour exploration (guided) and manually through the text form view (direct). These options can be seen in Figure \[fig:Interaction\_Methods\]. The suggested interaction order is to first run an automated evolutionary attack followed by guided scatterplot suggestions. If there are still issues with the text then the user can directly edit the text. The user is free to use any combination of these methods and in any order. The manual and guided interactions offer direct manipulation of the system without having to launch an entire attack. This can be useful to test a quick hypothesis or troubleshoot the system. The user can also begin another automated stage using the evolutionary attack using the current edits as a new starting position. To prevent the algorithm from simply switching back the words the user has changed, any words edited by the user are automatically locked. This lock prevents the algorithm from making any further changes to these words.
Dashboard Description
=====================
The combination of an automated algorithm working together with a human analyst provides a good opportunity to use visual analytics to more easily integrate the two parts. As seen in Figure \[fig:teaser\] the dashboard is organized into seven parts. All of these components are connected together in a single linked visualization dashboard. Across the top are the attack configuration settings (A). Below this the line chart (B) tracks the score of the classifier or any other completion condition. Below this an interactive table logs progress and enables the loading of previous snapshots (C). The center displays the adversarial (D) and original (E) documents. On the right is the scatterplot view (F) for selecting word replacements. Manual word replacements can be done with the input field (G). When an attack has been started, the algorithm iterates through all of the generations and provides an update after each generation is complete. The best example from each generation is known as the *elite* and is used to represent the progress of the attack. The server updates the dashboard with this elite. The document view, the line chart and the event log are updated for each generation in real time as the attack progresses.
{width="\columnwidth"}
{width="\columnwidth"}
{width="\columnwidth"}
{width="\columnwidth"}
\[fig:document\_view\]
Document View
-------------
The document view shows the current state of the adversarial example (Figure \[fig:teaser\]D) as well as the original text (Figure \[fig:teaser\]E). While the attack algorithm is running, this view is updated to display the best example (elite) from each generation. Once the attack algorithm has been completed, the final adversary is presented to the user. The words within the adversarial document can be visually encoded according to several objectives: classifier score influence, word selection probability or semantic quality.
The words that have been changed between the original and adversary are coloured blue in the original document for quick identification. When the user hovers over each word, both words in that same position are highlighted. This allows the user to easily orient themselves with the same word position in both texts at once.
The score encoding shows the impact the words have on the classifier score. The score is calculated for every word by replacing each word with the word’s nearest neighbour in the embedding space. The document is scored before and after the word has been swapped. The final score is the difference between the original word’s score and the swapped word’s score. For example, if the current word is ‘bad’, the nearest neighbour ‘terrible’ is put in that position instead. Two instances of the document are then scored, once with ‘bad’ and once with ‘terrible’. The difference in scores is kept and represented as the opacity of the original word. This comparison provides a rough approximation of the importance of each word and lets the user easily spot good candidates for score improvements. If the swap improved the score the word is given a higher opacity, and a reduced score yields a lower opacity. As seen in Figure \[fig:document\_view\]A, words such as ‘Clearly’ and ‘certainly’ would be good candidates whereas words such as ‘acting’ and ‘story’ would not.
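The score encoding computation reduces to a leave-one-swap loop; the classifier and the nearest-neighbour lookup below are stand-ins:

```python
def word_importances(words, score_fn, nearest_neighbour):
    """For each position, swap in the word's nearest embedding neighbour,
    re-score the document, and keep the score difference; this difference
    drives the opacity used by the score encoding."""
    base = score_fn(words)
    importances = []
    for i, w in enumerate(words):
        nn = nearest_neighbour(w)
        if nn is None:                      # no neighbour within the threshold
            importances.append(0.0)
            continue
        swapped = words[:i] + [nn] + words[i + 1:]
        importances.append(score_fn(swapped) - base)
    return importances
```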
The word selection encoding represents the probability of each word being chosen by the attack algorithm. This is based on a count of how many nearest neighbours each word has. This is calculated as the number of words nearby in the embedding space within the threshold specified by the user. The number of nearby words is converted to a probability based on the relative word counts of the other words in the document. Words that share a similar meaning to many other words are likely to have a much higher count than more unusual words. This view can enable a user to quickly see which words are more likely to have suitable replacement suggestions available by the system without having to load the scatterplot for each word. The background colour of the text is used to represent the probability using the Viridis blue-yellow color scale. Words with a higher probability are given a more yellow (brighter) background colour. As seen in Figure \[fig:document\_view\]B, ‘very’ has many options, ‘recommend’ has some options and ‘acting’ has very few.
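The selection-probability encoding is a simple normalization of per-word neighbour counts; the counting itself (neighbours within the distance threshold) is assumed done upstream:

```python
def selection_probabilities(words, neighbour_counts):
    """Turn each word's count of nearby embedding neighbours (within the
    user's distance threshold) into a probability relative to the document;
    these probabilities drive the background colour scale."""
    counts = [neighbour_counts.get(w, 0) for w in words]
    total = sum(counts)
    if total == 0:
        return [0.0] * len(words)
    return [c / total for c in counts]
```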
The last document view option is the semantic perspective that visually encodes the words according to their probability score from a language model. This view can be used when the user wants to improve the semantics of the text. Each word is processed by the language model with its surrounding context to determine a probability score that reflects how appropriate each word is in that spot. This view can help a user identify words that are not appropriate for the sentence and that need to be changed. The brightness of the text colour is used to represent the semantic score. Here, lower semantic score is more blue, which through luminance contrast with the background drives attention to words that are better candidates for editing. As seen in Figure \[fig:document\_view\]C, the majority of the words have an average score and the word ‘abysmal’ is one of the least appropriate.
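To make the semantic encoding concrete, the sketch below uses a tiny add-one-smoothed bigram model as a stand-in for the large language model the dashboard relies on; the corpus and names are illustrative:

```python
from collections import Counter

def train_bigram_lm(corpus_sentences):
    """A tiny add-one-smoothed bigram model; only a stand-in for the large
    language model used in the dashboard."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus_sentences:
        toks = ["<s>"] + list(sent)
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    vocab = len(unigrams)
    def prob(prev, word):
        # add-one smoothing keeps unseen pairs at a small nonzero probability
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return prob

def semantic_scores(words, prob):
    """Score each word by how well it fits its left context; low scores
    mark words that are poor fits and good candidates for editing."""
    context = ["<s>"] + list(words)
    return [prob(context[i], w) for i, w in enumerate(words)]
```

A production system would condition on both sides of each word, but the shape of the output, one fit score per position, is the same.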
The user can choose to represent one or any combination of these encodings at any time. Once the word encodings have been calculated the user can begin to select individual words to swap. The user can activate any word from the text by clicking on it. This word now appears in the top right corner of the dashboard and the word replacement view is activated with this word which is described further in the next subsection. To help the user easily identify the selected word within the text, the selected word is given a dark background within the document text. Whenever the user swaps a word the document view is updated. Each word in the text is again scored by the classifier and then the encodings are updated.
{width="\textwidth"}
\[fig:scatterplots\]
Word Replacements {#Word Replacements}
-----------------
The word replacement section is the right side of the dashboard and is where the user can choose word replacements with either the scatterplot suggestions (Figure \[fig:teaser\]F) or manually with the text field (Figure \[fig:teaser\]G). The scatterplot view is used as a guided interaction to help users more easily identify suitable word replacements. The purpose of the scatterplot is to see what would happen if any of the nearest neighbour candidates was chosen to replace the current word in the adversarial text instead. This enables the user to quickly find any appropriate replacements for the current word selected.
When a user selects a word the attack server retrieves all of the nearest neighbours of that word within the defined distance threshold by using the word embedding space. For each of the nearest neighbours three scores are computed: a classifier score from the model we are attacking, a probability from our language model and a similarity score. The classifier score is calculated as the difference in score between this word and the original word (the word that was clicked on). These score encodings function the same way as classifier scores in the document view. That is, each word is compared to the current word in the text by replacing it in the document and running it through the classifier. The embedding space similarity score for each candidate word is computed based on the embedding distance to the current word in the text. The similarity scores range from 0 to 1 and our implementation is based on the Euclidean distance in the Google News corpus word2vec embedding [@mikolov2013efficient]. Larger numbers indicate more similarity between this word and the current word. The language model probability scores are compared between all the replacement candidates. For the language model we use the Google 1 billion words model [@chelba2013one]. Words that fit most appropriately in the surrounding context will have larger scores than those that do not. The scores are normalized between the range of 0 and 1.
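A hedged sketch of this candidate scoring is given below; the $1/(1+d)$ distance-to-similarity mapping and the min-max normalization of the language model scores are assumptions, since the text only specifies that both quantities land in the 0 to 1 range:

```python
import numpy as np

def candidate_scores(current_vec, candidate_vecs, lm_probs):
    """Score replacement candidates on the two scatterplot axes.

    current_vec:    embedding of the word currently in the text
    candidate_vecs: {word: embedding} for the nearest-neighbour candidates
    lm_probs:       {word: raw language-model probability}
    """
    # Assumed mapping: similarity shrinks smoothly as distance grows.
    sims = {w: 1.0 / (1.0 + np.linalg.norm(current_vec - v))
            for w, v in candidate_vecs.items()}
    # Assumed normalization of language-model scores across candidates.
    lo, hi = min(lm_probs.values()), max(lm_probs.values())
    span = (hi - lo) or 1.0
    lms = {w: (p - lo) / span for w, p in lm_probs.items()}
    return sims, lms

sims, lms = candidate_scores(
    np.zeros(2),
    {"terrible": np.zeros(2), "horrible": np.array([3.0, 4.0])},
    {"terrible": 0.2, "horrible": 0.8})
```

Under this mapping an identical embedding gives similarity 1.0, and the best-fitting candidate under the language model is normalized to 1.0 on the other axis.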
Once all of the words have been retrieved and their scores computed, they are placed on the scatterplot. The x-axis plots the similarity score and the y-axis plots the language model probabilities. The axis for both of the scatterplot features starts at zero and increases towards one. This means words near the origin are the least desirable for both features. Moving upwards improves the semantic score, and moving towards the right improves word similarity. The brightness of the word colour encodes the classifier score, using the d3 plasma blue-to-yellow colour scale. With all three features considered, a user would ideally find bright words in the top right corner indicating similar words that fit the context and that also boost classifier performance. When a suitable word is found the user can click on that word to use it as a replacement. The newly selected word now takes the place of the old one and the document view updates.
Examples of scatterplots can be seen in Figure \[fig:scatterplots\]. For ‘awful’ (\[fig:scatterplots\]-left) both the words ‘terrible’ and ‘horrible’ would be decent replacement options. They do however, reduce the classifier score which may render the options unusable if the classifier score is near the decision boundary. The options for ‘disappointment’ (\[fig:scatterplots\]-right) are more disappointing as there are no clear winning candidates within the top right quadrant.
The other human intervention method is to edit the underlying text directly using the text form. The user may want to make manual edits if the word they want to use as a replacement is not suggested in the scatterplot. Even if a word is in the scatterplot, the exact version may not be appropriate and they may want to make some small adjustments, such as editing the prefix or suffix of a word to better match the surrounding text. For instance, the user may wish to make adjustments for issues such as proper word tense or switching between singular and plural versions of a word. In these situations the user types the desired word replacement into the text box and clicks the swap word button. This achieves the same end as clicking a word in the scatterplot.
Event Log
---------
The event log (Figure \[fig:teaser\]C) is an interactive data table that records every action made by both the user and the attack algorithm. For the algorithm, an update is sent after every evolutionary generation has completed. For the user, any word replacements either by manual text edits or word swaps with the scatterplot view are added to the table. For each action the following are recorded and stored in the table: a timestamp, an event description, the total swap count, the word mover’s distance (WMD) [@Kusner2015] relative to the original document, and the score from the classifier. Each table column can be sorted by clicking on the column header. The event log enables the user to review the impact on the document by sorting over time, interaction type or changes on the text (swaps, WMD, score).
When using non-linear classifiers, the user may wish to step through several interactions in a sequence to see if subsequent choices impact past decisions. If a user wishes to revert any changes done they can do so through the data table log. By clicking on any entry in the table they can return to this snapshot. This allows users to easily revert back to previous decisions, allowing for non-permanent interactions. This can more easily facilitate what if analysis by the user where they may wish to explore different options.
Use Cases
=========
In this section we demonstrate an implementation of the attack algorithm and the process involved in adjusting an adversarial text using the dashboard. The end objective of the attack is to take a document, which in this case is a negative movie review, and convert it to a positive review without changing the semantics of the text. Since the review was originally negative, a human reading the review should still believe the review is negative even after it becomes classified positive by the machine learning model. The attack algorithm implemented in this work is based on an existing word swapping attack [@Alzantot2018]. For our evolutionary attack algorithm the mutations occur as word swaps for semantically similar words based on a word2vec embedding from the Google News corpus [@mikolov2013efficient]. Nearest neighbour lists are built for each word in the document with a cut off over a specified distance in the embedding space. The more neighbours a word has under the specified threshold, the more likely it will be chosen as the mutation. Reproduction is implemented as crossovers involving two parents, with a child randomly receiving half of the words from each parent.
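A simplified sketch of one generation of this attack is given below (a hypothetical reading — fitness evaluation and population management are omitted):

```python
import random

def crossover(parent_a, parent_b):
    """Child randomly inherits each word position from one of two parents,
    so on average it receives half of its words from each."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(doc, neighbours):
    """Swap one word for a nearest-neighbour synonym. Positions whose word
    has more neighbours under the distance threshold are proportionally
    more likely to be selected for mutation."""
    weights = [len(neighbours.get(w, [])) for w in doc]
    if sum(weights) == 0:
        return list(doc)
    idx = random.choices(range(len(doc)), weights=weights, k=1)[0]
    child = list(doc)
    child[idx] = random.choice(neighbours[doc[idx]])
    return child

random.seed(0)
doc = ["the", "acting", "was", "awful"]
nbrs = {"awful": ["terrible", "horrible"], "acting": ["performance"]}
mutant = mutate(crossover(doc, doc), nbrs)
```

Here 'awful' has twice as many neighbours as 'acting', so it is twice as likely to be the word that gets swapped; words absent from the neighbour lists are never mutated.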
Adversarial Dataset Building
----------------------------
In this example the user wishes to construct adversarial examples in order to experiment with adversarial training [@goodfellow2014explaining]. With adversarial training the classifier is trained on adversarial examples in order to increase robustness against them. By using the attack algorithm alone the user might be training the model on adversaries that were actually not semantically similar to the original. This would mean training would be done on improper adversarial examples so the results would not be as effective. By making corrections to the texts with poor semantics, the training set quality for the adversarial training can be improved.
{width="\textwidth"}
\[fig:adversary\_selection\]
To start the user selects the classifier score threshold as the completion constraint. For our scenario the objective is to achieve a score of at least 0 (neutral). The user then generates many adversarial examples using different documents as the starting point, thus building a diverse set of adversarial examples. With an adversarial dataset built, the user needs to select which data records to investigate.
With a specific record now chosen the user will begin to examine the text and improve the semantics. The adversarial text chosen had successfully switched classes from negative to positive. The user now wants to confirm that the example is truly similar semantically to the original. To quickly check for poor word substitutes that have been made, the user selects the language model encoding in the document view. As seen in Figure \[fig:document\_view\]C the user sees that the word ‘abysmal’ which replaced ‘awful’ has been identified as a word with a poor language model score. The user also sees another replacement they wish to fix: ‘disappointment’ has been replaced by ‘surprise’. The user feels that there are more appropriate substitutes for these words. As discussed in Section \[Word Replacements\], the user selects both of these words within the document view and the results can be seen in Figure \[fig:scatterplots\]. The user chooses replacement words and repeats the process for each word they wish to correct. When the user does not wish to use any of the suggested replacements they insert their own word manually via the text field. In instances where the user is unsatisfied with the change in scores, they can revert back to the previous snapshot using the event log.
The user continues to search for other words in the adversary to replace until all the poor semantic words have been fixed. The user has noticed that the classifier score has dropped beyond the threshold needed. They could launch another evolutionary attack or make changes themselves. Since the score only needs a slight upgrade they decide to fix it themselves. They search for the best opportunity for score changes by enabling the performance encoding to find words that have replacements that can improve the score. They also add the word selection probability encoding to find words that are likely to have replacements. This can be seen in Figure \[fig:optimizing\_adversary\]. They find the word ‘movie’ has a good opportunity to increase the classifier score (high opacity) and has many suitable replacements (bright colour). They then repeat the process of looking for replacements in the scatterplots. When the adversary has been fixed they can continue to search through other adversaries, returning to the data table in Figure \[fig:adversary\_selection\] and prioritize based on the summary scores.
{width="\columnwidth"}
\[fig:optimizing\_adversary\]
Attack Algorithm Adjustments
----------------------------
The dashboard can also be used as a way to review the attack process of the evolutionary algorithm. If used in this way the dashboard can be used as a troubleshooting tool to help debug errors or better optimize the attack results. To do this the user can change the encoding option in the document view to the word selection probability. This will visualize the influence of each word during the attack to provide the user a better understanding of how the attack chooses which words to perturb. The development of the attack example can be reviewed after each evolution by looking at each generation of the attack using the event log. By stepping through each stage the user can see which words are being replaced at any time in the attack. The user can jump to a snapshot in the event log and bring up the document view.
In this example the user wants to troubleshoot the attack algorithm. Specifically they want to know why the word ‘It’ has a high selection chance. As seen in Figure \[fig:optimizing\_adversary\] line 2, the user observes that the selection probability for the word ‘It’ is very high which they find strange as they thought it was added to the list of stop words to ignore. The stop list is used for words in which there are no conceivable replacements as the word is uncommon or has no synonyms of any form. The user notices that other instances of the word ‘it’ in this document were scored much lower, but then realizes that this one was at the start of the sentence so it was capitalized. This capitalized version of the word was not part of the stop list. To fix this issue the user now adds this specific version of the word to the stop list. Alternatively they could make the stop list case insensitive.
To improve the attack performance the user can look for words that have large discrepancies between the classifier score influence and the word selection probability. That is, the user can look for words that have selection probabilities that do not properly reflect their importance to the classification. As an example in Figure \[fig:optimizing\_adversary\] bright, bold words are important to the score and likely to be changed. Faded dark words are unimportant and unlikely to be chosen. These are optimal conditions for the attack. However the word ‘acting’ (line 3) is not important due to the low opacity but is likely to be modified due to the bright colouring. The user therefore might want to prevent this word from being modified and instead give greater emphasis to words such as ‘boring’ (line 4) that are important (high opacity) but are not likely to be chosen (dark colour). This re-weighting of the evolutionary process can help the attack more quickly converge to better results.
{width="\textwidth"}
\[fig:attack\_results\]
Robustness Testing
------------------
Another use of the framework is to compare the robustness of different classifiers. In this example the user tests three different classifiers by running the attack algorithm for each one and comparing the differences. The attack algorithm runs 10 generations for each record. The user is assessing how feasible it is for our attack to generate adversaries and to what extent each document is changed.
The data we use to demonstrate the attacks is the IMDB sentiment analysis dataset [@Maas]. This dataset has 50,000 movie reviews that are evenly split between negative and positive classes. Each review is scored out of 10 by a human reviewer. The review is negative if the score is less than or equal to 4 and positive if the score is greater than or equal to 7. Neutral reviews are not available in the dataset. We test three different classifiers: VADER, a LSTM and ULMFiT.
VADER [@hutto2014vader] is a sentiment analysis classifier that uses a rules-based approach built with a human-curated lexicon of words. Each word is placed on a spectrum from negative to positive. The LSTM is our implementation of an average performing deep learning classifier. The ULMFiT classifier [@Howard2018] is a transfer learning method which was trained on the IMDB dataset [@Maas]. As a baseline comparison between the models, we run the classifiers through the entire dataset without any adversarial testing. VADER scores 59%, the LSTM scores 84% and ULMFiT scores 95.4% (the highest accuracy of all existing works published on the dataset [@nlpprogress2019]).
As seen in Figure \[fig:attack\_results\], for the results of the attack we measure word mover’s distance, word swaps and sentiment improvement. The word mover’s distance is the difference between the final adversary at generation 10 and the original document. The word swap is the percentage of words replaced between the final adversary and the original. The sentiment improvement is the difference in classifier score between the original and final adversary. Scores from all classifiers are normalized in a range from -1.0 (100% negative) to 1.0 (100% positive) with a score of 0 considered a neutral score. As an example, a change from negative sentiment (-0.50) to positive (+0.25) would be a change of 75%. All the scores presented are the averages across all the records attacked. A more robust model will require a larger word mover’s distance and more swapped words to reach the same classifier score as a weaker model. A more robust model would also have a smaller classifier score improvement.
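The swap percentage and sentiment improvement can be computed directly (the word mover's distance additionally needs an embedding and an optimal-transport solver, so it is omitted here); the toy example below reproduces the 75% change quoted above:

```python
def attack_metrics(original, adversary, score_orig, score_adv):
    """Per-record robustness metrics. Classifier scores are assumed to be
    normalized to [-1, 1], so the improvement is the change on that scale
    expressed as a percentage."""
    swaps = sum(a != b for a, b in zip(original, adversary))
    swap_pct = 100.0 * swaps / len(original)
    improvement = 100.0 * (score_adv - score_orig)
    return swap_pct, improvement

swap_pct, improvement = attack_metrics(
    ["the", "movie", "was", "awful"],
    ["the", "movie", "was", "terrible"],
    -0.50, 0.25)
```

One swapped word out of four gives a 25% swap rate, and moving the classifier from -0.50 to +0.25 is the 75% sentiment change used as the example in the text.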
The ULMFiT classifier is the most robust because it had the largest word mover’s distance and highest word swap percentage. VADER had the least robust performance as very little perturbation needs to be done in order to trick the classifier. As little as 5% of the words can be swapped with VADER compared to over 20% for ULMFiT; the word mover’s distance is also more than triple for ULMFiT (0.289) compared to VADER (0.091). These results exemplify the importance of human edits for more robust models. Since a more robust model changes the documents more, there is a greater number of potential edits in need of fixing. These tests also demonstrate that new attack strategies are needed to effectively attack the more complex models. In addition to these robustness tests, another type of model assessment can be to compare different attack strategies against the same classifier in order to choose the best attack for further evaluation.
Discussion and Future Work
==========================
In this work we have presented a visual analytics framework that helps users build adversarial text examples. While we have demonstrated that the framework can craft adversaries, there are many possible extensions and directions for future work. Most importantly the system will undergo a more formal evaluation in which both quantitative and qualitative aspects of the work can be assessed. In this section we discuss some limitations and extensions of our approach as well as evaluation and other potential future work.
Limitations
-----------
The framework assumes we have unlimited access to the NLP classifier we are attacking. This may not always be the case if for example, an online service has a maximum attempt lockout precaution or interprets our repeated queries as a denial of service attack. Mitigation techniques could include rate limiting our requests over time, a distributed attack, or slowly building a surrogate model that emulates the online system. With a surrogate model made the attack can continue indefinitely in an offline setting.
The visual encodings used for the words are done by querying the classifier with each word to measure the influence of swapping each word. When attacking a non-linear model, if any word is changed it can influence the results of any other subsequent changes. Therefore each word must be reevaluated after any modification. This becomes increasingly computationally expensive as we increase the number of words in the text. Some methods to mitigate this issue could include filtering unimportant words, intelligent prefetching, or only encoding words upon user request.
Evaluation
----------
The robustness testing use case was a quantitative way of assessing our proposed framework. However such calculations cannot be made as easily for more subjective matters such as text quality or model interpretability. This means that qualitative assessments in the form of user studies would also have to be done. Methods of quantifying machine learning explanations have been considered in the evaluation of the XAI framework [@Aha2018]. Methods have been suggested for developing a “goodness scale” using a 7-point Likert scale to assess factors such as plausibility, usefulness, clarity, causality, completeness, applicability, contrast and local as well as global observability.
For our future work we plan on conducting user studies through a method such as Amazon Mechanical Turk [@mturk2019]. Subjects would be provided samples of output from the attack algorithm after a set number of generations or the attack algorithm results plus edits made by a human reviewer. The subjects would be asked to rate the semantic quality of the texts. With ratings for both automation alone and human combined with machine we can compare the difference in ratings to assess the impact of human involvement in the generation process.
Framework Extensions
--------------------
An extension to the views could include a filter to remove options such as different parts of speech, word tenses, or proper nouns. This could let a user more quickly find a suitable word replacement. Another area for future work is the use of contextual word embeddings [@peters2018deep] that could provide more appropriate nearest neighbours by considering the local context of the word within the text. An extension to the current evolutionary algorithm can include a user-steerable stage of speculative execution [@El-Assady2019]. This extension would track the quality of the text and will interrupt the process if a quality metric degrades past a certain threshold. At that point the system could present to the user various previews of new generations to allow the user to select the best path forward.
Other potential future work involves defensive measures for adversarial texts. The framework can be extended to test various defense strategies to help strengthen the models against adversarial examples. Most research on adversarial defense has been for computer vision and as discussed previously computer vision techniques do not often transition well to the discrete space of NLP. Some recent works however have evaluated methods for adversaries using sequential data [@rosenberg2019defense]. These techniques were tested for cybersecurity and not NLP but their use of sequential methods could prove promising for NLP defense. As future work our system could test methods such as these and incorporate some auto machine learning techniques to search for optimal parameter settings. These suggestions can be added to the system for directing the user in choosing the best defensive measures against the attacks crafted by the user.
Conclusion
==========
Acknowledgements {#acknowledgements .unnumbered}
================
This research was supported by the Communications Security Establishment, the government of Canada’s national cryptologic agency and the Natural Sciences and Engineering Research Council of Canada (NSERC).
[^1]: e-mail: [email protected]
[^2]: e-mail: [email protected]
[^3]: e-mail: [email protected]
[^4]: e-mail: [email protected]
---
abstract: 'We study the implications of a criterion of naturalness for a simple two Higgs doublet model in the context of the discovery of a Higgs-like particle with a mass at 125 GeV. This condition, which measures the amount of fine-tuning, further limits the parameter space of this particular model and, together with other phenomenological constraints, leads to an allowed range of masses for the other neutral or charged Higgs bosons: H, $a^{\pm}$, $a^0$.'
author:
- 'Renata Jora $^{\it \bf a}$ '
- 'Salah Nasri$^{\it \bf b}$ '
- 'Joseph Schechter $^{\it \bf c}$ '
title: |
\
[Naturalness in a simple two Higgs doublet model]{}
---
Introduction
============
Recent experimental data from the LHC [@LHC1]-[@CMS2] suggest that a Higgs-like particle with a mass of 125-126 GeV has been found. Although this particle is consistent with a standard model Higgs boson, it would be interesting to explore the consequences of this discovery for various extensions of the standard model. Among these, one of the most natural is the two Higgs doublet model. Many authors [@Posch]-[@Geller] have studied the parameter space of this type of model for the three possible scenarios: the 125 GeV Higgs boson is the lightest scalar in the model, the heaviest, or the pseudoscalar $a^0$.
In the present work we analyze a particular case of the two Higgs doublet models introduced in [@Jora]. We will study the quadratic divergences of the scalars involved, in connection with the latest experimental data. More precisely, we suggest that a criterion of naturalness should be applied to this class of models as well.
We start with a two Higgs doublet model discussed in [@Jora] with a tree level effective Higgs potential that satisfies the requirement of $SU(2)_L\times SU(2)_R$ flavor invariance together with parity and charge conjugation invariances. We denote the two Higgs doublets by $\Phi$ and $\Psi$ where, $$\begin{aligned}
\Phi=
\left[
\begin{array}{c}
i\pi^+\\
\frac{\sigma-i\pi_0}{\sqrt{2}}
\end{array}
\right],
\hspace{1cm}
\Psi=
\left[
\begin{array}{c}
-ia^+\\
\frac{\eta+ia_0}{\sqrt{2}}
\end{array}
\right].
\label{doubl54}\end{aligned}$$
One can make three invariants: $$\begin{aligned}
&&I_1=\sigma^2+\pi^2
\nonumber\\
&&I_2=\eta^2+a^2
\nonumber\\
&&I_3=\sigma\eta-\pi a
\label{inv5454}\end{aligned}$$
Then the tree level potential can be written as: $$\begin{aligned}
V=
\frac{\alpha_1}{2}I_1+\frac{\alpha_2}{2}I_2+\frac{\alpha_3}{4}I_1^2+\frac{\alpha_4}{4}I_2^2+\frac{\alpha_5}{4}I_3^2
+\frac{\alpha_6}{4}I_1I_2.
\label{oneloop657}\end{aligned}$$
Since we consider the doublet $\Psi$ to have reversed parity with respect to $\Phi$ the potential does not contain a term linear in $I_3$ due to parity invariance.
For a reasonable range of parameters the potential admits a minimum with $\langle\sigma\rangle\neq 0$ and $\langle\eta\rangle=0$. (See [@Jora] for details. Note that we also slightly change the notation for the $\alpha_i$ to agree with the standard two Higgs doublet model conventions.) The minimum condition reads: $$\begin{aligned}
\alpha_1+\alpha_3v^2=0,
\label{min6767}\end{aligned}$$
whereas the scalar masses are simply: $$\begin{aligned}
&&m_{\sigma}^2=2\alpha_3v^2
\nonumber\\
&&m_{\eta}^2=\alpha_2+\frac{\alpha_5+\alpha_6}{2}v^2
\nonumber\\
&&m_{a_0}^2=m_{a^{\pm}}^2=\alpha_2+\frac{\alpha_6}{2}v^2.
\label{mass4353}\end{aligned}$$
From the Higgs mass and the minimum condition one can determine the two parameters $\alpha_1$ and $\alpha_3$. The masses of the other scalars depend on the three unknown parameters $\alpha_2$, $\alpha_5$ and $\alpha_6$. Assuming that the lightest Higgs coincides with the scalar discovered by Atlas and CMS with a mass $m_h=125-126$ GeV, except for some lower bounds we have little experimental information regarding $\eta$ and the $a$’s.
In the present model one can add two lower limits stemming from the well-known experimental knowledge of the Z width, $$\begin{aligned}
&&m_a>\frac{m_Z}{2}
\nonumber\\
&&m_a+m_{\eta}>m_Z.
\label{lowlimit546}
\end{aligned}$$
since the decays of Z to $a^+ +a^-$ and to $a^0+\eta$ are kinematically prohibited.
Masses and couplings
====================
We adopt the criterion of the cancellation of the quadratic divergences, the analogue of the Veltman condition [@Veltman] for the standard model. Thus we will ask that the masses and couplings are such that the quadratic divergences to all scalar masses in our model cancel. The corresponding set of conditions was derived by Newton and Wu [@Newton] for the most general two Higgs doublet model. Applied to our case this leads to two constraints. These are: $$\begin{aligned}
&&-12m_t^2+3m_Z^2+6m_W^2+3m_h^2+(2\alpha_6+\frac{\alpha_5}{2})v^2=0
\nonumber\\
&&3m_Z^2+6m_W^2+(6\alpha_4+2\alpha_6+\frac{\alpha_5}{2})v^2=0
\label{setofcondfg454}
\end{aligned}$$
From these it is straightforward to determine: $$\begin{aligned}
6\alpha_4 v^2=3m_h^2-12m_t^2.
\label{res5454}\end{aligned}$$
The latest experimental data suggest [@LHC1]-[@CMS2] that the actual mass of the Higgs boson is around 125-126 GeV. Thus the constraint in Eq. (\[res5454\]) would lead to $\alpha_4<0$, which is unacceptable from the point of view of the vacuum structure.
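Numerically, with $m_h\approx 125$ GeV and $m_t\approx 173$ GeV, the combination on the right hand side is manifestly negative: $$3m_h^2-12m_t^2\approx 3(125)^2-12(173)^2\,{\rm GeV}^2\approx -3.1\times 10^{5}\,{\rm GeV}^2,$$ so $\alpha_4<0$ follows for any Higgs mass below $2m_t\approx 346$ GeV.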
A possible and interesting way out is to generalize our simple two Higgs doublet model so as to admit a nonzero vev also for the $\eta$. For that we assume that the model is still parity and charge conjugation invariant but that the vacuum spontaneously breaks the parity invariance. In the situation when $\langle\sigma\rangle=v_1$, $\langle\eta\rangle=v_2$ the minimum equations for the potential become:
$$\begin{aligned}
&&\alpha_1+\alpha_3 v_1^2+(\frac{\alpha_5+\alpha_6}{2})v_2^2=0
\nonumber\\
&&\alpha_2+\alpha_4 v_2^2+(\frac{\alpha_5+\alpha_6}{2})v_1^2=0.
\label{min76868}\end{aligned}$$
If we denote, $$\begin{aligned}
&&\tilde{\sigma}=\sigma-v_1
\nonumber\\
&&\tilde{\eta}=\eta-v_2
\label{den6776}\end{aligned}$$
then the mass eigenstates are obtained through the transformation: $$\begin{aligned}
\left[
\begin{array}{c}
\tilde{\sigma}\\
\tilde{\eta}
\end{array}
\right]
=
\left[
\begin{array}{cc}
\cos{\alpha}&\sin{\alpha}\\
-\sin{\alpha}&\cos{\alpha}
\end{array}
\right]
\left[
\begin{array}{c}
h\\
H
\end{array}
\right],
\label{def543354}\end{aligned}$$
where, $$\begin{aligned}
\tan{2\alpha}=
\frac{(\alpha_5+\alpha_6)v_1v_2}{2(\alpha_3v_1^2-\alpha_4v_2^2)}.
\label{res4343}\end{aligned}$$
We define as usual $\frac{v_2}{v_1}=\tan{\beta}$ where $v_1^2+v_2^2=v^2$. Then the mass spectrum can be easily derived and we deduce, $$\begin{aligned}
&&m_{h}^2+m_{H}^2=2v^2[\alpha_3 \cos^2(\beta)+\alpha_4 \sin^2(\beta)]
\nonumber\\
&&m_{h}^2 m_{H}^2=v^4[4\alpha_3 \alpha_4-(\alpha_5+\alpha_6)^2]\sin^2(\beta)\cos^2(\beta).
\label{eq3232}\end{aligned}$$ and,
$$\begin{aligned}
m^2_{a^0,a^{\pm}}=-\frac{\alpha_{5}}{2}v^2.
\label{somm4353}\end{aligned}$$
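As a quick numerical cross-check of Eq. (\[eq3232\]), one can diagonalize the neutral $(\tilde{\sigma},\tilde{\eta})$ mass matrix at an arbitrary test point. Note that the $2\times 2$ matrix below is an assumption reconstructed from the quoted trace and determinant, and the parameter values are illustrative only, not a fit to data:

```python
import numpy as np

# Arbitrary test values for the couplings, vev and mixing angle beta.
a3, a4, a5, a6 = 0.30, 0.20, -0.10, 0.15
v, beta = 246.0, 0.6
v1, v2 = v * np.cos(beta), v * np.sin(beta)

# Neutral scalar mass matrix implied by the quoted trace and determinant.
M = np.array([[2 * a3 * v1**2,       (a5 + a6) * v1 * v2],
              [(a5 + a6) * v1 * v2,  2 * a4 * v2**2]])
mh2, mH2 = np.linalg.eigvalsh(M)  # eigenvalues in ascending order

# Right hand sides of Eq. (eq3232).
sum_rhs = 2 * v**2 * (a3 * np.cos(beta)**2 + a4 * np.sin(beta)**2)
prod_rhs = (v**4 * (4 * a3 * a4 - (a5 + a6)**2)
            * np.sin(beta)**2 * np.cos(beta)**2)
```

The eigenvalue sum and product agree with the two right hand sides, confirming that the trace and determinant formulas are mutually consistent.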
Since we still preserve parity invariance at the level of the Lagrangian, the fermions couple only to the first Higgs doublet (a type I two Higgs doublet model). For this case the Newton-Wu conditions [@Newton], [@Ma] of cancellation of quadratic divergences read (we keep only the top and bottom quarks): $$\begin{aligned}
&&3m_Z^2+6m_W^2+(6\alpha_3+2\alpha_6+\frac{\alpha_5}{2})v^2=12\frac{m_t^2}{\cos^2(\beta)}+12\frac{m_b^2}{\cos^2(\beta)}
\nonumber\\
&&3m_Z^2+6m_W^2+(6\alpha_4+2\alpha_6+\frac{\alpha_5}{2})v^2=0
\label{newcond6565}\end{aligned}$$
The couplings of the Higgs with the top and bottom quarks in our model are: $$\begin{aligned}
&&(h{\bar t}t)=(h{\bar t}t)_{SM}\frac{\cos(\alpha)}{\cos(\beta)}
\nonumber\\
&&(h{\bar b}b)=(h{\bar b}b)_{SM}\frac{\cos(\alpha)}{\cos(\beta)}
\label{res4443}\end{aligned}$$
whereas the coupling of the Higgs with the gauge bosons W and Z read: $$\begin{aligned}
&&(h W W)=(h W W)_{SM}\cos(\alpha+\beta)
\nonumber\\
&&(h Z Z)=(h Z Z)_{SM}\cos(\alpha+\beta)
\label{res444353}\end{aligned}$$
From these one can compute the two photon decay rate of the Higgs boson [@Marciano]: $$\begin{aligned}
\Gamma_{h\rightarrow \gamma\gamma}=\Gamma_{h\rightarrow \gamma\gamma}^{SM}
\frac{|8.35\cos(\alpha+\beta)-1.84\frac{\cos(\alpha)}{\cos(\beta)}|^2}
{|8.35-1.84|^2}.
\label{diphoy7676}\end{aligned}$$
Here the value of $m_h=125.9$ GeV was used and 8.35 and -1.84 are the W and top loop contributions in the standard model.
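Eq. (\[diphoy7676\]) reduces to the standard model rate in the limit $\alpha,\beta\to 0$; a small illustrative helper evaluating the ratio:

```python
import math

def diphoton_ratio(alpha, beta):
    """Gamma(h -> gamma gamma) / Gamma_SM from Eq. (diphoy7676), with the
    W-loop (8.35) and top-loop (-1.84) amplitudes quoted in the text."""
    amp = 8.35 * math.cos(alpha + beta) - 1.84 * math.cos(alpha) / math.cos(beta)
    return abs(amp) ** 2 / abs(8.35 - 1.84) ** 2

ratio_sm = diphoton_ratio(0.0, 0.0)  # standard model limit
```

In the limit $\alpha=\beta=0$ both couplings reduce to their standard model values and the ratio is exactly one, as expected.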
Discussion
==========
The model contains seven parameters. Two of them can be eliminated from the minimum equations. This leaves us with five parameters $\alpha_3$, $\alpha_4$, $\alpha_5$, $\alpha_6$ and $\beta$. We consider as input the mass of the Higgs boson $m_h=125.9$ GeV. Note that there are two possibilities: $m_h\leq m_H$ and $m_h> m_H$. Using Eq. (\[eq3232\]), Eq. (\[somm4353\]) and Eq. (\[newcond6565\]) we plot the square of the mass of the Higgs boson H ($m_H^2$) in terms of $m_a^2$ for different values of $\sin^2(\beta)$ for which we consider increments of $0.1$.
It turns out that there are no positive solutions for $m_H^2$ for values of $\sin^2(\beta)$ in the range $0.1-0.5$. For $\sin^2(\beta)=0.6$ there are solutions only for $m_a^2\leq 10000$ GeV$^2$. However we are looking for solutions with the diphoton decay rate of the Higgs boson equal to or greater than that of the standard model and with couplings $(h{\bar b}b)$ close to the standard model couplings. There are no solutions even close to our requirements for $\sin^2(\beta)=0.6$. For $\sin^2(\beta)=0.7$ there are two reasonable sets of solutions for $m_H^2$ (see Fig.1), both for masses of the a’s $m_a^2\leq 20000$ GeV$^2$. However only the second set of solutions (dashed line in Fig.1) gives an acceptable diphoton decay rate for the Higgs boson, as illustrated in Fig.2, and also correct couplings with the bottom quarks (see Fig.7, thick line). For $\sin^2(\beta)=0.8$ the solutions for the masses $m_H^2$ are shown in Fig.3. The reasonable diphoton decay rates and bottom couplings correspond to the first set of solutions in Fig.3 (thick line) and are displayed in Fig.4 and Fig.7 (dashed line). The relevant graphs for $\sin^2(\beta)=0.9$ are given in Figs.5,6,7 (dotdashed line). Here again only the first set of solutions (thick line in Fig.5) gives the correct answers.
Estimate of the masses
======================
Two Higgs doublet models have been discussed and analyzed extensively in the literature in connection with the LHC data [@Sher1],[@Sher2],[@Chang]. Rather than reiterating these efforts here, it is useful to apply some of their results to the naturalness problem. For this specific problem we will use the global fit of the parameters $\alpha$ and $\tan(\beta)$ to the observed Higgs signal strengths in all Higgs search channels at the LHC [@Chang]. The values of these parameters are then taken as inputs in Eqs. (\[res4343\]), (\[eq3232\]), (\[newcond6565\]), together with the mass of the Higgs boson. The resulting system of five equations yields solutions for all the unknown parameters of the model: $\alpha_3$, $\alpha_4$, $\alpha_5$, $\alpha_6$ and $m_H$. The three scenarios displayed in Table I [@Chang] correspond to: 1) the mass of the lightest Higgs boson is 125-126 GeV; 2) the mass of the heaviest Higgs boson is 125-126 GeV; 3) there are two resonances, h and $a^0$, with a mass around 125-126 GeV.
| ${\rm Masses}$ | I ($\alpha=1.38+\pi$, $\cot(\beta)=0.21$) | II ($\alpha=-0.15+\pi$, $\cot(\beta)=0.17$) | III ($\alpha=-0.98+\pi$, $\cot(\beta)=1.37$) |
|----------------|-------------------------------------------|---------------------------------------------|----------------------------------------------|
| $m_H(m_h)$     | $381\,{\rm GeV}$                          | $368\,{\rm GeV}$                            | $m_H^2<0$                                    |
| $m_a$          | $132\,{\rm GeV}$                          | $129\,{\rm GeV}$                            | $305\,{\rm GeV}$                             |

: Masses of the Higgs bosons $H(h)$, $a^{\pm}$, $a^0$ for the three unconstrained scenarios.[]{data-label="table22"}
As can be seen from Table I, only the first scenario works: scenario II leads to an inconsistency ($m_h=368$ GeV) and scenario III leads to an imaginary mass for the H boson.
We relax the condition (\[newcond6565\]) and replace it by a new constraint which limits the amount of fine-tuning in this sector. First let us express the quadratic contribution to the scalars $\sigma$ and $\eta$ self energies before spontaneous symmetry breakdown: $$\begin{aligned}
&&\delta m_{\sigma}^2=\frac{\Lambda^2}{32\pi^2 v^2}[3m_Z^2+6m_W^2+(6\alpha_3+2\alpha_6+\frac{\alpha_5}{2})v^2-12\frac{m_t^2}{\cos^2(\beta)}-12\frac{m_b^2}{\cos^2(\beta)}]
\nonumber\\
&&\delta m_{\eta}^2=\frac{\Lambda^2}{32\pi^2 v^2}[3m_Z^2+6m_W^2+(6\alpha_4+2\alpha_6+\frac{\alpha_5}{2})v^2]
\label{newcond656578}\end{aligned}$$
Then we ask: $$\begin{aligned}
&&\delta m_{\sigma}^2\leq 0.1 \alpha_1
\nonumber\\
&&\delta m_{\eta}^2\leq 0.1\alpha_2.
\label{newfine555}\end{aligned}$$
Here $\alpha_1$ and $\alpha_2$ are the masses of the $\sigma$ and $\eta$ in the gauge eigenstate basis. For large $\Lambda$, Eq. (\[newfine555\]) approaches the condition of cancellation of the quadratic divergences, so we study its implications for a relatively small $\Lambda$, $\Lambda=10$ TeV. For scenario I we plot the parameters $\alpha_3$, $\alpha_4$, $\alpha_6$ (see Fig.8) and also the squared mass $m_a^2$ as a function of the allowed range $m_H\leq 381$ GeV (Fig.9). It turns out that the mass $m_a$ is real only for $283\,{\rm GeV}\leq m_H\leq 381\,{\rm GeV}$ and increases from zero to $132$ GeV in this interval.
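These corrections can be evaluated directly. Below is a small numerical sketch of Eq. (\[newcond656578\]) for $\Lambda=10$ TeV; the quartic couplings, the angle $\beta$, and the reference scale used for $\alpha_2$ are illustrative assumptions, and a single $6m_W^2$ gauge term is assumed in the $\sigma$ expression.

```python
import math

# Quadratic corrections to the sigma and eta self-energies for a cutoff
# Lambda, following the bracket structure of the equation above (a single
# 6 m_W^2 gauge term is assumed).  Electroweak inputs are standard; the
# quartic couplings alpha_i, the angle beta, and the reference scale for
# alpha_2 are illustrative assumptions.
v, m_Z, m_W, m_t, m_b = 246.0, 91.19, 80.38, 173.0, 4.18   # GeV

def delta_m2_sigma(Lam, a3, a5, a6, beta):
    bracket = (3 * m_Z**2 + 6 * m_W**2
               + (6 * a3 + 2 * a6 + a5 / 2) * v**2
               - 12 * (m_t**2 + m_b**2) / math.cos(beta)**2)
    return Lam**2 / (32 * math.pi**2 * v**2) * bracket

def delta_m2_eta(Lam, a4, a5, a6):
    bracket = 3 * m_Z**2 + 6 * m_W**2 + (6 * a4 + 2 * a6 + a5 / 2) * v**2
    return Lam**2 / (32 * math.pi**2 * v**2) * bracket

Lam = 10000.0                              # 10 TeV cutoff
d_eta = delta_m2_eta(Lam, a4=1.0, a5=0.5, a6=0.5)
# Fine-tuning condition: the correction should not exceed 0.1 * alpha_2,
# here with alpha_2 taken as a squared mass parameter ~ (500 GeV)^2.
fine_tuned = d_eta <= 0.1 * 500.0**2
```

The corrections grow like $\Lambda^2$, which is why the constraint becomes the exact cancellation condition as $\Lambda$ increases.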
Conclusions
===========
The naturalness criterion plays an important role in building beyond-the-standard-model theories such as supersymmetry, technicolor, extra dimensions and little Higgs. Even if the two Higgs doublet model can be viewed as a low-energy effective limit of one of these theories or another, one should still consider a measure of the fine-tuning that is allowed, at least with respect to some scale at which new physics might intervene.
In the present work we consider, for a particular type I two Higgs doublet model, two cases: a) the scale of new physics is high, and b) the scale of new physics is relatively low. For case a) we apply the condition of cancellation of quadratic divergences and study it in conjunction with the Higgs diphoton decay rate and the $(h {\bar b}b)$ couplings. For case b) we require that the quadratic corrections to the scalar masses be relatively small with respect to the actual masses, and analyze this in the context of more comprehensive phenomenological fits for the angle $\alpha$ and $\tan(\beta)$ taken from the literature [@Chang]. We thus estimate an acceptable interval for the masses of the other neutral and charged Higgs bosons H, $a^{\pm}$, $a^0$. Depending on the set of conditions applied, the range of masses can be larger or smaller. Our conclusion is that this two Higgs doublet model can be both natural and in agreement with the latest experimental data.
Acknowledgments {#acknowledgments .unnumbered}
===============
The work of R. J. was supported by PN 09370102/2009. The work of J. S. was supported in part by US DOE under Contract No. DE-FG-02-85ER 40231.
ATLAS Collaboration, Phys. Lett. [**B**]{} 710, 49-66 (2012).

ATLAS Collaboration, Phys. Lett. [**B**]{} 716, 1 (2012).

CMS Collaboration, Phys. Lett. [**B**]{} 710, 26 (2012).

CMS Collaboration, Phys. Lett. [**B**]{} 716, 30 (2012).

A. P. Posch, Phys. Lett. [**B**]{} 696, 447 (2011).

P. M. Ferreira, R. Santos, M. Sher, J. P. Silva, arXiv:1201.0019 (2012).

P. M. Ferreira, R. Santos, M. Sher, J. P. Silva, arXiv:1112.3277 (2011).

C.-Y. Chen, S. Dawson, arXiv:1301.0309 (2013).

B. Grzadkowski, P. Osland, Phys. Rev. D [**82**]{}, 125026 (2010), arXiv:0910.4068.

B. Grzadkowski, P. Osland, Fortsch. Phys. 59, 1041-1045 (2011), arXiv:1012.0703.

B. Grzadkowski, P. Osland, J. Phys. Conf. Ser. 259, 012055 (2010), arXiv:1012.2201.

E. Cervero and J.-M. Gerard, Phys. Lett. B [**712**]{}, 255 (2012), arXiv:1202.1973.

L. Wang, X.-F. Han, JHEP 1205, 088 (2012), arXiv:1203.4477.

A. Drozd, B. Grzadkowski, J. F. Gunion, Y. Jiang, arXiv:1211.3580 (2012).

P. M. Ferreira, H. F. Haber, R. Santos, J. P. Silva, arXiv:1211.3131 (2012).

B. D. S. M. Alves, P. J. Fox, N. Weiner, arXiv:1207.6499.

S. Chang, S. K. Kang, J. Lee, K. Y. Lee, S. C. Park, J. Song, arXiv:1210.3439 (2012).

S. Bar-Shalom, M. Geller, S. Nandi, A. Soni, arXiv:1208.3195 (2012).

R. Jora, S. M. Moussa, S. Nasri, J. Schechter and M. N. Shahid, Int. J. Mod. Phys. A 23, 5159 (2008), arXiv:0805.0293.

M. J. G. Veltman, Acta Phys. Polon. B 12, 437 (1981).

C. Newton and T. T. Wu, Zeitschrift für Physik 62, 253-263 (1994).

E. Ma, arXiv:hep-ph/0101355 (2001).

W. J. Marciano, C. Zhang and S. Willenbrock, Phys. Rev. D [**85**]{}, 013002 (2012); arXiv:1109.5304.
---
abstract: 'Bayesian optimization is a sequential decision making framework for optimizing expensive-to-evaluate black-box functions. Computing a full lookahead policy amounts to solving a highly intractable stochastic dynamic program. Myopic approaches, such as expected improvement, are often adopted in practice, but they ignore the long-term impact of the immediate decision. Existing nonmyopic approaches are mostly heuristic and/or computationally expensive. In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree. Instead of solving these problems in a nested way, we equivalently optimize all decision variables in the full tree jointly, in a “one-shot” fashion. Combining this with an efficient method for implementing multi-step Gaussian process “fantasization,” we demonstrate that multi-step expected improvement is computationally tractable and exhibits performance superior to existing methods on a wide range of benchmarks.'
author:
- |
Shali Jiang[^1]\
Washington University\
`[email protected]` Daniel R. Jiang$^*$\
Facebook\
`[email protected]` Maximilian Balandat$^*$\
Facebook\
`[email protected]` Brian Karrer\
Facebook\
`[email protected]` Jacob R. Gardner\
University of Pennsylvania\
`[email protected]` Roman Garnett\
Washington University\
`[email protected]`
bibliography:
- 'main.bib'
title: '[Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees]{}'
---
[^1]: Equal contribution.
---
abstract: 'We consider Vlasov-type scaling for Markov evolutions of birth-and-death type in continuum, which is based on a proper scaling of the corresponding Markov generators and has an algorithmic realization in terms of the related hierarchical chains of correlation function equations. The existence of the rescaled and limiting evolutions of correlation functions, as well as convergence to the limiting evolution, is shown. The obtained results enable us to derive a non-linear Vlasov-type equation for the density of the limiting system.'
author:
- 'Dmitri Finkelshtein[^1]'
- 'Yuri Kondratiev[^2]'
- 'Oleksandr Kutoviy[^3]'
title: Operator approach to Vlasov scaling for some models of spatial ecology
---
[ Continuous systems, Spatial birth-and-death processes, Individual based models, Vlasov scaling, Vlasov equation, Correlation functions]{}
[ 47D06, 60J25, 60J35, 60J80, 60K35]{}
Introduction
============
The Vlasov equation is a famous example of a kinetic equation which describes the dynamical behavior of a many-body system. In physics, it characterizes the Hamiltonian motion of an infinite particle system influenced by weak long-range forces in the mean field scaling limit. The detailed exposition of the Vlasov scaling for the Hamiltonian dynamics was given by W.Braun and K.Hepp [@BH1977] and later by R.L.Dobrushin [@Dob1979] for more general deterministic dynamical systems. The limiting Vlasov-type equations for particle densities in both papers are considered in classes of integrable functions (or finite measures in the weak form). This corresponds, actually, to the situation of finite volume systems or systems with zero mean density in an infinite volume. The Vlasov equation for integrable functions was investigated in detail by V.V.Kozlov [@Koz2008]. An excellent review of kinetic equations describing dynamical many-body systems was given by H.Spohn [@Spo1980], [@Spo1991]. Note that in the framework of interacting diffusions a similar problem is known as the McKean–Vlasov limit.
Motivated by the study of Vlasov scaling for some classes of stochastic evolutions in continuum for which the approaches mentioned above break down (even in finite volumes), we developed a general approach to Vlasov-type dynamics (see [@FKK2010b]). It is based on a proper scaling of the hierarchical equations for the evolution of correlation functions and can be interpreted in terms of rescaled Markov generators. To our knowledge, at the present time only this technique makes it possible to control the convergence in the Vlasov limit in the case of non-integrable densities, which is generic for infinite volume infinite particle systems. Speaking of evolutions whose kinetic equations cannot be studied by the classical techniques described in [@BH1977] and [@Dob1979], we have in mind, first of all, spatial birth-and-death Markov processes (e.g., continuous Glauber dynamics, spatial ecological models) and hopping-particle Markov evolutions (e.g., Kawasaki dynamics in continuum). The main difficulty in carrying out the approach proposed by W.Braun, K.Hepp [@BH1977] and R.L.Dobrushin [@Dob1979] for such models is the absence of proper descriptions in terms of stochastic evolution equations. Another problem concerns the possible variation of the number of particles during the evolution. It is also important to note that an application of the technique proposed in [@FKK2010b] leads to a limiting hierarchy which possesses a chaos preservation property.
The aim of this paper is to study the Vlasov scaling for the individual based model (IBM) in spatial ecology introduced by B.Bolker and S.Pacala [@BP1997; @BP1999], U.Dieckmann and R.Law [@DL2000] (BDLP model) using the scheme developed in [@FKK2010b]. A population in this model is represented by a configuration of motionless organisms (plants) located in an infinite habitat (an Euclidean space in our considerations). The unbounded habitat is taken to avoid boundary effects in the population evolution.
The evolution equation for the correlation functions of the BDLP model was studied in detail in [@FKK2009]. In [@BP1997; @BP1999], [@DL2000] this system was called the system of spatial moment equations for plant competition, and, in fact, this system itself was taken as the definition of the dynamics of the BDLP model. The mathematical structure of the correlation function evolution equations is close to that of other well-known hierarchical systems in mathematical physics, e.g., the BBGKY hierarchy for Hamiltonian dynamics (see, e.g. [@DSS1989]). As in all hierarchical chains of equations, we cannot expect an explicit form of the solution; moreover, even the existence problem for these equations is a highly delicate question.
According to the general scheme (see [@FKK2010b]), we state conditions on the structural coefficients of the BDLP Markov generator which give weak convergence of the rescaled generator to the limiting generator of the related Vlasov hierarchy. Next, we may compute the limiting Vlasov-type equation for the BDLP model, leaving the question of the strong convergence of the hierarchy solutions for a separate analysis. Controlling the strong convergence of the rescaled hierarchy is, in general, a difficult technical problem. In particular, this problem remains open for the BBGKY hierarchy in the case of Hamiltonian dynamics as well as for the Bogoliubov–Streltsova hierarchy corresponding to the gradient diffusion model. In the present paper we show the existence of the rescaled and limiting evolutions of correlation functions related to the Vlasov scaling of the BDLP model, and the convergence to the limiting evolution. For a special class of initial conditions, this evolution is associated with a non-linear equation for the density, which is called the Vlasov equation for the considered stochastic dynamics.
Let us mention that a version of the BDLP model for finite populations was studied in [@FM2004]. In that work the authors developed a probabilistic representation for the finite BDLP process and applied this technique to analyze a mean-field limit in the spirit of the classical Dobrushin or McKean–Vlasov schemes. They obtained an integro-differential equation for the limiting deterministic process corresponding to an integrable initial condition. The latter equation coincides with the Vlasov equation for the BDLP model derived below in our approach.
The present paper is organized in the following way. Section 2 is devoted to the general settings required for the description of the model which we study. In Subsection 3.1 we discuss the general Vlasov scaling approach for spatial continuos models. Subsection 3.2 is devoted to the abstract convergence result for semigroups in Banach spaces which will be crucial to prove the main statements of the paper presented in Subsection 3.3. The corresponding proofs are given in Subsection 3.4.
Basic facts and description of the model {#sect:base}
===================================
General facts and notations
---------------------------
Let ${{\mathcal B}}({{{\mathbb R}}}^{d})$ be the family of all Borel sets in ${{\mathbb R}}^d$ and ${{\mathcal B}}_{b}({{{\mathbb R}}}^{d})$ denotes the system of all bounded sets in ${{\mathcal B}}({{{\mathbb R}}}^{d})$.
The space of $n$-point configurations is $${\Gamma}_{0}^{(n)}={\Gamma}_{0,{{{\mathbb R}}}^{d}}^{(n)}:=\left\{ \left. \eta \subset
{{{\mathbb R}}}^{d}\right| \,|\eta |=n\right\} ,\quad n\in {{\mathbb N}}_0:={{\mathbb N}}\cup \{0\},$$ where $|A|$ denotes the cardinality of the set $A$. The space ${\Gamma}_{{\Lambda}}^{(n)}:={\Gamma}_{0,{\Lambda}}^{(n)}$ for ${\Lambda}\in {{\mathcal B}}_b({{{\mathbb R}}}^{d})$ is defined analogously to the space ${\Gamma}_{0}^{(n)}$. As a set, ${\Gamma}_{0}^{(n)}$ is equivalent to the symmetrization of $$\widetilde{({{{\mathbb R}}}^{d})^n} = \left\{ \left. (x_1,\ldots ,x_n)\in
({{{\mathbb R}}}^{d})^n\right| \,x_k\neq x_l\,\,\mathrm{if} \,\,k\neq l\right\}
,$$ i.e. $\widetilde{({{{\mathbb R}}}^{d})^n}/S_{n}$, where $S_{n}$ is the permutation group over $\{1,\ldots,n\}$. Hence one can introduce the corresponding topology and Borel $\sigma $-algebra, which we denote by $O({\Gamma}_{0}^{(n)})$ and ${{\mathcal B}}({\Gamma}_{0}^{(n)})$, respectively. Also one can define a measure $m^{(n)}$ as an image of the product of Lebesgue measures $dm(x)=dx$ on $\bigl({{\mathbb R}}^d, {{\mathcal B}}({{\mathbb R}}^d)\bigr)$.
The space of finite configurations $${\Gamma}_{0}:=\bigsqcup_{n\in {{\mathbb N}}_0}{\Gamma}_{0}^{(n)}$$ is equipped with the topology which has structure of disjoint union. Therefore, one can define the corresponding Borel $\sigma $-algebra ${{\mathcal B}}({\Gamma}_0)$.
A set $B\in {{\mathcal B}}({\Gamma}_0)$ is called bounded if there exists ${\Lambda}\in
{{\mathcal B}}_b({{{\mathbb R}}}^{d})$ and $N\in {{\mathbb N}}$ such that $B\subset
\bigsqcup_{n=0}^N{\Gamma}_{\Lambda}^{(n)}$. The Lebesgue–Poisson measure ${\lambda}_{z} $ on ${\Gamma}_0$ is defined as $${\lambda}_{z} :=\sum_{n=0}^\infty \frac {z^{n}}{n!}m ^{(n)}.$$ Here $z>0$ is the so called activity parameter. The restriction of ${\lambda}_{z} $ to ${\Gamma}_{\Lambda}$ will be also denoted by ${\lambda}_{z} $.
The configuration space $${\Gamma}:=\left\{ \left. {\gamma}\subset {{{\mathbb R}}}^{d}\ \right| \; |{\gamma}\cap {\Lambda}|<\infty, \text{ for all } {\Lambda}\in {{\mathcal B}}_b({{{\mathbb R}}}^{d})\right\}$$ is equipped with the vague topology. It is a Polish space (see e.g. [@KK2006]). The corresponding Borel $\sigma $-algebra $ {{\mathcal B}}({\Gamma})$ is defined as the smallest $\sigma $-algebra for which all mappings $N_{\Lambda}:{\Gamma}\rightarrow {{\mathbb N}}_0$, $N_{\Lambda}({\gamma}):=|{\gamma}\cap {\Lambda}|$ are measurable, i.e., $${{\mathcal B}}({\Gamma})=\sigma \left(N_{\Lambda}\left| {\Lambda}\in
{{\mathcal B}}_b({{{\mathbb R}}}^{d})\right.\right ).$$ One can also show that ${\Gamma}$ is the projective limit of the spaces $\{{\Gamma}_{\Lambda}\}_{{\Lambda}\in {{\mathcal B}}_b({{{\mathbb R}}}^{d})}$ w.r.t. the projections $p_{\Lambda}:{\Gamma}\rightarrow {\Gamma}_{\Lambda}$, $p_{\Lambda}({\gamma}):={\gamma}_{\Lambda}$, ${\Lambda}\in {{\mathcal B}}_b({{{\mathbb R}}}^{d})$.
The Poisson measure $\pi _{z} $ on $({\Gamma},{{\mathcal B}}({\Gamma}))$ is given as the projective limit of the family of measures $\{\pi _{z} ^{\Lambda}\}_{{\Lambda}\in {{\mathcal B}}_b({{{\mathbb R}}}^{d})}$, where $\pi _{z} ^{\Lambda}$ is the measure on ${\Gamma}_{\Lambda}$ defined by $\pi _{z} ^{\Lambda}:=e^{-z m ({\Lambda})}{\lambda}_{z}$.
We will use the following classes of functions: $L_{\mathrm{ls}}^0({\Gamma}_0)$ is the set of all measurable functions on ${\Gamma}_0$ which have a local support, i.e. $G\in
L_{\mathrm{ls}}^0({\Gamma}_0)$ if there exists ${\Lambda}\in {{\mathcal B}}_b({{{\mathbb R}}}^{d})$ such that $G\upharpoonright_{{\Gamma}_0\setminus {\Gamma}_{\Lambda}}=0$; $B_{\mathrm{bs}}({\Gamma}_0)$ is the set of bounded measurable functions with bounded support, i.e. $G\upharpoonright_{{\Gamma}_0\setminus B}=0$ for some bounded $B\in {{\mathcal B}}({\Gamma}_0)$.
On ${\Gamma}$ we consider the set of cylinder functions $\mathcal{F}_{\mathrm{cyl}}({\Gamma})$, i.e. the set of all measurable functions $G$ on $\bigl({\Gamma},{{\mathcal B}}({\Gamma}))\bigr)$ which are measurable w.r.t. ${{\mathcal B}}_{\Lambda}({\Gamma})$ for some ${\Lambda}\in {{\mathcal B}}_b({{{\mathbb R}}}^{d})$. These functions are characterized by the following relation: $F({\gamma})=F\upharpoonright
_{{\Gamma}_{\Lambda}}({\gamma}_{\Lambda})$.
The following mapping between functions on ${\Gamma}_0$, e.g. $L_{\mathrm{ls}}^0({\Gamma}_0)$, and functions on ${\Gamma}$, e.g. $\mathcal{F}_{\mathrm{cyl}}({\Gamma})$, plays the key role in our further considerations: $$KG({\gamma}):=\sum_{\eta \Subset {\gamma}}G(\eta ), \quad {\gamma}\in {\Gamma},
\label{KT3.15}$$ where $G\in L_{\mathrm{ls}}^0({\Gamma}_0)$, see e.g. [@KK2002; @Len1975; @Len1975a]. The summation in the latter expression is taken over all finite subconfigurations of ${\gamma},$ which is denoted by the symbol $\eta \Subset {\gamma}$. The mapping $K$ is linear, positivity preserving, and invertible, with $$K^{-1}F(\eta ):=\sum_{\xi \subset \eta }(-1)^{|\eta \setminus \xi
|}F(\xi ),\quad \eta \in {\Gamma}_0.\label{k-1trans}$$
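On finite configurations, the $K$-transform and its inverse realize Möbius inversion on the subset lattice, which can be checked mechanically. A small sketch (for an infinite ${\gamma}$ the sum in $K$ is restricted by the local support of $G$; here ${\gamma}$ itself is taken finite):

```python
from itertools import chain, combinations

def subsets(cfg):
    """All sub-configurations of a finite configuration, as frozensets."""
    pts = sorted(cfg)
    return [frozenset(s) for s in
            chain.from_iterable(combinations(pts, r)
                                for r in range(len(pts) + 1))]

def K(G, gamma):
    """(KG)(gamma) = sum_{eta subset gamma} G(eta), gamma finite here."""
    return sum(G(eta) for eta in subsets(gamma))

def K_inv(F, eta):
    """(K^{-1}F)(eta) = sum_{xi subset eta} (-1)^{|eta minus xi|} F(xi)."""
    return sum((-1) ** (len(eta) - len(xi)) * F(xi) for xi in subsets(eta))

# Round trip K^{-1}(KG) = G on a three-point configuration:
gamma = frozenset([0.0, 1.3, 2.7])
G = lambda eta: 2.0 ** len(eta)       # any function of finite configurations
for eta in subsets(gamma):
    assert abs(K_inv(lambda xi: K(G, xi), eta) - G(eta)) < 1e-12
```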
Let $ \mathcal{M}_{\mathrm{fm}}^1({\Gamma})$ be the set of all probability measures $\mu $ on $\bigl( {\Gamma}, {{\mathcal B}}({\Gamma}) \bigr)$ which have finite local moments of all orders, i.e. $\int_{\Gamma}|{\gamma}_{\Lambda}|^n\mu (d{\gamma})<+\infty $ for all ${\Lambda}\in {{\mathcal B}}_b({{\mathbb R}}^{d})$ and $n\in
{{\mathbb N}}_0$. A measure $\rho $ on $\bigl( {\Gamma}_0, {{\mathcal B}}({\Gamma}_0) \bigr)$ is called locally finite iff $\rho (A)<\infty $ for all bounded sets $A$ from ${{\mathcal B}}({\Gamma}_0)$. The set of such measures is denoted by $\mathcal{M}_{\mathrm{lf}}({\Gamma}_0)$.
One can define a transform $K^{*}:\mathcal{M}_{\mathrm{fm}}^1({\Gamma})\rightarrow \mathcal{M}_{\mathrm{lf}}({\Gamma}_0),$ which is dual to the $K$-transform, i.e., for every $\mu \in
\mathcal{M}_{\mathrm{fm}}^1({\Gamma})$, $G\in {{\mathcal B}}_{\mathrm{bs}}({\Gamma}_0)$ we have $$\int_{\Gamma}KG({\gamma})\mu (d{\gamma})=\int_{{\Gamma}_0}G(\eta )\,(K^{*}\mu
)(d\eta ).$$ The measure $\rho _\mu :=K^{*}\mu $ is called the correlation measure of $\mu $.
As shown in [@KK2002], for $\mu \in \mathcal{M}_{\mathrm{fm}}^1({\Gamma})$ and any $G\in L^1({\Gamma}_0,\rho _\mu )$ the series (\[KT3.15\]) is $\mu$-a.s. absolutely convergent. Furthermore, $KG\in L^1({\Gamma},\mu )$ and $$\int_{{\Gamma}_0}G(\eta )\,\rho _\mu (d\eta )=\int_{\Gamma}(KG)({\gamma})\,\mu
(d{\gamma}). \label{Ktransform}$$
A measure $\mu \in \mathcal{M}_{\mathrm{fm} }^1({\Gamma})$ is called locally absolutely continuous w.r.t. $\pi _{z} $ iff $\mu_{\Lambda}:=\mu
\circ p_{\Lambda}^{-1}$ is absolutely continuous with respect to $\pi
_{z} ^{\Lambda}$ for all ${\Lambda}\in {{\mathcal B}}_{b}({{{\mathbb R}}}^{d})$. In this case $\rho
_\mu :=K^{*}\mu $ is absolutely continuous w.r.t ${\lambda}_{z} $. We denote $$k_{\mu}(\eta):=\frac{d\rho_{\mu}}{d{\lambda}_{z}}(\eta),\quad
\eta\in{\Gamma}_{0}.$$ The functions $$k_{\mu}^{(n)}:({{\mathbb R}}^{d})^{n}\longrightarrow{{\mathbb R}}_{+}\label{corfunc}$$ $$k_{\mu}^{(n)}(x_{1},\ldots,x_{n}):=\left\{\begin{array}{ll}
k_{\mu}(\{x_{1},\ldots,x_{n}\}), & \mbox{if $(x_{1},\ldots,x_{n})\in
\widetilde{({{\mathbb R}}^{d})^{n}}$}\\ 0, & \mbox{otherwise}
\end{array}
\right.$$ are the correlation functions well known in statistical physics, see e.g. [@Rue1969], [@Rue1970].
Description of the model
--------------------
We consider the evolving in time system of interacting individuals (particles) in the space ${{\mathbb R}}^{d}$. The state of the system at the fixed moment of time $t>0$ is described by the random configuration ${\gamma}_{t}$ from ${\Gamma}$. Heuristically, the mechanism of the evolution is given by a Markov generator, which has the following form $$L:=L^- + L^+ ,$$ where $$\begin{aligned}
(L^-F)({\gamma})&:=(L^-(m,\varkappa^-,a^-)F)({\gamma}):=
\sum_{x\in{\gamma}}\left[m+\varkappa^{-}\sum_{y\in{\gamma}\setminus
x}a^{-}(x-y)\right]D_{x}^{-}F({\gamma}),\notag\\
(L^+F)({\gamma})&:=(L^+(\varkappa^+,a^+)F)({\gamma}):=
\varkappa^{+}\int_{{{\mathbb R}}^{d}}\sum_{y\in{\gamma}}a^{+}(x-y)D_{x}^{+}F({\gamma})dx.
\label{BP-gen}\end{aligned}$$ Here $0\leq a^{-}, \,a^{+}\in L^{1}({{\mathbb R}}^{d})$ are arbitrary, even functions such that $$\int_{{{\mathbb R}}^{d}}a^{-}(x)dx =\int_{{{\mathbb R}}^{d}}a^{+}(x)dx =1$$ (in other words, $a^{-},\,a^{+}$ are probability densities) and $m$, $\varkappa^{-}$, $\varkappa^{+}>0$ are some positive constants. The pre-generator $L$ describes the Bolker–Dieckmann–Law–Pacala BDLP model, which was introduced in [@BP1997; @BP1999; @DL2000]. During the corresponding stochastic evolution the birth of individuals occurs independently and the death is ruled not only by the global regulation (mortality) but also by the local regulation with the kernel $\varkappa^-a^-$. This regulation may be described as a competition (e.g., for resources) between individuals in the population.
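The two parts of the generator translate into concrete event rates: each individual $x$ dies at rate $m+\varkappa^-\sum_{y\neq x}a^-(x-y)$, and new individuals appear at $x$ with intensity $\varkappa^+\sum_{y}a^+(x-y)$. A minimal sketch in one dimension (the Gaussian kernel shapes and all parameter values are illustrative assumptions):

```python
import math

# Event rates of the BDLP generator in one dimension: per-particle death
# rates and the birth intensity.  Gaussian kernels and all parameter
# values are illustrative assumptions.
def gauss(x, s):
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

m, kappa_minus, kappa_plus = 0.1, 0.5, 1.0   # mortality, competition, fecundity
a_minus = lambda x: gauss(x, 1.0)            # competition kernel (integral 1)
a_plus = lambda x: gauss(x, 2.0)             # dispersal kernel (integral 1)

def death_rate(x, gamma):
    """m + kappa^- sum_{y != x} a^-(x - y): global plus local regulation."""
    return m + kappa_minus * sum(a_minus(x - y) for y in gamma if y != x)

def birth_intensity(x, gamma):
    """kappa^+ sum_y a^+(x - y): rate density for a birth at position x."""
    return kappa_plus * sum(a_plus(x - y) for y in gamma)

gamma = [0.0, 1.0, 5.0]
# the crowded individuals (at 0 and 1) die faster than the isolated one (at 5):
print([death_rate(x, gamma) for x in gamma])
```

An isolated individual dies at the bare mortality rate $m$; crowding raises its death rate through the competition kernel.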
The evolution of one-dimensional distributions for such systems can be expressed in terms of their characteristics, e.g. the correlation functions (\[corfunc\]). The dynamics of correlation functions for the BDLP model was studied in [@FKK2009]. The main result of this paper informally says the following:
[*If the mortality $m$ and the competition kernel $\varkappa^-a^-$ are large enough, then the dynamics of correlation functions associated with the pre-generator (\[BP-gen\]) exists and preserves a (sub-)Poissonian bound.*]{}
For the reader's convenience we repeat below the relevant material from [@FKK2009] without proofs. Let $\hat{L}^\pm:= K^{-1} L^\pm K$ be the $K$-image of $L^\pm$, initially defined on functions from $B_{\mathrm{bs}}({\Gamma}_0)$. For an arbitrary fixed $C>0$ we consider the operator $\hat{L}:=\hat{L}^+ + \hat{L}^-$ in the functional space $$\mathcal{L}_{C}=L^{1}\left( \Gamma _{0},C^{\left\vert \eta \right\vert }d\lambda \left( \eta \right) \right) .$$ Below, the symbol $\left\Vert \cdot \right\Vert_{C} $ stands for the norm of this space. For any $\omega>0$ we define $\mathcal{H}(\omega)$ to be the set of all densely defined closed operators $T$ on $\mathcal{L}_{C}$ whose resolvent set $\rho(T)$ contains the sector $${{\mathrm{Sect}}}\left(\frac{\pi}{2}+\omega\right):=\left\{\zeta\in\mathbb{C}\,\Bigm|
|{{\mathrm{arg}}}\, \zeta|<\frac{\pi}{2}+\omega\right\},$$ and for any $\varepsilon >0$ $$||(T-\zeta 1\!\!1)^{-1}||\leq \frac{M_{\varepsilon}}{|\zeta|},\quad
|\arg\,\zeta |\leq\frac{\pi}{2}+\omega-\varepsilon,$$ where $M_{\varepsilon}$ does not depend on $\zeta$.
The first non-trivial result, based on perturbation theory, states that the operator $\hat{L}$ with the domain $$D(\hat{L}):=\left\{ G\in \mathcal{L}_{C} \Bigm| \left\vert
\cdot \right\vert G(\cdot) \in \mathcal{L}_{C}, \;E^{a^{-}}(\cdot) G(\cdot) \in \mathcal{L}_{C}\right\}$$ is a generator of a holomorphic $C_{0}$-semigroup $\hat{U}_t$ on ${{\mathcal L}}_C$.
To construct the corresponding evolution of correlation functions we note that the dual space $({{\mathcal L}}_C)'=\bigl(L^1({\Gamma}_0, d{\lambda}_C)\bigr)'=L^\infty({\Gamma}_0, d{\lambda}_C)$, where $d{\lambda}_C:= C^{|\cdot|} d{\lambda}$. The space $({{\mathcal L}}_C)'$ is isometrically isomorphic to the Banach space $${{\mathcal K}}_{C}:=\left\{k:{\Gamma}_{0}{\rightarrow}{{\mathbb R}}\,\Bigm| k(\cdot) C^{-|\cdot|}\in
L^{\infty}({\Gamma}_{0},{\lambda})\right\}$$ with the norm $$\|k\|_{{{\mathcal K}}_C}:=\|C^{-|\cdot|}k(\cdot)\|_{L^{\infty}({\Gamma}_{0},{\lambda})},$$ where the isomorphism is provided by the isometry $R_C$ $$\label{isometry}
({{\mathcal L}}_C)'\ni k \longmapsto R_Ck:=k(\cdot) C^{|\cdot|}\in {{\mathcal K}}_C.$$ In fact, we have duality between Banach spaces ${{\mathcal L}}_C$ and ${{\mathcal K}}_C$ given by the following expression $${{\left\langle}\!{\left\langle}}G,\,k {{\right\rangle}\!{\right\rangle}}:= \int_{{\Gamma}_{0}}G\cdot k\, d{\lambda},\quad G\in{{\mathcal L}}_C, \
k\in {{\mathcal K}}_C \label{duality}$$ with $${\left\vert}{{\left\langle}\!{\left\langle}}G,k {{\right\rangle}\!{\right\rangle}}{\right\vert}\leq \|G\|_C \cdot\|k\|_{{{\mathcal K}}_C}.
\label{funct_est}$$ It is clear that for any $k\in {{\mathcal K}}_C$ $$\label{RB-norm}
|k(\eta)|\leq \|k\|_{{{\mathcal K}}_C} \, C^{|\eta|} \quad \text{for } {\lambda}\text{-a.a. }
\eta\in{\Gamma}_0.$$
Let ${{\hat{L}}}'$ be the adjoint operator to ${{\hat{L}}}$ in $({{\mathcal L}}_C)'$ with domain $D({{\hat{L}}}')$. Its image in ${{\mathcal K}}_C$ under the isometry $R_C$ we denote by ${{\hat{L}}}^{*}=R_C{{\hat{L}}}'R_{C^{-1}}$. It is evident that the domain of ${{\hat{L}}}^{*}$ will be $D({{\hat{L}}}^{*})=R_C
D({{\hat{L}}}')$, correspondingly. Then, for any $G\in{{\mathcal L}}_C$, $k\in D({{\hat{L}}}^\ast)$ $$\begin{aligned}
\int_{{\Gamma}_0}G\cdot {{\hat{L}}}^\ast k d{\lambda}&=\int_{{\Gamma}_0}G\cdot R_C{{\hat{L}}}'R_{C^{-1}} k d{\lambda}=\int_{{\Gamma}_0}G\cdot {{\hat{L}}}'R_{C^{-1}} k d{\lambda}_C\\&=
\int_{{\Gamma}_0}{{\hat{L}}}G\cdot R_{C^{-1}} k d{\lambda}_C=\int_{{\Gamma}_0}{{\hat{L}}}G\cdot k
d{\lambda},\end{aligned}$$ therefore, ${{\hat{L}}}^\ast$ is the dual operator to ${{\hat{L}}}$ w.r.t. the duality (\[duality\]). By [@FKO2009], we have the precise form of ${{\hat{L}}}^{*}$: $$\begin{aligned}
\label{dual-descent}
({{\hat{L}}}^* k)(\eta)=&-\left(m|\eta|+\varkappa^{-}E^{a^{-}}(\eta)\right)k(\eta)\\&+
\varkappa^{+}\sum_{x\in\eta}\sum_{y\in\eta\setminus x}a^{+}(x-y)k(\eta\setminus x)\notag\\&
+\varkappa^{+}\int_{{{\mathbb R}}^{d}}\sum_{y\in\eta}a^{+}(x-y)k((\eta\setminus y)\cup
x)dx\notag\\&
+\varkappa^{-}\int_{{{\mathbb R}}^{d}}\sum_{y\in\eta}a^{-}(x-y)k(\eta\cup x)dx.
\notag\end{aligned}$$ Now we consider the adjoint semigroup ${{\hat{T}}}'(t)$ on $({{\mathcal L}}_C)'$ and its image ${{\hat{T}}}^\ast(t)$ in ${{\mathcal K}}_C$. The latter describes the evolution of correlation functions. Transferring the general results on adjoint semigroups (see, e.g., [@EN2000]) to the semigroup ${{\hat{T}}}^\ast(t)$, we deduce that it is weak\*-continuous and weak\*-differentiable at $0$. Moreover, ${{\hat{L}}}^\ast$ is the weak\*-generator of ${{\hat{T}}}^\ast(t)$. Here and subsequently, “weak\*-properties” are understood w.r.t. the duality (\[duality\]).
Vlasov scaling
==============
Description of Vlasov scaling
-----------------------------
We begin with a general idea of the Vlasov-type scaling. It is of interest to construct some scaling $L_{\varepsilon}$, ${\varepsilon}>0$ of the generator $L$, such that the following scheme works.
Suppose that we know the proper scaling of $L$ and we are able to prove the existence of the semigroup ${{\hat{T}}}_{\varepsilon}(t)$ with the generator ${{\hat{L}}}_{\varepsilon}:=K^{-1} L_{\varepsilon}K$ in the space ${{\mathcal L}}_C$ for some $C>0$. Let us consider the Cauchy problem corresponding to the adjoint operator ${{\hat{L}}}^\ast$ and take an initial function with the strong singularity in ${\varepsilon}$. Namely, $$k_0^{{({\varepsilon})}}(\eta) \sim {\varepsilon}^{-|\eta|} r_0(\eta),\quad\quad{\varepsilon}{\rightarrow}0,\quad\quad
\eta\in{\Gamma}_0,$$ where the function $r_0$ is independent of ${\varepsilon}$. The solution to this problem is described by the dual semigroup ${{\hat{T}}}_{\varepsilon}^\ast(t)$. The scaling $L\mapsto L_{\varepsilon}$ has to be chosen in such a way that ${{\hat{T}}}_{\varepsilon}^\ast(t)$ preserves the order of the singularity: $$({{\hat{T}}}_{\varepsilon}^\ast(t)k_0^{{({\varepsilon})}})(\eta) \sim {\varepsilon}^{-|\eta|} r_t(\eta),\quad\quad{\varepsilon}{\rightarrow}0,
\quad\quad\eta\in{\Gamma}_0.$$ Another very important requirement for the proper scaling concerns the dynamics $r_0 \mapsto r_t$. It should preserve Lebesgue–Poisson exponents: if $r_0(\eta)=e_{\lambda}(\rho_0,\eta)$ then $r_t(\eta)=e_{\lambda}(\rho_t,\eta)$ and there exists explicit (nonlinear, in general) differential equation for $\rho_t$ $$\label{V-eqn-gen}
\frac{\partial}{\partial t}\rho_t(x) = \upsilon(\rho_t(x)),$$ which will be called the Vlasov-type equation.
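For the BDLP model, the expected limiting equation couples linear birth/mortality terms with a nonlocal logistic competition term, $\partial\rho_t/\partial t=\varkappa^+(a^+\ast\rho_t)-m\rho_t-\varkappa^-\rho_t\,(a^-\ast\rho_t)$; this concrete form is an assumption here, suggested by the structure of the generator (the equation itself is derived only later). A numerical sketch on a periodic grid; for a spatially constant $\rho_0$ both convolutions collapse (the kernels integrate to one) and the dynamics reduces to the logistic ODE $\dot\rho=(\varkappa^+-m)\rho-\varkappa^-\rho^2$ with equilibrium $(\varkappa^+-m)/\varkappa^-$:

```python
import math

# Forward-Euler evolution of rho_t on a periodic 1-D grid for the assumed
# BDLP kinetic equation
#   d rho/dt = kappa^+ (a^+ * rho) - m rho - kappa^- rho (a^- * rho).
# All kernel shapes and parameter values are illustrative assumptions.
N, L = 64, 16.0
dx = L / N
xs = [i * dx for i in range(N)]

def kernel(s):
    """Periodized Gaussian of width s, normalized so that sum(k) * dx = 1."""
    k = [math.exp(-min(x, L - x) ** 2 / (2 * s * s)) for x in xs]
    Z = sum(k) * dx
    return [val / Z for val in k]

def conv(k, rho):
    """Circular convolution (k * rho) on the grid."""
    return [sum(k[(i - j) % N] * rho[j] for j in range(N)) * dx
            for i in range(N)]

def step(rho, dt, a_plus, a_minus, m=0.1, kp=1.0, km=0.5):
    bp, bm = conv(a_plus, rho), conv(a_minus, rho)
    return [r + dt * (kp * p - m * r - km * r * q)
            for r, p, q in zip(rho, bp, bm)]

a_plus, a_minus = kernel(2.0), kernel(1.0)
rho = [0.3] * N                        # spatially homogeneous initial density
for _ in range(200):                   # evolve up to t = 10
    rho = step(rho, 0.05, a_plus, a_minus)
# rho stays homogeneous and approaches the logistic equilibrium
# (kappa^+ - m) / kappa^- = 1.8
```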
Now let us explain the main technical steps needed to realize the Vlasov-type scaling. For any ${\varepsilon}>0$ consider the following mapping (cf. (\[isometry\])) on functions on ${\Gamma}_0$ $$(R_{\varepsilon}r)(\eta):={\varepsilon}^{{|\eta|}}r(\eta).$$ This mapping is “self-dual” w.r.t. the duality (\[duality\]); moreover, $R_{\varepsilon}^{-1}=R_{{\varepsilon}^{-1}}$. Then we have $k^{{({\varepsilon})}}_0\sim
R_{{\varepsilon}^{-1}} r_0$, and we need $r_t \sim R_{\varepsilon}{{\hat{T}}}_{\varepsilon}^\ast(t)k_0^{{({\varepsilon})}}\sim R_{\varepsilon}{{\hat{T}}}_{\varepsilon}^\ast(t)R_{{\varepsilon}^{-1}}
r_0$. Therefore, we have to show that for any $t\geq 0$ the operator family $R_{\varepsilon}{{\hat{T}}}_{\varepsilon}^\ast(t)R_{{\varepsilon}^{-1}}$, ${\varepsilon}>0$ has limiting (in a proper sense) operator $U(t)$ and $$\label{chaospreserving}
U(t)e_{\lambda}(\rho_0)=e_{\lambda}(\rho_t).$$ But, informally, ${{\hat{T}}}^\ast_{\varepsilon}(t)=\exp{\{t{{\hat{L}}}^\ast_{\varepsilon}\}}$ and $R_{\varepsilon}{{\hat{T}}}_{\varepsilon}^\ast(t)R_{{\varepsilon}^{-1}}=\exp{\{t R_{\varepsilon}{{\hat{L}}}_{\varepsilon}^\ast
R_{{\varepsilon}^{-1}} \}}$. Let us consider the “renormalized” operator $$\label{renorm_def}
{{{\hat{L}}}_{{{\varepsilon}, \, \mathrm{ren}}}^\ast}:= R_{\varepsilon}{{\hat{L}}}_{\varepsilon}^\ast R_{{\varepsilon}^{-1}}.$$ In fact, we need that there exists an operator $\hat{V}^\ast$ (called the Vlasov operator) such that $\exp{\{t R_{\varepsilon}{{\hat{L}}}_{\varepsilon}^\ast R_{{\varepsilon}^{-1}} \}}{\rightarrow}\exp{\{t\hat{V}^\ast\}}=:U(t)$ for which the chaos preservation property holds. Hence, a heuristic way to produce the scaling $L\mapsto L_{\varepsilon}$ is to demand that $$\lim_{{\varepsilon}{\rightarrow}0}\left(\frac{\partial}{\partial
t}e_{\lambda}(\rho_t,\eta)-{{{\hat{L}}}_{{{\varepsilon}, \, \mathrm{ren}}}^\ast}e_{\lambda}(\rho_t,\eta)\right)=0, \quad
\eta\in{\Gamma}_0,
$$ if $\rho_t$ satisfies . The point-wise limit of ${{{\hat{L}}}_{{{\varepsilon}, \, \mathrm{ren}}}^\ast}$ will be a natural candidate for $\hat{V}^\ast$. Having chosen the proper scaling we proceed to the following technical steps which give a rigorous meaning to the idea introduced above. Note that the definition implies ${{\hat{L}}}_{{{\varepsilon}, \, \mathrm{ren}}}=R_{{\varepsilon}^{-1}}{{\hat{L}}}_{\varepsilon}R_{\varepsilon}$. We prove that the “renormalized” operator ${{\hat{L}}}_{{{\varepsilon}, \, \mathrm{ren}}}$ is a generator of a contraction semigroup ${{\hat{T}}}_{{{\varepsilon}, \, \mathrm{ren}}}(t)$ on ${{\mathcal L}}_C$. Next we show that this semigroup converges strongly to some semigroup ${{\hat{T}}}_V(t)$ with the generator $\hat{V}$. This limiting semigroup leads us directly to the solution of the Vlasov-type equation. Below we show how to realize this scheme in detail.
Approximation in Banach space
-----------------------------
In this subsection we study the general question of the strong convergence of semigroups in Banach spaces. The obtained results will be crucial in the realization of the Vlasov-type scaling for the BDLP model.
Let $\left\{U_{t}^{\varepsilon},\,t\geq 0\right\},\: \varepsilon\geq
0$ be a family of semigroups on a Banach space $E$. We set $(L_{\varepsilon},\,D(L_{\varepsilon}))$ to be the generator of $\left\{U_{t}^{\varepsilon},\,t\geq 0\right\}$ for each $\varepsilon\geq 0$. Our purpose now is to describe the strong convergence of semigroups $\left\{U_{t}^{\varepsilon},\,t\geq
0\right\},\: \varepsilon\geq 0$ in terms of the corresponding generators as $\varepsilon$ tends to $0$. According to the classical result (see, e.g., [@Kat1976]), it is enough to show that there exist $\beta >0$ and ${\lambda}$ with $\mathrm{Re}\,{\lambda}>\beta$ such that $$\left(L_{\varepsilon}-{\lambda}1\!\!1\right)^{-1}\overset{s}{\longrightarrow}\left(L_{0}-{\lambda}1\!\!1\right)^{-1},\quad\quad\varepsilon\rightarrow 0,\label{resconv}$$ where $1\!\!1$ is the identity operator. In this subsection we show how to prove this convergence under the following assumptions on the family $(L_{\varepsilon},\,D(L_{\varepsilon}))$, $\varepsilon \geq 0$:
[**(A)**]{}:
1. For any $\varepsilon\geq 0$, the operator $(L_{\varepsilon},\,D(L_{\varepsilon}))$ admits representation $$L_{\varepsilon}=A_{1}(\varepsilon)+A_{2}(\varepsilon),$$ where $A_{1}(\varepsilon)$ is a closed operator and $D(A_{1}(\varepsilon))=D(A_{2}(\varepsilon)):=D(L_{\varepsilon})$.
2. There exists $\beta>0$ and ${\lambda}$: $\mathrm{Re} \,{\lambda}>\beta$ such that
1. ${\lambda}$ belongs to the resolvent set of $A_{1}(\varepsilon)$ for any $\varepsilon \geq 0$ and $$\left( A_{1}(\varepsilon)-\lambda 1\!\!1\right) ^{-1}\overset{s}{
\longrightarrow }\left( A_{1}(0)-\lambda1\!\!1 \right) ^{-1},\varepsilon \rightarrow
0,$$
2. $$\sup_{\varepsilon >0}\left\Vert \left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}\right\Vert \leq \left\Vert \left( A_{1}(0)-\lambda 1\!\!1\right) ^{-1}\right\Vert ,$$
3. for any $\varepsilon \geq 0$ $$\left\Vert A_{2}(\varepsilon)\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}\right\Vert < 1,$$
4. $\left( A_{2}(\varepsilon)\left( A_{1}(\varepsilon)
-\lambda 1\!\!1 \right) ^{-1}+1\!\!1 \right) ^{-1}$ converges strongly to the operator $\left( A_{2}(0)\left( A_{1}(0)-\lambda 1\!\!1 \right) ^{-1}+1\!\!1\right) ^{-1}$ as $\varepsilon \rightarrow 0$.
The strong convergence result for the family $\left\{U_{t}^{\varepsilon},\,t\geq 0\right\},\: \varepsilon\geq 0$ is established by our next theorem.
\[gentheor\] Let $(L_{\varepsilon},\,D(L_{\varepsilon}))$, $\varepsilon \geq 0$ be the family of generators corresponding to the family of $C_{0}$-semigroups $\left\{U_{t}^{\varepsilon},\,t\geq 0\right\},\:
\varepsilon\geq 0$. Then, $U_{t}^{\varepsilon}$ converges strongly to $U_{t}^{0}$ as $\varepsilon \rightarrow 0$ uniformly on each finite interval of time, provided assumptions [**(A)**]{} are satisfied.
The proof is completed by showing the strong resolvent convergence. For any $\varepsilon\geq 0$ and $\lambda$ from the resolvent set of $A_{1}(\varepsilon)$ we have $$\mathrm{Ran}\left( \left(
A_{1}(\varepsilon)-\lambda 1\!\!1\right) ^{-1}\right) =D\left(
A_{1}(\varepsilon)\right) =D\left( A_{2}(\varepsilon)\right) =
D(L_{\varepsilon}).$$ Hence, $$\begin{aligned}
& L_{\varepsilon}-\lambda1\!\!1=A_{1}(\varepsilon)+A_{2}(\varepsilon)-\lambda1\!\!1\nonumber \\
=&\left( A_{2}(\varepsilon)\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}+1\!\!1\right)
\left( A_{1}(\varepsilon)-\lambda 1\!\!1\right). \label{o1}\end{aligned}$$ Combining this factorization with assumption 2(c) of [**(A)**]{} we get the following representation for the resolvent $$\begin{aligned}
&\left( L_{\varepsilon}-\lambda1\!\!1\right)^{-1}=\left( A_{1}(\varepsilon)+A_{2}(\varepsilon)-\lambda 1\!\!1
\right)
^{-1} \label{longexpansresolv} \nonumber\\
=&\left( A_{1}(\varepsilon)-\lambda1\!\!1 \right) ^{-1}\left(
A_{2}(\varepsilon)\left( A_{1}(\varepsilon)-\lambda 1\!\!1\right)
^{-1}+1\!\!1\right) ^{-1}.\end{aligned}$$ From this formula, the triangle inequality, and assumptions 2(a), 2(b) and 2(d) of [**(A)**]{} we conclude the assertion of the theorem.
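For the reader’s convenience, this concluding estimate can be written out explicitly (a sketch; here $Q_{\varepsilon }:=\left( A_{2}(\varepsilon)\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}+1\!\!1\right) ^{-1}$, so that $\left( L_{\varepsilon}-\lambda1\!\!1\right)^{-1}=\left( A_{1}(\varepsilon)-\lambda1\!\!1 \right) ^{-1}Q_{\varepsilon }$):

```latex
\begin{aligned}
&\left\Vert \left( L_{\varepsilon}-\lambda1\!\!1\right)^{-1}G
 -\left( L_{0}-\lambda1\!\!1\right)^{-1}G\right\Vert \\
\leq\;&\left\Vert \left( A_{1}(\varepsilon)-\lambda1\!\!1\right)^{-1}\right\Vert
 \left\Vert \left( Q_{\varepsilon }-Q_{0}\right) G\right\Vert
 +\left\Vert \left( \left( A_{1}(\varepsilon)-\lambda1\!\!1\right)^{-1}
 -\left( A_{1}(0)-\lambda1\!\!1\right)^{-1}\right) Q_{0}G\right\Vert .
\end{aligned}
```

The first summand tends to zero by assumptions 2(b) and 2(d), the second one by assumption 2(a).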
Main results
------------
We check at once that the proper scaling for the BDLP pre-generator is the following one $$\begin{aligned}
(L_{\varepsilon}F)({\gamma}):=&
\sum_{x\in{\gamma}}\left[m+{\varepsilon}\varkappa^{-}\sum_{y\in{\gamma}\setminus
x}a^{-}(x-y)\right]D_{x}^{-}F({\gamma})\label{resc BP-gen}
\\
&+\varkappa^{+}\int_{{{\mathbb R}}^{d}}\sum_{y\in{\gamma}}a^{+}(x-y)D_{x}^{+}F({\gamma})dx,
\quad\quad\quad {\varepsilon}>0. \notag\end{aligned}$$ Next we consider the formal $K$-image of $L_{\varepsilon}$ and the corresponding renormalized operator on $B_{\mathrm{bs}}({\Gamma}_0)$: $${{\hat{L}}}_{\varepsilon}G:=K^{-1}L_{\varepsilon}K G; \qquad {{\hat{L}}}_{{{\varepsilon}, \, \mathrm{ren}}}G:=R_{{\varepsilon}^{-1}}{{\hat{L}}}_{\varepsilon}R_{\varepsilon}G.$$ In the proposition below we calculate the precise form of the operator ${{\hat{L}}}_{{{\varepsilon}, \, \mathrm{ren}}}$ for the BDLP model.
For any $\varepsilon>0$ and any $G\in B_{\mathrm{bs}}\left( \Gamma _{0}\right) $ $$\begin{aligned}
\hat{L}_{\varepsilon ,\mathrm{ren}}G =&A_{1}G+A_{2}G+\varepsilon
\left( B_{1}G+B_{2}G\right) ,\end{aligned}$$where$$\begin{aligned}
(A_{1}G)\left( \eta \right) =&-m\left\vert \eta \right\vert G\left(
\eta
\right) , \\
(A_{2}G)\left( \eta \right) =&-\varkappa ^{-}\sum_{x\in \eta
}\sum_{y\in
\eta \setminus x}a^{-}\left( x-y\right) G\left( \eta \setminus x\right) \\
&+\varkappa ^{+}\sum_{y\in \eta }\int_{\mathbb{R}^{d}}a^{+}\left(
x-y\right) G\left( \eta \setminus y\cup x\right) dx, \\
(B_{1}G)\left( \eta \right) =&-\varkappa ^{-}E^{a^{-}}\left( \eta
\right)
G\left( \eta \right) , \\
(B_{2}G)\left( \eta \right) =&\,\varkappa ^{+}\sum_{y\in \eta }\int_{\mathbb{R}^{d}}a^{+}\left( x-y\right) G\left( \eta \cup x\right) dx.\end{aligned}$$
According to the definition, we have $\hat{L}_{\varepsilon ,\mathrm{ren}}=R_{\varepsilon ^{-1}}\hat{L}_{\varepsilon }R_{\varepsilon }$, where $$\hat{L}_{\varepsilon }=\hat{L}^{-}\left( m,\varepsilon \varkappa ^{-}a^{-}\right) +\varepsilon ^{-1}\hat{L}^{+}\left( \varepsilon \varkappa ^{+}a^{+}\right).$$ As a result,$$\begin{aligned}
(\hat{L}_{\varepsilon }G)\left( \eta \right) =&\,(A_{1}G)\left( \eta
\right)
+\varepsilon (B_{1}G)\left( \eta \right) +(B_{2}G)\left( \eta \right) \\
&-\varepsilon \varkappa ^{-}\sum_{x\in \eta }\sum_{y\in \eta
\setminus
x}a^{-}\left( x-y\right) G\left( \eta \setminus x\right) \\
&+\varkappa ^{+}\sum_{y\in \eta }\int_{\mathbb{R}^{d}}a^{+}\left(
x-y\right) G\left( \eta \setminus y\cup x\right) dx.\end{aligned}$$and hence $$(\hat{L}_{\varepsilon ,\mathrm{ren}}G)\left( \eta \right) =(A_{1}G)\left(
\eta \right) +(A_{2}G)\left(
\eta \right)+\varepsilon (\left( B_{1}+B_{2}\right) G)\left( \eta
\right) ,$$ which completes the proof.
It is easily seen that the operator $\hat{V}: =A_{1}+A_{2}$ will be the point-wise limit of ${{\hat{L}}}_{{{\varepsilon}, \, \mathrm{ren}}}$ as $\varepsilon$ tends to 0. Therefore, the adjoint operator to $\hat{V}$ w.r.t. the duality (if it exists) can be considered as a candidate for the Vlasov operator in our model.
Below we give a rigorous meaning to the operator $\hat{L}_{\varepsilon ,\mathrm{ren}}$. Let us define the set $$D_{1}:=\left\{ G\in \mathcal{L}_{C}|\:
E^{a^{-}}\left(
\cdot \right) G\left(
\cdot \right)\in \mathcal{L}_{C},\;\left\vert \cdot \right\vert G\left(
\cdot \right)\in \mathcal{L}_{C}\right\}$$
\[pr1\] For any $\varepsilon,\,m,\,\varkappa ^{-}, C>0$ the operator $$A_{1}(\varepsilon):=A_{1}+\varepsilon B_{1}$$ with the domain $D_{1}$ is a generator of a contraction $C_{0}$-semigroup on $\mathcal{L}_{C}$. Moreover, $A_{1}(\varepsilon)\in \mathcal{H}\left( \omega\right) $ for all $\omega
\in \left( 0;\frac{\pi }{2}\right) $.
See the proof of Proposition 4.2 in [@FKK2009].
\[rem001\] It is a simple matter to check that Proposition \[pr1\] holds also in the case $\varepsilon=0$, provided the domain of the operator $A_{1}(0):=A_{1}$ is changed to $$D_{0}:=\left\{ G\in \mathcal{L}_{C}\left|\right.\:
\;\left\vert \cdot \right\vert G\in \mathcal{L}_{C}\right\}\supset D_{1}.$$
The next task is to show that for any $\varepsilon > 0$ the operator $$A_{2}(\varepsilon):= \hat{L}_{\varepsilon ,\mathrm{ren}}-A_{1}(\varepsilon)=A_{2}+\varepsilon B_{2}$$ with the domain $D_{1}$ as well as the operator $A_{2}(0):=A_{2}$ with the domain $D_{0}$ are relatively bounded w.r.t. the operators $(A_{1}(\varepsilon),\,D_{1})$ and $(A_{1},\,D_{0})$, respectively. This is demonstrated in Propositions \[pr001\] and \[pr002\], which can be proved similarly to Lemmas 4.4 and 4.5 in [@FKK2009].
\[pr001\] For any $\delta >0$ and any $\,\varkappa^{-}, \varkappa^{+}, m, C >0$ such that $$\frac{\varkappa^{-}C}{m}+\frac{\varkappa^{+}}{m}\leq \delta$$ the following estimate holds $$\left\Vert A_{2}G\right\Vert _{C}\leq \delta \left\Vert A_{1}G\right\Vert
_{C},~G\in D_{0}.$$Moreover, for all $\varepsilon >0$ $$\left\Vert A_{2}G\right\Vert _{C}\leq \delta \left\Vert A_{1}(\varepsilon) G\right\Vert _{C},~G\in D_{1} .$$
Now, the operator $\left( A_{2}, D_{0}\right)$ is well-defined on $\mathcal{L}_{C}$.
\[pr002\] For any $\varepsilon ,\delta >0$ and any $ \varkappa ^{-},\varkappa ^{+}, m, C>0$ such that$$\varepsilon \varkappa ^{+}E^{a^{+}}\left( \eta \right) <\delta C\left(
\varepsilon \varkappa ^{-}E^{a^{-}}\left( \eta \right) +m\left\vert \eta
\right\vert \right) ,~\eta \neq \emptyset$$the following estimate holds$$\left\Vert \varepsilon B_{2}G\right\Vert _{C}\leq a\left\Vert
A_{1}(\varepsilon) G\right\Vert _{C},~G\in D_{1}$$with $a <\delta $.
Proposition \[pr002\] enables us to take $D(B_{2})=D_{1}$. As a result, Remark \[rem001\] shows that the domain of the operator $A_{2}(\varepsilon)$ will be $D_{0}\cap D_{1}=D_{1}$.
We are now in a position to show that the operator $(
\hat{L}_{\varepsilon ,\mathrm{ren}}, \,D_{1} )$ generates a semigroup on $\mathcal{L}_{C}$. To this end we use the classical result about the perturbation of holomorphic semigroups (see, e.g., [@Kat1976]). For the convenience of the reader we formulate below the main statement without proof:
[*For any $T\in\mathcal{H}(\omega), \;\omega\in(0;\,\frac{\pi}{2})$ and for any $\epsilon > 0$ there exist positive constants $\alpha$, $\delta$ such that if the operator $A$ satisfies $$||Au||\leq a||Tu||+b||u||, \quad u\in D(T)\subset D(A),$$ with $a<\delta$, $b<\delta$, then $T+A$ is a generator of a holomorphic semigroup. In particular, if $\;b=0$, then $T+A\in
\mathcal{H}(\omega -\epsilon)$.* ]{}
\[theor11\] Let the functions $a^{-},a^{+}$ and the constants $m,\, \varkappa
^{-},\varkappa ^{+}, C>0$ satisfy $$\begin{aligned}
m&>4\left( \varkappa ^{-}C+\varkappa ^{+}\right), \label{bigmort}\\
C\varkappa ^{-}a^{-}\left( x\right) &\geq 4\varkappa ^{+}a^{+}\left(
x\right) ,~x\in \mathbb{R}^{d}.\label{bigcomp}\end{aligned}$$ Then, for any $\varepsilon > 0$ the operator $( \hat{L}_{\varepsilon ,\mathrm{ren}}, \,D_{1} ) $ is a generator of a holomorphic semigroup $\hat{U}_{t,\varepsilon },\,
t\geq 0$ on $\mathcal{L}_{C}$.
Let $\varepsilon > 0$ be arbitrary and fixed. By definition, $$\hat{L}_{\varepsilon
,\mathrm{ren}}=A_{1}(\varepsilon)+A_{2}(\varepsilon).$$ The direct application of the theorem about the perturbation of holomorphic semigroups (see the formulation above the assertion of Theorem \[theor11\]) to $T= A_{1}(\varepsilon)$ and $A=A_{2}(\varepsilon)$ now gives the desired claim. It is important to note that Proposition \[pr1\] enables us to consider $\delta$ equal to $\frac{1}{2}$ in the formulation of the classical theorem introduced above. The appearance of the factor $4$ on the left-hand side of both assumptions in the assertion of Theorem \[theor11\] is motivated exactly by the latter fact.
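The latter remark can be sketched in one line: the conditions of Theorem \[theor11\] allow one to apply Propositions \[pr001\] and \[pr002\] with $\delta =\frac{1}{4}$ each, whence

```latex
\left\Vert A_{2}(\varepsilon)G\right\Vert _{C}
\leq \left\Vert A_{2}G\right\Vert _{C}
 +\left\Vert \varepsilon B_{2}G\right\Vert _{C}
\leq \tfrac{1}{4}\left\Vert A_{1}(\varepsilon)G\right\Vert _{C}
 +\tfrac{1}{4}\left\Vert A_{1}(\varepsilon)G\right\Vert _{C}
=\tfrac{1}{2}\left\Vert A_{1}(\varepsilon)G\right\Vert _{C},
\qquad G\in D_{1},
```

so that $A_{2}(\varepsilon)$ is $A_{1}(\varepsilon)$-bounded with relative bound $\frac{1}{2}<1$, as required by the perturbation theorem.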
\[co1\] Assume that the constants $\,m, \varkappa ^{-}, \varkappa ^{+}, C>0$ satisfy $$m>2\left( \varkappa ^{-}C+\varkappa ^{+}\right).$$ Then, the operator $\hat{V}=A_{1}+A_{2}$ with the domain $D_{0}$ is a generator of a holomorphic semigroup $\hat{U}_{t}^{V}, \,t\geq 0$ on $\mathcal{L}_{C}$.
We use the same classical result as for Theorem \[theor11\] in the case: $A_{1}$ is a generator of a holomorphic semigroup, $A_{2}$ is relatively bounded w.r.t. $A_{1}$ with the relative bound less than $\frac{1}{2}$.
Now we may repeat the same considerations as at the end of Section \[sect:base\]. Namely, transferring the general results about adjoint semigroups (see, e.g., [@EN2000]) onto the semigroup $(\hat{U}_{t}^{V})^\ast$ in ${{\mathcal K}}_C$ we deduce that it will be weak\*-continuous and weak\*-differentiable at $0$. Moreover, $\hat{V}^\ast$ will be the weak\*-generator of $(\hat{U}_{t}^{V})^\ast$. This means, in particular, that for any $G\in D(\hat{V})\subset {{\mathcal L}}_C$, $k\in D(\hat{V}^*)\subset {{\mathcal K}}_C$ $$\label{abstrCauchy}
\frac{d}{dt}{{\left\langle}\!{\left\langle}}G, (\hat{U}_{t}^{V})^\ast
k {{\right\rangle}\!{\right\rangle}}= {{\left\langle}\!{\left\langle}}G, \hat{V}^\ast(\hat{U}_{t}^{V})^\ast
k{{\right\rangle}\!{\right\rangle}}.$$ The explicit form of $\hat{V}^\ast$ follows from the duality, namely, for any $k \in D(\hat{V}^*)$ $$\begin{aligned}
\hat{V}^\ast k(\eta)=-m|\eta|k(\eta)&-\varkappa^{-}\int_{{{\mathbb R}}^{d}}\sum_{x\in\eta}a^{-}(x-y)k(\eta\cup y)dy\nonumber\\
&+ \varkappa^{+}\sum_{x\in\eta}\int_{{{\mathbb R}}^{d}}a^{+}(x-y)k(\eta\setminus x\cup y)dy.\label{vadjoint}\end{aligned}$$ As a result, we have that for any $k_0\in
D(\hat{V}^*)$ the function $k_t=(\hat{U}_{t}^{V})^\ast k_0$ provides a weak\* solution of the following Cauchy problem $$\label{CauchyVlasov}
\begin{cases}
\dfrac{\partial}{\partial t} k_t = \hat{V}^\ast k_t\\[2mm]
k_t\bigr|_{t=0}=k_0.
\end{cases}$$
In the next theorem we show that the limiting Vlasov dynamics has the chaos preservation property, i.e., it preserves Lebesgue–Poisson exponents.
\[Vlasovscheme\] Let conditions of Theorem \[theor11\] be satisfied and, additionally, $C\geq\frac{4}{16 e-1}$. Let $\rho_0\geq 0$ be a measurable nonnegative function on ${{{{\mathbb R}}^d}}$ such that $\operatorname*{{\mathrm{ess\,sup}}}_{x\in{{{{\mathbb R}}^d}}} \rho_0(x) \leq C$. Then the Cauchy problem with $k_0=e_{\lambda}(\rho_0)$ has a weak\* solution $k_{t}=e_{\lambda}(\rho_t)\in{{\mathcal K}}_C$, where $\rho_t$ is a unique nonnegative solution to the Cauchy problem $$\label{CauchyVlasoveqn}
\begin{cases}
\dfrac{\partial}{\partial t} \rho_t(x) = \varkappa^{+}(a^{+}\ast\rho_t)(x)- \varkappa^{-}\rho_{t}(x)(a^{-}\ast \rho_{t})(x)-
m\rho_{t}(x),\\[2mm]
\rho_t \bigr|_{t=0}(x)=\rho_0(x),
\end{cases}$$ and $\operatorname*{{\mathrm{ess\,sup}}}_{x\in{{{{\mathbb R}}^d}}} \rho_{t}(x) \leq C$, $t\geq0$.
First of all, if the Cauchy problem has a solution $\rho_t(x)\geq0$ then $$\frac{\partial}{\partial t} \rho_t(x) \leq\varkappa^{+}(a^{+}\ast\rho_t)(x) -m\rho_t(x)$$ and, therefore, $\rho_t(x)\leq r_t(x)$ where $r_t(x)$ is a solution of the Cauchy problem $$\label{CauchyEst}
\begin{cases}
\dfrac{\partial}{\partial t} r_t(x) = \varkappa^{+}(a^{+}\ast
r_t)(x)-mr_t(x) , \\
r_t \bigr|_{t=0}(x)=\rho_0(x)\geq 0,
\end{cases}$$ for a.a. $x\in{{{{\mathbb R}}^d}}$. Hence, $$\begin{aligned}
r_t(x)&=e^{-(m-\varkappa^+)t}e^{\varkappa^+ tL_{a^{+}}}\rho_0(x),\\\intertext{where} (L_{a^+}f)(x)&:=\int_{{{{\mathbb R}}^d}}a^+(x-y)[f(y)-f(x)]dy.\end{aligned}$$ Since for $f\in L^\infty({{{{\mathbb R}}^d}})$ we have $\bigl| (L_{a^+}f)(x)\bigr|\leq2\|f\|_{L^\infty({{{{\mathbb R}}^d}})}$ then, by the explicit formula for $r_t$ and the bound $\rho_0\leq C$, $$r_t(x)\leq Ce^{-(m-\varkappa^+)t}e^{2\varkappa^+ t}\leq C,$$ which yields $0\leq\rho_t(x)\leq C$.
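As a purely numerical illustration (not part of the proof), the a priori bound $0\leq \rho_t(x)\leq C$ can be observed for an explicit-Euler discretization of the equation on a one-dimensional grid; the Gaussian kernels, parameters and grid below are ad hoc choices, made so that $m>4(\varkappa^{-}C+\varkappa^{+})$ and $C\varkappa^{-}a^{-}\geq 4\varkappa^{+}a^{+}$ hold:

```python
import numpy as np

# Explicit-Euler sketch of
#   d/dt rho = k_plus*(a_plus * rho) - k_minus*rho*(a_minus * rho) - m*rho
# on a 1d grid; kernels/parameters are illustrative, chosen so that
# m > 4*(k_minus*C + k_plus) and C*k_minus*a_minus >= 4*k_plus*a_plus.
h = 0.05
x = np.arange(-10.0, 10.0, h)
a = np.exp(-x**2 / 2.0)
a_plus = a_minus = a / (h * a.sum())          # normalized: ||a||_{L^1} = 1
k_plus, k_minus, m, C = 0.1, 0.5, 3.0, 1.0    # m = 3.0 > 4*(0.5*1 + 0.1) = 2.4

def conv(kern, f):
    # (kern * f)(x_i) ~ h * sum_j kern(x_i - x_j) f(x_j)
    return h * np.convolve(kern, f, mode="same")

rho = C * np.exp(-x**2)                        # initial density, 0 <= rho_0 <= C
dt = 0.01
for _ in range(500):                           # integrate up to t = 5
    rho = rho + dt * (k_plus * conv(a_plus, rho)
                      - k_minus * rho * conv(a_minus, rho)
                      - m * rho)

assert rho.min() >= 0.0 and rho.max() <= C     # the a priori bound persists
```

Of course, this is only a sanity check of the comparison argument, not a substitute for it.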
To prove the existence and uniqueness of the solution of let us fix some $T>0$ and define the Banach space $X_T=C([0;T],L^\infty({{{{\mathbb R}}^d}}))$ of all continuous functions on $[0;T]$ with values in $L^\infty({{{{\mathbb R}}^d}})$; the norm on $X_T$ is given by $\|v\|_{T}:=\sup_{t\in[0;T]}\|v_t\|_{L^\infty({{{{\mathbb R}}^d}})}$. We denote by $X_T^+$ the cone of all nonnegative functions from $X_T$.
Let $\Phi$ be the mapping which assigns to any $v\in X_T$ the solution $u_t$ of the linear Cauchy problem $$\label{CauchyLin}
\begin{cases}
\dfrac{\partial}{\partial t} u_t(x) = \varkappa^{+}(a^{+}\ast v_t)(x)- \varkappa^{-}u_{t}(x)(a^{-}\ast v_{t})(x)-
mu_{t}(x), \\
u_t \bigr|_{t=0}(x)=\rho_0(x),
\end{cases}$$ for a.a. $x\in{{{{\mathbb R}}^d}}$. Therefore, $$\begin{aligned}
\label{defPhi}
(\Phi v)_t(x)=&\exp\left\{-\int_0^t\bigl(
m+\varkappa^-(a^-\ast v_s)(x)
\bigr)ds\right\}\rho_0(x)\\&+\int_0^t \exp\left\{-\int_s^t\bigl(
m+\varkappa^-(a^-\ast v_\tau)(x)
\bigr)d\tau\right\}\varkappa^{+}(a^{+}\ast v_s)(x)ds.\nonumber\end{aligned}$$ We have that $v\in X_T^+$ implies $\Phi v \geq0$ as well as the estimate $$(\Phi v)_t(x)\leq \rho_0(x)+\varkappa^+ \|v\|_T\int_0^t e^{-(t-s)m}ds\leq C +\frac{\varkappa^+}{m} \|v\|_T,$$ where we use the trivial inequality $$\label{H}
\|f\ast g\|_{L^\infty({{{{\mathbb R}}^d}})}\leq\|f\|_{L^1({{{{\mathbb R}}^d}})}\|g\|_{L^\infty({{{{\mathbb R}}^d}})},
\qquad f\in L^1({{{{\mathbb R}}^d}}), \ g\in L^\infty ({{{{\mathbb R}}^d}}).$$ Therefore, $\Phi v\in
X_T^+$. For simplicity of notation we denote for $v\in X_T^+$ $$(Bv)(t,x)=m+\varkappa^-(a^-\ast v_t)(x)\geq m>0.$$ Then, for any $v, w\in X_T^+$ $$\begin{aligned}
&\bigl| (\Phi v)_t(x)-(\Phi w)_t(x) \bigr| \\
\leq & \left|\exp\left\{-\int_0^t (Bv)(s,x)ds\right\}
-\exp\left\{-\int_0^t (Bw)(s,x)ds\right\}\right|\rho_0(x)\\
&+\int_0^t \left|\exp\left\{-\int_s^t(Bv)(\tau,x)d\tau\right\}\varkappa^{+}(a^{+}\ast v_s)(x)\right. \\ &\qquad \left.-\exp\left\{-\int_s^t(Bw)(\tau,x)d\tau\right\}\varkappa^{+}(a^{+}\ast w_s)(x)\right|ds.\end{aligned}$$ We have $$\begin{aligned}
&\left|\exp\left\{-\int_0^t (Bv)(s,x)ds\right\}
-\exp\left\{-\int_0^t (Bw)(s,x)ds\right\}\right|\\\leq&e^{-mt}\left|\exp\left\{-\int_0^t \varkappa^-(a^-\ast v_s)(x)ds\right\}
-\exp\left\{-\int_0^t \varkappa^-(a^-\ast w_s)(x)ds\right\}\right|\\\leq&
e^{-mt}\left|\int_0^t \varkappa^-(a^-\ast v_s)(x)ds-\int_0^t \varkappa^-(a^-\ast w_s)(x)ds\right|\\\leq&e^{-mt}
\varkappa^-\|v-w\|_T\cdot t\leq \frac{\varkappa^-}{em}\|v-w\|_T,\end{aligned}$$ where we used the convolution estimate above and the obvious inequalities $|e^{-a}-e^{-b}|\leq |a-b|$ for $a,b\geq0$ and $e^{-x}x\leq e^{-1}$ for $x\geq0$.
Next, using another simple estimate, valid for any $a,b,p,q\geq0$, $$|pe^{-a}-qe^{-b}|\leq e^{-a}|p-q|+qe^{-b}|e^{-(a-b)}-1|\leq
e^{-a}|p-q|+qe^{-b}|a-b|,$$ we obtain $$\begin{aligned}
&\int_0^t \left|\exp\left\{-\int_s^t(Bv)(\tau,x)d\tau\right\}\varkappa^{+}(a^{+}\ast v_s)(x)\right. \\ &\qquad \left.-\exp\left\{-\int_s^t(Bw)(\tau,x)d\tau\right\}\varkappa^{+}(a^{+}\ast w_s)(x)\right|ds\\
\leq&
\varkappa^+ \int_0^t \exp\left\{-\int_s^t(Bv)(\tau,x)d\tau\right\}\bigl|
a^+*(v_s-w_s)\bigr|(x) ds\\&
+\int_0^t \exp\left\{-\int_s^t(Bw)(\tau,x)d\tau\right\}(\varkappa^{+}a^{+}\ast w_s)(x)\\ &\quad\times\left| \int_s^t(Bv)(\tau,x)d\tau-\int_s^t(Bw)(\tau,x)d\tau\right|ds\\\leq&
\varkappa^+ \|v-w\|_T \int_0^t e^{-m(t-s)}ds\\&
+\int_0^t \exp\left\{-\int_s^t\varkappa^-(a^-\ast w_\tau)(x)d\tau\right\}(\varkappa^{+}a^{+}\ast w_s)(x)\allowdisplaybreaks[0]\\ &\quad\times e^{-m(t-s)} \int_s^t \varkappa^-(a^-\ast |v_\tau-w_{\tau}|)(x)d\tau ds
\\\intertext{and, using \eqref{bigcomp} and
the inequalities above, one can continue}\leq&
\frac{\varkappa^+}{m}\|v-w\|_T+\frac{C}{4}\frac{\varkappa^-}{em}\|v-w\|_T\\&\quad\times\int_0^t \exp\left\{-\int_s^t\varkappa^-(a^-\ast w_\tau)(x)d\tau\right\}\varkappa^-(a^{-}\ast w_s)(x)ds\\=&
\frac{\varkappa^+}{m}\|v-w\|_T+\frac{C}{4}\frac{\varkappa^-}{em}\|v-w\|_T\allowdisplaybreaks[0]\\&\quad\times\int_0^t \frac{\partial}{\partial s} \exp\left\{-\int_s^t\varkappa^-(a^-\ast w_\tau)(x)d\tau\right\}ds\\\leq&\left( \frac{\varkappa^+}{m}+\frac{C}{4}\frac{\varkappa^-}{em}\right)\|v-w\|_T.\end{aligned}$$
Therefore, for $v,w\in X_T^+$ $$\|\Phi v-\Phi w\|_T\leq
\left( \frac{\varkappa^+}{m}+\Bigl(1+\frac{C}{4}\Bigr)\frac{\varkappa^-}{em}\right)\|v-w\|_T\leq \frac{4(\varkappa^++C\varkappa^-)}{m}\|v-w\|_T,$$ provided, e.g., $1+\frac{C}{4}\leq 4Ce$, that is, $C\geq\frac{4}{16e-1}$.
As a result, by the assumption on $m$, $\Phi$ is a contraction mapping on the cone $X_T^+$. Taking, as usual, $v^{(n)}=\Phi^nv^{(0)}$, $n\geq1$ for $v^{(0)}\in X_T^+$ we obtain that $\{v^{(n)}\}\subset X_T^+$ is a fundamental sequence in $X_T$ which has, therefore, a unique limit point $v\in X_T$. Since $X_T^+$ is a closed cone we have that $v\in X_T^+$. Then, identically to the classical Banach fixed point theorem, $v$ will be a fixed point of $\Phi$ on $X_T$ and a unique fixed point on $X_T^+$. Then, this $v$ is the nonnegative solution of the Cauchy problem on the interval $[0;T]$. By the note above, $v_t(x)\leq C$. Changing the initial value to $\rho_t \bigr|_{t=T}(x)=v_T(x)$ we may extend all our considerations to the time interval $[T;2T]$ with the same estimate $v_t(x)\leq C$; and so on. As a result, the Cauchy problem has a unique global bounded nonnegative solution $\rho_t(x)$ on ${{\mathbb R}}_+$.
Consider now $$k_t(\eta)=e_{\lambda}(\rho_t,\eta)\in \mathcal{K}_{C},$$ then $$\frac{\partial}{\partial t}e_{\lambda}(\rho_t,\eta)=\sum_{x\in{\eta}}\frac{\partial\rho_{t}}{\partial t}(x)e_{\lambda}(\rho_t,\eta\setminus x).$$ Using (\[CauchyVlasoveqn\]) and the explicit form of $\hat{V}^\ast$, we immediately conclude that $k_t(\eta)=e_{\lambda}(\rho_t,\eta)$ is a solution to the Cauchy problem above.
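The fixed-point construction can likewise be illustrated numerically. The sketch below implements a discrete analogue of the map $\Phi$ (explicit Euler in time for the linear problem driven by $v$) and checks that successive Picard iterates $v^{(n)}=\Phi^{n}v^{(0)}$ contract; all kernels, parameters and grids are, again, ad hoc illustrative choices:

```python
import numpy as np

# Picard iteration v^(n+1) = Phi(v^(n)), where Phi(v) solves the linear problem
#   d/dt u = k_plus*(a_plus * v) - k_minus*u*(a_minus * v) - m*u,  u(0) = rho_0,
# discretized by explicit Euler; everything below is an ad hoc illustration.
h = 0.05
x = np.arange(-10.0, 10.0, h)
a = np.exp(-x**2 / 2.0)
a_plus = a_minus = a / (h * a.sum())          # normalized: ||a||_{L^1} = 1
k_plus, k_minus, m, C = 0.1, 0.5, 3.0, 1.0    # m > 4*(k_minus*C + k_plus)
dt, n_t = 0.01, 101                           # time grid on [0; 1]
rho0 = C * np.exp(-x**2)

def conv(kern, f):
    return h * np.convolve(kern, f, mode="same")

def Phi(v):
    """Solve the v-driven linear Cauchy problem; v and result: shape (n_t, n_x)."""
    u = np.empty_like(v)
    u[0] = rho0
    for i in range(n_t - 1):
        du = (k_plus * conv(a_plus, v[i])
              - k_minus * u[i] * conv(a_minus, v[i]) - m * u[i])
        u[i + 1] = u[i] + dt * du
    return u

v = np.tile(rho0, (n_t, 1))                   # v^(0): constant in time
diffs = []
for _ in range(6):
    w = Phi(v)
    diffs.append(float(np.max(np.abs(w - v))))
    v = w

assert diffs[0] > 0.0 and diffs[-1] < diffs[0]   # iterates contract
```

The successive differences shrink roughly geometrically, mirroring the contraction constant obtained in the proof.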
The main result of the paper is formulated in the next theorem. Its proof will be given in Subsection 3.4.
\[theor22\] Under conditions of Theorem \[theor11\] the semigroup $\hat{U}_{t,\varepsilon }$ converges strongly to the semigroup $\hat{U}_{t}^{V}$ as $\varepsilon \rightarrow 0$ uniformly on any finite intervals of time.
Proofs
------
According to Theorem \[gentheor\], the statement of Theorem \[theor22\] will be proved once we verify Assumptions [**(A)**]{} for the operators $\left( A_{1}(\varepsilon), D_{1}\right)$, $\left( A_{2}(\varepsilon), D_{1}\right)$, $\varepsilon > 0$, defined in the previous subsection. Note that $A_{1}(0)=A_{1}$ and $A_{2}(0)=A_{2}$ are defined on the domain $D_{0}$.
In the following proposition we verify Assumption 2(a) of [**(A)**]{}.
\[Prop2.1\] Let $\lambda > 0$ then $$\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}\overset{s}{\longrightarrow }\left( A_{1}-\lambda 1\!\!1 \right) ^{-1},\varepsilon \rightarrow
0.$$
For any $G\in \mathcal{L}_{C}$$$\begin{aligned}
&\left\Vert \left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}G-\left(
A_{1}-\lambda 1\!\!1 \right) ^{-1}G\right\Vert _{C} \\
=&\int_{\Gamma _{0}}\left\vert G\left( \eta \right) \left( \frac{1}{-m\left\vert \eta \right\vert -\varepsilon \varkappa ^{-}E^{a^{-}}\left(
\eta \right) -\lambda }-\frac{1}{-m\left\vert \eta \right\vert -\lambda }\right) \right\vert C^{\left\vert \eta \right\vert }d\lambda \left( \eta
\right) \\
= &\int_{\Gamma _{0}}\left\vert G\left( \eta \right) \right\vert
F_{\varepsilon }\left( \eta \right) C^{\left\vert \eta \right\vert
}d\lambda \left( \eta \right) ,\end{aligned}$$where $$F_{\varepsilon }\left( \eta \right) :=\frac{\varepsilon \varkappa
^{-}E^{a^{-}}\left( \eta \right) }{\left( m\left\vert \eta \right\vert
+\varepsilon \varkappa ^{-}E^{a^{-}}\left( \eta \right) +\lambda \right)
\left( m\left\vert \eta \right\vert +\lambda \right) },~~\eta \in \Gamma
_{0}.$$Since $0\leq F_{\varepsilon }\left( \eta \right) <1/\lambda$ and $F_{\varepsilon
}\left( \eta \right) \rightarrow 0$ as $\varepsilon \rightarrow 0$ for any $\eta
\in \Gamma _{0}$, the desired statement follows from Lebesgue’s dominated convergence theorem.
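The mechanism of this proof is easy to probe numerically: for fixed values of $\left\vert \eta \right\vert$ and $E^{a^{-}}\left( \eta \right)$ (any nonnegative numbers; those below are illustrative only) the quantity $F_{\varepsilon }$ stays below $1/\lambda $ and vanishes as $\varepsilon \rightarrow 0$:

```python
# F_eps from the proof, for fixed n = |eta| and E = E^{a^-}(eta);
# m, k_minus (varkappa^-) and lam (lambda) are illustrative values.
def F(eps, n, E, m=1.0, k_minus=1.0, lam=2.0):
    return (eps * k_minus * E) / (
        (m * n + eps * k_minus * E + lam) * (m * n + lam))

vals = [F(e, n=5, E=30.0) for e in (1.0, 0.1, 0.01, 0.001)]
assert all(0.0 <= v < 1.0 / 2.0 for v in vals)       # 0 <= F_eps < 1/lambda
assert all(a > b for a, b in zip(vals, vals[1:]))    # decreasing as eps -> 0
assert F(1e-12, n=5, E=30.0) < 1e-10                 # pointwise limit is 0
```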
Next we check Assumption 2(b) of [**(A)**]{}.
\[Prop2.2\] Let $\lambda > 0$ be arbitrary and fixed. Then $$\sup_{\varepsilon \geq 0}\left\Vert \left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}\right\Vert \leq \left\Vert \left( A_{1}-\lambda 1\!\!1 \right) ^{-1}\right\Vert .$$
For any $G\in \mathcal{L}_{C}$ and any $\varepsilon >0$ $$\begin{aligned}
&\left\Vert \left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}G\right\Vert _{C} \\
=&\int_{\Gamma _{0}}\left\vert G\left( \eta \right) \right\vert \frac{1}{m\left\vert \eta \right\vert +\varepsilon \varkappa ^{-}E^{a^{-}}\left( \eta
\right) +\lambda }C^{\left\vert \eta \right\vert }d\lambda \left( \eta
\right) \\
\leq &\int_{\Gamma _{0}}\left\vert G\left( \eta \right) \right\vert \frac{1}{m\left\vert \eta \right\vert +\lambda }C^{\left\vert \eta \right\vert
}d\lambda \left( \eta \right) =\left\Vert \left( A_{1}-\lambda 1\!\!1 \right)
^{-1}G\right\Vert _{C} \\
\leq &\left\Vert \left( A_{1}-\lambda 1\!\!1 \right) ^{-1}\right\Vert \cdot
\left\Vert G\right\Vert _{C}.\end{aligned}$$ This finishes the proof.
Assumption 2(c) of [**(A)**]{} is proved in the next Proposition.
\[2(cc)\] Let conditions of Theorem \[theor11\] be satisfied. Then, for any $\lambda>0$ $$\sup_{\varepsilon\geq 0}\left\Vert A_{2}(\varepsilon)\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}\right\Vert < \frac{1}{2}\label{2(c)}$$
First we prove the assertion for $\varepsilon=0$. Since $D\left( A_{1}\right)= D\left( A_{2}\right)=D_{0} $ and $\mathrm{Ran}\left(
\left( A_{1}-\lambda 1\!\!1 \right) ^{-1}\right) =D\left( A_{1}\right)$, the operator $A_{2}\left( A_{1}-\lambda 1\!\!1 \right) ^{-1}$ is well defined. Next, the first condition of Theorem \[theor11\] and Proposition \[pr001\] yield$$\left\Vert A_{2}\left( A_{1}-\lambda 1\!\!1 \right) ^{-1}\right\Vert <\frac{1}{4}.
\label{star}$$Indeed, $$\left\Vert A_{2}G\right\Vert _{C}\leq a\left\Vert A_{1}G\right\Vert
_{C}<a\left\Vert \left( A_{1}-\lambda 1\!\!1 \right) G\right\Vert _{C}$$ with $a<\frac{1}{4}$. Therefore, $$\left\Vert A_{2}\left( A_{1}-\lambda 1\!\!1 \right) ^{-1}G\right\Vert _{C}<\frac{1}{4}\left\Vert G\right\Vert _{C},$$and the bound above is proved. Now, let $\varepsilon > 0$ be arbitrary and fixed. The main arguments we use to show $$\left\Vert A_{2}(\varepsilon)\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}\right\Vert < \frac{1}{2}$$ are the following:
1\) $D\left( A_{1}(\varepsilon) \right) =D_{1} \subset D_{0}=D\left(
A_{2}\right) $. Hence, $A_{2}\left( A_{1}(\varepsilon)-\lambda 1\!\!1
\right) ^{-1}$ is well-defined on $\mathcal{L}_{C}$. Moreover, Proposition \[pr001\] implies
$$\left\Vert A_{2}\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}\right\Vert <\frac{1}{4}, \quad \varepsilon >0.$$
2\) $D\left( B_{2}\right) =D\left( A_{1}(\varepsilon) \right) =D_{1} $ and for any $\varepsilon >0$ $$\left\Vert \varepsilon B_{2}\left( A_{1}(\varepsilon)-\lambda 1\!\!1
\right) ^{-1}\right\Vert <\frac{1}{4},$$ which follows from Proposition \[pr002\].
3\) Since $A_{2}(\varepsilon):=A_{2}+\varepsilon B_{2}$, the triangle inequality gives $$\left\Vert A_{2}(\varepsilon)\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}\right\Vert <\frac{1}{2}. \label{againstar}$$ The latter concludes the proof.
We set$$\begin{aligned}
Q_{\varepsilon } =\left( A_{2}(\varepsilon)\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}+1\!\!1\right) ^{-1},
\quad\quad
Q =\left( A_{2}\left( A_{1}-\lambda 1\!\!1 \right) ^{-1}+1\!\!1\right) ^{-1}.\end{aligned}$$In order to verify Assumption 2(d) of [**(A)**]{} we have to show that $Q_{\varepsilon }\overset{s}{\longrightarrow }Q$ as $\varepsilon\rightarrow 0$.
Suppose that we can show that $$\label{3star}
\begin{aligned}
A_{2}\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}&\overset{s}{\longrightarrow }A_{2}\left( A_{1}-\lambda 1\!\!1 \right)
^{-1},&\varepsilon
\rightarrow 0. \\
\varepsilon B_{2}\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}&\overset{s}{\longrightarrow }0,&\varepsilon \rightarrow 0.
\end{aligned}$$ Then,$$\begin{aligned}
C_{\varepsilon } :=&A_{2}(\varepsilon)\left(
A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1} \\
=&A_{2}\left( A_{1}+\varepsilon B_{1}-\lambda 1\!\!1 \right)
^{-1}+\varepsilon
B_{2}\left( A_{1}+\varepsilon B_{1}-\lambda 1\!\!1 \right) ^{-1}\overset{s}{\longrightarrow }A_{2}\left( A_{1}-\lambda 1\!\!1 \right) ^{-1}\end{aligned}$$To check $$Q_{\varepsilon }=\left( C_{\varepsilon }+1\!\!1\right) ^{-1}\overset{s}{\longrightarrow }Q \label{twostar}$$we proceed as follows: $$\begin{aligned}
&\left( C_{\varepsilon }+1\!\!1\right) ^{-1}-Q \\
=&\left( C_{\varepsilon }+1\!\!1\right) ^{-1}-\left( A_{2}\left(
A_{1}-\lambda 1\!\!1
\right) ^{-1}+1\!\!1\right) ^{-1} \\
=&\left( C_{\varepsilon }+1\!\!1\right) ^{-1}\left( A_{2}\left(
A_{1}-\lambda 1\!\!1 \right) ^{-1}+1\!\!1-C_{\varepsilon }-1\!\!1\right) \left(
A_{2}\left( A_{1}-\lambda 1\!\!1
\right) ^{-1}+1\!\!1\right) ^{-1} \\
=&\left( C_{\varepsilon }+1\!\!1\right) ^{-1}\left( A_{2}\left(
A_{1}-\lambda 1\!\!1 \right) ^{-1}-C_{\varepsilon }\right) \left(
A_{2}\left( A_{1}-\lambda 1\!\!1 \right) ^{-1}+1\!\!1\right) ^{-1}.\end{aligned}$$Assuming these convergences, it is now obvious that the desired convergence of $Q_{\varepsilon }$ is equivalent to $$\sup_{\varepsilon >0}\left\Vert \left( C_{\varepsilon }+1\!\!1\right)
^{-1}\right\Vert <\infty,$$ which is clear from $$\left\Vert \left( C_{\varepsilon }+1\!\!1\right) ^{-1}\right\Vert \leq \frac{1}{1-\left\Vert C_{\varepsilon }\right\Vert }\quad \mathrm{and}\quad\left\Vert C_{\varepsilon }\right\Vert <\frac{1}{2}.$$ The last bound follows from Proposition \[2(cc)\]. As a result, we shall have established Theorem \[theor22\] once we prove the two strong convergences assumed above.
\[L1\] $A_{2}\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}\overset{s}{\longrightarrow }A_{2}\left( A_{1}-\lambda 1\!\!1 \right) ^{-1},\; \mathrm{as}\; \varepsilon
\rightarrow 0.$
Proposition \[pr001\] and $$D\left(A_{1}(\varepsilon)\right) =D_{1}\subset D\left( A_{1}\right)=D\left( A_{2}\right) =D_{0}$$ lead to the following formula $$A_{2}\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}=A_{2}\left(
A_{1}-\lambda 1\!\!1 \right) ^{-1}\left( A_{1}-\lambda 1\!\!1 \right) \left(
A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}.$$Now, we are left with the task of showing that $$\left( A_{1}-\lambda 1\!\!1 \right) \left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}\overset{s}{\longrightarrow }1\!\!1, \quad \mathrm{as}\quad \varepsilon
\rightarrow 0.$$But, for any $G\in \mathcal{L}_{C}$$$\begin{aligned}
&\left\Vert \left( \left( A_{1}-\lambda 1\!\!1 \right) \left(
A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}-1\!\!1\right) G\right\Vert _{C} \\
=&\int_{\Gamma _{0}}\left\vert \frac{m\left\vert \eta \right\vert
+\lambda }{m\left\vert \eta \right\vert +\varepsilon \varkappa
^{-}E^{a^{-}}\left( \eta \right) +\lambda }-1\right\vert \left\vert
G\left( \eta \right)
\right\vert C^{\left\vert \eta \right\vert }d\lambda \left( \eta \right) \\
=&\int_{\Gamma _{0}}\frac{\varepsilon \varkappa ^{-}E^{a^{-}}\left(
\eta \right) }{m\left\vert \eta \right\vert +\varepsilon \varkappa
^{-}E^{a^{-}}\left( \eta \right) +\lambda }\left\vert G\left( \eta
\right) \right\vert C^{\left\vert \eta \right\vert }d\lambda \left(
\eta \right) \rightarrow 0,\quad \mathrm{as}\quad \varepsilon
\rightarrow 0\end{aligned}$$ due to the Lebesgue’s dominated convergence theorem.
\[L2\] $\varepsilon B_{2}\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}\overset{s}{\longrightarrow }0,\; \mathrm{as}\; \varepsilon
\rightarrow 0.$
Since $\left\Vert B_{2}G\right\Vert _{C}\leq \frac{1}{4}\left\Vert B_{1}G\right\Vert _{C}$, it suffices to show that $$\left\Vert \varepsilon B_{1}\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}G\right\Vert _{C}\rightarrow 0,\quad \mathrm{as}\quad\varepsilon \rightarrow 0.$$But, $$\begin{aligned}
&\left\Vert \varepsilon B_{1}\left( A_{1}(\varepsilon)-\lambda 1\!\!1
\right) ^{-1}G\right\Vert _{C}\\&=\int_{\Gamma
_{0}}\frac{\varepsilon \varkappa ^{-}E^{a^{-}}\left( \eta \right)
}{m\left\vert \eta \right\vert +\varepsilon \varkappa
^{-}E^{a^{-}}\left( \eta \right) +\lambda }\left\vert G\left( \eta
\right) \right\vert C^{\left\vert \eta \right\vert }d\lambda \left(
\eta \right) \rightarrow 0,\quad\varepsilon\rightarrow 0. \qedhere\end{aligned}$$
The last two lemmas conclude the proof of the main Theorem.
Under the assumptions of Proposition \[2(cc)\], we get the following representation for the resolvents of $\hat{V}$ and $\hat{L}_{\varepsilon ,\mathrm{ren}}$ $$\left( \hat{V}-\lambda 1\!\!1 \right)
^{-1}=\left( A_{1}+A_{2}-\lambda 1\!\!1 \right) ^{-1}=\left( A_{1}-\lambda 1\!\!1 \right)
^{-1}\left( A_{2}\left( A_{1}-\lambda 1\!\!1 \right) ^{-1}+1\!\!1\right) ^{-1},$$$$\begin{aligned}
\left( \hat{L}_{\varepsilon ,\mathrm{ren}}-\lambda 1\!\!1 \right) ^{-1}=&\left( A_{1}(\varepsilon)+A_{2}(\varepsilon)-\lambda 1\!\!1
\right)
^{-1} \label{longexpansresolv2} \\
=&\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right) ^{-1}\left(
A_{2}(\varepsilon)\left( A_{1}(\varepsilon)-\lambda 1\!\!1 \right)
^{-1}+1\!\!1\right) ^{-1},\quad\quad \lambda >0. \nonumber\end{aligned}$$
Acknowledgements {#acknowledgements .unnumbered}
----------------
The financial support of DFG through the SFB 701 (Bielefeld University) and German-Ukrainian Projects 436 UKR 113/97 is gratefully acknowledged. This work was partially supported by the Marie Curie “Transfer of Knowledge” programme, project TODEQ MTKD-CT-2005-030042 (Warsaw, IMPAN). O.K. is very thankful to Prof. J. Zemanek for fruitful and stimulating discussions.
[^1]: Institute of Mathematics, National Academy of Sciences of Ukraine, Kyiv, Ukraine (`[email protected]`).
[^2]: Fakultät für Mathematik, Universität Bielefeld, 33615 Bielefeld, Germany (`[email protected]`)
[^3]: Fakultät für Mathematik, Universität Bielefeld, 33615 Bielefeld, Germany (`[email protected]`).
---
abstract: 'In [@roth4], K. Roth showed that the expected value of the $L^2$ discrepancy of the cyclic shifts of the $N$ point van der Corput set is bounded by a constant multiple of $\sqrt{\log N}$, thus guaranteeing the existence of a shift with asymptotically minimal $L^2$ discrepancy, [@MR0066435]. In the present paper, we construct a specific example of such a shift.'
address: 'University of South Carolina, Columbia, SC / Institute for Advanced Study, Princeton, NJ'
author:
- Dmitriy Bilyk
title: Cyclic shifts of the Van der Corput set
---
[^1]
Introduction
============
Let $ \mathcal A_N \subset [0,1] ^{2}$ be a finite point set of cardinality $ N$. The extent of equidistribution of $\mathcal A_N$ can be measured by the *discrepancy function*: $$D_{\mathcal A_N} (x_1,x_2) \coloneqq \sharp \bigl(\mathcal A_N \cap
[0, x_1)\times[0,x_2) \bigr) - N x_1\cdot x_2\,,$$ i.e. the difference between the actual and expected number of points of $\mathcal A_N$ in the rectangle $[0, x_1)\times[0,x_2)$. The main principle of the theory of [*irregularities of distribution*]{} states that the size of this function must increase with $ N$. The fundamental results in the subject are:
([@MR0066435], 1954) For any set $ \mathcal A_N \subset
[0,1]^{2}$, we have $$\label{e.roth}
\left\| D_{\mathcal A_N} \right\|_{2} \gtrsim (\log N) ^{1/2}$$ where “$\gtrsim$" stands for “greater than a constant multiple of".
([@MR0319933], 1972) For any set $ \mathcal A_N \subset [0,1]^2$, we have $$\label{e.schmidt}
\left\| D_{\mathcal A_N} \right\|_{\infty} \gtrsim \log N \,.$$
Both theorems are known to be sharp in the order of magnitude (e.g., [@Cor35], [@MR0082531], [@roth3], [@MR610701]). One of the most famous examples, yielding sharpness of , is the van der Corput “digit-reversing" set, [@Cor35]. For $N=2^n$ points, it can be defined as $$\label{e.vdc}
\mathcal V_n = \left\{ (0.a_1 a_2 \dots a_n 1,\, 0.a_n a_{n-1} \dots
a_2 a_1 1 ): \, a_i=0,1 \right\},$$ where the coordinates are given in terms of the binary expansion. Unfortunately, most “classical" sets with minimal $L^\infty$ norm of the discrepancy fail to meet the sharp bounds in the $L^2$ norm. In fact, Halton and Zaremba [@HZ] proved that $$\label{e.hz}
\|D_{\mathcal V_n}\|_2^2 = \frac{n^2}{2^6}+ O(n)\approx (\log N)^2.$$
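The growth in can be observed numerically. The sketch below (the helper names are ours, and the closed form for $\|D\|_2^2$ is a standard Warnock-type identity, not derived in this paper) builds $\mathcal V_n$ from and evaluates the squared $L^2$ discrepancy exactly in rational arithmetic.

```python
from fractions import Fraction

def vdc_points(n):
    """The set V_n: points (0.a_1...a_n 1, 0.a_n...a_1 1) in binary."""
    half = Fraction(1, 2 ** (n + 1))          # the trailing '1' digit
    pts = []
    for j in range(2 ** n):
        x, y = half, half
        for i in range(n):
            if (j >> i) & 1:                  # bit i of j plays the role of a_{i+1}
                x += Fraction(1, 2 ** (i + 1))
                y += Fraction(1, 2 ** (n - i))
        pts.append((x, y))
    return pts

def l2_discrepancy_sq(pts):
    """Exact ||D||_2^2 via a Warnock-type identity:
       sum_{p,q} prod_i (1 - max(p_i, q_i)) - (N/2) sum_p prod_i (1 - p_i^2) + N^2/9."""
    N = len(pts)
    s1 = sum((1 - max(p[0], q[0])) * (1 - max(p[1], q[1]))
             for p in pts for q in pts)
    s2 = sum((1 - x * x) * (1 - y * y) for x, y in pts)
    return s1 - Fraction(N, 2) * s2 + Fraction(N * N, 9)
```

Since $\int D_{\mathcal V_n} = n/8$ (proved below), the values always exceed $(n/8)^2$, in line with the quadratic growth above.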
There are three standard remedies in the theory for this shortcoming. To achieve the smallest possible order of the $L^2$ discrepancy, one can alter the sets in the following ways:
[*1. Davenport’s Reflection Principle.*]{} Informally, if $P$ has low $L^\infty$ discrepancy, then the set $\widetilde{P} = P \cup
\{(1-x,y):\, (x,y) \in P \}$ has low $L^2$ discrepancy. This was demonstrated by Davenport [@MR0082531] in the case of the irrational lattice, and by Chen and Skriganov ([@MR1805869], see also [@MR0965955]) in the case of the van der Corput set.
[*2. Digit Scrambling.*]{} This procedure, initially introduced in [@chen2], has been extensively studied; a comprehensive discussion can be found in [@MR1697825].
[*3. Cyclic shifts.*]{} This transformation is the subject of this paper. It has been proved by Roth [@roth4] (see also [@roth3], where the translation idea was originally used) that for the cyclic shifts of the van der Corput set $$\mathcal V^\alpha_n = \left\{ \big( (x+\alpha)\, \textup{mod}\, 1, y
\big):\, (x,y)\in \mathcal V_n \right\},$$ the expected value of the $L^2$ discrepancy over $\alpha$ satisfies $$\int_0^1 \| D_{\mathcal V_n^\alpha}\|_2 \, d\alpha \lesssim n^{1/2}
= \left( \log N \right)^{1/2}.$$ This implies that there exists a particular cyclic shift of the van der Corput distribution with minimal $L^2$ norm of the discrepancy function. However, this was purely an existence proof and no deterministic examples of such shifts have been constructed. In the present paper, we “de-randomize" this result and provide an explicit value of $\alpha$, which asymptotically minimizes $\|D_{\mathcal V_n^\alpha}\|_2$. We prove the following theorem
\[t.main\] For $\alpha_0=1-\frac{k}{2^n}$, where $k\in \mathbb N$ is given in binary form by $$\label{e.defk}
k := \big( \underbrace{000 \,\dots \, 00}_{n_0 \, \textup{digits}}
\underbrace{00001111 \,\dots \, 00001111}_{n_2 \, \textup{digits}}\,
\underbrace{000111 \,\dots \, 000111}_{n_1 \, \textup{digits}}
\big)_2 + 1,$$ with $n_0+n_1+n_2=n$, $\frac{n_1}{n_2}=\frac{54}{17}$, and $n_0<568$, the cyclically shifted van der Corput set $\mathcal
V_n^{\alpha_0}$ satisfies $$\label{e.main}
\left\| D_{\mathcal V_n^{\alpha_0}} \right\|_{2} \lesssim n^{1/2} = (\log
N)^{1/2}.$$
[*Remark.*]{} The “$+1$" at the end of is just a minor nuisance, which simplifies some calculations, and is not important. In fact, one can easily see that a cyclic shift by the amount $\alpha = 1/N = 2^{-n}$ changes the discrepancy by at most $1$ at each point.
We would like to point out that most constructions of sets with minimal order of $L^p$ discrepancy (which are important in applications to numerical integration) are probabilistic; explicit constructions are rare. In fact, the first deterministic examples of such sets in dimensions $d\ge 3$ have only been obtained quite recently by Chen and Skriganov ([@MR1896098], [@MR2283797]).
The outline of the paper is the following: in §2 we deal with the quantities $\int_{[0,1]^2} D_{\mathcal V_n}(x)dx$ and $\int_{[0,1]^2} D_{\mathcal V_n^\alpha}(x)dx$ (which can be viewed as the “zero-order" term of the expansion in any reasonable orthonormal basis) and minimize the latter. In §3, we examine the Fourier coefficients $\widehat{D_{\mathcal V_n}}(n_1, n_2)$ when $(n_1,n_2)\neq (0,0)$ and show that they do not change too much under cyclic shifts.
We will refer to the two parts of the discrepancy function as “linear" and “counting": $$\begin{aligned}
\label{e.linear}
L_N (x_1,x_2) &= N x_1 \cdot x_2 \,,
\\ \label{e.counting}
C_ {\mathcal A_N} (x_1,x_2) &= \sum _{ p \in \mathcal A_N} \mathbf
1_{[p_1, 1)\times [p_2,1) } ( x_1,x_2) \,.\end{aligned}$$ In proving upper bounds for the discrepancy function, one of course needs to capture a large cancelation between these two.
The integral of the discrepancy function
========================================
Recall that in our definition of the van der Corput set, $\mathcal
V_n = \left\{ (0.a_1 \dots a_n 1,\, 0.a_n \dots a_2 a_1 1
)\right\}$, both coordinates have $1$’s in the $(n+1)^{st}$ binary place. This is just a technical modification, which ensures that, for any $\alpha =j/2^n$, $j\in \mathbb Z$, the average value of both coordinates in $\mathcal V_n^\alpha$ is one-half: $$\label{e.sumall}
\frac1{2^n}\sum_{(p_1,p_2)\in \mathcal V_n^\alpha} p_1=
\frac1{2^n}\sum_{(p_1,p_2)\in \mathcal V_n^\alpha} p_2 =\frac12.$$ This makes many formulas look ‘cleaner’ and is not essential to the computations.
It has been noticed (see [@HZ], [@BLPV]) that the quantity $\int_{[0,1]^2} D_{\mathcal V_n}(x)dx$ is the main reason why $\|
D_{\mathcal V_n} \|_2$ is large. Indeed, if one compares and below, it is easy to see that $$\label{e.rest}
\left\| D_{\mathcal V_n} - \int D_{\mathcal V_n} \right\|_{2} \lesssim (\log
N)^{1/2}.$$ We include the proof of the lemma below for the sake of completeness.
\[l.integral\] For the van der Corput set $ \mathcal V_n$ $$\label{e.integral}
\int _{[0,1) ^2 } D_{\mathcal V_n} (x) \; d x = \frac n8.$$
The linear part of the discrepancy function clearly gives us $$\label{e.I1}
\int _{[0,1) ^2 } L_N \; d x= 2 ^{n-2}\,.$$
Let $ X_1 ,\dotsc, X_n$ be independent random variables taking the values $0$ and $1$, each with probability $\frac12$. A straightforward computation yields
$$\begin{aligned}
\notag
\int _{[0,1) ^2 } C _{\mathcal V_n} (x_1, x_2) \; dx_1\, dx_2 &=
\sum _{(p_1,p_2) \in \mathcal V_n} (1-p_1)(1-p_2)\\
\notag & = 2
^{n} \mathbb E \Biggl[ 1- \sum _{j=1} ^{n} X_j 2 ^{-j} -
2^{-n-1}\Biggr] \Biggl[ 1-\sum _{k=1} ^{n} X_k 2 ^{-n+k-1} -
2^{-n-1}\Biggr]
\\
\label{e.I4} &= 2 ^{n-2} +\frac{n}{8}.\end{aligned}$$
Combining and proves the lemma.
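Lemma \[l.integral\] can be confirmed exactly in rational arithmetic, since $\int D = \sum_p (1-p_1)(1-p_2) - N/4$ and all coordinates are dyadic rationals. A minimal sketch (helper names ours):

```python
from fractions import Fraction

def vdc_points(n):
    """V_n = {(0.a_1...a_n 1, 0.a_n...a_1 1)} over all binary strings a_1...a_n."""
    half = Fraction(1, 2 ** (n + 1))
    pts = []
    for j in range(2 ** n):
        x, y = half, half
        for i in range(n):
            if (j >> i) & 1:                  # bit i of j is the digit a_{i+1}
                x += Fraction(1, 2 ** (i + 1))
                y += Fraction(1, 2 ** (n - i))
        pts.append((x, y))
    return pts

def integral_D(pts):
    """Exact integral of D over the unit square: int C - int L = sum_p (1-p1)(1-p2) - N/4."""
    N = len(pts)
    return sum((1 - x) * (1 - y) for x, y in pts) - Fraction(N, 4)
```

Running it shows the identity $\int D_{\mathcal V_n} = n/8$ holds exactly, with no lower-order term at all.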
In what follows we prove that the average of $\int_{[0,1]^2}
D_{\mathcal V_n^\alpha} dx $ over $\alpha$ is zero. Moreover, we construct a specific value $\alpha_0$ for which $$\int_{[0,1]^2} D_{\mathcal V_n^{\,\alpha_0}}\,\, dx \approx 1.$$
Assume that $\alpha\in [0,1)$ is an $n$-digit binary number. Then $$\mathbb E_\alpha \int_{[0,1]^2} D_{\mathcal V_n^\alpha} dx
= 0.$$
We denote $1-\alpha =\frac{k}{2^n}$ ($k=1,\dots , 2^n$) and start with the following computation:
$$\begin{aligned}
\nonumber \int_{[0,1]^2} C_{\mathcal V_n^\alpha} & = \sum_{p\in
\mathcal
V_n^\alpha} (1-p_1)(1-p_2)\\
\nonumber & = \sum_{p\in \mathcal V_n:\, p_1< 1-\alpha}
(1-p_1-\alpha)\cdot(1-p_2)\,\, + \sum_{p\in \mathcal V_n:\, p_1 >
1-\alpha} (2-p_1-\alpha)\cdot(1-p_2)\\
\nonumber & = \int_{[0,1]^2} C_{\mathcal V_n} dx +
(1-\alpha)\sum_{p\in \mathcal V_n} (1-p_2) - \sum_{p\in \mathcal
V_n:\, p_1< 1-\alpha}
(1-p_2)\\
\label{e.cn}&= \int_{[0,1]^2} C_{\mathcal V_n} dx \,\,-\,\,
\frac{k}{2} + \sum_{p\in \mathcal V_n:\, p_1< k/2^{n}} p_2.\end{aligned}$$
Next, we examine the behavior of the last sum above. Using the structure of the van der Corput set, we can write $$\label{e.fk}
\sum_{p\in \mathcal V_n:\, p_1< k/2^{n}} p_2 =\sum_{l=1}^n 2^{-l}
f_l (k)+ k 2^{-n-1},$$ where $k 2^{-n-1}$ comes from the final $1$’s in the expansion of $p_2$ and $$\label{e.defflk}
f_l (k)=\#\{0\le j \le k-1:\,\,\textup{the $l^{th}$ (from the end) binary digit of $j$ is $1$}\}.$$ It can be seen that $$\label{e.flk1}
f_l(k)=2^{l-1}m \,\,\,\,\,\,\,\,\,\,\,\,\, \textup{if } k-1=2^l m,\,
2^l m +1,\, ... \,, 2^l m + 2^{l-1}-1,\,\,\, \textup{and}$$ $$\label{e.flk2}
f_l(k)=2^{l-1}m + j \,\,\,\,\,\,\,\,\, \textup{if } k-1=2^l
m+2^{l-1}+j-1,\textup{ where }\,1\le j \le 2^{l-1}.$$ Thus, if we set $f_l(k) = 2^{l-1} m_l(k) + j_l(k)$, where $0\le
m_l(k) < 2^{n-l}$ and $0\le j_l(k) \le 2^{l-1}$, we have $\mathbb
E_k m_l(k) = \frac12 \cdot (2^{n-l}-1)$ and $$\mathbb E_k j_l(k)=\frac12 \cdot \frac12(2^{l-1}+1),$$ where the extra one-half above comes from the fact that $j_l(k)=0$ half of the time. Thus $$\mathbb E_k f_l(k) = 2^{n-2} - 2^{l-2} + 2^{l-3} +\frac14.$$ Plugging this into , we obtain $$\begin{aligned}
\nonumber \mathbb E_k \sum_{p\in \mathcal V_n:\, p_1< k/2^{n}} p_2
& = \sum_{l=1}^n 2^{-l} \left( 2^{n-2} - 2^{l-3} +\frac14 \right)\, + \mathbb E_k k\cdot 2^{-n-1}\\
\label{e.cn1} & = 2^{n-2} - \frac{n}{8} +\frac14.\end{aligned}$$ Finally, equation , together with as well as , yields $$\begin{aligned}
\nonumber \mathbb E \int_{[0,1]^2} D_{\mathcal V_n^\alpha} &=
\int_{[0,1]^2} D_{\mathcal V_n} dx \,\,-\,\,
\mathbb E_k \frac{k}{2} + \mathbb E_k \sum_{p\in \mathcal V_n:\, p_1< k/2^{n}} p_2\\
& = \frac{n}{8}\, - \,\left(2^{n-2} +\frac14 \right) \,+\, \left(
2^{n-2} - \frac{n}{8} +\frac14 \right) = 0.\end{aligned}$$
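The proposition can also be verified by brute force: averaging the exact integral over all $2^n$ dyadic shifts gives exactly zero. A small self-contained sketch (helper names ours):

```python
from fractions import Fraction

def vdc_points(n):
    """V_n with exact dyadic coordinates, as in the definition of the set."""
    half = Fraction(1, 2 ** (n + 1))
    pts = []
    for j in range(2 ** n):
        x, y = half, half
        for i in range(n):
            if (j >> i) & 1:
                x += Fraction(1, 2 ** (i + 1))
                y += Fraction(1, 2 ** (n - i))
        pts.append((x, y))
    return pts

def integral_D_shift(pts, alpha):
    """Exact int D for the set shifted cyclically by alpha in the first coordinate."""
    N = len(pts)
    s = sum((1 - ((x + alpha) % 1)) * (1 - y) for x, y in pts)
    return s - Fraction(N, 4)
```

Summing `integral_D_shift` over all $\alpha = j/2^n$ returns exactly $0$, matching the cancellation in the computation above.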
To facilitate the construction of an example, we further examine the functions $f_l(k) = 2^{l-1} m_l(k) + j_l(k)$ defined above, -. Assume that $k-1$ is written in the binary representation: $$k -1 = \sum_{j=1}^n k_j\cdot 2^{j-1} = \big( k_n k_{n-1}\dots k_2 k_1\big)_2.$$ By construction, $m_l(k)=(k_n k_{n-1} \dots k_{l+1})_2$; moreover, $j_l(k)=0$ when $k_l=0$, and $j_l(k)=(k_{l-1}\dots k_1)_2 +1$ when $k_l=1$. Thus, $f_l(k)$ can be written in closed form in terms of the digits of $k-1$ as follows $$\label{e.flk}
f_l(k) = \sum_{j=l+1}^n k_j 2^{j-2} + k_l \cdot \sum_{j=1}^{l-1}k_j
2^{j-1} \, + k_l.$$ Indeed, if $k_l=0$, the last two terms vanish; otherwise, they equal exactly $j_l(k)$.
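The closed form is easy to sanity-check against the counting definition; a hypothetical helper (names ours):

```python
def f_direct(l, k):
    """f_l(k) by definition: j in {0,...,k-1} whose l-th binary digit (from the end) is 1."""
    return sum(1 for j in range(k) if (j >> (l - 1)) & 1)

def f_closed(l, k, n):
    """The closed form: sum_{j>l} k_j 2^{j-2} + k_l * sum_{j<l} k_j 2^{j-1} + k_l,
       where k_j are the binary digits of k-1 (k_1 = least significant)."""
    bit = lambda j: (k - 1) >> (j - 1) & 1
    s = sum(bit(j) * 2 ** (j - 2) for j in range(l + 1, n + 1))
    return s + bit(l) * sum(bit(j) * 2 ** (j - 1) for j in range(1, l)) + bit(l)
```

An exhaustive check over all $k \le 2^n$ and all $l \le n$ for a small $n$ confirms that the two expressions agree.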
Plugging this into , we obtain $$\label{e.kj}
\sum_{p\in \mathcal V_n:\, p_1 < k/2^{n}} p_2 =
\sum_{l=1}^{n-1} \sum_{j=l+1}^n k_j \cdot 2^{j-l-2} + \sum_{l=1}^n
k_l \cdot 2^{-l} + k\, 2^{-n-1} + \sum_{l=2}^n \sum_{j=1}^{l-1} k_j \cdot k_l \cdot
2^{j-l-1}.$$ Obviously, the second and third terms above are each bounded by one.
\sum_{j=1}^n k_j =\frac{n}{2} + O(1),$$ i.e. approximately half of the binary digits of $k-1$ are ones and half are zeros. We have $$\begin{aligned}
\nonumber \sum_{l=1}^{n-1} \sum_{j=l+1}^n k_j \cdot 2^{j-l-2} & =
\frac12 \sum_{j=2}^n k_j \cdot 2^{j-1} \sum_{l=1}^{j-1} 2^{-l}
\,\,\,\,\,\,\,\,\,\,\,\,\, = \,\,\, \frac12 \sum_{j=2}^n k_j \cdot
2^{j-1} \cdot (1-
2^{-(j-1)})\\
\label{e.kj1} & = \frac12 \sum_{j=2}^n k_j \cdot 2^{j-1}\, - \,
\frac12 \sum_{j=2}^n k_j \,\,\, = \,\,\, \frac12 k - \frac{n}{4} +
O(1).\end{aligned}$$
As to the last term of , we have the following lemma:
\[l.n8\] For every $n\in \mathbb N$, there exists $k: \, 1\le k \le 2^n$ with $\sum_{j=1}^n k_j =
n/2 + O(1)$, where $k -1 = \big( k_n k_{n-1}\dots k_2 k_1\big)_2$, so that $$\label{e.kj3}
\sum_{l=2}^n \sum_{j=1}^{l-1} k_j \cdot k_l \cdot 2^{j-l-1} =
\frac{n}{8} + O(1).$$
Assuming this statement for the moment, putting together , , and for $k$ defined by Lemma \[l.n8\] above, we obtain $$\label{e.1p2}
\sum_{p\in \mathcal V_n:\, p_1< k/2^{n}} p_2 = \left( \frac12 k -
\frac{n}{4} \right) + \frac{n}{8} + O(1) = \frac12 k - \frac{n}{8}
+ O(1),$$ and together with , , this yields: $$\label{e.cnk}
\int_{[0,1]^2} C_{\mathcal V_n^\alpha} dx = \left( 2^{n-2}
+\frac{n}{8}\right) - \frac12 k + \left( \frac12 k - \frac{n}{8}
\right) + O(1) = 2^{n-2} + O(1).$$ Finally, and give $$\label{e.dnk}
\int_{[0,1]^2} D_{\mathcal V_n^\alpha}(x) dx = O(1).$$
Thus, it remains to prove Lemma \[l.n8\]. We shall denote $$\label{e.snk}
S(n,k-1) := \sum_{l=2}^n \sum_{j=1}^{l-1} k_j \cdot k_l \cdot
2^{j-l-1}$$ and will look at some base examples first. Let $k'$ be of the form $$\label{e.k'}
k' := \big( 000111 \,\dots \, 000111 \big)_2 ,$$ where the sequence $000111$ is repeated $n'$ times, $n=6n'$. We then have the following calculation: $$\begin{aligned}
\label{e.k'1} S(6n', k') & = \frac12 \sum_{l'=1}^{n'-1}
\left(2^{-(6l'+1)}+ 2^{-(6l'+2)}+2^{-(6l'+3)}\right) \cdot \left(
\sum_{j'=0}^{l'-1} \left[ 2^{6j'+1} +2^{6j'+2} +2^{6j'+3} \right]
\right)\\
\label{e.k'2} & \quad + \frac12 \sum_{l'=0}^{n'-1} \left(\frac12 +
\left( \frac12 +\frac14 \right) \right)\\
\nonumber & = \frac12 \sum_{l'=1}^{n'-1} 2^{-6l'} \cdot (2^{6l'}-1)
\cdot \frac{1}{2^6 - 1}\cdot (2^{-1}+2^{-2} + 2^{-3})(2^1 + 2^2 +
2^3
) \,\,+ \frac12 \cdot \frac54 n'\\
\nonumber & = \left(\frac{7}{72} +\frac{45}{72} \right)\cdot
\frac{n}{6} + O(1)\\
\label{e.k'3} & = \frac{13}{108}n + O(1),\end{aligned}$$ where the term in describes the interactions of digits in different triples and arises from interactions within the triples. (Notice that the obtained fraction $\frac{13}{108} \approx 0.12037...$ is quite close to the desired $\frac18 = 0.125$.)
Next we set $k''=(00001111....00001111)_2$, where the string $00001111$ is repeated $n''$ times. An absolutely analogous computation yields: $$\begin{aligned}
\nonumber S(8n'', k'') & = \frac12 \sum_{l''=1}^{n''-1} 2^{-8l''}
\cdot (2^{8l''}-1) \cdot \frac{1}{2^8 - 1}\cdot (2^{-1}+2^{-2} +
2^{-3}+2^{-4})(2^1 + 2^2 + 2^3 +2^4
)\\
\nonumber & \quad + \frac12 \sum_{l''=0}^{n''-1} \left(\frac12 +
\left( \frac12 +\frac14 \right) +\left( \frac12 +\frac14 +\frac18 \right)\right)\\
\label{e.k''3} & = \frac{19}{136}n + O(1).\end{aligned}$$
We are now ready to define the number $k$ which satisfies . Set $$\label{e.k}
k -1 := \big( \underbrace{00001111 \,\dots \, 00001111}_{n_2 \,
\textup{digits}}\, \underbrace{000111 \,\dots \, 000111}_{n_1 \,
\textup{digits}} \big)_2 .$$ Then we have, $$S(n_1+n_2,k-1)= S(n_1, k') + S(n_2, k'') + I(n_1, n_2),$$ where $I(n_1, n_2)$ describes the interaction between the two parts of $k$. We can estimate: $$I(n_1,n_2) = \frac12 \cdot \left(\sum_{l=n_1+1}^n k_l 2^{-l}
\right)\cdot \left( \sum_{j=1}^{n_1} k_j 2^j \right) \leq \frac12
\big(2^{-n_1-1} \cdot 2 \big) \cdot (2^{n_1+1}-1) \leq 1.$$ We now choose $n_1$ and $n_2$ so that $\frac{n_1}{n_2}=\frac{54}{17}$, i.e. $n_1 = \frac{54}{71}n$, $n_2=\frac{17}{71}n$. We then obtain $$\begin{aligned}
\nonumber S(n, k-1)& = \frac{13}{108}n_1 + \frac{19}{136}n_2 +O(1)\\
\nonumber & = \left( \frac{13\cdot 54}{108 \cdot 71} + \frac{19
\cdot 17}{136 \cdot 71} \right) n + O(1)\\
& = \frac{n}{8} + O(1),\end{aligned}$$ which finishes the proof of Lemma \[l.n8\]. Thus, if we set $\alpha_0 = 1 - \frac{k}{2^n}$, where $k$ is as defined in , then the cyclic shift of the Van der Corput set by $\alpha_0$ satisfies $$\label{e.dnk0}
\int_{[0,1]^2} D_{\mathcal V_n^{\alpha_0}}\,(x)\, dx = O(1).$$
[*Remark.* ]{} Of course, the above construction only works when $n$ is a multiple of $71\cdot 2\cdot 4=568$. However, it can be easily adjusted for other values of $n$ just by setting the “remainder" digits equal to zero.
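Both the exact mixing identity $\frac{13}{108}\cdot\frac{54}{71} + \frac{19}{136}\cdot\frac{17}{71} = \frac18$ and the $O(1)$ deviation of $S(n,k-1)$ from $n/8$ can be checked numerically. The sketch below (helper names ours) builds the digit pattern of from least to most significant digit and evaluates $S$ directly.

```python
def pattern_bits(n):
    """LSB-first digits k_1, k_2, ... of k-1 from (e.k): the low n1 digits repeat
       the block 000111 and the high n2 digits repeat 00001111, with n1/n2 = 54/17."""
    assert n % 568 == 0                 # 568 = 71 * 8, so n1 and n2 below are integers
    n1, n2 = 54 * n // 71, 17 * n // 71
    return [1, 1, 1, 0, 0, 0] * (n1 // 6) + [1, 1, 1, 1, 0, 0, 0, 0] * (n2 // 8)

def S(bits):
    """S(n, k-1) = sum over pairs j < l of k_j * k_l * 2^(j-l-1), cf. (e.snk)."""
    n = len(bits)
    total = 0.0
    for l in range(2, n + 1):
        if bits[l - 1]:
            for j in range(1, l):
                if bits[j - 1]:
                    total += 2.0 ** (j - l - 1)
    return total
```

For $n = 568$ and $n = 1136$ the computed $S$ stays within a fixed constant of $n/8$, and exactly half of the digits are ones, as required by the assumption above.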
The Fourier coefficients of the discrepancy function
====================================================
Having eliminated the main problem, we shall now proceed to show that the remaining part of $D_{\mathcal V_n}$ behaves well under cyclic shifts. We shall use the exponential Fourier basis (rather than the Haar basis, which is more standard in this theory) since it is better adapted to cyclic shifts.
Obviously, for any $\alpha$, we have $$\label{e.sumexp}
\sum_{p\in \mathcal V_n^\alpha} e^{-2\pi i m p_1} = \sum_{p\in
\mathcal V_n} e^{-2\pi i m p_1} = \sum_{j=0}^{2^n-1} e^{-2\pi i
\frac{m}{2^n}j} \cdot e^{-\pi i \frac{m}{2^{n}}} =
\begin{cases}
0,\,\,\,\,\,\,\,\,\,\textup{if}\,\,m\not\equiv 0 \mod 2^n,\\
N,\,\,\,\,\,\,\,\,\textup{if}\,\, m=2^n m',\, m'\textup{ even},\\
-N,\,\,\,\,\textup{if}\,\, m=2^n m',\, m'\textup{ odd}.
\end{cases}$$
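A quick numerical check of this character sum (the helper name is ours): since the first coordinates of $\mathcal V_n$ run over $(2k+1)/2^{n+1}$, the sum vanishes unless $2^n \mid m$, and for $m = 2^n m'$ it equals $(-1)^{m'}\,2^n = \pm N$.

```python
import cmath

def char_sum(n, m):
    """sum_{p in V_n} exp(-2*pi*i*m*p_1), with p_1 = (2k+1)/2^(n+1), k = 0..2^n - 1."""
    return sum(cmath.exp(-2j * cmath.pi * m * (2 * k + 1) / 2 ** (n + 1))
               for k in range(2 ** n))
```

For $n = 4$ ($N = 16$) the sum is numerically zero for $m$ not divisible by $16$, and equals $-16$, $+16$, $-16$ for $m = 16, 32, 48$.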
Fourier coefficients in the case $n_1 , n_2 \neq 0$. {#fourier-coefficients-in-the-case-n_1-n_2-neq-0. .unnumbered}
----------------------------------------------------
We first note that, for $n_1 , n_2 \neq 0$, the Fourier coefficient of the linear part is: $$\label{e.lnfourier}
\widehat{L_N}(n_1,n_2) = - \frac{N}{4\pi^2 n_1 n_2}.$$ The counting part yields $$\label{e.cnfourier}
\widehat{C_{\mathcal V_n}}(n_1,n_2) = -\frac{1}{4\pi^2 n_1 n_2}
\sum_{p\in \mathcal V_n} \left(1- e^{-2\pi i n_1 p_1} \right) \left(1- e^{-2\pi i n_2 p_2}
\right),$$ and, thus, $$\begin{aligned}
\label{e.dnfourier}
\widehat{D_{\mathcal V_n}} (n_1, n_2) &= \frac{1}{4\pi^2 n_1 n_2}
\sum_{p\in \mathcal V_n} \left( e^{-2\pi i n_1 p_1} + e^{-2\pi i n_2
p_2} - e^{-2\pi i (n_1 p_1 +n_2 p_2)} \right).\end{aligned}$$
We now consider cases:
- Both $n_1$ and $n_2 \equiv 0 \mod 2^n$. Then $\widehat{D_{\mathcal V_n}} (n_1,
n_2) = C \frac{N}{4\pi^2 n_1 n_2}$, where $C=-3$ if both $n_1/2^n$ and $n_2/2^n$ are odd, and $C=1$ otherwise.
- $n_1 \not\equiv 0 \mod 2^n$, $n_2 \equiv 0 \mod 2^n$. In this case $\widehat{D_{\mathcal V_n}} (n_1,
n_2) = \frac{N}{4\pi^2 n_1 n_2}\cdot e^{-\pi i n_2/2^n}$.
- $n_2 \not\equiv 0 \mod 2^n$, $n_1 \equiv 0 \mod 2^n$. In this case $\widehat{D_{\mathcal V_n}} (n_1,
n_2) = \frac{N}{4\pi^2 n_1 n_2}\cdot e^{-\pi i n_1/2^n}$.
- $n_1, n_2 \not\equiv 0 \mod 2^n.$ Now we have $\widehat{D_{\mathcal V_n}} (n_1,
n_2) = -\frac{1}{4\pi^2 n_1 n_2}
\sum_{p\in \mathcal V_n} e^{-2\pi i (n_1 p_1 +n_2
p_2)}$.
Changing $p_1$ to $(p_1 +\alpha) \mod 1$ in the above computations, with $\alpha = j/2^n$, we notice that $$\label{e.nochange1}
\bigl|\widehat{D_{\mathcal V_n^\alpha}}(n_1,n_2)\bigr| = \bigl|\widehat{D_{\mathcal
V_n}}(n_1,n_2)\bigr| \,\,\,\,\textup{when}\,\, n_1, n_2 \neq 0.$$ Indeed, in the first three cases the coefficient does not change, while in the last it is multiplied by $e^{-2\pi i n_1 \alpha}$.
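The invariance of the moduli of these coefficients under dyadic shifts can be checked directly from the sum formula; a hypothetical check with helper names of our choosing:

```python
import cmath

def vdc(n):
    """V_n with first coordinate (2j+1)/2^(n+1) and bit-reversed second coordinate."""
    pts = []
    for j in range(2 ** n):
        r = int(format(j, '0%db' % n)[::-1], 2)   # reverse the n binary digits of j
        pts.append(((2 * j + 1) / 2 ** (n + 1), (2 * r + 1) / 2 ** (n + 1)))
    return pts

def d_hat(pts, n1, n2):
    """Fourier coefficient of D at (n1, n2), both nonzero:
       (1/(4 pi^2 n1 n2)) * sum_p (e^{-2pi i n1 p1} + e^{-2pi i n2 p2} - e^{-2pi i(n1 p1 + n2 p2)})."""
    s = 0
    for x, y in pts:
        a = cmath.exp(-2j * cmath.pi * n1 * x)
        b = cmath.exp(-2j * cmath.pi * n2 * y)
        s += a + b - a * b
    return s / (4 * cmath.pi ** 2 * n1 * n2)

def shift(pts, alpha):
    """Cyclic shift of the first coordinate by alpha, modulo 1."""
    return [((x + alpha) % 1.0, y) for x, y in pts]
```

For a dyadic shift $\alpha = j/2^n$, the absolute values of the coefficients agree to machine precision for every tested pair $(n_1,n_2)$ with both entries nonzero.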
Fourier coefficients in the case $n_2=0$, $n_1\neq 0$. {#fourier-coefficients-in-the-case-n_20-n_1neq-0. .unnumbered}
------------------------------------------------------
We first note that, in this case $$\label{e.lnfourier0}
\widehat{L_N}(n_1, 0) = -\frac{N}{4\pi i n_1},
\qquad\textup{and}\qquad
\widehat{C_{\mathcal V_n}}(n_1, 0) = -\frac{1}{2 \pi i n_1}
\sum_{p\in \mathcal V_n} \left(1- e^{-2\pi i n_1 p_1}\right) \left(1- p_2 \right)
.$$ Thus, taking into account , we have $$\label{e.dnfourier0}
\widehat{D_{\mathcal V_n}} (n_1,0) = \frac{1}{2 \pi i n_1} \sum_{p\in \mathcal V_n} e^{-2\pi i
n_1
p_1}\cdot \left(1- p_2 \right).$$ And, once again, we obtain that $$\label{e.nochange2}
{\widehat{D_{\mathcal V_n^\alpha}}(n_1,0)} = \widehat{D_{\mathcal
V_n}}(n_1,0) \cdot e^{-2\pi i n_1 \alpha},
\qquad\textup{i.e.}\,\,\bigl|\widehat{D_{\mathcal V_n^\alpha}}\bigr| = \bigl|\widehat{D_{\mathcal
V_n}}\bigr| \,\,\textup{if }\, n_1\neq 0,\ n_2= 0.$$
Fourier coefficients in the case $n_1=0$, $n_2\neq
0$. {#fourier-coefficients-in-the-case-n_10-n_2neq0. .unnumbered}
--------------------------------------------------
As above, we can compute $$\label{e.dnfourier00}
\widehat{D_{\mathcal V_n}} (0, n_2) = \frac{1}{2 \pi i n_2} \sum_{p\in \mathcal V_n} \left(1- p_1 \right)\cdot e^{-2\pi i n_2
p_2}.$$ In the case $n_2 \equiv 0 \mod 2^n$, we obtain, using , $$\label{e.nochange3}
\widehat{D_{\mathcal V_n}} (0,
n_2) = \widehat{D_{\mathcal V_n^\alpha}} (0, n_2)= \frac{N}{4 \pi i
n_2}\cdot e^{-\pi i n_2/2^n}.$$
The only somewhat non-trivial case is when $n_1 = 0$, $n_2 \not\equiv 0
\mod 2^n$. The Fourier coefficient in this case is $$\begin{aligned}
\label{e.dnfourier01}
\nonumber \widehat{D_{\mathcal V_n^\alpha}} (0, n_2) & = \frac{1}{2 \pi i n_2} \sum_{p\in \mathcal
V_n^\alpha} \left(1- p_1 \right)\cdot e^{-2\pi i n_2
p_2}\\
& = \widehat{D_{\mathcal V_n}} (0, n_2) + \frac{1}{2 \pi i n_2} \sum_{p\in \mathcal V_n :\, p_1>k/2^n}
e^{-2\pi i n_2
p_2},\,\,\,\,\,\textup{where}\,\,k/2^n=1-\alpha.
\end{aligned}$$ We shall examine the last sum above. Assume $n_2 = 2^s m$, where $0\le s <n$, $m$ is odd. Let us look over the part of the sum, ranging over a dyadic interval of length $2^{-l}$, $1\le l \le n$. This means that the first $l$ digits of $p_1$ (and thus, the last $l$ digits of $p_2$) are fixed, and the last $n-l$ (the first $n-l$ of $p_2$) are allowed to change freely. $$\begin{aligned}
\label{e.dyadick}
\sum_{p\in \mathcal V_n :\, p_1 \in [q2^{-l}, (q+1)2^{-l})}
e^{-2\pi i n_2 p_2}
& = e^{-2\pi i 2^s m \left( q_{n-l+1} 2^{-n+l-1} +\dots + q_{n}
2^{-n} + 2^{-n-1} \right)} \cdot \sum_{j=0}^{2^{n-l}-1} e^{-2\pi i m
2^{-n+l+s}j}.\end{aligned}$$ It is easy to see that the last sum equals zero when $l+s<n$; otherwise, its absolute value is at most $2^{n-l}$. We now split the interval $\{p_1 > k/2^n\}$ into at most $n$ dyadic intervals of length $2^{-l}$, $1\le l \le n$. We obtain $$\label{e.diff}
\biggl| \sum_{p\in \mathcal V_n :\, p_1>k/2^n}
e^{-2\pi i n_2
p_2} \biggr| \le \sum_{l=n-s}^n 2^{n-l} = 2^{s+1}-1.$$ That is, for $n_2=2^s m$, by and , we have $$\label{e.diff1}
\Bigl| \widehat{D_{\mathcal V_n^\alpha}} (0, n_2)-\widehat{D_{\mathcal
V_n}} (0, n_2) \Bigr| \le \frac{2^{s+1}}{2\pi n_2} = \frac{1}{\pi m}.$$
Proof of Theorem \[t.main\].
============================
For a function $f\in L^2\left([0,1]^2\right)$ and $S\subset \mathbb
Z^2$, we shall denote by $f_S$ the orthogonal projection of $f$ onto the span of the Fourier terms with indices in $S$, i.e. $$\label{e.fs}
f_S (x_1,x_2) \coloneqq \sum_{(n_1, n_2 )\in S} \widehat{f} (n_1,n_2)\,
e^{2\pi i (n_1 x_1 + n_2 x_2)}.$$
Due to , , and Parseval’s identity, we have $$\bigl\| \big( D_{\mathcal V_n^{\alpha_0}}\,\big)_{\mathbb Z^2
\setminus \{n_1=0\}} \bigr\|_{2} = \bigl\| \big( D_{\mathcal
V_n}\,\big)_{\mathbb Z^2 \setminus \{n_1=0\}} \bigr\|_{2}.$$ Inequality yields $$\bigl\| \left( D_{\mathcal V_n^{\alpha_0}} - D_{\mathcal
V_n}\right)_{\{n_1=0,\, n_2\neq 0\}}\bigr\|_{2}^2 \lesssim \sum_{s=0}^{n-1}
\sum_{m\,\,\textup{odd}} \frac1{m^2} \lesssim n= \log N.$$ Thus, we see that $\bigl\| (D_{\mathcal V_n})_{\mathbb Z^2\setminus
(0,0)}\bigr\|_{2}$ indeed does not change much under a cyclic shift. The inequalities above and yield: $$\bigl\| \big(D_{\mathcal V_n^{\alpha_0}}\big)_{\mathbb Z^2\setminus
(0,0)}\bigr\|_{2} \lesssim \bigl\| \big(D_{\mathcal V_n}\big)_{\mathbb
Z^2\setminus (0,0)}\bigr\|_{2} + \left( \log N \right)^{1/2} \lesssim \left(
\log N \right)^{1/2}.$$ Together with the fact that $\int D_{\mathcal V_n^{\alpha_0}}
\lesssim 1$, , this finishes the proof: $$\bigl\| D_{\mathcal V_n^{\alpha_0}} \bigr\|_{2} \lesssim \left( \log N
\right)^{1/2}.$$
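One small ingredient of the estimate above deserves a remark: the inner sum over odd $m$ converges (in fact $\sum_{m\ \textup{odd}} m^{-2} = \pi^2/8$), which is exactly what makes the double sum $O(n)$. A quick numerical confirmation:

```python
import math

# Partial sum of 1/m^2 over odd m; the tail beyond M = 2*10^5 is below 1/(2M).
partial = sum(1.0 / m ** 2 for m in range(1, 200001, 2))
```

The partial sum agrees with $\pi^2/8 \approx 1.2337$ to within the tail bound.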
[^1]: The author is grateful to the Fields Institute and the Institute for Advanced Study for hospitality and to the National Science Foundation for support.
---
abstract: 'A major hurdle in brain-machine interfaces (BMI) is the lack of an implantable neural interface system that remains viable for a lifetime. This paper explores the fundamental system design trade-offs and ultimate size, power, and bandwidth scaling limits of neural recording systems built from low-power CMOS circuitry coupled with ultrasonic power delivery and backscatter communication. In particular, we propose an ultra-miniature as well as extremely compliant system that enables massive scaling in the number of neural recordings from the brain while providing a path towards truly chronic BMI. These goals are achieved via two fundamental technology innovations: 1) thousands of 10 – 100 $\mu$m scale, free-floating, independent sensor nodes, or neural dust, that detect and report local extracellular electrophysiological data, and 2) a sub-cranial interrogator that establishes power and communication links with the neural dust. For 100 $\mu$m scale sensing nodes embedded 2 mm into the brain, ultrasonic power transmission can enable 7 % efficiency power links (-11.6 dB), resulting in a received power of $\sim$500 $\mu$W with a 1 mm$^2$ interrogator, which is >10$^7$ more than EM transmission at similar scale (40 pW). Extreme efficiency of ultrasonic transmission and CMOS front-ends can enable the scaling of the sensing nodes down to 10’s of $\mu$m.'
author:
- 'Dongjin Seo, Jose M. Carmena, Jan M. Rabaey, Elad Alon,'
- 'Michel M. Maharbiz'
title: 'Neural Dust: An Ultrasonic, Low Power Solution for Chronic Brain-Machine Interfaces'
---
Half a century of scientific and engineering effort has yielded a vast body of knowledge about the brain as well as a set of tools for stimulating and recording from neurons across multiple brain structures. However, for clinically relevant applications such as brain-machine interfaces (BMI), a tetherless, high density, chronic interface to the brain remains one of the grand challenges of the 21st century.
Currently, the majority of neural recording is done through the direct electrical measurement of potential changes near relevant neurons during depolarization events called *action potentials* (AP). While the specifics vary across several prominent technologies, all of these interfaces share several characteristics: a physical, electrical connection between the active area inside the brain and electronic circuits near the periphery (from which, increasingly, data is sent out wirelessly from a “hub”) [@Biederman; @Fan; @Miranda; @Szuts]; a practical upper bound of several hundred implantable recording sites [@Stevenson; @Nicolelis; @Harrison1; @Carmena]; and the development of a biological response around the implanted electrodes which degrades recording performance over time [@Turner; @Polikov; @Chestek; @Suner]. To date, chronic clinical neural implants have proved to be successful in the short range (months to a few years) and for a small number of channels (10’s to 100’s) [@Alivisatos2]. Chronic recording from thousands of sites in a clinically relevant manner with little or no tissue response would be a game changer.
Outside the scope of clinical neuroprosthetics, the need for large scale recording of ensembles of neurons was recently emphasized by the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative in April 2013 by U.S. President Obama [@BRAIN] and several related opinion papers [@Alivisatos2; @Alivisatos1]. Currently, there are numerous modalities with which one can extract information from the brain. Advances in imaging technologies such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), positron emission tomography (PET), and magnetoencephalography (MEG) have provided a wealth of information about collective behaviors of groups of neurons [@Buzsaki]. Numerous efforts are focusing on intra- [@Xie] and extra-cellular [@Du2] electrophysiological recording and stimulation, molecular recording [@Zamft], optical recording [@Ziv], and hybrid techniques such as opto-genetic stimulation [@Cardin] and photo-acoustic [@Filonov] methods to perturb and record the individual activity of neurons in large (and, hopefully scalable) ensembles. All modalities, of course, have some fundamental tradeoffs and are usually limited in temporal or spatial resolution, portability, power, invasiveness, etc. Note that a comprehensive recent review of tradeoffs focused on recording from all neurons in a mouse brain can be found in Marblestone et al. [@Marblestone].
System Concept
==============
*Low-power CMOS circuits coupled with ultrasonic harvesting and backscatter communication can provide a toolset from which to build scalable, chronic extracellular recording systems.*\
\
This paper focuses on the fundamental system design trade-offs and ultimate size, power, and bandwidth scaling limits of systems built from low-power CMOS coupled with ultrasonic power delivery and backscatter communication. In particular, we propose an ultra-miniature as well as extremely compliant system, shown in [**Fig. 1**]{}, that enables massive scaling in the number of neural recordings from the brain while providing a path towards truly chronic BMI. These goals are achieved via two fundamental technology innovations: 1) thousands of 10 – 100 $\mu$m scale, free-floating, independent sensor nodes, or *neural dust*, that detect and report local extracellular electrophysiological data, and 2) a *sub-cranial interrogator* that establishes power and communication links with the neural dust. The interrogator is placed beneath the skull and below the dura mater, to avoid strong attenuation of ultrasound by bone and is powered by an external transceiver via RF power transfer. During operation, the sub-cranial interrogator couples ultrasound energy into the tissue and performs both spatial and frequency discrimination with sufficient bandwidth to interrogate each sensing node. Neural dust can be either an *active node*, which rectifies or recovers power at the sensing node to activate CMOS electronics for data pre-processing, encoding, and transmission, or a *passive node*, which maximizes the reflectivity of the dust as a function of a measured potential. For both schemes, neural dust can communicate the recorded neural data back to the interrogator by modulating the amplitude, frequency, and/or phase of the incoming ultrasound wave. The descriptions of each scheme and the modulation mechanism of each sensing node are detailed in the later sections.\
\
*Several energy modalities exist for powering and communicating with implants, but many of them are unsuitable for the size scales associated with neural dust.*\
\
The requirements for any computational platform interfacing with microelectrodes to acquire useful neural signals (e.g., for high quality motor control) are fairly stringent [@Harrison1; @Muller]. The two primary constraints on the implanted device are size and power. These are discussed in greater detail below, but we list them briefly next. First, implants placed into cortical tissue with scales larger than one or two cell diameters have well-documented tissue responses which are ultimately detrimental to performance and occur on the time-scale of months [@Seymour; @Marin]. Note that some debate exists as to what role mechanical anchoring outside the cortex plays in performance degradation. Second, all electrical potentials (extra-cellular or otherwise) are by definition measured differentially, so as devices scale down and the distance between recording points decreases accordingly, the absolute magnitude of the measured potential will also decrease. This decreased amplitude necessitates reductions in the front-end noise, which in turn requires higher power (i.e., for a fixed bandwidth, lowering the noise floor requires increased power consumption). Smaller devices, however, collect less power, and building sufficiently low-power electronics may be extremely challenging. Additionally, to eliminate the risk of infection associated with the transcutaneous/trans-cranial wires required for communication and power, such tethers should be avoided as much as possible; a wireless hub is therefore essential to relay the information recorded by the device through the skull.\
\
*High attenuation in brain tissue and geometry-dependent magnetic coupling limit the transfer efficiency of electromagnetics, especially for miniature implants.*\
\
The most popular existing wireless transcutaneous energy transfer technique relies on electromagnetics (EM) as the energy modality [@Rabaey]. An external transmitter generates and transfers information through purely electric [@Sodagar] or magnetic [@Lee] near field or electromagnetic far field coupling [@Poon]; this energy can be harvested by the implanted device and converted into a stable DC supply voltage. Energy transmission via magnetic near field has been used in a wide variety of medical applications and is the principal source of power for cochlear implants [@Clark]. As EM requires no moving parts, chemical processing, or temperature gradients, it is considered more robust and stable than other forms of energy scavenging. When used in-body, however, EM coupling power density is restricted by the potential adverse health effects associated with excess tissue heating in the vicinity of the human body due to electromagnetic fields. This is regulated by the well-known FCC and IEEE-recommended levels [@IEEE]. Roughly, the upper limit for EM power density transiting through tissue is set by the minimum required to heat a model sample of human tissue by 1$^\circ$C. For electromagnetic waves, the output power density is frequency dependent and cannot exceed a maximum of 10 mW/cm$^2$.
![Neural dust system diagram showing the placement of ultrasonic interrogator under the skull and the independent neural dust sensing nodes dispersed throughout the brain.[]{data-label="fig1"}](fig1-4.jpg){width=".48\textwidth"}
Consider, in this context, the problem of transmitting EM power to (and information from) very small CMOS chiplets embedded in tissue; does this approach scale to allow high density neural recordings? Regardless of the specific implementation, any such chiplet will contain a resonant component that couples to the EM waves; such a system can be modeled as a series/parallel RLC (for the purposes of this exercise, one may presume that a suitable method exists for modulating the quality factor or mutual coupling of the RLC as a function of neural activity). Given this, the performance of electromagnetic power transfer suffers from two fundamental issues. First, the extreme constraint on the size of the node limits the maximum achievable values of the passives. Assuming a planar square loop inductor, calculations predict the resonant frequency of a 100 $\mu$m neural dust would be $\sim$10 GHz. [**Fig. 2 (a)**]{} plots the modeled channel loss, or the attenuation of the EM signal as it propagates through 2 mm of brain tissue, due to tissue absorption and beam spreading, as a function of frequency. We observe that there is an exponential relationship between the channel loss and the frequency, and at 10 GHz the total combined loss for one-way transmission is approximately 20 dB. Moreover, at these very small footprints (compared to the wavelength, which is in the millimeter range), the receive antenna efficiency becomes quite small, thereby easily adding roughly 20 dB of additional loss, resulting in a total gain of at most -40 dB. The tissue absorption loss penalty incurred by operating at a high frequency can be reduced by increasing the capacitance density using 3D interdigitated capacitor layouts, but even then, as shown in [**Fig. 2 (b)**]{}, scaling down the dimensions of the chiplets increases the resonant frequency of the link, causing an exponential increase in the tissue absorption loss and the overall channel loss, and the efficiency of EM transmission becomes minuscule.
![[**(a)**]{} Total channel loss in 2 mm brain tissue, due to both tissue and propagation loss, increases exponentially with frequency, resulting in a 20 dB of loss at 10 GHz. [**(b)**]{} The mutual coupling, and therefore link efficiency, also reduces dramatically with the scaling of chiplet dimensions.[]{data-label="fig2"}](fig2.png){width=".485\textwidth"}
To make matters worse, the mutual coupling between the transmitter and receiver coils drops dramatically and significantly degrades the transfer efficiency and increases the sensitivity to misalignments [@Salim; @Fotopoulou]. As shown in [**Fig. 2 (b)**]{}, EM transmission with a 100 $\mu$m neural dust embedded 2 mm into the cortex results in 64 dB of transmission loss. Given a 1 mm$^2$ transmitter aperture outputting 100 $\mu$W of power – limited by the need to satisfy safety regulations on output power density of 10 mW/cm$^2$ – the resulting received power at the neural dust is $\sim$40 pW. This is orders of magnitude smaller than the power consumption imposed by noise requirements on the front-end amplification circuitry (refer to later sections for further discussion). As a result, prior work by [@Biederman], which features the most energy-efficient and smallest wirelessly EM powered neural recording system to date, at 2.5 $\mu$W/channel and 250 $\mu$m x 450 $\mu$m, is limited in terms of further dimensional scaling and increasing the range (the effective range within brain tissue for this work was 0.6 mm). We conclude that due to the non-linear interplay of form factor, speed of light, and frequency spectra of tissue absorption, EM power transmission is not an appropriate energy modality for the powering of 10’s of $\mu$m sized neural dust implants.\
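
The EM link budget above can be checked in a few lines. This is a minimal sketch: the 100 $\mu$W transmit power (1 mm$^2$ aperture at the 10 mW/cm$^2$ safety limit) and the 64 dB total loss are the figures quoted above, while the helper name is ours.

```python
def received_power(p_tx_watts, loss_db):
    """Received power after a one-way link loss given in dB."""
    return p_tx_watts * 10 ** (-loss_db / 10)

# 1 mm^2 aperture at the 10 mW/cm^2 EM safety limit -> 100 uW transmit.
p_tx = 10e-3 * 0.01        # W/cm^2 times cm^2
# Modeled 64 dB loss for a 100 um node 2 mm deep.
p_rx = received_power(p_tx, 64.0)
print(f"received power: {p_rx * 1e12:.0f} pW")   # ~40 pW
```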
\
*Ultrasound is attractive for in-tissue communication given its short wavelength and low attenuation.*\
\
Ultrasonic transducers have found application in various disciplines including imaging, high intensity focused ultrasound (HIFU), nondestructive testing of materials, communication and power delivery through steel walls, underwater communications, transcutaneous power delivery, and energy harvesting [@Ishida; @Wong; @Ozeri; @Richards]. The idea of using acoustic waves to transmit energy was first proposed in 1958 by Rosen [@Rosen] to describe the energy coupling between two piezoelectric transducers. Unlike electromagnetics, using ultrasound as an energy transmission modality never entered into widespread consumer application and was often overlooked because the efficiency of electromagnetics for short distances and large apertures is superior. However, at the scales discussed here and in tissue (i.e., aqueous media) the low acoustic velocity allows operation at dramatically lower frequencies, and more importantly, the acoustic loss in tissue is generally substantially smaller than the attenuation of electromagnetics in tissue ([**Table 1**]{}).
As mentioned earlier, the relatively low acoustic velocity of ultrasound results in a substantially reduced wavelength compared to EM. For example, 10 MHz ultrasound in brain tissue has a wavelength $\lambda$ = 150 $\mu$m, while for 10 GHz EM, $\lambda$ = 5 mm [@Hoskins]. This smaller wavelength implies that for the same transmission distance, ultrasonic systems are much more likely to operate in the far-field, and hence offer more isotropic characteristics than an EM transmitter (i.e., the ultrasonic radiator can obtain larger spatial coverage). This opens up the prospect of interrogation of multiple nodes via frequency binning. More importantly, the acoustic loss in brain tissue is fundamentally smaller than the attenuation of electromagnetics in tissue because acoustic transmission relies on compression and rarefaction of the tissue rather than time-varying electric/magnetic fields that generate displacement currents on the surface of the tissue [@Leighton]. This is also manifested by the stark difference in the time-averaged acceptable intensity for ultrasound for cephalic applications, regulated by the FDA, which is approximately 9x that of EM (94 mW/cm$^2$) for general-purpose devices and 72x (720 mW/cm$^2$) for devices conforming to output display standards (ODS) (recall that EM is limited to 10 mW/cm$^2$) [@FDA].
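These wavelength and attenuation figures are easy to reproduce. The sketch below uses an assumed sound speed of $\sim$1500 m/s in brain tissue and the 0.5 dB/(cm$\cdot$MHz) ultrasonic attenuation coefficient quoted from [@Hoskins]; the function names are ours.

```python
C_TISSUE = 1500.0    # m/s, assumed speed of sound in brain tissue

def wavelength(speed_m_s, freq_hz):
    """In-medium wavelength, lambda = v / f."""
    return speed_m_s / freq_hz

def us_attenuation_db(depth_cm, freq_mhz, alpha=0.5):
    """One-way ultrasonic attenuation; alpha in dB/(cm*MHz)."""
    return alpha * depth_cm * freq_mhz

lam_us = wavelength(C_TISSUE, 10e6)   # 10 MHz ultrasound
print(f"ultrasound wavelength: {lam_us * 1e6:.0f} um")          # ~150 um
print(f"loss over 2 mm at 10 MHz: "
      f"{us_attenuation_db(0.2, 10):.1f} dB")                   # ~1 dB
```

Note that the 1 dB ultrasonic loss over the 2 mm depth considered here is a small fraction of the $\sim$20 dB tissue loss modeled for 10 GHz EM.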
As an aside, in order to increase the instantaneous power captured by an implant, FDA regulations would allow an interrogator to transmit up to 190 W/cm$^2$ of spatial peak pulse-averaged power density. This approach, however, must be taken with caution as more in-depth studies of the thermal impact of duty-cycled operation on the tissue are necessary to determine safe parameters of the applied duty-cycle and meet the time-averaged power level constraint [@Tufail; @King]. Also, as demonstrated by a body of work investigating the effectiveness of ultrasound as a means of modulating neuronal activity [@Foley; @Krasovitski; @Tyler; @Hameroff], systems operating in this regime may be capable of micro-stimulating the brain at a CW time-averaged output intensity as low as 1 W/cm$^2$ [@Tsui], and cause tissue ablation through heating and cavitation at intensities in the focal region of 100 - 1000 W/cm$^2$ [@Zhou].\
\
*Piezoelectric ultrasonic transducers suitable for implanted applications are available.*\
\
Piezoelectricity refers to the phenomenon present in certain solid (usually crystalline) materials where there is an interaction between the mechanical and electrical states. As a result, piezoelectric materials can transduce electrical energy into mechanical energy and vice versa by changing lattice structure, and this state change is accessible via either electrical stimulation or mechanical deformation. These materials serve as a critical component in the construction of probes that generate ultrasonic waves to enable ultrasound technology used in the medical industry. A relatively wide range of piezoelectric materials are available, each suitable for different applications. For instance, materials such as single crystal lithium niobate (LiNbO$_3$) and polymer PVDF are excellent choices for fabricating large aperture single element transducers due to their low dielectric permittivity [@Shung]. On the other hand, a ceramic compound known as lead zirconate titanate (PZT) is a popular choice for high performance diagnostic ultrasonic imaging due to its greater sensitivity, higher operational temperature, and exceptional electromechanical coupling coefficient. The electromechanical coupling coefficient is a figure of merit used to describe the ability of a material to convert one form of energy into another, and is defined as the ratio of stored mechanical energy to total stored energy in a given material. The lead content of PZT makes it difficult to introduce into human tissue in chronic applications; several works have demonstrated encapsulation as an option to avoid this issue [@Zenner; @Maleki], but the long-term stability of such encapsulation layers remains to be investigated.
[**Table 1.** Comparison of both the scale and the loss incurred in brain tissue between ultrasound and EM radiation, displaying the stark differences in the achievable spatial resolution (set by the wavelength) and the tissue/path loss at the operating frequency of a 100 $\mu$m neural dust (\*attenuation of ultrasound in brain is 0.5 dB/(cm$\cdot$MHz) [@Hoskins])]{}.
Luckily, biocompatible piezoelectric materials exist with properties similar (but generally inferior) to PZT; these include barium titanate (BaTiO$_3$), aluminum nitride (AlN) and zinc oxide (ZnO) [@Przybyla]. Although the dielectric coefficients of AlN and ZnO are less than one-hundredth that of BaTiO$_3$ (which can result in an improvement in the signal to noise ratio due to the lower parallel plate capacitance), their piezoelectric coefficient (which is critical to the link efficiency) is one-tenth that of BaTiO$_3$. Therefore, BaTiO$_3$ transducers are assumed for the remainder of the paper. Clearly, material engineering to synthesize higher performance piezoelectric composite materials and reliability studies to assess performance over extended periods of operation are both active areas of research that can significantly contribute to the realization of neural dust.
System design and constraints: Power Delivery
=============================================
There are several implementation strategies for the neural dust. A neural dust can be an *active node*, which consists of a piezoelectric transducer to recover power at the sensing site to activate CMOS electronics for data pre-processing, encoding, and transmission, or a *passive node*, which maximizes the reflectivity of the dust as a function of a measured potential. In an active node scheme, the design of neural dust is heavily constrained in both size and available power to the implant. As a result, it is imperative to accurately model the transmission channel to maximize the power efficiency. Therefore, this section elaborates design tradeoffs and methodologies for power delivery optimization.\
\
*The propagation characteristics of ultrasound must be considered in determining the maximum range of neural dust and the optimal dimension of the external interrogator.*\
\
As the pressure field generated by a uniform continuous-wave excited piezoelectric transducer propagates through the tissue medium, the characteristics of the pressure field change with distance from the source. The varying field is typically divided into two segments, *near field* and *far field*. In the near field, the shape of the pressure field is cylindrical and the envelope of the field oscillates. At some point distal to the transducer, however, the beam begins to diverge and the pressure field becomes a spherically spreading wave, which decays inversely with distance. The transition between the near and far field is where the pressure field converges to a natural focus, and the distance at which this occurs is called the Rayleigh distance, defined as,
$$L = \frac{D^2 - \lambda^2}{4\lambda} \approx \frac{D^2}{4\lambda}, \quad D^2 \gg \lambda^2$$
where D is the aperture width of the transmitter and $\lambda$ is the wavelength of ultrasound in the propagation medium. In order to maximize the received power, it is preferable to place the receiver at one Rayleigh distance where the beam spreading is at a minimum. Therefore, with 2 mm of transmission distance and a resonant frequency of 10 MHz ($\lambda$ = 150 $\mu$m), the maximum dimension of the external interrogator should be $\sim$1 mm.\
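
The Rayleigh-distance sizing argument above can be sketched numerically: solving $L = D^2/(4\lambda)$ for the aperture $D$ with $L$ = 2 mm and $\lambda$ = 150 $\mu$m gives the $\sim$1 mm figure quoted (the function names are ours).

```python
import math

def rayleigh_distance(aperture_m, wavelength_m):
    """Near-field/far-field transition, L ~ D^2 / (4*lambda)."""
    return aperture_m ** 2 / (4 * wavelength_m)

def aperture_for_focus(depth_m, wavelength_m):
    """Aperture D whose Rayleigh distance equals the target depth."""
    return math.sqrt(4 * wavelength_m * depth_m)

lam = 150e-6                          # 10 MHz ultrasound in tissue
D = aperture_for_focus(2e-3, lam)     # natural focus at 2 mm depth
print(f"interrogator aperture: {D * 1e3:.2f} mm")   # ~1.1 mm, i.e. ~1 mm
```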
\
*Neural dust transducers can be simulated with finite element packages and incorporated into a KLM-based link model.*\
\
Due to the importance of piezoelectric transducers in various applications, a number of models of the electromechanical operation of one-dimensional piezoelectric and acoustic phenomena have evolved over the years. The KLM model is arguably the most common equivalent circuit and is a useful starting point to construct a full link model with the intent of examining scaling and system constraints [@KLM]. The basic model includes a piezoelectric transducer with electrodes fully covering the two largest faces of the transducer. The entire transducer is modeled as a frequency-dependent three-port network, consisting of one electrical port (where electric power is applied or collected) and two acoustical ports (where mechanical waves are produced or sensed from the front and back faces of the transducer). The parallel-plate capacitance due to the electrodes and the frequency-dependent acoustic capacitance are modeled as C and $X_i$, respectively, and the transduction between electrical and mechanical domains is modeled as an ideal electromechanical transformer with a turn ratio of $\Phi$, connected to the middle of a transmission line of length $\lambda$/2, as shown in [**Fig. 3**]{}. Assuming an infinite 2D plate piezoelectric transducer of thickness *h*, the resonant frequency is set by *h* = $\lambda$/2; at the resonant frequency, the ultrasound wave impinging on either the front or back face of the transducer will undergo a 180$^{\circ}$ phase shift to reach the other side, causing the largest displacement between the two faces. This observation implies that phase inversion only exists at the odd harmonics of the fundamental mode in a given geometry.
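The *h* = $\lambda$/2 condition corresponds to a fundamental thickness resonance of $f_0 = v/(2h)$ for an idealized plate. The sketch below is purely illustrative: the $\sim$5500 m/s longitudinal sound speed for BaTiO$_3$ is an assumed textbook value, not a figure from this paper, and, as discussed next, a cube-shaped dust node and tissue loading shift the resonance away from this plate approximation.

```python
V_BATIO3 = 5500.0   # m/s, assumed longitudinal sound speed in BaTiO3

def thickness_mode_f0(v_m_s, h_m):
    """Fundamental thickness-mode resonance of an ideal plate, f0 = v/(2h)."""
    return v_m_s / (2 * h_m)

# Idealized plate estimate for a 100 um thickness; the actual cube
# geometry deviates from this due to mode coupling (Poisson's ratio).
f0 = thickness_mode_f0(V_BATIO3, 100e-6)
print(f"ideal plate f0 for h = 100 um: {f0 / 1e6:.1f} MHz")
```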
The KLM model, however, was derived under the assumption of pure one-dimensional thickness vibration, and therefore can only provide a valid representation for a piezoelectric transducer with an aspect ratio (width/thickness) greater than 10 that mainly resonates in the thickness mode [@Roa-Prada]. Given the extreme miniaturization target for the neural dust, a cube dimension (aspect ratio of 1:1:1) is a better approximation of the geometry than a plate (aspect ratio > 10:10:1). Due to Poisson’s ratio and the associated mode coupling between resonant modes along each of the three axes of the cube, changing aspect ratio alters the resonant frequencies [@Holland]. The piezoelectric transducers for both the interrogator and the neural dust must be designed to resonate at the same frequency to maximize the link efficiency. In the model below, we assume the neural dust nodes are cubic and the external transceiver is approximately planar (i.e., 2D) so care must be taken not to confuse the thickness of the interrogator and the neural dust.
In order to obtain KLM parameters for the neural dust transducer, we simulated a cube transducer using a 3D finite element package (COMSOL Multiphysics) to model both the resonant frequency shift vs. a plate and the manifestation of spurious tones and higher resonances. The effect of resonance shift is included in the KLM model by extracting the effective acoustic impedance of the neural dust from the COMSOL model. To match the resonant frequency of the interrogator and the neural dust, the interrogator thickness is varied to match the fundamental thickness mode of the neural dust. Approximately 66 % of the total output energy is contained in the main thickness resonance; this is modeled as a loss term. Coupling into other modes, however, can be reduced by stretching BaTiO$_3$ in the \[110\] direction because BaTiO$_3$ is both anisotropic and partially auxetic, exhibiting negative Poisson’s ratio and therefore providing gain when stretched [@Baughman; @Aleshin]. Well-engineered placement of electrodes may enable orientation-insensitive implant nodes and can allow multi-node ad-hoc type communication networks. More on this topic will be elaborated in the discussion section.\
\
*The maximum energy transfer efficiency can be found via a link model consisting of a cascade of two-port networks.*\
\
A good model of the ultrasonic channel is crucial in order to assess the tradeoffs in optimizing systems for energy transfer through lossy brain tissue. The complete energy link model is shown in [**Fig. 4** ]{}and can be divided into three parts: (1) the ultrasonic interrogator or *transmitter*, (2) tissue, and (3) the neural dust or *receiver*. A signal generator and amplifying stages produce power for the ultrasonic transmitter through an impedance matching circuit that provides conjugate matching at the input. The ultrasonic wave launched by the interrogator penetrates brain tissue, modeled as a lossy transmission line, and a fraction of that energy is harvested by the ultrasonic receiver, or neural dust. We evaluated embedding the receiver up to 2 mm into the tissue, which generates an AC voltage at the electrical port of the piezoelectric transducer in response to the incoming ultrasonic energy.
![KLM model of a neural dust piezoelectric transducer, showing one electrical port and two mechanical ports. Coupling between the domains is modeled with an ideal electromechanical transformer.[]{data-label="fig3"}](fig3.png){width=".5\textwidth"}
In order to compute the link energy transfer efficiency, the model can be decomposed to a set of linear and time-invariant two-port parameters, representing a linear relationship between the input and output voltage. Here, we choose to represent the input-to-output relationship using ABCD parameters, which simplify analysis of cascades of two-port networks through simple matrix multiplication. By representing the link model with the two-port network, we can come to conclusions concerning optimal power transfer efficiency (or “gain”). Generally, maximum link efficiency ($G_{max}$) is achieved when we can conjugate match both the input and the output of a two-port network. However, with a 100 $\mu$m neural dust node, the output impedance level is such that it would require $\sim$100 $\mu$H of inductance to perfectly conjugate match the output of the two port link network. Given the compact form factor of the neural dust, it is completely infeasible to obtain such inductance with electrical means, and therefore $G_{max}$ is an unachievable figure of merit. It may be possible to approach $G_{max}$ by mechanical means such as the addition of material layers that perform an acoustic impedance transformation, or similarly, by electromechanical means such as utilizing micromachined acoustic resonators. We do not explore the first option in detail as it would likely lead to thickness increases on order of integer fractions of a wavelength (but see [**Fig. 5 (b)**]{} and below); the second option is touched upon in Discussion and Conclusion. Therefore, for comparison and scaling analysis, we assume we only have impedance control at the input, or the interrogator side, and therefore, power gain ($G_p$) is the suitable figure-of-merit.\
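
The ABCD bookkeeping described above reduces to 2x2 matrix multiplication. The sketch below is minimal and illustrative: it uses a lossless transmission-line stage and an arbitrary 50 $\Omega$ characteristic impedance rather than the lossy tissue line and KLM stages of the actual model.

```python
import cmath

def cascade(*stages):
    """Multiply 2x2 ABCD matrices, input-side stage first."""
    m = [[1, 0], [0, 1]]
    for s in stages:
        m = [[m[0][0] * s[0][0] + m[0][1] * s[1][0],
              m[0][0] * s[0][1] + m[0][1] * s[1][1]],
             [m[1][0] * s[0][0] + m[1][1] * s[1][0],
              m[1][0] * s[0][1] + m[1][1] * s[1][1]]]
    return m

def tline(z0, beta_l):
    """ABCD matrix of a lossless line of electrical length beta_l."""
    return [[cmath.cos(beta_l), 1j * z0 * cmath.sin(beta_l)],
            [1j * cmath.sin(beta_l) / z0, cmath.cos(beta_l)]]

# Sanity check: two quarter-wave sections cascade into a half-wave
# section, whose ABCD matrix is -I (the input repeats the load impedance).
half_wave = cascade(tline(50, cmath.pi / 2), tline(50, cmath.pi / 2))
```

The same cascade pattern extends to the full link: one matrix per stage (matching network, interrogator transducer, lossy tissue line, dust transducer), multiplied in order from source to load.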
\
*For a 100 $\mu$m node embedded 2 mm into the brain, ultrasonic power transmission can enable 7 % efficiency power links (-11.6 dB), resulting in a received power of $\sim$500 $\mu$W with a 1 mm$^2$ interrogator.*\
\
The complete link model is implemented in MATLAB with the limitations of the KLM model (as outlined in the previous section) corrected via COMSOL simulations. Given a 1 mm$^2$ interrogator, [**Fig. 5**]{} plots both the efficiency of the link and the received power at the sensor node as the size of the dust node scales and the thickness of the interrogating transducer is adjusted to match the resonant frequency of the dust node and the tissue (i.e., transmission line resonator). We note that the maximum efficiency of the KLM-adapted link model, where the interrogator is fully immersed in the tissue medium, is limited to 50 % because both the back and front side of the interrogator are loaded by the tissue layer. This results in an efficiency drop of 3 dB as the ultrasonic energy couples to both the front and back face of the transducer equally. Additionally, without any impedance matching, since the acoustic impedance of the tissue (1.5 MRayls) and that of BaTiO$_3$ (30 MRayls) are drastically different, significant reflection occurs at their boundaries. Depending on the thickness of neural dust and the resonant frequency of the network, ultrasonic waves launched by the interrogator undergo varying phase changes through the lossy tissue. Thus, the efficiency of a system with smaller dust nodes can be improved if the total propagation distance happens to be a multiple of a wavelength of the ultrasound. As a result, for dust nodes greater than 100 $\mu$m, we note that the efficiency does not monotonically increase with the dimension. On the other hand, for a dust node that is less than 100 $\mu$m in dimension, because the wavelength associated with the network’s resonant frequency is much smaller than its tissue propagation distance, the link efficiency depends more heavily on the cross-sectional area of the neural dust. Therefore, we note that the efficiency will drop at least quadratically with the reduction of neural dust dimension. 
The efficiency of the link can be improved with a $\lambda$/4 matching layer for impedance transformation, but the improvement is limited due to the loss from the material (e.g., attenuation of graphite epoxy is $\sim$16 dB/(cm$\cdot$MHz) [@Mills] compared to that in brain tissue which is 0.5 dB/(cm$\cdot$MHz) [@Hoskins]) as shown in [**Fig. 5 (b)**]{}. Note that for the case with this matching layer, the efficiency is worse for dust nodes that are >500 $\mu$m since the loss of the matching layer outweighs that of the tissue.
![Complete single interrogator, single neural dust power and communication through link models.[]{data-label="fig4"}](fig5.png){width=".47\textwidth"}
More specifically, simulation of the complete link indicates that for a 100 $\mu$m node embedded 2 mm into the brain, ultrasonic power transmission can enable 7 % efficient power transmission (-11.6 dB). As shown in [**Fig. 5 (a)**]{}, the optimal transmission frequency is 8 MHz; half of this peak $G_p$ can be maintained for carrier frequencies that are $\pm$2 MHz separated from this peak. At the resonant frequency, we can receive up to $\sim$500 $\mu$W at the neural dust node (resulting in nanometers of total displacement) with a 1 mm$^2$ interrogator, which is >10$^7$ times more than EM transmission at the same size scale (40 pW in [**Fig. 2**]{}). Scaling of neural dust also indicates that approximately 3.5 $\mu$W can be recovered by a dust node as small as 20 $\mu$m through ultrasonic transmission, which is still in the realm of feasibility to operate a state-of-the-art CMOS neural front-end. Designing an ultra-energy efficient neural front-end in CMOS in such a small footprint (20 $\mu$m x 20 $\mu$m), however, is an extremely challenging problem and is discussed in detail below.
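These quoted numbers are mutually consistent, as a quick check shows: a 1 mm$^2$ interrogator at the ODS-conformant FDA limit of 720 mW/cm$^2$ radiates $\sim$7.2 mW, and a 7 % link efficiency then delivers $\sim$500 $\mu$W (the constant names below are ours).

```python
import math

APERTURE_CM2 = 0.01    # 1 mm^2 interrogator aperture
I_ODS = 720e-3         # W/cm^2, FDA ODS time-averaged intensity limit
EFFICIENCY = 0.07      # simulated link efficiency for a 100 um node

p_tx = I_ODS * APERTURE_CM2          # ~7.2 mW radiated
p_rx = p_tx * EFFICIENCY             # ~500 uW received
eff_db = 10 * math.log10(EFFICIENCY) # ~-11.5 dB; the quoted -11.6 dB
                                     # reflects the unrounded efficiency
print(f"p_rx = {p_rx * 1e6:.0f} uW, efficiency = {eff_db:.1f} dB")
```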
System design and constraints: Sensing / Communication
======================================================
*Extracting neural potential recording from a noisy environment is a challenging problem.*\
\
The electrical activity of neurons is most directly measured as an electrical potential across the cellular membrane. As a result, the highest fidelity measurement can be achieved using patch-clamp methods, where a glass pipette is placed in the vicinity of the cell and an intra-cellular electrical connection is established by penetrating the cellular membrane and sealing the membrane around the pipette. While this approach is well studied and commonly practiced, it does not scale well and is currently not useful for chronic implants due to the complexity of the procedure (but see [@Kodan; @Robinson; @Yao]). Due to these limitations, clinically-relevant, implantable recordings are taken *extra-cellularly*; that is, electrical measurements are taken entirely outside the cells.
![[**(a)**]{} Ultrasonic power transfer efficiency vs. operating frequency for a 100 $\mu$m neural dust [**(b)**]{} Link efficiency with and without a matching layer as a function of the neural dust side dimension.[]{data-label="fig5"}](fig6.png){width=".5\textwidth"}
A typical extracellular electrophysiological recording of neural activity in tissue usually records electrical potential differences between one electrode placed in-tissue near the neural activity and a second electrode “far away” which acts as a global ground or counter electrode (depending on the configuration). The recorded signal consists of three components: an electrochemical offset that appears as a DC offset, typically in the range of 100’s of mV, low-frequency (0.1 – 600 Hz) changes [@Belitski] ($\sim$0.5 mV amplitude) often termed *local field potential* (LFP) from a spatial average of neural activity in the neighborhood of electrodes and high frequency (0.8 – 10 kHz) *action potential* (AP) or spiking events ($\sim$100 $\mu$V) associated with the discharge of individual neurons in the vicinity of the electrode [@Nicolelis]. Ignoring noise inherent in the recording equipment (which is usually not insubstantial), there are two main sources of cortical recording noise: thermal noise generated by the recording electrode and the tissue interface and biological interference which arises from asynchronous neural activity in close proximity to the recording site. Therefore, neural signal acquisition chains often rely on obtaining a maximum signal level at the front-ends and/or separating the $\mu$V-level desired signal from large offsets and low frequency disturbances.\
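
The electrode thermal-noise component mentioned here can be estimated with the Johnson–Nyquist formula $v_n = \sqrt{4k_BTRB}$. The 100 k$\Omega$ electrode impedance and 10 kHz bandwidth below are assumed, illustrative values (the AP band quoted above extends to 10 kHz), not figures from the text.

```python
import math

K_B = 1.380649e-23   # J/K, Boltzmann constant
T = 310.0            # K, body temperature
R = 100e3            # Ohm, assumed electrode impedance
B = 10e3             # Hz, assumed recording bandwidth

# Johnson-Nyquist rms noise voltage over the recording band.
v_noise = math.sqrt(4 * K_B * T * R * B)
print(f"thermal noise floor: {v_noise * 1e6:.1f} uV rms")   # ~4 uV rms
```

A floor of a few $\mu$V rms is already a sizable fraction of the $\sim$100 $\mu$V spike amplitudes, which is why maximizing the signal at the front-end matters.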
\
*Spatial separation of recording electrodes to maximize the achievable differential signal on neural dust is the bottleneck for scaling.*\
\
Free-floating extracellular recording at untethered, ultra-small dust nodes poses a major challenge in scaling. Unlike needle-like microelectrode shanks, which measure time-domain electrical potential at each recording site in relation to a common electrode placed relatively far away, here both the recording and the common electrode must be placed within the same (very small) footprint. Although the two are interchangeable, their separation, and therefore the maximum differential signal between the electrodes, is inherently limited by the neural dust footprint, and follows the dipole-dipole voltage characteristic that decreases quadratically with decreasing separation distance (unless very near a cell body, in which case it appears to scale exponentially; see [@Gold] for a more thorough review). Since the power available to the implant has a fixed upper bound (see above), the reduction of extracellular potential amplitude as the neural dust dimensions are scaled down, in the presence of biological, thermal, electronic, and mechanical noise (which do not scale), causes the signal-to-noise ratio (SNR) to degrade significantly; this places heavy constraints on the CMOS front-ends for processing and extracting the signal from extremely noisy measurements. Therefore, if we consider sufficient SNR at the input of the neural front-ends as one of the design variables, the scaling of neural dust (as depicted in [**Fig. 5 (b)**]{}) must be revisited.\
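
As a back-of-the-envelope sketch of this constraint (assuming the quadratic dipole falloff holds down to small separations, anchored at the $\sim$10 $\mu$V amplitude at 100 $\mu$m separation derived from [@Du]; both numbers are illustrative):

```python
# Sketch: differential AP amplitude vs. electrode separation, assuming a
# quadratic dipole-dipole falloff anchored at ~10 uV for a 100 um separation.
V_REF = 10e-6   # V, assumed amplitude at the reference separation (from [@Du])
D_REF = 100e-6  # m, reference electrode separation

def ap_amplitude(d):
    """Differential AP amplitude (V) at electrode separation d (m)."""
    return V_REF * (d / D_REF) ** 2

for d_um in (100, 50, 20):
    print(f"{d_um:3d} um separation -> {ap_amplitude(d_um * 1e-6) * 1e6:.2f} uV")
```

At a 20 $\mu$m separation this predicts only $\sim$0.4 $\mu$V of differential signal, which is why the noise sources that do not scale quickly come to dominate.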
\
*Careful co-optimization of piezoelectric transducer and CMOS front-end circuitry can push the operation of neural dust down at least to the 50 $\mu$m scale.*\
\
Focusing specifically on the scaling of a cubic neural dust, we run into the inherent limitation in the maximum achievable differential signal (discussed above). At a separation distance of 100 $\mu$m between recording electrodes, we expect a 10 $\mu$V AP amplitude \[data derived from [@Du]\], with the amplitude further reducing quadratically as the separation is reduced. Since the power available to the neural dust is limited, the design goal of a front-end architecture is to minimize the input-referred noise within this power budget. The power efficiency factor (NEF$^2\cdot V_{dd}$) quantifies the tradeoff between power and noise [@Muller], and extrapolating from the measurement result of a previous CMOS neural front-end design (NEF$^2\cdot V_{dd}$ of 9.42 [@Biederman]), we can estimate the relationship between the input-referred noise level and the DC power consumption of an optimally designed front-end architecture as we scale. The fundamental limit to NEF$^2\cdot V_{dd}$ occurs at a supply voltage of at least $\sim$4 $k_BT/q$, or 100 mV, needed to reliably operate the FET, and, by definition, an NEF of 1 for a single BJT amplifier [@Steyaert]. In principle, one could push the supply voltage down to $\sim$2 $k_BT/q$, but in practice 100 mV is already extremely aggressive.
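The noise–power tradeoff implied by the power efficiency factor can be sketched numerically using the textbook NEF definition [@Steyaert]; the PEF value, SNR target, and bandwidth are taken from this section, and the closed-form rearrangement below is our own illustration, not a substitute for the full analysis:

```python
import math

# Sketch: front-end power needed for a target input-referred noise, using the
# NEF definition and the power efficiency factor PEF = NEF^2 * Vdd.
k_B = 1.380649e-23   # J/K
q   = 1.602177e-19   # C
T   = 300.0          # K
U_T = k_B * T / q    # thermal voltage, ~26 mV

PEF   = 9.42         # NEF^2 * Vdd, extrapolated from a measured front-end
BW    = 10e3         # Hz, full neural signal bandwidth
v_rms = 10e-6 / 3    # V, input-referred noise for SNR = 3 on a 10 uV AP

# From NEF = v_rms * sqrt(2 * I_tot / (pi * U_T * 4*k_B*T * BW)):
#   P = I_tot * Vdd = PEF * pi * U_T * 4*k_B*T * BW / (2 * v_rms^2)
P = PEF * math.pi * U_T * 4 * k_B * T * BW / (2 * v_rms ** 2)
print(f"required front-end power ~ {P * 1e6:.1f} uW")
```

For a $\sim$3.3 $\mu$V-rms noise floor this lands in the few-$\mu$W range, which is the quantity traded off against the harvestable power in [**Fig. 6**]{}.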
Fixing the input SNR to 3, which should be sufficient for extracting neural signals, we can evaluate the scaling capability of neural dust as shown in [**Fig. 6**]{}. We assumed the use of BaTiO$_3$ in the model described in the section above and did not include the use of matching layers. We also assumed that the interrogator’s output power is constrained by the two different FDA-approved ultrasonic energy transfer protocols. We note that there exists an inherent tradeoff between the power available to the implant and the steep increase in the power required to achieve an SNR of 3 as the spacing between the electrodes is reduced. The point of intersection in [**Fig. 6**]{} denotes the minimum size of neural dust that enables the operation of the complete link. For the stated assumptions, this occurs at 50 $\mu$m, which is greater than the dimension at which the thermal noise from the electrode (R = 20 k$\Omega$ and BW = 10 kHz) limits further scaling. This effectively means that, staying within FDA-approved ultrasound power limits and assuming an SNR of 3 is required, neural dust nodes smaller than 50 $\mu$m cannot receive enough power to distinguish neural activity from noise. Note that the cross-over assumes 100 % efficiency in the rectifier and zero overhead cost in the remaining circuitry, neither of which will be true in practice (i.e., the actual size limit will be larger than this).
Given the lower size limit for scaling these systems, as well as the need to implant them entirely in the cortex, both wireless power and communication schemes are required for the neural dust nodes. The communication strategy is detailed below.\
\
*Neural electrophysiological data can be reported back via backscattering – i.e., modulating reflection of the incident carrier.*\
\
Radio frequency identification (RFID) technologies have found broad adoption in the past decade, made possible by advances in wireless powering techniques as well as the improved energy-efficiency of computational substrates. In general, RFID employs two different mechanisms to communicate with sensor tags: active and passive [@Weinstein]. When queried, *active tags*, which are battery-powered and contain a low-power radio like conventional wireless devices, internally generate electromagnetic radiation in order to transmit the data back to the reader. In contrast, *passive* and *semi-passive* tags transmit data by modulating the incoming RF energy and re-radiating the modulated RF energy back to the reader, a method called *backscattering*. Modulation of the backscattered RF energy can be achieved by varying the load impedance, which changes the coefficient of reflectivity. Furthermore, backscattering is amenable to parallel communication among sensor tags, with a single interrogator distinguishing among different tags by using frequency diversity [@Finkenzeller]. Multi-mode strategies are discussed in Discussion and Conclusions.
![As we scale down the neural dust size, more power is needed to keep the noise floor down to maintain SNR while less power is captured. The intersection of these two trends is the smallest node that will still operate. Scaling with an SNR of 3 shows operation down to 50 $\mu$m. The analysis assumes the use of BaTiO$_3$, two different FDA-approved ultrasonic energy transfer protocols, and does not include the use of matching layers.[]{data-label="fig7"}](fig8.png){width=".47\textwidth"}
For the ultra-miniature, chronic implants discussed here (which have stringent requirements on both the size and power available to the implant), broadcasting the information back to the interrogator via backscattering is a more attractive choice than building a fully active transmitter on the implant. As a passive device, backscattering receivers do not need batteries or significant capacitive energy storage, thus extending lifetimes, eliminating the risk of battery leakage, and removing the significant impediment to size scaling that would be created by the dramatically reduced capacitance available on a small node. The powering and communication strategies developed for electromagnetic backscattering can be applied to any link, regardless of the transmission channel modality (i.e., ultrasound).\
\
*Co-integration of CMOS and piezoelectric transducer is challenging, but CMOS can provide dynamic control over the load impedance.*\
\
The CMOS component of an active neural dust node must at least consist of a full-wave bridge rectifier to convert the harvested piezoelectric AC signal to a DC level and regulators to generate a stable and appropriate DC supply voltage for the rest of the CMOS circuitry. The basic architecture of the CMOS front-ends will depend on the application. For the acquisition of the entire neural signal trace, we must capture both the LFP and action potentials. Given the relative amplitude, DC offset, and frequency range of these signals, the circuit must operate at a full bandwidth of 0 to 10 kHz with >70 dB of input dynamic range [@Muller]. Researchers have demonstrated a mixed-signal data acquisition architecture solution to extract LFP and action potentials, originally proposed in [@Muller], which cancels the DC offset in the analog domain to alleviate the dynamic range constraints and to eliminate bulky passive components used in [@Yazicioglu1; @Yazicioglu2]. Therefore, the CMOS front-ends include rectifiers, voltage regulators, low-noise amplifiers, DC-coupled analog-to-digital converters (ADC) and modulators to communicate the decoded information back to the interrogator.
Co-integration and packaging challenges and – most importantly – the footprint of current CMOS neural front-ends present major roadblocks to the active implant approach. The smallest CMOS neural front-end system published to date, not including rectifiers and modulators, occupies approximately 100 $\mu$m$^2$ of silicon real estate [@Muller], and packing the same functionality onto a smaller footprint may not be feasible. Thinned, multi-substrate integration to meet the volume requirements while keeping the overall CMOS area constant may resolve this issue, but requires substantial further technology development to represent a viable solution. Scaling the active electronics to appropriate dimensions is clearly a bottleneck, but presents an enticing opportunity for further innovation to address the issue.
System design and constraints: Passive node
===========================================
*A MOSFET (Metal-Oxide-Semiconductor field effect transistor) may be used to modulate the impedance of the transducer as a function of neural signals, obviating the need for active front-ends.*\
\
Ideally, the simplest neural dust would consist of a piezoelectric transducer with a set of surface electrodes that can record the occurrence of a neural spike, and the extracted measurement can be reported back to the interrogator by somehow encoding the information on top of the incoming ultrasound wave. The design methodology we adopt here is that of elimination: starting with current neural front-end architectures that consist of, but are not limited to, rectifiers, high resolution ADC, amplifiers, regulators and modulators, we start eliminating each component to truly understand its impact on overall system performance, and therefore assess its necessity for inclusion on the dust node itself. Rectifiers and voltage regulators are essential to provide a stable DC power supply for the transistors in the system. In order to prevent variations in the electrical response of the circuits with the variation of its power supply, it is important to have sufficient amount of capacitance to curb any supply ripple and filter out high frequency electrical noise. As a result, these two components tend to occupy the largest amount of space in the CMOS die footprint.
Here, let us re-examine the need for a DC supply as we entertain the idea of completely eliminating both the rectifiers and the voltage regulators. In this scenario, the piezoelectric transducer harvests the incoming ultrasonic wave and directly converts it to an AC electrical voltage. At this point, the design goal essentially boils down to devising ways of encoding neural data on top of this incoming ultrasound wave, to be reported back to the interrogator via modulation.
We propose a method outlined in [**Fig. 7**]{}, where the drain (D) and source (S) of a single FET sensor are connected to the two terminals of a piezoelectric transducer while the FET modulates the current $I_{DS}$ as a function of a gate (G) to source voltage, $V_{GS}$. In this scheme, given that the supplied $V_{DS}$ of the FET is an AC voltage that swings both positive and negative, the body (B) of the FET must be biased carefully. Normally, for an NFET, the body is connected to the source voltage to prevent the diode at the B-S and B-D junctions from turning on. However, keep in mind that since a FET is a symmetric device, the source and drain are defined only by which terminal is at a lower potential. Therefore, the electrical source/drain terminals, or left/right for disambiguation (from a cross section of MOS device), swap physical sides every half cycle of the harvested AC waveform. As a result, simply shorting the body to either physical terminal of the FET causes the diode formed at the B-S and B-D junctions to be forward-biased, so care must be taken to prevent the neural signal from modulating the incoming sinusoid during only half of the cycle.
As a result, we propose an alternative biasing scheme for the FET to modulate the entire sinusoid as shown in [**Fig. 7**]{}. The resistors $R_b$ act to cause the neural potential to appear between the gate and both of the left/right terminals of the transistors while superimposing the AC waveform from the ultrasonic transducer across these same two terminals. In this manner, even though the electrical source/drain terminals swap every half cycle, during both halves of the cycle the $V_{GS}$ of the FET is modulated by the neural signal.
{width=".885\textwidth"}
The circuit achieves this superposition by relying on the fact that the neural signals occupy a much lower frequency band than the ultrasound, and that the ultrasound transducer itself has a capacitive output impedance ($C_{piezo}$). Thus, $R_b$ should be chosen so that 1/($R_b\cdot C_{piezo}$) is placed well above the bandwidth of $V_{neural}$ (>10 kHz) but well below the ultrasound frequency ($\sim$10 MHz for a 100 $\mu$m node). $R_b$ along with the transistor width must also be chosen carefully to achieve the best reflectivity, as will be described shortly.
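As a numerical sketch of this design window (with $C_{piezo}\sim$1 pF assumed purely for illustration; the real value depends on the transducer material and geometry):

```python
import math

# Sketch: choosing R_b so that the corner frequency 1/(2*pi*R_b*C_piezo) sits
# well above the neural bandwidth but well below the ultrasound carrier.
# C_piezo ~ 1 pF is an illustrative assumption for a 100 um node.
C_piezo = 1e-12       # F (assumed)
f_neural = 10e3       # Hz, upper edge of the neural band
f_ultrasound = 10e6   # Hz, carrier for a ~100 um node

R_max = 1 / (2 * math.pi * f_neural * C_piezo)      # corner at 10 kHz
R_min = 1 / (2 * math.pi * f_ultrasound * C_piezo)  # corner at 10 MHz
print(f"R_b window: {R_min / 1e3:.0f} kOhm .. {R_max / 1e6:.1f} MOhm")
```

Under these assumptions $R_b$ has roughly three decades of slack, so the reflectivity optimization described next, rather than the corner placement, dominates its final choice.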
Since modulation of $I_{DS}$ in turn modulates the impedance seen across the two piezoelectric drive terminals, the FET effectively modulates the backscattered signal seen by a distant transmitter. The change in the nominal level of $I_{DS}$ is a function of $V_{GS}$, which can be up to 10 $\mu$V ($V_{neural}$) for a 100 $\mu$m dust node near an active neuron. The sensitivity, S, to the action potential, then, is defined as the change in $I_{DS}$ with respect to $V_{GS}$ normalized by the nominal $I_{DS}$ (in addition to the current through $R_b$) and $V_{neural}$, $$S=\mfrac{V_{neural}}{I_{DS}+V_{DS}/2R_b}\cdot\mfrac{\partial I_{DS}}{\partial V_{GS}}=V_{neural}\cdot\mfrac{g_{m}}{I_{DS}+V_{DS}/2R_b}$$
Since $g_{m}$ (transconductance of a FET) is directly proportional to $I_{DS}$, in order to maximize $g_{m}/I_{DS}$ (i.e., achieving the largest $g_{m}$ for a given $I_{DS}$), we would like to operate the FET in its steepest region – specifically, deep sub-threshold where it looks like a bipolar junction transistor (BJT). Therefore, the nominal $V_{GS}$ bias can be 0 V, which simplifies the bias circuitry. The modulation of the current is equivalent to a change in the effective impedance of the FET, or the electrical load to the piezoelectric transducer. This variation in the load impedance affects the ultrasonic wave reflectivity at the neural dust and modifies the wave that is backscattered. Note that in order to maximize the sensitivity (i.e., operating the transistor in deep sub-threshold), the system should be constrained such that the piezoelectric voltage is never too large compared to the threshold voltage.
A SPICE simulation of a typical low-threshold voltage NFET in a standard 65 nm CMOS technology was used in order to assess the nominal current level and the change in the effective impedance of the electrical load with $V_{neural}$. We assumed that we can implement suitably large $R_b$ in sufficiently small area of the neural dust nodes. As previously mentioned, in deep sub-threshold, the FET behaves as a BJT, where the physical limit on the achievable $g_{m}/I_{DS}$ = $q/k_BT$, determined by the Boltzmann distribution of carriers. As a result, we can obtain S = 400 ppm for $V_{neural}$ = 10 $\mu$V with a perfect BJT. Given the non-ideality factors associated with FETs, the sensitivity is reduced by a factor of 1.5 – 2, to roughly 250 ppm, which is confirmed by the simulation.
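This arithmetic can be checked directly; the sub-threshold non-ideality (slope) factor $n = 1.5$ below is an assumed mid-range value standing in for the 1.5 – 2 range quoted above:

```python
# Sketch: sensitivity limit of the single-FET backscatter sensor.
k_B = 1.380649e-23   # J/K
q   = 1.602177e-19   # C
T   = 300.0          # K
V_neural = 10e-6     # V, AP amplitude at a 100 um node

# Ideal BJT-like limit: gm/I_DS = q/(k_B*T), so S ~ V_neural * q/(k_B*T).
S_ideal = V_neural * q / (k_B * T)
# Sub-threshold MOS slope factor n ~ 1.5 (assumed) divides gm/I_DS.
n = 1.5
S_mos = S_ideal / n
print(f"S_ideal ~ {S_ideal * 1e6:.0f} ppm, S_mos ~ {S_mos * 1e6:.0f} ppm")
```

This reproduces the $\sim$400 ppm ideal limit and the roughly 250 ppm estimate confirmed by the simulation.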
The implication of the modification in the electrical properties of the NFET (output load of the piezoelectric transducer) on the change in the acoustic signal and the corresponding design specifications for the interrogator is discussed in detail below.
System design and constraints: Interrogator
===========================================
*Shorter transmission distance and larger aperture of the interrogator allow efficient trans-cranial power delivery via electromagnetics.*\
\
The focus of the paper up to this point has been on the constraints associated with scaling the neural dust. In order to interface with the BMI electronics and to post-process recorded neural data for brain mapping, an interrogator that can extract the information of the sensor nodes, perform precise localization and addressing, and provide power for the communication needs to be designed. To achieve a BMI-relevant density of neural recordings, neural dust implants may need to be spaced as close as 100 $\mu$m (embedded up to a depth of 2 mm into the cortex). On the other hand, the interrogator elements will be larger than the sensor nodes and will be spaced at a larger pitch (between 100 $\mu$m and 1 mm). Furthermore, for the preliminary system, we assume that the interrogator is placed beneath the skull and below the dura mater, to avoid strong attenuation of ultrasound by bone ($\sim$22 dB/(cm$\cdot$MHz) [@Hoskins]) and to prevent wave reflection and efficiency loss from impedance mismatch between different tissue layers and the skull. The complete trans-cranial transmitter system then would nominally contain an EM link to couple information through the skull [@Sanni]. We do not discuss the design of the RF trans-cranial communication link as that is covered in other work.\
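
To illustrate why sub-cranial placement matters, the quoted bone attenuation can be applied to an assumed skull thickness (5 mm is an illustrative round number, not a measured value):

```python
# Sketch: one-way ultrasound attenuation through the skull, using the quoted
# ~22 dB/(cm*MHz) bone attenuation. Thickness and frequency are assumptions.
ALPHA = 22.0   # dB per (cm * MHz), attenuation coefficient of bone
f_MHz = 10.0   # MHz, carrier for a ~100 um node
t_cm  = 0.5    # cm, assumed skull thickness (5 mm)

loss_dB = ALPHA * f_MHz * t_cm
print(f"one-way trans-skull loss ~ {loss_dB:.0f} dB")
```

A loss on the order of 100 dB at these frequencies would make trans-skull ultrasonic powering impractical, hence the EM link through the skull and the ultrasonic link only below it.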
\
*Sufficient receiver sensitivity is required by the interrogator to resolve the occurrence of a neural spike.*\
\
A different set of challenges exist in implementing circuitry to generate, collect and process neural data. Namely, innovative approaches are essential to 1) ensure that the interrogator/sensor combination has sufficient sensitivity to meet the necessary data resolution for BMI and 2) allow for combination of various multi-node interrogation strategies to distinguish among different sensor nodes.
For the analysis carried out in this paper, we assumed that the power and size constraints of the neural dust, and not the interrogator, are the major bottlenecks in the scaling of ultrasound-mediated neural dust system. In order to verify the validity of this assumption, we can examine, to the zeroth order, the power required by the interrogator to achieve certain receiver sensitivity for a passive implementation of the neural dust node. From the complete link model shown in [**Fig. 4**]{}, we note that the change in the electrical impedance of the NFET load induces a change in the input admittance (or the input power) of the two-port network. The interrogator (receiver) must be able to detect this change in the input power level in order to resolve the occurrence of a neural spike. Therefore, we need to determine the size of the FET sensor on the dust node that maximizes the change in the input power level of the two-port network, or, $$\Delta P_{in} \propto \left|\mfrac{Y_{in,spike}-Y_{in,nom}}{Y_{in,nom}}\right|$$ where $Y_{in,spike}$ and $Y_{in,nom}$ denote input admittance of the two-port network with and without a neural spike, respectively. [**Fig. 8**]{} shows the result of the optimization problem with a standard 65 nm CMOS technology. For 100 $\mu$m and 20 $\mu$m dust nodes, 75 $\mu$m and 16 $\mu$m width FET maximize $\Delta P_{in}$, respectively. Note that since the curve of $\Delta P_{in}$ versus transistor width (i.e., nominal impedance) is quite flat near the optimum, the passive node is insensitive to the effects of threshold variability in the transistors and DC offsets in the neural electrodes.
The FET sensor design variable (transistor width), however, is constrained due to the thermal noise of the FET (which sets the lower limit) and the maximum available power at the node and the neural dust form factor (which set the upper limits). Clearly, the small footprint of the neural dust restricts the maximum effective width of the FET sensor that we can pack on the dust, and we term this the *area limit*. More importantly, we need to ensure that the thermal voltage noise of the FET does not overwhelm the AP voltage. As a result, for a fixed bandwidth, in order to lower this voltage noise floor of the FET, it is necessary to increase the bias current, and hence the power consumption given a fixed output voltage. Given a simple single-ended transistor amplifier with a single dominant pole, a bias current of $I_{DS}$, and a transconductance of $g_m$, the minimum bias current required can be derived as, $$I_{DS} = \mfrac{\pi}{4}\cdot\mfrac{4k_BT}{v_n^2}\cdot\mfrac{k_BT}{q}\cdot BW$$ where $v_n^2$ is the input-referred voltage noise. As a result, the FET must be large enough to be able to sustain this minimum bias current. Therefore, for a BW = 10 kHz and voltage SNR at the input of the FET of 3 (which sets $v_n^2$ based on $V_{neural}$), we can compute the minimum allowable size of the FET, restricted by the *noise limit*. Finally, in order to reliably operate the FET, the drain-source voltage of the FET must be at least $\sim$4 $k_BT/q$ or 100 mV. As a result, neural dust must capture enough power from the interrogator to sustain both 100 mV and the minimum current required to ensure that the thermal noise does not dominate the AP voltage. This is defined as the *power limit*.
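The noise-limit expression above can be evaluated directly for the SNR target used throughout this paper:

```python
import math

# Sketch: minimum bias current from the noise-limit expression in the text,
#   I_DS = (pi/4) * (4*k_B*T / v_n^2) * (k_B*T/q) * BW,
# evaluated for SNR = 3 on a 10 uV action potential.
k_B = 1.380649e-23       # J/K
q   = 1.602177e-19       # C
T   = 300.0              # K
BW  = 10e3               # Hz
v_n2 = (10e-6 / 3) ** 2  # V^2, input-referred noise power for SNR = 3

I_min = (math.pi / 4) * (4 * k_B * T / v_n2) * (k_B * T / q) * BW
P_min = I_min * 0.1      # minimum supply ~100 mV (~4 k_B*T/q)
print(f"I_min ~ {I_min * 1e9:.0f} nA, P_min ~ {P_min * 1e9:.0f} nW")
```

A few hundred nA at a 100 mV supply places the *power limit* in the tens of nW for the sensor FET alone, before any overhead.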
![Change in the input power level (i.e., power at the interrogator) as a function of transistors width for a 65nm CMOS process and with [**(a)**]{} 100 $\mu$m and [**(b)**]{} 20 $\mu$m neural dust nodes.[]{data-label="fig6"}](fig13.png){width=".49\textwidth"}
With such restrictions, [**Fig. 8**]{} shows that for a 100 $\mu$m dust node, we can design a FET sensor to generate a 16.6 ppm change in the input power with a measured $V_{neural}$. This results in $\sim$120 nW (-39 dBm) of backscattered power at the input given a 1 mm$^2$ interrogator aperture outputting 7.2 mW of power to satisfy safety regulations on output power density of 720 mW/cm$^2$. With such power levels, given a thermal noise spectral density of -174 dBm/Hz of input noise power, 10 kHz of BW, 10 dB of noise figure, and 10 dB of SNR, a traditional CMOS receiver should be sensitive enough to detect at minimum -114 dBm of input power. A number of highly sensitive receivers with sub-mW DC power consumption have been demonstrated (e.g., [@Otis]).
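The receiver budget quoted above follows from standard link arithmetic (the noise figure and SNR values are those assumed in the text):

```python
import math

# Sketch: receiver sensitivity budget and backscatter level for a 100 um node.
BW      = 10e3                            # Hz
kTB_dBm = -174 + 10 * math.log10(BW)      # thermal noise power in 10 kHz BW
NF_dB   = 10                              # assumed receiver noise figure
SNR_dB  = 10                              # assumed detection SNR

P_min_dBm = kTB_dBm + NF_dB + SNR_dB      # minimum detectable input power

P_backscatter_W = 120e-9                  # ~120 nW backscattered (16.6 ppm of 7.2 mW)
P_backscatter_dBm = 10 * math.log10(P_backscatter_W / 1e-3)
print(f"sensitivity ~ {P_min_dBm:.0f} dBm, backscatter ~ {P_backscatter_dBm:.1f} dBm")
```

The $\sim$75 dB margin between the backscattered level and the sensitivity floor is what makes a conventional CMOS receiver sufficient here.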
For a 20 $\mu$m dust, however, [**Fig. 8**]{} shows that the upper limit on the FET size imposed by the power limit is lower than the lower limit set by the noise limit, indicating that the passive implementation of neural dust system scales roughly to 20 $\mu$m.
Re-design of neural dust node
=============================
The scaling of both *active* and *passive* node implementations presented above is limited by the noise requirement of the front-end architectures, which is determined by the achievable differential signals between the electrodes. Decoupling the inherent tradeoff between the size of individual implants and the achievable SNR can improve the scaling of these implementations.\
\
*Re-thinking the design of neural dust can enhance its scalability.*\
\
Since the trade-off derives directly not from the neural dust dimension, but from electrode separation, one approach may be to add very small footprint ($\sim$1 – 5 $\mu$m wide) “tails” which position a single (or multiple) electrode relatively far (> 50 – 100 $\mu$m) from the base of the neural dust implant. This would result in the design shown in [**Fig. 9**]{}, where instead of placing a single differential surface electrode on neural dust, the neural dust can consist of a short strand of flexible and ultra-compliant substrate populated with recording sites. Assuming that the achievable electrode separation in the tail of a 20 $\mu$m node is 100 $\mu$m, this implies that the noise limit, as shown in [**Fig. 8**]{}, will set the lower bound to 0.4 $\mu$m of transistor width and allow the design of a FET sensor on the dust node that achieves the optimal sensitivity, at 2.3e-3 ppm. This corresponds to 16.6 pW (-77.8 dBm) of backscattered power at the input, which is still in the realm of feasibility with a traditional CMOS receiver [@Otis]. Therefore, this approach can address one of the major pitfalls with only a minor adjustment to the original idea as this neural dust still operates under the same principle as before, but has higher achievable SNR.
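As a consistency check on these numbers (taking the 7.2 mW interrogator output quoted earlier; reading the ppm figure as the backscatter modulation depth, as in the 100 $\mu$m case above):

```python
import math

# Sketch: consistency check on the tail-node backscatter numbers, assuming
# the 7.2 mW interrogator output quoted earlier in the text.
P_out = 7.2e-3         # W, interrogator output power (assumed, from text)
depth = 2.3e-3 * 1e-6  # 2.3e-3 ppm modulation depth

P_bs = P_out * depth
P_bs_dBm = 10 * math.log10(P_bs / 1e-3)
print(f"backscattered power ~ {P_bs * 1e12:.1f} pW ({P_bs_dBm:.1f} dBm)")
```

This reproduces the 16.6 pW (-77.8 dBm) figure quoted above.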
Note that the exact technology used for the previous analysis is not critical to the conclusion we drew. Although the absolute value of the impedance level is important since it determines the reflection coefficient, and therefore, the efficacy of the backscatter, as shown in [**Fig. 8**]{}, the analysis above indicates that the optimal transistor width for the maximal sensitivity is small compared to the available neural dust footprint. Therefore, although the threshold voltage (hence the nominal impedance level per transistor width) may vary among different technology nodes, achieving the optimal impedance level within the footprint may not be an issue.
![Neural dust with an ultra-compliant flexible polyimide “tail”, populated with recording sites, can be envisioned to bypass the limits of the achievable differential signal between two electrodes placed on a neural dust footprint.[]{data-label="fig6"}](tail.png){width=".54\textwidth"}
In addition, since the analysis above does not take into account additional interference (e.g., ultrasonic wave reflections from other structures in the brain, such as vasculature), the sensitivity requirements of the interrogator are more stringent than predicted earlier. Such reflections will likely lead to intersymbol interference. In the case of an active node, such interference can be dealt with through adaptive equalization and/or error correcting codes [@Proakis]. For the passive system – which is effectively “transmitting” analog information back to the interrogator through the backscatter – some form of filtering could be applied to reverse the effects of these reflections. Alternatively, one could potentially utilize a pulse-based system to uniquely discriminate the various reflections based on their arrival times.
Discussion and Conclusions
==========================
The analysis presented points to three major challenges in the realization of ultra-small, ultrasound-based neural recording systems. The first is the design and demonstration of front-ends suitable for operating within the extreme constraints of decreasing available power and decreasing SNR with scale. This could be addressed with a combination of CMOS process and design innovation as well as thinned, multi-substrate integration strategies (see, for example, [@Sillon; @Smith]). The second challenge is the integration of extremely small piezoelectric transducers and CMOS electronics in a properly encapsulated package. The above discussion assumed the entire neural dust implant was encapsulated in an inert polymer or insulator film (a variety of such coatings are used routinely in neural recording devices; these include parylene, polyimide, silicon nitride and silicon dioxide, among others) while exposing two recording electrodes to the brain. The addition of “tails” as discussed above presents additional fabrication challenges. The third challenge arises in the design and implementation of suitably sensitive sub-cranial transceivers which can operate at low power (to avoid heating between skull and brain). In addition to these three challenges, this paper does not discuss *how* to deliver neural dust nodes into the cortex. The most direct approach would be to implant them at the tips of fine-wire arrays similar to those already used for neural recording. Neural dust nodes would be fabricated or post-fab assembled on the tips of array shanks, held there by surface tension or resorbable layers; a recent result demonstrates a similar approach to implant untethered LEDs into neural tissue [@Kim]. Once inserted and free, the array shanks would be withdrawn, allowing the tissue to heal. Kinetic delivery might also be an option, but there is no existing data to evaluate what effect such a method would have on brain tissue or the devices themselves.
The trans-cranial transmitter design also introduces multi-interrogator, multi-node communication possibilities that will need to be developed in order to enable the large number of recording sites envisioned in this paper. Because the neural dust nodes are smaller than a wavelength, the reflected signals will be subject to diffraction. With multiple nodes embedded and sufficiently wide transceivers, this presents an interesting inverse problem of potential benefit in resolving signals from different nodes. An alternative approach to multi-node communication would be to fabricate nodes with a variety of resonant frequencies and use frequency discrimination (i.e., each dust transmits on its own frequency channel). Lastly, neural dust nodes with aspect ratios close to 1:1:1 will not only couple energy into modes along the two axes perpendicular to the transmission axis, they will also re-radiate along those axes. This means nodes lying near each other on a “horizontal” plane (relative to the top surface of the cortex) may see inter-node signal mixing. This has interesting implications for node-to-node communication.
Finally, one of the more compelling possibilities would be to harness the considerable volume of research that has gone into micro- and nanoelectromechanical RF resonators (which easily operate in the MHz range [@Sadek; @Lin]) and thin-film piezoelectric transducers [@Przybyla; @TM] to produce devices with better power coupling as a function of scale, thus facilitating extremely small (10’s of $\mu$m) dust nodes. This remains an open opportunity.
The authors would like to thank Tim J. Blanche of Allen Institute of Brain Science, Konrad P. Kording of Northwestern University, Adam H. Marblestone of Harvard University, Emmanuel Quevy of Silicon Laboratories, Mikhail G. Shapiro of California Institute of Technology, Bradley M. Zamft of the US Department of Energy, and William Biederman, Peter Ledochowitsch, Nathan Narevsky, Christopher Sutardja, and Daniel J. Yeager of UC Berkeley for valuable discussions. This work was supported by the NSF Graduate Fellowship for DS and the Bakar Fellowship for JMC and MMM.
[10]{} Biederman W, Yeager DJ, Narevsky N, Koralek AC, Carmena JM, Alon E, Rabaey JM (2013) A Fully-Integrated, Miniaturized (0.125 mm$^2$) 10.5 $\mu$W Wireless Neural Sensor. *IEEE J Solid-State Circuits* 48(4):960-70.
Fan D, et al. (2011) A wireless multi-channel recording system for freely behaving mice and rats. *PLoS One* 6(7): 1-9.
Miranda H, Gilja V, Chestek CA, Shenoy KV, Meng TH (2010) HermesD : A High-Rate Long-Range Wireless Transmission System for Simultaneous Multichannel Neural Recording Applications. *IEEE Trans BioCAS* 4(3):181-91.
Szuts TA, et al. (2011) A wireless multi-channel neural amplifier for freely moving animals. *Nat Neurosci* 14(2):263-9.
Stevenson I, Kording K (2011) How advances in neural recording affect data analysis. *Nat Neurosci* 14(2):139-42.
Nicolelis MAL, Dimitrov D, Carmena JM, Crist R, Lehew G, Kralik JD, Wise SP (2003) Chronic, multisite, multielectrode recordings in macaque monkeys. *Proc Natl Acad Sci* 100:11041-6
Harrison RR, Watkins PT, Kier RJ, Lovejoy RO, Black DJ, Greger B, Solzbacher F (2007) A Low-Power Integrated Circuit for a Wireless 100-Electrode Neural Recording System. *IEEE J Solid-State Circuits* 42(1):123-33.
Ganguly K, Carmena JM (2009) Emergence of a stable cortical map for neuroprosthetic control. *PLoS Bio* 7(7):1-13.
Turner JN, Shain W, Szarowski DH, Andersen M, Martins S, Isaacson, M, Craighead H (1999) Cerebral astrocyte response to micromachined silicon implants. *Experimental Neurology* 156:33-49.
Polikov VS, Tresco PA, Reichert WM (2005) Response of brain tissue to chronically implanted neural electrodes. *J Neurosci Met* 148:1-18.
Chestek CA, et al. (2011) Long-term stability of neural prosthetic control signals from silicon cortical arrays in rhesus macaque motor cortex. *J Neural Engineering* 8:1-11.
Suner S, Fellows MR, Vargas-Irwin C, Nakata GK, Donoghue JP (2005) Reliability of signals from a chronically implanted, silicon-based electrode array in non-human primate primary motor cortex. *IEEE Trans on NeuralSRE* 13(4):524-41.
Alivisatos AP, et al. (2013) The Brain Activity Map. *Science* 339:1284-5.
Alivisatos AP, et al. (2013) Nanotools for neuroscience and brain activity mapping. *ACS Nano* 7(3):1850-66.
Press Release: whitehouse.gov/infographics/brain-initiative (2013)
Buzsaki G (2004) Large-scale recording of neuronal ensembles. *Nat Neurosci* 7(5):446-51.
Xie C, Lin Z, Hanson L, Cui Y, Cui B (2012) Intracellular recording of action potentials by nanopillar electroporation. *Nat Nanotech* 7:185-90.
Du J, Riedel-Kruse IH, Nawroth JC, Roukes ML, Laurent G, Masmanidis SC (2009) High-resolution three-dimensional extracellular recording of neuronal activity with microfabricated electrode arrays. *J Neurophysiol* 101:1671-8.
Zamft BM, Marblestone AH, Kording K, Schmidt D, Martin-Alarcon D, Tyo K, Boyden ES, Church G (2012) Measuring Cation Dependent DNA Polymerase Fidelity Landscapes by Deep Sequencing. *PLoS One* 7(8):1-10.
Ziv Y, Burns LD, Cocker ED, Hamel EO, Ghosh KK, Kitch LJ, El Gamal A, Schnitzer MJ (2013) Long-term dynamics of CA1 hippocampal place codes. *Nat Neurosci* 16:264-6.
Cardin JA, Carlen M, Meletis K, Knoblich U, Zhang F, Deisseroth K, Tsai L, Moore CI (2010) Targeted optogenetic stimulation and recording of neurons in vivo using cell-type-specific expression of Channelrhodopsin-2. *Nat Prot* 5:247-54.
Filonov GS, Krumholz A, Xia J, Yao J, Wang LV, Verkhusha VV (2012) Deep-tissue photoacoustic tomography of a genetically encoded near-infrared fluorescent probe. *Angewandte Chemie* 51:1448-51.
Marblestone AH, Zamft BM, Maguire YG, Shapiro MG, Cybulski T, Glaser JI, Stranges PB, Kalhor R, Dalrymple DA, Seo D, Alon E, Maharbiz MM, Carmena JM, Rabaey JM, Boyden ES, Church GM, Kording KP (2013) Physical Principles for Scalable Neural Recording. *arXiv:1306.5709 \[q-bio.NC\]*.
Muller R, Gambini S, Rabaey JM (2012) A 0.013 mm$^2$ 5$\mu$W DC-coupled neural signal acquisition IC with 0.5 V supply. *IEEE J Solid-State Circuits* 47(1):232-43.
Seymour JP, Kipke DR (2006) Fabrication of polymer neural probes with sub-cellular features for reduced tissue encapsulation. *IEEE EMBS Conf* 4606-9.
Marin C, Fernandez E (2010) Biocompatibility of intracortical microelectrodes: current status and future prospects. *Front Neuroeng* 3:1-6.
Rabaey JM, et al. (2011) Powering and communicating with mm-size implants. *IEEE DATE Conf* 1-6.
Sodagar AM, Amiri P (2009) Capacitive coupling for power and data telemetry to implantable biomedical microsystems. *IEEE EMBS Conf* 411-4.
Lee SB, Lee H, Kiani M, Jow U, Ghovanloo M (2010) An inductively powered scalable 32-channel wireless neural recording system-on-a-chip for neuroscience applications. *IEEE Trans BioCAS* 4(6):360-71.
Yakovlev A, Kim S, Poon A (2012) Implantable biomedical devices: Wireless powering and communication. *IEEE Comm Mag* 50(4):152-9.
Clark GM (2003) Cochlear implants: fundamentals and applications. *New York: Springer-Verlag*.
IEEE (2006) C95.1-2005 IEEE Standard for Safety Levels with Respect to Human Exposure to Radio Frequency Electromagnetic Fields, 3 kHz to 300 GHz.
Salim A, Baldi A, Ziaie B (2003) Inductive link modeling and design guidelines for optimum power transfer in implantable wireless microsystems. *IEEE EMBS Conf* 3368-71.
Fotopoulou K, Flynn BW (2011) Wireless power transfer in loosely coupled links: Coil misalignment model. *IEEE Trans Magnetics* 47(2):416-30.
Ishida K, et al. (2013) Insole Pedometer With Piezoelectric Energy Harvester and 2 V Organic Circuits. *IEEE J Solid-State Circuits* 48(1):255-64.
Wong SH, Kupnik M, Butts-Pauly K, Khuri-Yakub BT (2007) Advantages of Capacitive Micromachined Ultrasonics Transducers (CMUTs) for High Intensity Focused Ultrasound (HIFU). *IEEE Ultrasonics Symp*:1313-6.
Ozeri S, Shmilovitz D (2010) Ultrasonic transcutaneous energy transfer for powering implanted devices. *Ultrasonics* 50(6):556-66.
Richards CD, Anderson MJ, Bahr DF, Richards RF (2004) Efficiency of energy conversion for devices containing a piezoelectric component. *J Micromech Microeng* 14:717-21.
Rosen CA, Fish KA, Rothenberg HC (1958) Electromechanical Transducer. US patent no. 2830274.
Hoskins PR, Martin K, Thrush A, editors (2010) Diagnostic Ultrasound: Physics and Equipment. *New York: Cambridge University Press*.
Leighton TG (2007) What is ultrasound? *Progress Biophysics & Molecular Biology* 93:3-83.
FDA (2008) Information for Manufacturers Seeking Marketing Clearance of Diagnostic Ultrasound Systems and Transducers.
Tufail Y, Yoshihiro A, Pati S, Li MM, Tyler WJ (2011) Ultrasonic neuromodulation by brain stimulation with transcranial ultrasound. *Nat Prot* 6(9):1453-70.
King RL, Brown JR, Newsome WT, Butts-Pauly K (2012) Effective Parameters For Ultrasound-Induced In Vivo Neurostimulation. *Ultrasound Med & Bio* 39(2):312-31.
Foley JL, Little JW, Vaezy S (2007) Image-guided high-intensity focused ultrasound for conduction block of peripheral nerves. *Annals Biomed Engineering* 35(1):109-19.
Krasovitski B, Frenkel V, Shoham S, Kimmel E (2011) Intramembrane cavitation as a unifying mechanism for ultrasound-induced bioeffects. *Proc Natl Acad Sci* 108(8):1-6.
Tyler WJ, Tufail Y, Finsterwald M, Tauchmann ML, Olson EJ, Majestic C (2008) Remote excitation of neuronal circuits using low-intensity, low-frequency ultrasound. *PLoS One* 3(10):1-11.
Hameroff S, et al. (2013). Transcranial Ultrasound (TUS) effects on mental states: A pilot study. *Brain Stim* 6:409-15.
Tsui P, Wang S, Huang C (2005) In vitro effects of ultrasound with different energies on the conduction properties of neural tissue. *Ultrasonics* 43:560-65.
Zhou Y (2011) High intensity focused ultrasound in clinical tumor ablation. *World J Clin Oncol* 2(1):8-27.
Shung KK, Cannata JM, Zhou QF (2007) Piezoelectric materials for high frequency medical imaging applications: A review. *J Electroceram* 19:139-45.
Zenner HP, et al. (2000) Human studies of a piezoelectric transducer and a microphone for a totally implantable electronic hearing device. *Am J Otol* 21(2):196-204.
Maleki T, Cao N, Song S, Kao C, Ko SA, Ziaie B (2011) An ultrasonically powered implantable micro-oxygen generator (IMOG). *IEEE Trans BioE* 58(11):3104-11.
Przybyla RJ, Shelton SE, Guedes A, Izyumin II, Kline MH, Horsley DA, Boser BE (2011) In-air rangefinding with an aln piezoelectric micromachined ultrasound transducer. *IEEE Sensors J* 11(11):2690-7.
Krimholtz R, Leedom DA, Matthaei GA (1970) New equivalent circuits for elementary piezoelectric transducers. *Electronics Lett* 6(13):398-9.
Roa-Prada S, Scarton HA, Saulnier GJ, Shoudy DA, Ashdown JD, Das PK, Gavens AJ (2013) An Ultrasonic Through-Wall Communication (UTWC) System Model. *J Vib Acoust* 135(1):1-12.
Holland R (1968) Resonant properties of piezoelectric ceramic rectangular parallelepipeds. *J Acoust Soc Am* 43(5):988-97.
Baughman RH, Shacklette JM, Zakhidov AA, Stafstrom S (1998) Negative Poisson’s ratios as a common feature of cubic metals. *Nature* 392:362-5.
Aleshin VI, Raevski IP (2012) Negative Poisson’s ratio and piezoelectric anisotropy of tetragonal ferroelectric single crystals. *J Appl Phys* 112:1-8.
Mills DM, Smith SW (2002) Multi-layered PZT/polymer composites to increase signal-to-noise ratio and resolution for medical ultrasound transducers part II: Thick film technology. *IEEE Ultrasonics* 49(7):1005-14.
Kodandaramaiah SB, Franzesi GT, Chow BY, Boyden ES, Forest CR (2012) Automated whole-cell patch-clamp electrophysiology of neurons in vivo. *Nat Met* 9:585-7.
Robinson JT, Jorgolli M, Park H (2013) Nanowire electrodes for high-density stimulation and measurement of neural circuits. *Front Neural Circuits* 7:1-5.
Yao J, Yan H, Lieber CM (2013) A nanoscale combing technique for the large-scale assembly of highly aligned nanowires. *Nat Nanotech* 8:329-35.
Belitski A, et al. (2008) Low-frequency local field potentials and spikes in primary visual cortex convey independent visual information. *J Neurosci* 28(22):5696-709.
Gold C, Henze DA, Koch C (2007) Using extracellular action potential recordings to constrain compartmental models. *J Comput Neurosci* 23:39-58.
Du J, Blanche TJ, Harrison RR, Lester HA, Masmanidis SC (2011) Multiplexed, high density electrophysiology with nanofabricated neural probes. *PLoS One* 6(10):1-11.
Steyaert MSJ, Sansen WM, Zhongyuan C (1987) A micropower low-noise monolithic instrumentation amplifier for medical purposes. *IEEE J Solid-State Circuits* 22(6):1163-8.
Weinstein R (2005) RFID: a technical overview and its application to the enterprise. *IEEE IT Pro* 27:33.
Finkenzeller K (2003) RFID Handbook: Fundamentals and Applications in Contactless Smart Cards and Identification. *Wiley, New York*.
Yazicioglu RF, Kim S, Torfs T, Kim H, Van Hoof C (2011) A 30 $\mu$W Analog Signal Processor ASIC for Portable Biopotential Signal Monitoring. *IEEE J Solid-State Circuits* 46(1):209-23.
Fan Q, Huijsing J, Makinwa K (2012) A capacitively coupled chopper instrumentation amplifier with a $\pm$30V common-mode range, 160dB CMRR and 5$\mu$V offset. *IEEE ISSCC* 374-6.
Sanni A, Vilches A, Toumazou C (2012) Inductive and Ultrasonic Multi-Tier Interface for Low-Power, Deeply Implantable Medical Devices. *IEEE Trans BioCAS* 6(4):297-308.
Otis B, Chee YH, Rabaey JM (2005) A 400 $\mu$W-RX, 1.6 mW-TX super-regenerative transceiver for wireless sensor networks. *IEEE ISSCC* 396-7.
Proakis JG (2000) Digital Communication. *McGraw-Hill*.
Sillon N, Astier A, Boutry H, Di Cioccio L, Henry D, Leduc P (2008) Enabling technologies for 3D integration: From packaging miniaturization to advanced stacked ICs. *IEEE Elect Dev Meeting* 1-4.
Smith B, Kwok P, Thompson J, Mueller A, Racz L (2010) Demonstration of a Novel Hybrid Silicon-Resin High Density Interconnect (HDI) Substrate. *IEEE Proc Elect Comp Tech Conf* 816-21.
Kim T, et al. (2013) Injectable, Cellular-Scale Optoelectronics with Applications for Wireless Optogenetics. *Science* 340:211-6.
Sadek AS, Karabalin RB, Du J, Roukes ML, Koch C, Masmanidis SC (2010) Wiring nanoscale biosensors with piezoelectric nanomechanical resonators. *Nano Lett* 10:1769-73.
Lin Y, Li S, Ren Z, Nguyen CTC (2005) Low phase noise array-composite micromechanical wine-glass disk oscillator. *IEEE Elec Dev Meeting* 1-4.
Trolier-McKinstry S, Muralt P (2004) Thin film piezoelectrics for MEMS. *J Electroceram* 12:7-17.
---
author:
- |
Ruy Luiz Milidiú\
Department of Computer Science\
Pontifical Catholic University of Rio de Janeiro\
Rio de Janeiro, Brazil\
`[email protected]`\
Luis Felipe Müller\
Department of Computer Science\
Pontifical Catholic University of Rio de Janeiro\
Rio de Janeiro, Brazil\
`[email protected]`\
bibliography:
- 'references.bib'
title: 'SeismoGlow - Data augmentation for the class imbalance problem'
---
---
abstract: 'We investigate the accuracy of several schemes to calculate ground-state correlation energies using the generator coordinate technique. Our test-bed for the study is the $sd$ interacting boson model, equivalent to a 6-level Lipkin-type model. We find that the simplified projection of a triaxial generator coordinate state using the $S_3$ subgroup of the rotation group is not very accurate in the parameter space of the Hamiltonian of interest. On the other hand, a full rotational projection of an axial generator coordinate state gives remarkable accuracy. We also discuss the validity of the simplified treatment using the extended Gaussian overlap approximation (top-GOA), and show that it works reasonably well when the number of bosons is four or larger.'
address:
- '$^1$Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan '
- |
$^2$ Institute for Nuclear Theory and Department of Physics,\
University of Washington, Seattle, WA 98195
- |
$^3$Institut de Physique Nucléaire, IN2P3-CNRS,\
Université Paris-Sud, F-91406 Orsay Cedex, France
- |
$^4$ Institut für Theoretische Physik II, Universität Erlangen-Nürnberg,\
Staudtstrasse 7, D-91058 Erlangen, Germany
author:
- 'K. Hagino,$^{1,3}$ G.F. Bertsch,$^{2,3}$ and P.-G. Reinhard$^{4}$'
title: Generator Coordinate Truncations
---
INTRODUCTION
============
The self-consistent mean-field theories with a phenomenological nucleon-nucleon interaction have enjoyed success in describing ground-state properties of a wide range of atomic nuclei with only a few adjustable parameters (see Ref. [@BHR03] for a recent review). They are now at a stage where the ground-state correlations beyond the mean-field approximation have to be taken into account seriously. This is partly because much more accurate calculations have been increasingly required in recent years, owing to the experimental progress in the production of nuclei far from the stability line, where the ground-state correlation beyond the mean-field approximation may play an important role. The major part of the correlations produces effects which have smooth trends with proton and neutron number. These are already incorporated into the energy functionals of effective mean-field models such as Skyrme-Hartree-Fock or the relativistic mean-field model. However, the correlations associated with low-energy modes show strong variations with shell structure, and cannot be contained in a smooth energy-density functional. This concerns the low-energy quadrupole vibrations and all zero-energy modes associated with symmetry restoration. In fact, the correlation effects appear most dramatically for these symmetry modes, namely the center-of-mass localization, the rotational symmetry, and the particle-number conservation. These correlation effects must be taken into account explicitly in order to develop a global theory which can be extrapolated to the drip-line regions.
There are many ways that correlation energies can be calculated. In Ref. [@HB00], we investigated a method which uses the random phase approximation (RPA). We found that the RPA provides a useful correlation energy around spherical as well as well-deformed configurations, but it fails badly around the phase-transition point between the spherical and deformed regimes. Because of this defect, the RPA approach does not seem the best method for a global theory. Recently, we have developed an alternative method, called the top-GOA, to calculate the ground-state correlation energies based on the generator coordinate method [@HRB02]. This is a generalization of the Gaussian Overlap Approximation (GOA) obtained by properly taking into account the topology of the generator coordinate [@R78]. This method can be easily applied to the variation after projection (VAP) scheme, where the energy is minimized after the mean-field wave function is projected onto the eigenstates of the symmetry [@RS80]. We have tested this method on the three-level Lipkin model, which consists of one vibrational and one rotational degree of freedom [@HRB02], and have confirmed that the method provides an efficient computational means to calculate ground-state correlation energies for the full range of coupling strengths.
In this paper, we continue our study of the correlation energies using a model which contains the full degrees of freedom of quadrupole motion. To this end, we use the $sd$ interacting boson model (IBM) [@IBM; @GK80], which may be viewed as a 6-level extension of the Lipkin model [@LMG65]. The IBM is particularly tailored for the description of the low-lying collective modes, and thus provides a good testing ground for the present studies of correlations. In realistic systems, treating all the five quadrupole degrees of freedom is a difficult task in many aspects. Even if one restricts oneself to the rotational degrees of freedom, one in general has to deal with integrals over the three Euler angles, $\phi$, $\theta$, and $\chi$. The full triaxial projection is still too costly, since a number of rotated wave functions may be required in order to get a converged result. Also, the top-GOA scheme for triaxial nuclei is not as simple as in the three-level Lipkin model, because one has to take into account properly the coupling among the three Euler angles. How can one overcome these difficulties? We shall study here two approximate projection methods. One is the approximate angular momentum projection proposed by Bonche [*et al.*]{} [@BDFH91], which uses the $S_3$ subgroup of the rotation group. With this approximation, one needs only five rotated wave functions. The other scheme which we consider is the axially symmetric approximation, where the energy is minimized with respect to the deformation $\beta$ only, setting the triaxiality $\gamma$ equal to zero. With this approximation, the integrations over the $\phi$ and $\chi$ angles become unnecessary, reducing the projection to a one-dimensional integral over $\theta$. The axially symmetric approximation has been widely used in mean-field calculations [@V73; @GRT90], where it seems reasonable given that most nuclei do not have a static triaxial ground state. 
However, it is not obvious whether the approximation remains valid when the fluctuations around the mean-field configuration are included, especially when the deformation is small.
The paper is organized as follows. In Sec. II, we set up the model Hamiltonian and discuss several approaches. These include the mean-field approximation, the full triaxial angular momentum projection and its approximation, the axially symmetric approximation, and the top-GOA for the axial projection. In Sec. III, we compare these schemes with the exact solutions of the Hamiltonian obtained from the matrix diagonalization. We especially focus on the feasibility of each method in realistic systems. We then summarize the paper in Sec. IV.
$sd$ boson Hamiltonian
======================
Consider an $N$-boson system whose Hamiltonian is given by $$H=H_0+V=\epsilon\sum_{\mu}d_{\mu}^{\dagger}d_{\mu}
-\frac{1}{2}\sum_\mu Q_{\mu}^{\dagger}Q_{\mu}.
\label{bosonHam}$$ The first term expresses the single-particle Hamiltonian $H_0$, while the second term is the residual quadrupole-quadrupole interaction. The quadrupole operator $Q_\mu$ is defined as $$Q_\mu=\lambda_1(s^\dagger\tilde{d}_\mu + d_\mu^\dagger s)
+ \lambda_2[d^\dagger \tilde{d}]^{(2\mu)},$$ where $\tilde{d}_\mu=(-)^\mu\,d_{-\mu}$. When $\lambda_1=\lambda_2=0$ and $\epsilon > 0$, the ground state is the $s$-boson condensed state, whose wave function is given by $(s^\dagger)^N/\sqrt{N!}\,|\,\rangle$. For a finite value of $\lambda_1$ and $\lambda_2$, the Hamiltonian may be diagonalized using the number basis given by, $$|\{n\}\rangle = |n_s\,n_{d_{-2}}\,n_{d_{-1}}\,n_{d_0}\,n_{d_1}\,n_{d_2}\rangle,$$ taking only the configurations satisfying $$\begin{aligned}
n_s+n_{d_{-2}}+n_{d_{-1}}+n_{d_{0}}+n_{d_{1}}+n_{d_{2}}&=&N, \label{number}\\
-2n_{d_{-2}}-n_{d_{-1}}+n_{d_{1}}+2n_{d_{2}}&=&0. \label{zproj}\end{aligned}$$ The first condition, (\[number\]), constrains the boson number, while the second equation, (\[zproj\]), is the condition that the $z$ component of the angular momentum is zero. With these constraints, the basis has a dimension of 5 for $N=2$, 18 for $N=4$, and 203 for $N=10$. We are going to compare the exact solutions obtained in this way with results of the collective treatment based on the mean-field approximation plus angular momentum projection.
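These dimensions are easy to cross-check by directly enumerating the configurations that satisfy the constraints (\[number\]) and (\[zproj\]). The following Python sketch (an illustration of ours, not part of the original calculation) counts the $M=0$ basis states for a given boson number $N$:

```python
from itertools import product

def m0_basis_dim(N):
    """Count configurations |n_s n_{d-2} n_{d-1} n_{d0} n_{d1} n_{d2}>
    with N bosons in total and vanishing z-projection of the angular
    momentum, i.e. the two constraints quoted in the text."""
    dim = 0
    for nd in product(range(N + 1), repeat=5):   # (n_{d-2}, ..., n_{d2})
        if sum(nd) > N:                          # n_s = N - sum(nd) must be >= 0
            continue
        m = -2 * nd[0] - nd[1] + nd[3] + 2 * nd[4]
        if m == 0:
            dim += 1
    return dim

# Reproduces the dimensions quoted in the text:
# m0_basis_dim(2) -> 5, m0_basis_dim(4) -> 18, m0_basis_dim(10) -> 203
```

Within this basis the Hamiltonian is a small dense matrix, so the exact diagonalization used below as the benchmark is trivial.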
Mean-field approximation
------------------------
We first solve the Hamiltonian in the mean-field approximation. To this end, we consider an intrinsic deformed mean-field state given by [@GK80] $$|\beta\gamma\rangle=\frac{1}{\sqrt{N!}}(b^\dagger)^N\,|\,\rangle,
\label{intrinsic}$$ where the deformed boson operator is defined as $$b^\dagger=\frac{1}{\sqrt{1+\beta^2}}\left(s^\dagger + \beta\cos\gamma\,
d_0^\dagger + \frac{\beta}{\sqrt{2}}\sin\gamma\,(d_2^\dagger+d_{-2}^\dagger)
\right).$$ The parameter $\beta$ accounts for the global deformation and $\gamma$ for triaxiality. The deformation energy surface then reads [@GK80] $$\begin{aligned}
E_{\rm MF}(\beta,\gamma)&=&\langle \beta\gamma|H|\beta\gamma\rangle, \\
&=&
\epsilon\,\frac{N\beta^2}{1+\beta^2} \nonumber \\
&&-\frac{1}{2}\,\frac{N}{(1+\beta^2)^2}\,\lambda_1^2
\left\{(1+\beta^2)\left(5+
\left(1+\frac{\lambda_2^2}{\lambda_1^2}\right)\beta^2\right)\right.
\nonumber \\
&&\left.
+(N-1)\left(4\beta^2-\sqrt{\frac{32}{7}}\,\frac{\lambda_2}{\lambda_1}\,\beta^3
\cos 3\gamma+\frac{2}{7}\,\frac{\lambda_2^2}{\lambda_1^2}\,\beta^4\right)
\right\}. \label{emf}\end{aligned}$$ One finds that the energy minimum appears on the prolate side ($\beta > 0, \gamma=0$) when $\lambda_2/\lambda_1 < 0$, while it is on the oblate side ($\beta > 0, \gamma=\pi/3$) for $\lambda_2/\lambda_1 > 0$. When $\lambda_2$ is zero, the energy surface is independent of $\gamma$, corresponding to the $\gamma$-unstable case.
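As an illustration (a sketch of ours, not the authors' code), the closed formula (\[emf\]) is simple to evaluate numerically, and a crude grid minimization over $\beta$ reproduces its limiting behaviors: the $\gamma$-independence for $\lambda_2=0$, the $s$-condensate value $-\frac{5}{2}N\lambda_1^2$ at $\beta=0$, and the spherical-to-deformed transition near $\lambda_1\simeq 0.47$ for $N=4$, $\epsilon=1$, $\lambda_2/\lambda_1=-\sqrt{7}/4$:

```python
import math

def e_mf(beta, gamma, N, eps, lam1, lam2):
    """Mean-field energy surface E_MF(beta, gamma) of the deformed
    boson condensate (the closed formula quoted in the text)."""
    b2 = beta ** 2
    kin = eps * N * b2 / (1.0 + b2)
    pot = -0.5 * N / (1.0 + b2) ** 2 * (
        (1.0 + b2) * (5.0 * lam1 ** 2 + (lam1 ** 2 + lam2 ** 2) * b2)
        + (N - 1) * (4.0 * lam1 ** 2 * b2
                     - math.sqrt(32.0 / 7.0) * lam1 * lam2 * beta ** 3 * math.cos(3.0 * gamma)
                     + (2.0 / 7.0) * lam2 ** 2 * beta ** 4))
    return kin + pot

def beta_min(N, eps, lam1, lam2, gamma=0.0):
    """Crude grid minimization of E_MF over beta (sufficient for a sketch)."""
    betas = [0.005 * i for i in range(401)]          # beta in [0, 2]
    return min(betas, key=lambda b: e_mf(b, gamma, N, eps, lam1, lam2))
```

For $\lambda_2<0$ the optimal $\gamma$ is indeed $0$ (prolate), and the optimal $\beta$ jumps from zero to a finite value as $\lambda_1$ crosses the transition point.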
Triaxial Angular Momentum Projection
------------------------------------
When $\beta$ is non-zero, the intrinsic wave function (\[intrinsic\]) is not an eigenstate of the total angular momentum $J$. One can project this state onto the $J=0$ state as [@RS80] $$|\beta\gamma,J=0\rangle
\propto \int d\Omega\,\hat{R}(\Omega)\,|\beta\gamma\rangle=
\int^{2\pi}_0d\phi \int^{2\pi}_0d\chi
\int^{\pi}_0\sin\theta d\theta\, \hat{R}(\phi,\theta,\chi)\,
|\beta\gamma\rangle,
\label{projwf}$$ where $\hat{R}(\Omega)$ is the rotation operator. The corresponding energy is given by $$E_{\rm proj}(\beta,\gamma)
=\frac{\int d\Omega \,\langle\beta\gamma|H\hat{R}(\Omega)|\beta\gamma\rangle}
{\int d\Omega \,\langle\beta\gamma|\hat{R}(\Omega)|\beta\gamma\rangle}.
\label{eproj}$$ Notice that the rotated wave function can be expressed in terms of the rotated boson operator as $$|\beta\gamma\Omega\rangle \equiv \hat{R}(\Omega)\,|\beta\gamma\rangle
= \frac{1}{\sqrt{N!}}(b_R^\dagger)^N\,|\,\rangle,
\label{rotwf}$$ with $$\begin{aligned}
b_R^\dagger\equiv\hat{R}(\Omega)b^\dagger\hat{R}^{-1}(\Omega)
&=&
\frac{1}{\sqrt{1+\beta^2}}\left(s^\dagger + \beta\cos\gamma\,
\sum_mD^2_{m0}(\Omega)d_m^\dagger\right. \nonumber \\
&&\left.+ \frac{\beta}{\sqrt{2}}\sin\gamma\,\sum_m\left(D^2_{m2}(\Omega)
+D^2_{m\,-2}(\Omega)\right)\,d_m^\dagger\right), \end{aligned}$$ where $D^2_{mm'}(\Omega)$ is the Wigner’s $D$ function. The overlaps in the projected energy (\[eproj\]) can be expressed in terms of commutators such as $$\begin{aligned}
[b,b_R^\dagger] &=& \frac{1}{1+\beta^2}\left(
1+\beta^2\cos^2\gamma\,d^2_{00}(\theta)
+\beta^2\sin^2\gamma\,[d^2_{22}(\theta)\cos(2\phi+2\chi)
+d^2_{2-2}(\theta)\cos(2\phi-2\chi)]\right. \nonumber \\
&&\left.\hspace*{1.8cm}+\sqrt{2}\beta^2\sin\gamma\cos\gamma\,
d^2_{20}(\theta)\,[\cos(2\chi)+\cos(2\phi)]\right).\end{aligned}$$ The results are $$\begin{aligned}
I(\Omega)&\equiv&\langle\beta\gamma|\hat{R}(\Omega)|\beta\gamma\rangle =
[b,b_R^\dagger]^N, \\
\frac{H_0(\Omega)}{I(\Omega)}&\equiv&
\frac{
\langle\beta\gamma|H_0\hat{R}(\Omega)|\beta\gamma\rangle}
{\langle\beta\gamma|\hat{R}(\Omega)|\beta\gamma\rangle}
=\epsilon N\left(1-\frac{1}{[b,b_R^\dagger]}\right), \\
\frac{V(\Omega)}{I(\Omega)}&\equiv&
\frac{
\langle\beta\gamma|V\hat{R}(\Omega)|\beta\gamma\rangle}
{\langle\beta\gamma|\hat{R}(\Omega)|\beta\gamma\rangle}
=-\frac{N}{2}\,\frac{1}{[b,b^\dagger_R]}\,
\sum_m\left[[b,Q_m^\dagger],[Q_m,b_R^\dagger]\right] \nonumber \\
&& \hspace*{4.5cm}
-{N(N-1)\over 2 [b,b^\dagger_R]^2}\sum_m\left[b,[Q_m,b_R^\dagger]\right]
\left[[b,Q_m^\dagger],b_R^\dagger\right].
\label{Voverlap}\end{aligned}$$ Here, we have used the relation $$[\hat{A},\hat{B}^N]
=N\hat{B}^{N-1}[\hat{A},\hat{B}]+\frac{1}{2}N(N-1)
\hat{B}^{N-2}\,[[\hat{A},\hat{B}],\hat{B}]+\cdots,$$ for arbitrary operators $\hat{A}$ and $\hat{B}$. We give an explicit expression for the quadrupole commutators, $[Q_m,b_R^\dagger]$ and $[Q_m,b^\dagger]$, in the Appendix.
In practice, one can evaluate the integrals in Eq. (\[eproj\]) as follows. First notice that the integration intervals for the $\chi$ and $\phi$ angles can be reduced from $(0,2\pi)$ to $(0,\pi)$ since the $K$ quantum number is even for the intrinsic state (\[intrinsic\]) [@BD95]. Next, because of the reflection symmetry of the intrinsic wave function (\[intrinsic\]) with respect to the $z$ plane, the integration range for the $\theta$ angle can be reduced to $(0,\pi/2)$. One can then apply the Gauss-Legendre quadrature formula to the $\theta$ integral, and the Gauss-Chebyshev formula to the $\chi$ and $\phi$ integrals [@BD95; @ETY99]. One may also try the simpler Simpson formula. We will check the convergence of these formulas in the next section.
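The reduced triple integral can be sketched in a few lines of Python (our illustration; with one simplification: for the periodic $\phi$ and $\chi$ integrals we use equally spaced midpoints, which are exact for the trigonometric polynomials that occur here and play the role of the Gauss-Chebyshev rule cited above):

```python
import numpy as np

def reduced_integral(f, n_theta=16, n_phi=8, n_chi=8):
    """Evaluate int_0^pi dphi int_0^pi dchi int_0^{pi/2} sin(theta) dtheta
    f(phi, theta, chi): Gauss-Legendre nodes in cos(theta), equally
    spaced midpoints in phi and chi."""
    x, w = np.polynomial.legendre.leggauss(n_theta)
    cos_t = 0.5 * (x + 1.0)          # map (-1, 1) -> cos(theta) in (0, 1)
    w_t = 0.5 * w
    phis = (np.arange(n_phi) + 0.5) * np.pi / n_phi
    chis = (np.arange(n_chi) + 0.5) * np.pi / n_chi
    dphi, dchi = np.pi / n_phi, np.pi / n_chi
    total = 0.0
    for ct, wt in zip(cos_t, w_t):
        theta = np.arccos(ct)
        for phi in phis:
            for chi in chis:
                total += wt * dphi * dchi * f(phi, theta, chi)
    return total
```

In Eq. (\[eproj\]) one would apply this routine to the overlaps $I(\Omega)$ and $H_0(\Omega)+V(\Omega)$ and take the ratio of the two results.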
Approximate Triaxial Projection with Octahedral Group
-----------------------------------------------------
Bonche [*et al.*]{} have considered an approximation to the triaxial angular momentum projection (\[projwf\]) based on the octahedral rotation group, that is, the group formed from permutations of the principal axes of inertia [@BDFH91]. With this representation, the projected wave function (\[projwf\]) is approximated as $$|\beta\gamma,J=0\rangle\approx
\sum_{i=1}^{24} \hat{S}_i |\beta\gamma\rangle,$$ where $\hat{S}_i$ are the 24 elements of the octahedral group. In our case with states even under parity, the octahedral group is reduced to $S_3$, the group of permutations of three objects (the $x,y,z$ axes). The 6 rotations to be treated are [@BDFH91], $$\begin{aligned}
\hat{S}_1&=& \hat{R}(0,0,0)=1, \nonumber \\
\hat{S}_2&=& \hat{R}(\pi,\pi/2,0), \nonumber \\
\hat{S}_3&=& \hat{R}(-\pi/2,-\pi/2,0), \nonumber \\
\hat{S}_4&=& \hat{R}(\pi/2,-\pi/2,\pi/2), \nonumber \\
\hat{S}_5&= &\hat{R}(0,\pi,\pi/2), \nonumber \\
\hat{S}_6&=& \hat{R}(0,\pi,-\pi/2). \end{aligned}$$
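To make the prescription concrete, the norm overlaps $\langle\beta\gamma|\hat{S}_i|\beta\gamma\rangle=[b,b_R^\dagger]^N$ at these six rotations follow directly from the commutator $[b,b_R^\dagger]$ given in the previous subsection. The sketch below (our illustration) implements it with the few Wigner $d^2_{mm'}$ elements that enter:

```python
import math

def d2(m, mp, theta):
    """The Wigner d^2_{m m'}(theta) elements needed for [b, b_R^dagger]."""
    c, s = math.cos(theta), math.sin(theta)
    if (m, mp) == (0, 0):
        return 0.5 * (3.0 * c * c - 1.0)
    if (m, mp) == (2, 0):
        return math.sqrt(3.0 / 8.0) * s * s
    if (m, mp) == (2, 2):
        return ((1.0 + c) / 2.0) ** 2
    if (m, mp) == (2, -2):
        return ((1.0 - c) / 2.0) ** 2
    raise ValueError((m, mp))

def norm_overlap(beta, gamma, N, omega):
    """<beta gamma | R(Omega) | beta gamma> = [b, b_R^dagger]^N."""
    phi, theta, chi = omega
    b2 = beta * beta
    comm = (1.0
            + b2 * math.cos(gamma) ** 2 * d2(0, 0, theta)
            + b2 * math.sin(gamma) ** 2 * (d2(2, 2, theta) * math.cos(2 * phi + 2 * chi)
                                           + d2(2, -2, theta) * math.cos(2 * phi - 2 * chi))
            + math.sqrt(2.0) * b2 * math.sin(gamma) * math.cos(gamma)
            * d2(2, 0, theta) * (math.cos(2 * chi) + math.cos(2 * phi))) / (1.0 + b2)
    return comm ** N

# The six S_3 rotations (phi, theta, chi) listed above
S3 = [(0.0, 0.0, 0.0),
      (math.pi, math.pi / 2, 0.0),
      (-math.pi / 2, -math.pi / 2, 0.0),
      (math.pi / 2, -math.pi / 2, math.pi / 2),
      (0.0, math.pi, math.pi / 2),
      (0.0, math.pi, -math.pi / 2)]
```

A useful sanity check: for an axial state ($\gamma=0$) the overlaps at $\hat{S}_5$ and $\hat{S}_6$ equal one (a rotation by $\pi$ about the symmetry axis direction leaves the state invariant), while $\hat{S}_2$, $\hat{S}_3$, and $\hat{S}_4$ give identical values, as axial symmetry demands.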
Axial Projection
----------------
When the triaxiality $\gamma$ is zero, the $\phi$ and $\chi$ integrals in Eq. (\[projwf\]) become trivial. The triple integral is then reduced to a much simpler single integral with respect to the angle $\theta$. This simplifies the projected energy (\[eproj\]) to $$E_{\rm proj}(\beta)=\frac{\int^1_{-1}\,d(\cos\theta)\,(H_0(\theta)+V(\theta))}
{\int^1_{-1}\,d(\cos\theta)\,I(\theta)},
\label{axial}$$ where the overlaps in this axial approximation read $$\begin{aligned}
I(\theta)&=&\frac{1}{(1+\beta^2)^N}
\left(1+\frac{\beta^2}{2}(2-3\sin^2\theta)\right)^N, \label{overlap}\\
\frac{H_0(\theta)}{I(\theta)}&=&
\epsilon N\cdot \frac{\beta^2(1-\frac{3}{2}\sin^2\theta)}
{1+\beta^2(1-\frac{3}{2}\sin^2\theta)}, \label{h0overlap}\\
\frac{V(\theta)}{I(\theta)}&=&
-\frac{N}{2}\cdot\frac{1}
{[1+\beta^2(1-\frac{3}{2}\sin^2\theta)]^2}\, \nonumber \\
&&\times\left\{\left(1+\beta^2(1-\frac{3}{2}\sin^2\theta)\right)
\left(5\lambda_1^2+(\lambda_1^2+\lambda_2^2)\beta^2
(1-\frac{3}{2}\sin^2\theta)\right)\right. \nonumber \\
&&\left.+(N-1)\beta^2\left(\lambda_1^2(1+3\cos^2\theta)
+\frac{4}{\sqrt{14}}\lambda_1\lambda_2\beta(1-3\cos^2\theta)\right.\right.
\nonumber \\
&&
\hspace*{5.5cm}
\left.\left.+\frac{\lambda_2^2}{14}\beta^2(4-9\sin^2\theta\cos^2\theta)
\right)\right\}. \label{voverlap} \end{aligned}$$ The axially projected energy (\[axial\]) depends, of course, only on the global deformation $\beta$. VAP then means minimizing the projected energy with respect to the deformation parameter $\beta$.
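For illustration (a sketch of ours, not the authors' code), Eqs. (\[overlap\])–(\[voverlap\]) are straightforward to integrate with a Gauss-Legendre rule in $\cos\theta$; since the integrands are polynomials in $\cos\theta$, a modest number of nodes is exact:

```python
import math
import numpy as np

def e_proj_axial(beta, N, eps, lam1, lam2, n_theta=32):
    """Axially projected J=0 energy: the ratio of the theta integrals
    built from the overlaps I(theta), H0(theta), and V(theta) above."""
    x, w = np.polynomial.legendre.leggauss(n_theta)   # x = cos(theta)
    s2 = 1.0 - x ** 2                                 # sin^2(theta)
    b2 = beta ** 2
    u = 1.0 + b2 * (1.0 - 1.5 * s2)                   # (1 + beta^2) [b, b_R^dagger]
    I = (u / (1.0 + b2)) ** N
    h0 = eps * N * b2 * (1.0 - 1.5 * s2) / u
    v = -(0.5 * N / u ** 2) * (
        u * (5.0 * lam1 ** 2 + (lam1 ** 2 + lam2 ** 2) * b2 * (1.0 - 1.5 * s2))
        + (N - 1) * b2 * (lam1 ** 2 * (1.0 + 3.0 * x ** 2)
                          + 4.0 / math.sqrt(14.0) * lam1 * lam2 * beta * (1.0 - 3.0 * x ** 2)
                          + lam2 ** 2 / 14.0 * b2 * (4.0 - 9.0 * s2 * x ** 2)))
    return np.sum(w * I * (h0 + v)) / np.sum(w * I)
```

Minimizing this function over $\beta$ implements the axial VAP scheme. Two built-in checks: at $\beta=0$ the energy reduces to the $s$-condensate value $-\frac{5}{2}N\lambda_1^2$, and for $N=1$ the $J=0$ projection selects the pure $s$ boson, so the result is $-\frac{5}{2}\lambda_1^2$ for any $\beta$.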
Top-GOA for Axial Projection
----------------------------
A further simplification may be achieved using a second-order approach, the extended Gaussian Overlap Approximation (top-GOA). In this scheme, the overlaps are expanded up to second-order derivatives with respect to the generator coordinate while retaining its topology. For the axial projection considered in the previous subsection, the procedure is very similar to that in Ref. [@HRB02] for the three-level Lipkin model. From Eqs. (\[overlap\] – \[voverlap\]), it is clear that a natural choice for the expansion variable is $\sin\theta$. Expanding the overlaps with respect to $\sin\theta$, one obtains $$\begin{aligned}
I(\theta)&\approx& \exp\left(-\frac{3}{2}\,\frac{N\beta^2}{1+\beta^2}
\sin^2\theta\right), \\
\frac{H_0(\theta)+V(\theta)}{I(\theta)}&\approx&
E_{\rm MF}(\beta) + \frac{H_2(\beta)}{2}\sin^2\theta, \end{aligned}$$ where $E_{\rm MF}(\beta)$ is the mean-field energy given by Eq. (\[emf\]) (with $\gamma=0$), and $H_2(\beta)$ is defined as $$H_2(\beta)=\left.\frac{d^2}{d\theta^2}\,
\frac{H_0(\theta)+V(\theta)}{I(\theta)}\right|_{\theta=0}.$$ Note that we have exponentiated the normalization overlap $I(\theta)$ following the idea of the Gaussian overlap approximation [@RG87].
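A minimal numerical sketch of this top-GOA (our illustration; here $H_2(\beta)$ is obtained by a finite-difference second derivative of the axial overlap ratio, an implementation choice of ours rather than of the paper):

```python
import math
import numpy as np

def ratio_axial(theta, beta, N, eps, lam1, lam2):
    """(H0(theta) + V(theta)) / I(theta) from the axial overlap formulas."""
    s2 = math.sin(theta) ** 2
    x2 = math.cos(theta) ** 2
    b2 = beta ** 2
    u = 1.0 + b2 * (1.0 - 1.5 * s2)
    h0 = eps * N * b2 * (1.0 - 1.5 * s2) / u
    v = -(0.5 * N / u ** 2) * (
        u * (5.0 * lam1 ** 2 + (lam1 ** 2 + lam2 ** 2) * b2 * (1.0 - 1.5 * s2))
        + (N - 1) * b2 * (lam1 ** 2 * (1.0 + 3.0 * x2)
                          + 4.0 / math.sqrt(14.0) * lam1 * lam2 * beta * (1.0 - 3.0 * x2)
                          + lam2 ** 2 / 14.0 * b2 * (4.0 - 9.0 * s2 * x2)))
    return h0 + v

def e_top_goa(beta, N, eps, lam1, lam2, n_theta=32, h=1e-3):
    """Top-GOA projected energy: Gaussian norm overlap weighting the
    second-order expansion E_MF + (H2/2) sin^2(theta)."""
    e0 = ratio_axial(0.0, beta, N, eps, lam1, lam2)      # = E_MF(beta, gamma=0)
    h2 = (ratio_axial(h, beta, N, eps, lam1, lam2)
          - 2.0 * e0
          + ratio_axial(-h, beta, N, eps, lam1, lam2)) / h ** 2
    a = 1.5 * N * beta ** 2 / (1.0 + beta ** 2)          # Gaussian width parameter
    x, w = np.polynomial.legendre.leggauss(n_theta)      # x = cos(theta)
    gauss = np.exp(-a * (1.0 - x ** 2))
    num = np.sum(w * gauss * (e0 + 0.5 * h2 * (1.0 - x ** 2)))
    return num / np.sum(w * gauss)
```

For a deformed configuration $H_2<0$, so the rotational correction lowers the energy below the mean-field value $E_{\rm MF}(\beta)$, as it should.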
Numerical Results
=================
Comparison of projection schemes
--------------------------------
The exact ground state of the model Hamiltonian (\[bosonHam\]) and the various integrals needed for the projection schemes are computed numerically by standard methods. Figure 1 compares the exact solution of the Hamiltonian with the several approximations to the triaxial angular momentum projection for $N$=4 and $\epsilon=1$. The interaction strength $\lambda_2$ is set to $\lambda_2/\lambda_1=-\sqrt{7}/4$ for each $\lambda_1$, that is, half the SU(3) value, $(\lambda_2/\lambda_1)_{\rm SU(3)}=-\sqrt{7}/2$ [@E58; @A99]. The top panel of the figure shows the ground-state correlation energy, i.e., the difference between the ground-state and the mean-field energies, as a function of the interaction strength $\lambda_1$. The mean-field energy is obtained by minimizing the energy surface (\[emf\]). The optimum deformation parameter $\beta$ thus obtained is shown by the thin solid line in the middle panel. One sees the phase transition between the spherical and the deformed configurations at $\lambda_1=0.47$. The results of the full triaxial angular momentum projection, obtained by minimizing the projected energy surface (\[eproj\]), are shown by the solid circles in the top panel. These results reproduce the exact results well, indicating that the vibrational contribution is not large in this model. The optimum deformations $\beta$ and $\gamma$ are shown by the thick solid lines in the middle and the bottom panels, respectively. In contrast to the mean-field approximation, the optimum deformation $\beta$ is finite for all values of $\lambda_1$, showing no phase transition [@HRB02]. This is a well-known feature of the variation after projection (VAP) scheme [@RS80]. The dotted line in the figure denotes the results of the approximate triaxial angular momentum projection with the $S_3$ subgroup of the rotation group. This method does not seem to provide enough correlation energy, and the agreement with the exact results is poor over the whole range of $\lambda_1$.
What is the role played by the triaxiality $\gamma$ in these calculations? In order to study this, we show the results of the full axial projection by the dashed line in the figure. These are obtained by minimizing the energy function (\[axial\]), which is equivalent to minimizing (\[eproj\]) while keeping $\gamma=0$. We find that this approximation reproduces the exact solution remarkably well. The result might appear surprising, since the axially symmetric approximation is not expected to work near the spherical configuration, where all five quadrupole degrees of freedom should contribute in a similar way. However, as we have already discussed, the VAP scheme always leads to a well-developed deformation even when the mean-field configuration is spherical (see the middle panel), and such a “dangerous” region can be avoided. Moreover, even though the optimum deformation can be small when the interaction strength is very small, this is an irrelevant case since the correlation effect is small there. Figure 2 shows the projected energy surface $E_{\rm proj}(\beta,\gamma)$, measured with respect to the energy of the pure configuration, $s^4$, at $\lambda_1=0.5$ and $\beta$=0.741 as a function of triaxiality $\gamma$. One sees that the energy gain due to the triaxial deformation is indeed small, consistent with the performance of the axially symmetric approximation shown in fig. 1. We summarize the results for $\lambda_1=0.5$ in Table I.
As a further test of the axially symmetric approximation, we repeat the calculations for $\lambda_2/\lambda_1=0$, that is, the $\gamma$-unstable case. The results are shown in fig. 3, where the meaning of each line is the same as in fig. 1. Note that the optimum triaxiality parameter $\gamma$ in the triaxial angular momentum projection is 30 degrees for all values of $\lambda_1$, reflecting the $\gamma$-unstable nature of the mean-field approximation. In this case, the performance of the axial approximation is not as good as in fig. 1 (see the dashed line). However, it still provides about 80% of the correlation energy at $\lambda_1=1$, and slightly more at smaller values of $\lambda_1$, which may be acceptable even in realistic systems.
We notice here that the axially symmetric approximation is sufficient for $N=2$ irrespective of the value of $\lambda_1$ and $\lambda_2$. From Eqs. (\[projwf\]) and (\[rotwf\]), the (normalized) wave function for $J=0$ state reads $$|\beta\gamma,J=0\rangle =
\frac{1}{\sqrt{2+\frac{2}{5}\,\beta^4}}\left.\left.
\left[
\sqrt{2}\,\frac{s^\dagger s^\dagger}{\sqrt{2}}
+\frac{\beta^2}{5}\left(2\,d_2^\dagger d_{-2}^\dagger-2\,
d_1^\dagger d_{-1}^\dagger
+\sqrt{2}\,\frac{d_0^\dagger d_0^\dagger}{\sqrt{2}}\right)\right]\,
\right|\,\right\rangle,
\label{n=2}$$ for any value of $\gamma$. The projected wave function is thus independent of $\gamma$, and so is the projected energy surface. We also note that the axially symmetric approximation becomes exact in the limit of $N\to\infty$, as was argued by Kuyucak and Morrison using the $1/N$ expansion technique [@KM88]. For $N=2$, the wave function (\[n=2\]) is in fact exact when $\beta$ is optimized. This follows from the observation that there are only two $J=0$ states in the $(sd)^2$ configuration space, and their relative amplitudes can be set by a suitable choice of $\beta$, in the case of attractive interactions. We have checked the trend between the two limits, $N=2$ and large $N$. The influence of triaxiality is found to be strongest around $N=4$, where the correlation effects are also largest. The effect of triaxiality then decreases slowly as the boson number $N$ increases.
Efficient angular momentum projection
-------------------------------------
We next discuss the feasibility of the angular momentum projection. From a computational point of view, it is a costly operation to apply the rotation operator to a mean-field configuration and take overlaps with it. Thus one wants to minimize the number of points in the angular integration mesh. Figure 4 shows the convergence of the angular integrals in the projected energy surface (\[eproj\]) with respect to the number of rotated wave functions $N_{\rm rot}$, for the same parameter set as in fig. 2. Notice that the relations $[H,P_J]=0$ and $(P_J)^2=P_J$ are used in deriving Eq. (\[eproj\]), where $P_J$ is the projection operator. For a finite value of $N_{\rm rot}$, these relations may be violated, and consequently, the numerical formula does not give an upper bound on the energy. The open circles are the results of the Simpson method, while the closed circles are obtained with the Gaussian quadrature formulas (see Sec. II-C). These are for fixed values of deformation parameters $\beta$ and $\gamma$, as indicated in the inset of the figure. The upper panel is for the axial projection, while the lower panel is for the triaxial projection. Note that the former is plotted as a function of $N_{\rm rot}$, while the latter involves the three integrals and is plotted as a function of $(N_{\rm rot})^{1/3}$. For the Simpson method, we exclude the $(\phi,\theta,\chi)=(0,0,0)$ point in counting the number of states $N_{\rm rot}$ on the horizontal axis. This state corresponds to the unrotated state from which the rotated wave functions are constructed, regardless of which quadrature formula one uses. The figure also shows the result of top-GOA and the approximate triaxial projection with the $S_3$ group as a comparison, which correspond to $N_{\rm rot}=1$ and 5, respectively. From the figure, one observes that the convergence for the axial projection is quick if one uses the Gauss-Legendre quadrature formula. The energy is almost converged at $N_{\rm rot}=3$. 
The Simpson method, on the other hand, requires more terms to achieve convergence. For the triaxial projection, a similar convergence is seen for each of the three integrals. However, the required number of rotated wave functions is as large as 27 in total, making the triaxial angular momentum projection with the VAP minimization impractical. The situation is even worse for a larger value of $N$. To demonstrate this, figure 5 shows the results for $N=10$. The convergence is somewhat slower in this system compared with the $N=4$ case. Note that the $N_{\rm rot}$-point Gauss-Legendre formula is exact when the maximum spin in the intrinsic state is $J_{\rm max}=2N_{\rm rot}-2$ [@BD95; @NBT86]. In the present $sd$ model, the maximum spin $J_{\rm max}$ is given by $2N$, and therefore more points are needed to achieve convergence for larger values of $N$.
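The exactness property quoted above, that an $N_{\rm rot}$-point Gauss-Legendre rule is exact up to $J_{\rm max}=2N_{\rm rot}-2$, reflects the standard result that $n$-point Gauss-Legendre quadrature integrates polynomials of degree up to $2n-1$ exactly. A minimal one-dimensional illustration (our own sketch, not part of the original calculation; numpy only):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Integrand: [P_4(x)]^2, a degree-8 polynomial.  By orthogonality its
# integral over [-1, 1] is 2/(2J + 1) with J = 4.
f = lambda x: Legendre.basis(4)(x) ** 2
exact = 2.0 / 9.0

for n in (3, 5):
    x, w = leggauss(n)          # Gauss-Legendre nodes and weights
    print(n, np.dot(w, f(x)))
```

With $n=5$ the degree-8 integrand is reproduced to machine precision (degree $8 \le 2n-1$), while $n=3$ is not.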
Lastly, we discuss the applicability of the top-GOA approach to axial projection (see Sec. II-E). This approach requires only one slightly rotated wave function in order to evaluate the second derivatives. Figure 6 shows the correlation energy for $N=4$ obtained with the top-GOA approximation (the dotted line), and with the full axial projection (the dashed line). The figure also contains the exact solutions for comparison. The upper panel is for $\lambda_2/\lambda_1=-\sqrt{7}/4$, while the lower panel is for $\lambda_2/\lambda_1=0$. We see that the top-GOA approximation reproduces the full projection reasonably well. The performance is somewhat better for $\lambda_2/\lambda_1=-\sqrt{7}/4$. As was discussed in Ref. [@HRB02], the applicability of the top-GOA approach increases quickly for larger values of the boson number $N$. Indeed, the upper panels of figs. 4 and 5 indicate that the agreement between the top-GOA and the exact projection significantly improves when $N=10$.
Summary
=======
We have used the $sd$-interacting boson model to investigate projections in a generator coordinate approach to calculate the ground state correlation energy associated with the quadrupole motion. Our conclusions about the efficiency of various approximations are clear. The full angular momentum projection of a triaxial intrinsic state requires a large number of rotated wave functions, and it is too costly for realistic calculations. On the other hand, we found that the angular momentum projection of an axial intrinsic state provides a useful ground state correlation energy. The axially symmetric approximation is exact for $N=2$ and $N=\infty$. The number of rotated wave functions needed there is of order 4 if one uses the Gauss-Legendre quadrature formula to compute the angular integral. The approximate triaxial projection using the $S_3$ group requires 5 rotated wave functions and still performs rather poorly. We thus conclude that the axial projection provides the most promising method to compute systematically the ground state correlation energy associated with deformation.
In applying any projection or generator coordinate expansion, however, one has to bear in mind that up to now the energy density functional is defined for a single Slater determinant state. It is not designed for a multi-determinantal wave function such as the projected state, and there are ambiguities in calculating the density-dependent interaction energy using the energy functional. Although several recipes have been proposed, they are all subject to a conceptual problem. This difficulty can be avoided in either of the following ways. One is to use the top-GOA approximation, which can be formulated in terms of the expectation values in the mean-field wave function [@HRB02]. We have studied the applicability of the top-GOA with the present model, and have shown that it already gives a reasonable result for $N=4$ and that the performance improves for larger values of $N$. Alternatively, one may also specify the density-dependence in more detail to remove ambiguities. Along these lines, a new form of the Skyrme interaction was recently proposed by Duguet and Bonche [@DB03]. Either way, the axially symmetric approximation leads to a substantial simplification, allowing the angular momentum projection to be performed with only a few Slater determinants and providing a useful means to construct a microscopic global theory for the nuclear binding energy systematics.
acknowledgments {#acknowledgments .unnumbered}
===============
The authors wish to thank H. Flocard, P.-H. Heenen, E. Khan, J. Libert, P. Schuck, Nguyen Van Giai, and N. Vinh Mau for discussions motivating this study. K.H. and G.F.B. also thank the IPN Orsay for their warm hospitality where this work was carried out. G.F.B. also received support from the Guggenheim Foundation and the U.S. Department of Energy. K.H. acknowledges support from the Kyoto University Foundation. P.-G.R. acknowledges support from the Bundesministerium für Bildung und Forschung (BMBF), Project No. 06 ER 808.
Quadrupole commutators
======================
In this Appendix, we give an explicit expression for the quadrupole commutators, $[Q_m,b_R^\dagger]$ and $[Q_m,b^\dagger]$, in Eq. (\[Voverlap\]). For this purpose, it is convenient to use a compact notation for the boson operator, $b_{lm}$, where $b_{00}=s$ and $b_{2m}=d_m$. Using this notation, we express the quadrupole operator $Q_m$ and the rotated boson operator $b_R^\dagger$ as $$Q_m=
\sum_{l_1,m_1}\sum_{l_2,m_2}q^{(m)}_{l_1m_1,l_2m_2}
b_{l_1m_1}^\dagger b_{l_2m_2},
\label{q}$$ and $$b_R^\dagger
=\sum_{l,m}B_{lm}(\Omega)\,b_{lm}^\dagger,
\label{br}$$ respectively. Here, the coefficients $q^{(m)}_{l_1m_1,l_2m_2}$ and $B_{lm}(\Omega)$ are given by, $$\begin{aligned}
q^{(m)}_{00,2m_1}&=&(-)^m\lambda_1\,\delta_{m_1,-m}, \\
q^{(m)}_{2m_1,00}&=&\lambda_1\,\delta_{m_1,m}, \\
q^{(m)}_{2m_1,2m_2}&=&(-)^{m_2}\langle 2\,m_1\,2\,-m_2|2\,m\rangle\,
\lambda_2, \\
B_{00}(\Omega)&=&\frac{1}{\sqrt{1+\beta^2}}\, , \\
B_{2m}(\Omega)&=&
\frac{1}{\sqrt{1+\beta^2}}
\left(\beta\cos\gamma\,D^2_{m0}(\Omega)+\frac{\beta}{\sqrt{2}}
\sin\gamma\,\left(D^2_{m2}(\Omega)
+D^2_{m\,-2}(\Omega)\right)\right). \end{aligned}$$ From Eqs. (\[q\]) and (\[br\]), one finds $$[Q_m,b_R^\dagger] =
\sum_{l_1,m_1}\sum_{l_2,m_2}q^{(m)}_{l_1m_1,l_2m_2}B_{l_2m_2}(\Omega)
\,b_{l_1m_1}^\dagger.
\label{qcommu}$$ The commutator $[Q_m,b^\dagger]$ can be obtained by setting $\Omega=0$ in Eq. (\[qcommu\]). This yields, $$\begin{aligned}
\left[[b,Q_m^\dagger],[Q_m,b_R^\dagger]\right]
&=&
\sum_{l_1,m_1}\sum_{l_2,m_2}\sum_{l_3,m_3}
q^{(m)}_{l_1m_1,l_2m_2}q^{(m)}_{l_1m_1,l_3m_3}
B_{l_2m_2}(\Omega)B_{l_3m_3}(0), \\
\left[b,[Q_m,b_R^\dagger]\right]
&=&
\sum_{l_1,m_1}\sum_{l_2,m_2}
q^{(m)}_{l_1m_1,l_2m_2}B_{l_2m_2}(\Omega)B_{l_1m_1}(0), \\
\left[[b,Q_m^\dagger],b_R^\dagger\right]
&=&
\sum_{l_1,m_1}\sum_{l_2,m_2}
q^{(m)}_{l_1m_1,l_2m_2}B_{l_1m_1}(\Omega)B_{l_2m_2}(0). \end{aligned}$$
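As a numerical sanity check of the coefficients above (our own sketch, not part of the original appendix): since the intrinsic coefficient vector has unit norm, $\cos^2\gamma+\sin^2\gamma=1$, and the Wigner rotation matrix is unitary, the coefficients should satisfy $|B_{00}|^2+\sum_m|B_{2m}(\Omega)|^2=1$ for any Euler angles. The matrix-exponential construction of the spin-2 Wigner matrix and all variable names below are ours:

```python
import numpy as np
from scipy.linalg import expm

def wigner_D(j, alpha, beta, gamma):
    """Wigner rotation matrix D^j_{m m'}, rows/columns ordered m = j, ..., -j,
    built as exp(-i alpha Jz) exp(-i beta Jy) exp(-i gamma Jz)."""
    m = np.arange(j, -j - 1, -1)
    Jz = np.diag(m.astype(complex))
    # ladder operator: <m+1|J+|m> = sqrt(j(j+1) - m(m+1))
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)).astype(complex), k=1)
    Jy = (Jp - Jp.T) / 2j
    return expm(-1j * alpha * Jz) @ expm(-1j * beta * Jy) @ expm(-1j * gamma * Jz)

beta_def, gam = 0.7, 0.3              # deformation parameters (arbitrary values)
alpha, theta, chi = 0.4, 1.1, 2.3     # Euler angles (arbitrary values)
D = wigner_D(2, alpha, theta, chi)
row = {mm: i for i, mm in enumerate(range(2, -3, -1))}   # m -> matrix index

norm = 1.0 / np.sqrt(1.0 + beta_def**2)
B00 = norm
B2 = {m: norm * beta_def * (np.cos(gam) * D[row[m], row[0]]
      + np.sin(gam) / np.sqrt(2) * (D[row[m], row[2]] + D[row[m], row[-2]]))
      for m in range(-2, 3)}
total = abs(B00)**2 + sum(abs(v)**2 for v in B2.values())
print(total)
```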
M. Bender, P.-H. Heenen, and P.-G. Reinhard, Rev. Mod. Phys. [**75**]{}, 121 (2003).
K. Hagino and G.F. Bertsch, Phys. Rev. C[**61**]{}, 024307 (2000); Nucl. Phys. [**A679**]{}, 163 (2000).
K. Hagino, P.-G. Reinhard, and G.F. Bertsch, Phys. Rev. C[**65**]{}, 064320 (2002).
P.-G. Reinhard, Z. Phys. A[**285**]{}, 93 (1978).
P. Ring and P. Schuck, [*The Nuclear Many-Body Problem*]{} (Springer-Verlag, New York, 1980).
F. Iachello and A. Arima, [*The Interacting Boson Model*]{}, (Cambridge University Press, Cambridge, England, 1987).
J.N. Ginocchio and M.W. Kirson, Phys. Rev. Lett. [**44**]{}, 1744 (1980); Nucl. Phys. [**A350**]{}, 31 (1980).
H.J. Lipkin, N. Mechkov, and A.J. Glick, Nucl. Phys. [**62**]{}, 188 (1965); [**62**]{}, 199 (1965); [**62**]{}, 211 (1965).
P. Bonche, J. Dobaczewski, H. Flocard, and P.-H. Heenen, Nucl. Phys. [**A530**]{}, 149 (1991).
D. Vautherin, Phys. Rev. C[**7**]{}, 296 (1973).
Y.K. Gambhir, P. Ring, and A. Thimet, Ann. Phys. (N.Y.) [**198**]{}, 132 (1990).
K. Burzynski and J. Dobaczewski, Phys. Rev. C[**51**]{}, 1825 (1995).
K. Enami, K. Tanabe, and N. Yoshinaga, Phys. Rev. C[**59**]{}, 135 (1999); C[**63**]{}, 044322 (2001).
P.-G. Reinhard and K. Goeke, Rep. Prog. Phys. [**50**]{}, 1 (1987).
J.P. Elliott, Proc. R. Soc. [**A245**]{}, 128 (1958); [**A245**]{}, 562 (1958).
A. Arima, J. Phys. G[**25**]{}, 581 (1999).
S. Kuyucak and I. Morrison, Ann. Phys. (N.Y.) [**181**]{}, 79 (1988); Phys. Rev. C[**36**]{}, 774 (1987).
M.A. Nagarajan, A.B. Balantekin, and N. Takigawa, Phys. Rev. C[**34**]{}, 894 (1986).
T. Duguet and P. Bonche, Phys. Rev. C, in press. e-print: nucl-th/0210057.
Scheme $E-E(s^4)$ $\beta$ $\gamma$ (deg.)
--------------------- ------------ --------- -----------------
Exact $-$0.8193 – –
Triaxial Projection $-$0.8189 0.741 17.64
Axial Projection $-$0.8017 0.723 0.0
: Comparison of the ground state energy $E$ and the optimum deformation parameters $\beta$ and $\gamma$ obtained with several methods. The parameters of the Hamiltonian are taken to be $N=4$, $\epsilon=1$, $\lambda_1=0.5$, and $\lambda_2/\lambda_1=-\sqrt{7}/4$. The energy is measured with respect to that of the pure configuration, $s^4$.
---
abstract: 'The mid-infrared spectral energy distributions (SEDs) of 83 active galaxies, mostly Seyfert galaxies, selected from the extended 12 [$\mu$m]{} sample are presented. The data were collected using all three instruments, IRAC, IRS, and MIPS, aboard the [*Spitzer Space Telescope*]{}. The IRS data were obtained in spectral mapping mode, and the photometric data from IRAC and IRS were extracted from matched, 20$\arcsec$ diameter circular apertures. The MIPS data were obtained in SED mode, providing very low resolution spectroscopy ($R \sim 20$) between $\sim 55$ and 90 [$\mu$m]{} in a larger, 20$\arcsec$ $\times$ 30$\arcsec$ synthetic aperture. We further present the data from a spectral decomposition of the SEDs, including equivalent widths and fluxes of key emission lines; silicate 10 [$\mu$m]{} and 18 [$\mu$m]{} emission and absorption strengths; IRAC magnitudes; and mid-far infrared spectral indices. Finally, we examine the SEDs averaged within optical classifications of activity. We find that the infrared SEDs of Seyfert 1s and Seyfert 2s with hidden broad line regions (HBLR, as revealed by spectropolarimetry or other technique) are qualitatively similar, except that Seyfert 1s show silicate emission and HBLR Seyfert 2s show silicate absorption. The infrared SEDs of other classes within the 12 [$\mu$m]{} sample, including Seyfert 1.8-1.9, non-HBLR Seyfert 2 (not yet shown to hide a type 1 nucleus), LINER, and HII galaxies, appear to be dominated by star-formation, as evidenced by blue IRAC colors, strong PAH emission, and strong far-infrared continuum emission, measured relative to mid-infrared continuum emission.'
author:
- 'J. F. Gallimore, A. Yzaguirre, J. Jakoboski, M. J. Stevenosky, D. J. Axon, S. A. Baum, C. L. Buchanan, M. Elitzur, M. Elvis, C. P. O’Dea, A. Robinson'
- 'draft:'
title: 'Infrared Spectral Energy Distributions of Seyfert Galaxies: Spitzer Space Telescope Observations of the 12 [$\mu$m]{} Sample of Active Galaxies'
---
Introduction
============
Three components dominate the mid-infrared spectrum of an active galaxy: (1) thermal radiation from a dusty, compact medium that surrounds the active galactic nucleus (AGN) and can obscure direct sight-lines to it; (2) PAH features and thermal dust continuum associated with star-formation or perhaps a powerful starburst; and (3) line features arising from molecular, atomic, and ionic species.
The dusty medium surrounding the AGN is commonly referred to as “the dusty torus.” Its presence is inferred for many AGNs based, for example, on spectropolarimetry, X-ray spectroscopy [@2007ApJ...659L.111R; @2002ApJ...571..234R], and infrared aperture synthesis measurements. Observations further indicate that the dusty medium must be axisymmetric, permitting low extinction sight-lines to type 1 AGNs, where the broad-line region (BLR) is apparent in total intensity (Stokes $I$) spectra (the “pole-on” view), but high-extinction sight-lines to type 2 AGNs, which show suppressed broad line emission and AGN continuum (the “edge-on” view).
Since the dusty torus re-radiates incident AGN continuum in the mid-infrared, its SED provides an indirect measure of the AGN luminosity [@2008ApJ...685..160N], important especially for heavily obscured AGNs where other indirect diagnostics may not be available. Mid-infrared fine-structure lines can also constrain the intrinsic shape of the AGN SED, because particular line ratios are sensitive to the shape of the SED but less sensitive to extinction compared to optical/UV lines of species with similar ionization energies [@2000ApJ...536..710A].
The torus SED depends on the geometry and clumpiness of the torus, among other properties [@2002ApJ...570L...9N; @2008ApJ...685..160N]. For example, smooth, cylindrical tori produce very strong 10 [$\mu$m]{} silicate (Sil) features, whether in emission when viewed more nearly pole-on, or deep absorption when viewed edge-on [@1992ApJ...401...99P]. Clumpy tori, somewhat independent of the assumed geometry, instead produce much weaker Sil features, because inter-clump sight-lines can provide a view of hotter dust on the far side of the torus; our view of clumpy tori includes a mix of cold and hot clump surfaces that dilutes the Sil features [@2008ApJ...685..160N]. Individual clumps are heated from outside and cannot produce the strong absorption that a centrally heated dust shell can have [@2008ApJ...678..729S; @2008ApJ...685..147N; @2007ApJ...654L..45L].
The mid-infrared SED is also an important diagnostic of star-formation. Young star clusters embedded in giant molecular clouds (GMCs) are predicted to produce weaker PAH equivalent widths in comparison to older clusters, owing to photodestruction and continuum dilution by hot dust grains [@2000MNRAS.313..734E]. In global models, PAH features are sensitive to the fractional luminosity of OB associations and the gas density of their surrounding ISM. Since the dusty medium re-radiates incident stellar radiation, the mid-far infrared luminosity constrains the luminous contribution of star-formation.
The mid-infrared SED of active galaxies therefore contains diagnostics of the AGN, its surrounding dusty torus, and any surrounding star-formation. The described tracers can, in principle, be disentangled by spectral decomposition of the global SED [@2003MNRAS.343..585F; @2007ApJ...670..129M]. At present, available infrared instruments, at least in their default mode of use, suffer from mismatched apertures, leading to discontinuities in the observed SEDs, or from mismatched coverage of spectral lines that biases line ratios.
To remedy this problem of aperture matching, we have used the [*Spitzer Space Telescope*]{} to observe a sample of active galaxies, mostly Seyferts, in synthetically matched, 20$\arcsec$-diameter apertures spanning $\lambda$3.6-36 [$\mu$m]{} and a larger aperture, $\sim 20\arcsec \times 30\arcsec$, covering $\lambda$55-90 [$\mu$m]{}. These moderate to low resolution SEDs are well-suited for spectral decomposition, studies of the PAH and Sil features, as well as global constraints on the AGN and star-formation contributions to the luminosity.
The observations and data obtained for this survey are presented in this work. Section \[sec:sample\] describes the sample selection. Section \[sec:observations\] provides a detailed account of the observations and data reduction, with attention to artifact correction, synthetic aperture matching, and corrections for extended emission. Section \[sec:analysis\] summarizes the extraction of spectral features using a modified version of the PAHFIT tool [@2007ApJ...656..770S] and measurements based on the line-subtracted SEDs. We conclude in Section \[sec:conclusions\] with a summary of the properties of the sample and avenues for future analysis.
Sample Selection {#sec:sample}
================
The sample, listed in Table \[tab:sample\], comprises a subset of the extended 12 [$\mu$m]{} sample of AGNs [@1993ApJS...89....1R]. This parent sample was defined by an IRAS detection at 12 [$\mu$m]{}, $F_{12}({\rm IRAS}) > 0.22$ Jy, and the color selection $F_{60}({\rm IRAS}) > 0.5 F_{12}({\rm IRAS})$ or $F_{100}({\rm IRAS}) > F_{12}({\rm IRAS})$, designed to remove stars while excluding few galaxies. AGNs were identified based on prior (usually optical) classification. We restricted the sample to include (1) only those objects categorized by @1993ApJS...89....1R as Seyferts or LINERs, and (2) only those sources with $cz < 10,000$ [km s$^{-1}$]{}. Three sources were removed subsequent to observations owing either to pointing errors or saturation in the Spitzer observations: NGC 1068, M-3-34-64, and NGC 4922. Our sample ultimately includes 83 Seyfert and LINER nuclei.
The main advantage of this sample over other AGN catalogs is its large collection of published and archival multiwavelength observations [@1996ApJ...473..130R; @1999ApJ...516..660H; @1999ApJ...510..637H; @2001MNRAS.325..737T; @2000MNRAS.314..573T; @2006AJ....132..546G; @2002ApJ...572..105S]. In addition, there are comparable numbers of Seyfert 1 (S1) and Seyfert 2 (S2) nuclei, and their redshift distributions are statistically indistinguishable [@2001MNRAS.325..737T; @2000MNRAS.314..573T].
The original survey paper of @1993ApJS...89....1R broadly assigned optical classifications of type 1 (broad-line AGN) and type 2 (narrow-line AGN), but @2003ApJ...583..632T pointed out that, even given such coarse binning of activity type, there were many misclassifications. To aid in a more sophisticated comparison of infrared properties to optical classifications, we endeavored to collect updated and more precise classifications from the literature. The revised classifications are included in Table \[tab:sample\] and illustrated in Figure \[fig:sampleclasses\]. We find that 20% (7 / 35) of the Rush et al. Type 1s are re-classified as hidden broad line region (HBLR) Seyfert 2s (S1h or S1i), LINER, or HII (star-forming galaxy), and 28% (13 / 46) of the Rush et al. Type 2s are re-classified as S1.n, LINER, or HII. Note that HBLRs have been sought in all but three of the 20 S2s in our survey: NGC 1125, E33-G2, and NGC 4968 [@2003ApJ...583..632T].
Observations and Data Reduction {#sec:observations}
===============================
The sample galaxies were observed using all of the instruments of the [*Spitzer Space Telescope*]{} (Program ID 3269, Gallimore, P.I.): the four broadband channels (3.6 [$\mu$m]{}, 4.5 [$\mu$m]{}, 5.8 [$\mu$m]{}, and 8.0 [$\mu$m]{}) of the Infrared Array Camera [IRAC; @2004ApJS..154...10F]; the low resolution gratings of the Infrared Spectrograph [IRS; @2004ApJS..154...18H], operating in spectral mapping mode; and the Multiband Imaging Photometer for Spitzer [MIPS; @2004ApJS..154...25R], operating in SED mode. The resulting SEDs are provided in Figure \[fig:allseds\].
Several sample galaxies were observed as part of other [*Spitzer*]{} programs that made use of different observing strategies; for example, in some cases only single-pointing (“Staring Mode”) IRS spectra are available. We summarize the observations and data reduction techniques both for our observing program and archival [*Spitzer*]{} data.
Spitzer IRAC Observations {#sec:iracobs}
-------------------------
IRAC observations were centered on the NED coordinates of the sample galaxies based on catalog names listed in @1993ApJS...89....1R. A beamsplitter and supporting optics center the target on two detectors simultaneously [@2004ApJS..154...10F], either at 3.6 and 5.8 [$\mu$m]{} or 4.5 and 8.0 [$\mu$m]{}, and so each observation consisted of two pointings at a common orientation of the focal plane relative to the sky. These observations were taken as snapshots with no attempt to mosaic or dither. To guard against saturation, we used the high-dynamic range (HDR) mode, which provides 0.6 s and 12 s integrations at each pointing.
The data were initially processed and calibrated by the IRAC basic calibration data (BCD) pipeline, version S14.0. The pipeline performs basic processing tasks, including bias and dark current subtraction; response linearization for pixels near saturation; flat-fielding based on observations of high-zodiacal background regions; saturation and cosmic-ray flagging, the latter indicated by signal detections more compact than the PSF; and finally flux calibration based on observations of standard stars[^1]. The nominal photometric stability for compact sources is better than 3% in all detectors [@2005PASP..117..978R]. Corrections for extended sources[^2] are accurate to $\sim 10$%, but the contribution of extended emission and the resulting correction is small for most of the sample galaxies. Color-corrections are typically much greater, especially in the 8.0 [$\mu$m]{} channel where in-band PAH emission can result in factors of two or greater corrections.
The data were further processed for remaining artifacts, including cosmic-rays and detector artifacts not corrected by the BCD pipeline. The steps performed for artifact removal and photometric extraction are detailed below in Sections \[sec:cosmicray\]–\[sec:lastiracstep\]. Figure \[fig:iracprocess\] illustrates the effect of our artifact removal techniques.
The IRAC photometry is listed in Table \[tab:iracphot\]. Detailed below, the photometry is presented in the IRAC magnitude system with zero-point flux densities 280.9, 179.7, 115.0, and 64.13 Jy for the 3.6, 4.5, 5.8, and 8.0 [$\mu$m]{} channels, respectively [@2005PASP..117..978R]. The photometry includes corrections for extended emission and color-corrections.
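For reference, conversion to the IRAC magnitude system with the zero points quoted above is straightforward (a trivial sketch; the function and variable names are ours):

```python
import math

# Zero-point flux densities (Jy) quoted in the text, from Reach et al. (2005).
ZERO_POINTS_JY = {3.6: 280.9, 4.5: 179.7, 5.8: 115.0, 8.0: 64.13}

def irac_magnitude(flux_jy, band_um):
    """Corrected flux density (Jy) -> IRAC magnitude: m = -2.5 log10(F / F_0)."""
    return -2.5 * math.log10(flux_jy / ZERO_POINTS_JY[band_um])
```

A source at the zero-point flux density has magnitude zero in that band by construction.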
### Cosmic-ray and bandwidth-effect mitigation {#sec:cosmicray}
All of the images were affected to varying degrees by cosmic-ray and solar proton hits. Moreover, 5.8 and 8.0 [$\mu$m]{} images near saturation suffered from the bandwidth effect, which manifests as a row-wise trail of fading source images repeating every four pixels from the affected source. These multiple images do not conserve flux but artificially add signal to the sources; for our purposes, these bandwidth effect artifacts behave like cosmic ray hits near our target galaxies.
The BCD pipeline attempts to identify cosmic-ray hits by locating detections that are narrower than the PSF. Extended cosmic ray tracks are however missed by this procedure. The v. S14.0 BCD pipeline further has no means of mitigating the bandwidth effect.
We corrected these artifacts in three, post-BCD steps. Firstly, we generated difference maps between the long and short exposure images. Difference signal that exceeded 5$\sigma$ on the long exposure images was flagged as an artifact, and these artifacts were replaced with signal from the short exposure image. The matching mask and uncertainty images were updated accordingly. This technique was particularly successful at removing bandwidth artifacts with little image degradation; the affected image regions are at relatively high signal-to-noise on the short exposure images.
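The long/short difference test can be sketched as follows (a minimal illustration with invented array names; both frames are assumed already calibrated to a common surface-brightness scale, and the mask/uncertainty bookkeeping described above is omitted):

```python
import numpy as np

def replace_artifacts(long_img, short_img, sigma_long, nsigma=5.0):
    """Flag pixels where the long exposure exceeds the short exposure by more
    than nsigma times the long-exposure uncertainty, and replace them with
    the short-exposure values."""
    bad = (long_img - short_img) > nsigma * sigma_long
    return np.where(bad, short_img, long_img), bad
```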
We next applied van Dokkum’s [@2001PASP..113.1420V] algorithm to flag cosmic rays not picked up on difference images. Images are convolved with a Laplacian filter, which enhances sharp edges on image features and effectively identifies cosmic ray tracks. The IRAC point response function (PRF) is however undersampled in all four detectors, and real, compact sources such as field stars or the AGN appear as false positives. These false positives can be filtered by measuring the asymmetry of the detected source, parameter $f_{lim}$ in van Dokkum’s notation. Cosmic ray hits tend to be more asymmetric than the PRF, corresponding to a larger value of $f_{lim}$. Using a few sample images and trial-and-error, we determined lower threshold values for $f_{lim}$ that flag obvious cosmic rays but pass field stars and galaxy images; specifically, we found $f_{lim} = 8$ worked well for the 4.5, 5.8, and 8.0 [$\mu$m]{} images, and $f_{lim} = 10$ for the 3.6 [$\mu$m]{} images.
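The Laplacian-filter step can be sketched as follows. This is a minimal illustration of the edge-detection idea only; the symmetry test on $f_{lim}$ that rejects PSF-like sources, and the iterative refinement of the full algorithm, are omitted, and the kernel normalization and threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

# Discrete Laplacian kernel; positive response at sharp, pixel-scale features.
LAPLACIAN = 0.25 * np.array([[0.0, -1.0, 0.0],
                             [-1.0, 4.0, -1.0],
                             [0.0, -1.0, 0.0]])

def cosmic_ray_candidates(img, noise, snr_lim=5.0):
    """Candidate cosmic-ray pixels from a Laplacian edge filter."""
    lap = convolve(img, LAPLACIAN, mode="mirror")
    lap = np.clip(lap, 0.0, None)          # keep positive (upward) edges only
    return lap / np.maximum(noise, 1e-12) > snr_lim
```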
Finally, the few remaining cosmic rays were flagged interactively by inspection of four-color images. Residual cosmic rays appear as color-saturated pixels in this representation and so were easily identified, flagged, and replaced by bilinear interpolation of neighboring pixels.
### Bias artifacts
Residual bias artifacts affect the pipeline-processed data. Software is currently available at the Spitzer Science Center to mitigate these artifacts where the image comprises mainly compact sources, but the algorithm breaks down in the presence of extended or diffuse emission. The host galaxy is detected for many of the sample AGNs, and so we had to develop new techniques to eliminate these bias artifacts.
The 3.6 and 4.5 [$\mu$m]{} images show the effects of column pull-down and multiplexer bleed (“muxbleed”). Column pull-down is evident as a depressed bias level along columns that run through pixels near saturation. The bias adjustment is nearly constant along a column, but there may be slightly different bias offsets above and below saturated pixels.
One approach is to evaluate the bias depression in source-free regions, but, for some sources, the presence of extended emission over a majority fraction of the array reduces or eliminates valid background regions. We suppressed the diffuse emission by performing row-wise median filtering across pull-down columns and used the median difference to determine the bias correction.
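The pull-down correction can be sketched as follows (our own minimal version; the window half-width and names are illustrative, and the separate offsets above and below the saturated pixels mentioned earlier are not treated):

```python
import numpy as np

def correct_pulldown(img, bad_cols, halfwin=10):
    """For each pulled-down column, estimate the bias depression as the median,
    over rows, of (affected column - row-wise median of neighboring columns),
    then subtract it; the median suppresses smooth, diffuse emission."""
    out = img.astype(float).copy()
    ncols = out.shape[1]
    for c in bad_cols:
        lo, hi = max(0, c - halfwin), min(ncols, c + halfwin + 1)
        neighbors = np.delete(out[:, lo:hi], c - lo, axis=1)
        offset = np.median(out[:, c] - np.median(neighbors, axis=1))
        out[:, c] -= offset
    return out
```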
Muxbleed also affects rows containing pixels near saturation, most evident as a row pull-up, but also impacting the bias level on neighboring rows. The images are read as four separate readout channels that are interlaced every four columns. The bias on each readout channel is affected differently, resulting in a vertical pinstripe pattern over some region of the array near saturated pixels.
To mitigate the muxbleed artifact in the presence of extended emission, we first median-filtered the image using an $8 \times 8$ kernel to smooth out the pinstripe pattern over two readout cycles, subtracted the median-filtered image to remove diffuse emission, and generated residual images of each readout channel. Muxbleed appears on each channel readout image as decaying, horizontal stripes along rows containing pixels near saturation with surrounding bands of weaker, constant bias offset. We determined a final muxbleed model by fitting the brighter muxbleed stripes with a cubic polynomial and determining the median DC-level offset in the surrounding bands.
The 5.8 [$\mu$m]{} images were affected by residual dark current, appearing as a slowly varying surface brightness gradient of the background. This “first frame” artifact results from the sensitivity of the dark current to properties of the previous observation and the time elapsed since that observation. To remove this dark current gradient, we first masked bright stars and diffuse emission from galaxies and then fit a bilinear surface brightness model to the background.
Several 5.8 and 8.0 [$\mu$m]{} images were also affected by banding, which is a decaying signal along rows or columns containing saturated pixels. These bands were reduced by fitting separate row-wise and column-wise polynomials to the surface brightness of off-source regions of the array.
### Saturation and Distortion
Final corrections were performed using custom IDL scripts and the [*MOPEX*]{} software package[^3] [@2005ASPC..347...81M; @2006SPIE.6274E..10M]. The short exposure images were used to replace saturated pixels on the long-exposure images. The images were then corrected for distortion and registered to a 1.22$\arcsec$ $\times$ 1.22$\arcsec$ grid. The resulting data products are the science image, calibrated in surface brightness units MJy sr[$^{-1}$]{}; an uncertainty image based on the propagation of statistical uncertainties through the pipeline and post-BCD processing, but which does not include systematic uncertainties associated with calibration; and a coverage image, which, for these snapshot exposures, marks good pixels as “1,” bad pixels (i.e., known bad pixels or cosmic ray hits) as “0,” and intermediate values indicating that good and bad pixels were used in the distortion correction for that pixel.
### Photometric extraction {#sec:lastiracstep}
We extracted flux density measurements using a synthetic, $20\arcsec$-diameter circular aperture centered on the brightest infrared source associated with the active galaxy. The exception is NGC 1097, which has an off-nucleus star-forming region that is brighter than the central point source; in that case, the aperture was centered on the central point source. To determine the point-source contribution to the aperture, each image was convolved with a 2-D, rotationally symmetric Ricker wavelet [@2006MNRAS.369.1603G]. The width of the central peak of the wavelet was tuned to match the width of the nominal IRAC PRF, effectively subtracting extended emission and enhancing point sources. We used field stars to calibrate the point source response of the wavelet-convolved image; this calibration includes a correction for aperture losses. We next applied extended source corrections[^4] to the residual signal (total $-$ point source).
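The wavelet-based separation of point-source and extended signal can be sketched as below. This is only a schematic illustration: a zero-sum 2-D Ricker kernel suppresses smooth extended emission while preserving PRF-scale peaks; the kernel width and absolute normalization used in the actual processing were calibrated against field stars, and the function names here are ours.

```python
import numpy as np
from scipy.signal import fftconvolve

def ricker2d(sigma, size):
    """2-D Ricker ('Mexican hat') kernel.  The kernel is forced to zero
    sum on the discrete grid, so a constant (or slowly varying) surface
    brightness produces ~zero response."""
    y, x = np.mgrid[-size:size + 1, -size:size + 1]
    r2 = (x**2 + y**2) / (2.0 * sigma**2)
    psi = (1.0 - r2) * np.exp(-r2)
    return psi - psi.mean()

def point_source_map(image, sigma, size=15):
    """Convolve an image with the Ricker kernel, enhancing PRF-scale
    point sources while subtracting extended emission."""
    return fftconvolve(image, ricker2d(sigma, size), mode='same')
```

In this sketch, `sigma` plays the role of the tuned kernel width matched to the nominal IRAC PRF.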
Finally, the resulting photometry was color-corrected following the prescription described in the IRAC Data Handbook. The uncorrected calibration provides a flux density measurement at a nominal wavelength assuming a nominal spectral shape $\nu F_{\nu} = $ constant over the broadband response. The color-correction adjusts the calibration based on the true shape of the spectrum. The result is the true flux density at a nominal wavelength rather than a broadband average; for example, the color-corrected flux reported for the 8 [$\mu$m]{} camera will be closer to the peak of the 7.7 [$\mu$m]{} PAH feature plus any underlying continuum at that wavelength rather than a bandpass-weighted average. For reference, the nominal wavelengths for the IRAC cameras are 3.550, 4.493, 5.731, and 7.872 [$\mu$m]{}.
Color-correction involves integrating the infrared spectrum weighted by the IRAC filter bandpasses. To perform this integration, we used our IRS spectra (Section \[sec:irsobs\], below) with extrapolation to shorter wavelengths assuming a power law slope matching the observed, uncorrected 3.6 [$\mu$m]{} flux density. Note that this color-correction technique does not force a match between the IRS and IRAC photometry; the flux scale of the spectrum is normalized so that only the shape of the IRS spectrum influences the color correction. Photometric corrections for extended sources are not accurately known for the IRS spectra, and so the accuracy of the IRS spectral extractions precluded separating the color corrections for extended and point source contributions for a given source; rather, the color correction was determined based on the integrated signal in the aperture.
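A hedged sketch of the color-correction integral follows. It compares the bandpass-weighted true spectrum to the bandpass-weighted nominal $\nu F_{\nu} = $ constant reference ($F_{\nu} \propto \lambda$), both normalized at the nominal wavelength. The exact convention, bandpass tables, and nominal wavelengths are those of the IRAC Data Handbook; this simplified form is ours.

```python
import numpy as np

def color_correction(wave_um, f_nu, resp, lam0):
    """Simplified color-correction factor: the nominal nu*F_nu = const
    reference spectrum (f_nu proportional to wavelength) returns K = 1
    by construction; other spectral shapes return K != 1."""
    f0 = np.interp(lam0, wave_um, f_nu)               # normalize at lam0
    num = np.trapz((f_nu / f0) * resp, wave_um)       # true spectrum
    den = np.trapz((wave_um / lam0) * resp, wave_um)  # nominal reference
    return num / den
```

Because only the ratio $F_{\nu}/F_{\nu}(\lambda_0)$ enters, the overall flux scale of the IRS spectrum cancels, consistent with the statement above that only the shape of the spectrum influences the correction.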
Archival Spitzer IRAC Observations {#sec:iracarchive}
----------------------------------
IRAC observations of 16 sample galaxies were obtained through public release to the [*Spitzer*]{} archive. Data processing largely followed the techniques described above for our observations, except that, where our observations comprise snapshots, the archival observations all employed dithering or mosaicking techniques. The archival data were initially mosaicked using standard procedures with [*MOPEX*]{}, including background matching based on overlaps.
Artifacts, particularly banding, seriously affected the matching of mosaic overlap regions and so we developed a modified mosaicking procedure building on the algorithm of @1995ASPC...77..335R. We modified their least-squares approach to employ least absolute deviations as a more robust measure of the quality of overlap matching, and we further masked bright source and artifact regions prior to overlap matching. The final images were assembled using a median stack of astrometrically aligned images, background subtracted, and finally sub-imaged to roughly $5\arcmin$ square to match our survey observations.
Residual bias artifacts were removed by computing column or row biases where sufficient background was available for bias determination. Cosmic rays were largely mitigated by the median stack, but residual bad pixels were identified and flagged interactively, as described in Section \[sec:cosmicray\].
Spitzer IRS Spectral Mapping {#sec:irsobs}
----------------------------
We observed the sample galaxies using the [*Spitzer*]{} IRS in spectral mapping mode and using the Short-Low (SL) and Long-Low (LL) modules. The modules cover a wavelength range of $\sim 5$ – 36 [$\mu$m]{} with resolution $R=\lambda / \Delta \lambda$ ranging from 64 to 128. The integration times were 6 seconds per slit pointing.
The spectral maps were constructed to span $> 20\arcsec$ in the cross-slit direction and centered on the target source. The observations were stepped perpendicular to the slit by half slit-width spacings. For the SL data, the mapping involved 13 observations stepped by $1\farcs8$ perpendicular to the slit, and, for the LL data, 5 observations stepped by $5\farcs25$ perpendicular to the slit. The resulting spectral cubes span roughly $25\farcs2 \times 54\farcs6 \times$ (5.3 – 14.2 [$\mu$m]{}) for the SL data and $29\farcs1 \times 151\arcsec \times$ (14.2 – 36 [$\mu$m]{}) for the LL data.
The raw data were processed through the [*Spitzer*]{} BCD pipeline, version S15.3.0. The pipeline handled primary processing tasks including identification of saturated pixels; detection of cosmic ray hits; correction for “droop,” in which charge stored in an individual pixel is affected by the total flux received by the detector array; dark current subtraction; and flat-fielding and response linearization. Details are provided in the IRS Data Handbook[^5].
Sky subtraction used off-source orders. In SL and LL observations, the source is centered in the first or second order separately, with the “off-order” observing the sky at a position offset parallel to the slit: $79\arcsec$ away for SL and $192\arcsec$ away for LL. Sky frames were constructed using median combinations of the off-source data and subtracted from the on-source data of matching order. No detectable contamination of the sky frame appeared in our observations based on inspection of the sky frame data. In addition, a portion of the first order slit spectrum appears on the second order observation and is used as an additional check of spectral features that appear near the first / second order spectral boundary.
Data cubes were constructed by first registering the individual, single-slit spectra onto a uniform grid (slit-position vs. wavelength). The IRS slits do not align with the detector grid, and the detector pixels undersample the spatial resolution. Interpolating the undersampled slit onto a uniform grid produces resampling noise [@1989MNRAS.238..603A], which significantly impacts the spectra of unresolved sources. We instead used the pixel re-gridding algorithm described by @2007PASP..119.1133S that minimizes resampling noise. Pixels from the original image are effectively placed atop the new, registered pixel grid. Uniform surface brightness is assumed across the original pixel, and that original signal is weighted and distributed based on the fractional overlap with pixels on the new grid. After registering the single-slit frames, the data were re-gridded by bilinear interpolation, one wavelength plane at a time, into the final data cube. Flux uncertainty images were similarly processed, with modifications to accommodate variance propagation, to produce an uncertainty cube.
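The fractional-overlap redistribution at the heart of the re-gridding step can be illustrated in one dimension (a sketch under the same uniform-surface-brightness assumption as the text; this is not the actual cube-building code):

```python
import numpy as np

def regrid_overlap(in_edges, in_flux, out_edges):
    """Redistribute per-pixel signal onto a new grid by fractional
    geometric overlap, assuming uniform surface brightness across each
    input pixel.  Total flux is conserved by construction."""
    out = np.zeros(len(out_edges) - 1)
    for i in range(len(in_flux)):
        a, b = in_edges[i], in_edges[i + 1]
        width = b - a
        for j in range(len(out)):
            lo = max(a, out_edges[j])
            hi = min(b, out_edges[j + 1])
            if hi > lo:
                out[j] += in_flux[i] * (hi - lo) / width
    return out
```

Because each input pixel's signal is distributed in proportion to geometric overlap rather than interpolated, no resampling kernel is introduced, which is what minimizes the resampling noise described above.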
The spectra were extracted from synthetic, $20\arcsec$-diameter circular apertures centered on the brightest, compact IR source nearest the target coordinates. Fractional pixels at the edge of the aperture were accounted for by assuming uniform surface brightness across the pixel and weighting by the area of intersection between the pixel and aperture mask.
The cube-extracted spectra were optimally weighted based on a three-dimensional modification of Horne’s [@1986PASP...98..609H] two-dimensional slit extraction algorithm. Successful application of this algorithm requires estimation of spatial profiles $P_{x\lambda}$, the probability that a detected photon falls in a given pixel $x$ rather than some other pixel on the spectral map at wavelength $\lambda$. Optimal extraction requires that the fractional uncertainty of the estimator for $P_{x\lambda}$ is less than the fractional uncertainty of the original spectral image. Generation of $P_{x\lambda}$ therefore requires some smoothing along the wavelength axis. To help preserve real variations of $P_{x\lambda}$ with wavelength, we employed Savitzky-Golay [@1964AnaCh..36.1627S] polynomial smoothing. In this smoothing scheme, $P_{x\lambda}$ is calculated by a polynomial fit to the spectrum at pixel $x$ over the fitting window $\delta\lambda$, either centered on $\lambda$ or limited by the ends of the spectrum. We performed trial-and-error smoothing experiments on CGCG381-051, which shows relatively weak continuum at short wavelengths but high eqw PAH features, to decide on the polynomial order and window smoothing parameters; ultimately we selected quadratic polynomials and 5-pixel smoothing windows to balance improved signal-to-noise and the tracking of real variations of $P_{x\lambda}$ with wavelength.
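The optimal weighting scheme can be sketched as follows, combining Horne-style inverse-variance weights with Savitzky-Golay profile smoothing. This is an illustrative reduction of the three-dimensional algorithm to a single spatial axis; the array shapes and names are ours.

```python
import numpy as np
from scipy.signal import savgol_filter

def optimal_extract(data, var, window=5, polyorder=2):
    """Horne-style optimal extraction for a (n_pix, n_wave) spectral map.
    Profiles P are estimated by Savitzky-Golay smoothing along the
    wavelength axis, clipped non-negative, and normalized per wavelength;
    the optimal spectrum is sum(P*D/V) / sum(P^2/V)."""
    prof = savgol_filter(data, window, polyorder, axis=1)
    prof = np.clip(prof, 0.0, None)
    prof /= prof.sum(axis=0, keepdims=True)
    num = (prof * data / var).sum(axis=0)
    den = (prof**2 / var).sum(axis=0)
    return num / den
```

Here `data` and `var` are (spatial pixel, wavelength) signal and variance arrays, and the quadratic polynomial with a 5-pixel window matches the smoothing parameters adopted in the text.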
The effect of optimal extraction is illustrated in Figures \[fig:optimumplot1\] and \[fig:optimumplot2\], which compare the optimally-extracted and non-weighted extraction of the IRS SL spectrum of NGC 3079 and F01475-0740, and Table \[tab:optnoopt\], which compares the measurements derived from these extractions (see Section \[sec:pahfit\] for a description of the measurement technique). NGC 3079 presents a challenging case because it is edge-on and shows bright, extended PAH emission. The spatial profile is therefore a complex function of wavelength, and coarse smoothing in the wavelength direction could potentially affect the measurement of the PAH fluxes and equivalent widths. We find however that the fractional difference between the optimally-weighted spectrum and the unweighted spectrum is typically only a few %, comparable to the statistical uncertainties in the unweighted spectrum. Furthermore, the measured fluxes and equivalent widths agree to within the measurement uncertainties; in fact, the measurement uncertainties are dominated by the systematics of the measurement technique.
Compared to NGC 3079, F01475-0740 is a compact source at relatively low signal-to-noise. Figure \[fig:optimumplot2\] clearly illustrates the advantage of optimum weighting. The continuum shape and PAH spectral features are preserved, but the formal statistical uncertainties are reduced by a factor of 2–3 over the SL spectral range. Again, the fluxes and equivalent widths of lines agree to within the measurement uncertainties, but the optimally weighted spectrum produced significant ($>3\sigma$) detections of PAH 6.2 [$\mu$m]{}, 14.3 [$\mu$m]{}, and 18.7 [$\mu$m]{} that are too faint for the unweighted spectrum.
Fringing at the 5% – 10% level was apparent in the LL observations, particularly those of sample objects with a strong point source contribution. The spectra undersample the fringing, and so the fringe pattern could not be removed using a conventional filtering in Fourier space. We employed instead the technique described by @2003ESASP.481..375K, which involves fitting sinusoids in wavenumber to line-free sections of the spectrum. Successive fringe components are added to the model until the next model fringe amplitude falls below the noise level of the spectrum. The effect of this technique is illustrated in Figure \[fig:fringeremoval\].
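The fit of a single sinusoidal fringe component can be sketched as follows (hypothetical variable names; in practice, following @2003ESASP.481..375K, the wavenumber frequency is determined from line-free sections and components are added until the next fitted amplitude falls below the noise level):

```python
import numpy as np

def fit_fringe(wavenumber, residual, k_fringe):
    """Least-squares fit of one sinusoidal fringe component of known
    frequency k_fringe (cycles per unit wavenumber) to a
    continuum-subtracted spectrum; returns the fringe model and its
    amplitude, which is compared against the noise level to decide
    whether to keep adding components."""
    A = np.column_stack([np.sin(2 * np.pi * k_fringe * wavenumber),
                         np.cos(2 * np.pi * k_fringe * wavenumber)])
    coef, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return A @ coef, float(np.hypot(*coef))
```

The sine/cosine pair absorbs an arbitrary fringe phase, so only the frequency needs to be supplied.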
The flux scale was calibrated against archival staring and spectral mapping observations of the stars HR7341 (SL) and HR6606 (LL). The staring mode spectra were extracted using SPICE[^6], and the spectral mapping observations were processed into cubes as described above for our sample galaxies. The flux calibration curve was determined by the ratio of the flux-calibrated staring mode spectra to the uncalibrated, cube-extracted spectra and smoothed by a polynomial fit.
No correction was produced for resolved, extended emission. The flux calibration includes an aperture loss correction factor appropriate for compact sources, but extended sources do not suffer as much aperture loss. Therefore, wavelength bands containing emission that is extended relative to the PRF will be calibrated systematically high. As discussed below in Section \[sec:iracirs\], the systematic error introduced is of order 10%, particularly near the 8 [$\mu$m]{} PAH features.
We note that these IRS observations have been independently and differently processed and recently presented in @2009ApJ...701..658W. The main differences in data processing are: (1) Wu et al. used the CUBISM tool and its calibrations for spectral cube construction [@2007PASP..119.1133S]; (2) it is not clear that Wu et al. extracted the spectra optimally; (3) a rectangular aperture, which covers a similar solid angle to our extraction aperture, was commonly used ($20\farcs4 \times 15\farcs3$; larger for SINGS galaxies); (4) there appears to have been no attempt to de-fringe the long wavelength (LL) data; (5) for an unknown number of sources, Wu et al. had to scale the SL data to match up with the flux calibration of the LL data. We address these differences in turn, excepting (3), the minor difference of extraction aperture geometry.
We found that (1) CUBISM forced the data onto a new grid in which the original pointing center was shifted, commonly resulting in nuclei that were centered between pixels rather than on a pixel. This shift apparently introduced a systematic error in the resampling of the surface brightness, because we noticed artifacts and a few percent reduced signal in the spectrum extracted from cubes produced by CUBISM. Our cubing technique was designed to require minimal resampling of the observational grid and mitigated these artifacts.
We also found that (2) non-optimal extractions decreased the signal-to-noise of the extracted spectra of fainter sources by $\sim 50\%$ or more (see Figure \[fig:optimumplot2\] and the discussion above). Although it varies source to source, (4) fringing can produce $\sim 10\%$ artifacts between 14 and 36 [$\mu$m]{} (cf. their extraction of MRK 1239 and M-6-30-15, among others). Finally, (5) we did not need to apply any additional scaling to the spectra extracted from SL cubes; rather our calibrations of the SL module show excellent agreement with both the IRAC measurements and LL measurements, even though the calibrations for each respective [*Spitzer*]{} module were based on different calibration stars. This result suggests that the calibration against spectrally mapped stars, as presented here, was more robust for calibration and systematic corrections, accepting the $\sim 10\%$ calibration uncertainty for sources dominated by extended emission.
Archival IRS Data {#sec:irsarchive}
-----------------
Archival data of our sample galaxies, such as obtained by the SINGS collaboration, were processed in the same manner as for our program data. Spectral mapping data were available in LL for all sources; there are however a few sources for which only SL staring mode observations are available: NGC 526A, NGC 4941, NGC 3227, IC 5063, NGC 7172, and NGC 7314. The staring mode spectra were extracted using SPICE and subjectively scaled to best align with the LL measurements and, where possible, the IRAC 5.8 and 8.0 [$\mu$m]{} measurements. Not surprisingly, disagreements remain between the IRAC measurements and the scaled IRS staring observations, owing to the contribution of the host galaxy in the $20\arcsec$ aperture.
Spitzer MIPS-SED Observations {#sec:mipsobs}
-----------------------------
The sample galaxies were observed using the 70 [$\mu$m]{} low-resolution spectrometer (SED mode) of the MIPS instrument. The spectrometer provides spectral resolution $R\sim 15$–25 between $\lambda$55 and 95 [$\mu$m]{}. The slit dimensions are $20\arcsec \times 120\arcsec$.
Each observation consisted of three pairs of on-source and off-source measurements, where the off-position was located $1\arcmin$ – $3\arcmin$ from the on-source position. Integration times were 3 or 10 seconds depending on the IRAS 60 [$\mu$m]{} flux density of the source. The observations include measurements of built-in calibration light sources, called stimulators, for flux calibration.
We used post-BCD products of the S14.4.0 pipeline; the pipeline processing includes flux calibration, background subtraction, co-addition of the on-source pointings, and uncertainty images. Details are provided in the MIPS data handbook [^7].
The MIPS spectra were extracted by summing across a $29\arcsec$-wide synthetic aperture centered on the target source (three-column extraction). The spectra were further corrected for aperture losses assuming a point source model; the present extractions include no corrections for spatially extended emission. The photometric accuracy is expected to be 10% for compact sources and $\sim 15$% for extended sources [@2008PASP..120..328L].
Note that the MIPS aperture covers a solid angle $\sim 1.8\times$ larger than the IRAC and IRS apertures. Modeling SEDs of sources with extended ($> 20\arcsec$) far-infrared emission will therefore require an (unknown) aperture correction that is potentially much greater than the reported calibration uncertainty. Candidates for MIPS aperture corrections will show a jump in the MIPS flux relative to a suitable extrapolation of the IRS spectrum, although a real spectral peak in the 40–50 [$\mu$m]{} range might similarly result in an observed MIPS SED that appears too blue even in the absence of aperture effects.
Comparison of IRS and IRAC Photometry {#sec:iracirs}
-------------------------------------
The IRS spectra overlap with the 5.8 and 8.0 [$\mu$m]{} IRAC channels. To compare the relative spectrophotometry, we interpolated the IRS spectra to find the flux densities at the effective wavelengths of the color-corrected IRAC data, 5.731 [$\mu$m]{} and 7.872 [$\mu$m]{}, respectively. The average relative difference of the overlapping flux densities, $F_{\nu}({\rm IRS}) / F_{\nu}({\rm IRAC}) - 1$, are $4\pm 1$% at 5.8 [$\mu$m]{} and $6\pm 1$% at 8.0 [$\mu$m]{}. The frequency distributions are shown in Figure \[fig:irsiracoverlap\]. For comparison, the average standard scores, $(F_{\nu}[{\rm IRS}] - F_{\nu}[{\rm IRAC}])/\sigma$, are $0.00\pm 0.03$ and $0.18 \pm 0.05$.
These results indicate that the IRS flux densities are, on average, systematically higher than the IRAC measurements by a few percent. However, the excess at 5.8 [$\mu$m]{} is dominated by statistical uncertainties, as the average standard score is consistent with zero.
Strong, extended PAH emission however contributes significantly to the 8.0 [$\mu$m]{} IRAC channel. This spectral feature is difficult to color-correct accurately owing to numerical integration errors that arise at sharp spectral features. In addition, PAH emission is often spatially resolved in this sample, and the present IRS calibration includes no correction for extended emission. The systematic offset of 6% may result from the overcorrection of aperture losses for extended PAH sources. The IRAC photometry of extended sources is moreover accurate to only $\sim 10$%.
Figure \[fig:eightmicronexcess\] shows the 8 [$\mu$m]{} IRS / IRAC flux ratio as a function of the fractional contribution of the central point source to the $20\arcsec$ nuclear aperture. The fainter sources provide appreciable scatter, but the brighter sources reveal a trend in which point-source dominated objects show better agreement, while more extended objects present $\sim 10$% IRS excesses. This trend supports the interpretation that residual extended source calibration uncertainties are the primary cause for the IRS – IRAC discrepancies at 8 [$\mu$m]{}.
Analysis {#sec:analysis}
========
Spectrum Decomposition {#sec:pahfit}
----------------------
In addition to continuum radiation from dust grains and associated Sil emission and absorption bands, the infrared spectrum of Seyfert galaxies includes diagnostics such as fine structure lines tracing a range of ionization states, [H$_2$]{} lines, and PAH emission. We used the spectrum fitting tool PAHFIT [@2007ApJ...656..770S], which is tailored to low resolution IRS spectroscopy. The fits to the SEDs of NGC 4151 and NGC 7213 are provided in Figure \[fig:n4151decomposition\] as examples of the decomposition. The results are summarized in the following tables: integrated PAH fluxes are given in Table \[tab:pahstr\]; PAH equivalent widths (EQWs) in Table \[tab:paheqw\]; H$_2$ line fluxes (mostly upper limits) in Table \[tab:h2str\] and EQWs in Table \[tab:h2eqw\]; ionic fine structure line fluxes in Table \[tab:fsstr\], and their EQWs in Table \[tab:fseqw\]. The line measurements are not corrected for model extinction, but for completeness we list the best-fit model dust opacity (normalized to 10 [$\mu$m]{}) in Table \[tab:tauapcor\].
As provided, PAHFIT is best suited for nearly normal or star-forming galaxies; its model includes thermal continuum radiation from dust grains, PAH features, fine-structure lines from lower ionization state species, [H$_2$]{}, and Sil absorption, whether by assumed mixed or foreground screen extinction. We added two components to the PAHFIT model to fit the Seyfert SEDs: (1) fine-structure lines from high ionization states, such as \[Ne V\] and \[O IV\]; and (2), to fit silicate emission features, a simple model for warm dust clouds that are optically thin at infrared wavelengths.
By default, PAHFIT models extinction based on the dust opacity law of @2004ApJ...609..826K. @2008ApJ...678..729S found that a cold dust opacity model better matches the high 18 [$\mu$m]{} Sil / 10 [$\mu$m]{} Sil absorption found in active ultraluminous infrared galaxies. From inspection, the present spectra similarly show relatively strong 18 [$\mu$m]{} features, whether in emission or absorption, and so we further modified PAHFIT to use that cold dust model.
The thin, warm dust model assumes clouds with simple, slab geometry and opacity at 10 [$\mu$m]{}, $\tau_{10} < 1$. These warm clouds are further assumed to be partially covered by cold, absorbing dust clouds. The model spectrum is then, $$F_{\nu} = \left(1 - C_f\right) B_{\nu}(T_W)\left(1 - e^{-\tau_W(\nu)}\right) + C_f
B_{\nu}(T_W) \left(1-e^{-\tau_W(\nu)}\right) e^{-\tau_C(\nu)} , \label{eqn:hotdustmodel}$$ where $C_f$ is the covering fraction of cold, foreground clouds, modeled independent of the galaxy extinction; $\tau_W$ is the opacity through the warm clouds; $\tau_C$ is the opacity through the cold, foreground clouds, and $B_{\nu}(T_{w})$ is the source function, for which we adopt a scaled Planck spectrum at (fitted) temperature $T_W$ for simplicity. Note that the cold dust opacities described by Eq. \[eqn:hotdustmodel\] are taken to be independent of the global PAHFIT dust opacity model; the opacity values listed in Table \[tab:tauapcor\] are determined by the global dust opacity fit. The intention of including this additional model component is to provide a realistic continuum baseline for emission line fits and Sil strength determination rather than an interpretable, radiative transfer model, and details of the opacity law or more realistic source functions are beyond the scope of the present spectral decomposition.
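For reference, a direct numerical transcription of the warm-slab model above is given below, using a cgs Planck source function. This is only a transcription of the fitted model component, not the fitting code itself.

```python
import numpy as np

H = 6.626e-27    # Planck constant, erg s
C = 2.998e10     # speed of light, cm/s
KB = 1.381e-16   # Boltzmann constant, erg/K

def planck_nu(nu, T):
    """Planck function B_nu(T) in cgs units."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def warm_slab(nu, T_w, tau_w, tau_c, c_f):
    """Optically thin warm slab emission, a fraction c_f of which is
    covered by cold foreground absorbing clouds of opacity tau_c."""
    emit = planck_nu(nu, T_w) * (1.0 - np.exp(-tau_w))
    return (1.0 - c_f) * emit + c_f * emit * np.exp(-tau_c)
```

Setting $C_f = 0$ (or $\tau_C = 0$) recovers the bare thin-slab emission $B_{\nu}(T_W)\left(1 - e^{-\tau_W}\right)$, as expected from the equation.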
We further modified PAHFIT to better accommodate the IRAC broadband and MIPS-SED measurements. Both instruments provide much lower sampling density in wavelength than IRS data, and consequently they receive lower weight in the $\chi^2$ minimization procedure. We therefore employed the sampling weight correction described by @2007ApJ...670..129M, which effectively re-weights individual data points based on the sampling density local to that data point; regions of low sampling density receive increased weight. The weighting is normalized so that each data point carries, on average, unity sampling weight. We also introduced as a fitted parameter an aperture correction factor for the MIPS-SED wavelength range that boosts the model SED by a factor up to 1.81, which is the areal aperture ratio of the MIPS-SED extraction to the nominal $20\arcsec$-diameter extraction aperture. Table \[tab:tauapcor\] includes the best-fit aperture corrections.
There are a few caveats to the results of this decomposition. These low spectral resolution data are not best suited for measuring line fluxes, particularly in spectrally crowded regions. PAHFIT takes a conservative approach, severely restricting the centroid wavelengths and widths of the fitted lines; for example, the fine-structure lines are assumed to be unresolved and their widths are fixed at the instrumental resolution. Even so, the fine-structure lines near 35 [$\mu$m]{} are crowded, and furthermore the IRS measurements are very noisy in that region of the spectrum, with sensitivity $\sim 10\times$ poorer than at 25 [$\mu$m]{}, depending on background. These limitations are reflected in the uncertainties and upper limits reported in Tables \[tab:pahstr\]–\[tab:fseqw\].
Extracted line fluxes were however affected by occasional, residually poor fits to the local continuum or PAH features. For example, if the PAHFIT model placed the continuum too low locally to an emission line, PAHFIT would grow the emission line to meet the data and therefore produce a line flux greater than observed. Similarly, the PAHFIT model might produce a local continuum that is too high, and the line flux is reported too low. A good illustration of this problem is the too-high continuum model surrounding the \[Ne V\] $\lambda$14.3 [$\mu$m]{} line of NGC 4151 (Figure \[fig:n4151decomposition\]). To compensate for locally poor continuum models, the mean and rms of the model-subtracted spectrum was evaluated in spectral regions surrounding each line. Each fitted line peak was accordingly adjusted by subtracting the local, mean residual continuum. Line fluxes were similarly adjusted; line widths are unaffected as they were held fixed to the instrumental resolution during the fit.
There is a weak PAH feature near 14.3 [$\mu$m]{} that potentially contaminates the \[Ne V\] $\lambda$14.3 [$\mu$m]{} fine structure line. Deblending these features uses the fact that the PAH feature is somewhat broader (FWHM $\sim 0.4$ [$\mu$m]{}) than the unresolved (FWHM $< 0.1$ [$\mu$m]{}) \[Ne V\] line [@2007ApJ...656..770S]. In the worst-case scenario, the code may fail to fit a weak but present PAH 14.3 feature, resulting in an artificially enhanced \[Ne V\] line strength. This effect is at least partially ameliorated by the residuals analysis and is reflected in the large uncertainties of Table \[tab:fsstr\]. The \[Ne V\] 14 / 24 ratio provides however a good check for contamination. This ratio is a density diagnostic with lower limit $\sim 0.9$ [e.g., @1999ApJ...512..204A]. We demonstrate in a companion paper (Baum et al. 2009, submitted) that for all of the sources with a \[Ne V\] 14.3 [$\mu$m]{} detection, the line ratio is consistent with the low density limit; contamination from PAH 14.3 would push the ratio to a (forbidden) value below the low density limit. We conclude that the PAH 14.3 [$\mu$m]{} feature does not significantly contaminate the \[Ne V\] $\lambda$14.3 [$\mu$m]{} line strengths in this study, or that any contamination is within the uncertainties of the line strength.
### Silicate Strengths {#sec:silicates}
This spectrum decomposition technique provides a reasonable model for the 9 – 20 [$\mu$m]{} continuum, suitable to measure the relative strength of Sil features. @2007ApJ...654L..49S defined the Sil strength as the log ratio of the observed flux density at the center of the Sil feature, 10 [$\mu$m]{} or 18 [$\mu$m]{}, and the local continuum; e.g., $$S_{10} = \ln\left(\frac{F_{10 \mu{\rm m}}[{\rm observed}]}{F_{10 \mu{\rm m}}[{\rm continuum}]}\right).$$
To measure the Sil strength for the 12 [$\mu$m]{} sample, we first subtracted PAH and other emission line features as determined by PAHFIT to obtain $S_{10 \mu{\rm m}}[{\rm observed}]$. The continuum was derived from the spectrum decomposition as the sum of the (optically thick) dust components, stars, and the continuous part of the warm, thin dust component; the Sil emission features of the warm, thin dust component were replaced by quadratic interpolation between bracketing spectral regions. We used Monte Carlo variation of the PAHFIT model parameters and the data uncertainties to determine the Sil strength uncertainties.
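The strength measurement and its Monte Carlo uncertainty can be sketched as follows (an illustrative sketch; the actual procedure also varies the PAHFIT model parameters, which is omitted here):

```python
import numpy as np

def silicate_strength(f_obs, f_cont):
    """S = ln(F_observed / F_continuum) at the feature center."""
    return np.log(f_obs / f_cont)

def sil_strength_mc(f_obs, sig_obs, f_cont, sig_cont, n=10000, seed=0):
    """Monte Carlo propagation of the flux-density uncertainties into
    the silicate strength; returns the mean and standard deviation of
    the sampled strengths."""
    rng = np.random.default_rng(seed)
    obs = rng.normal(f_obs, sig_obs, n)
    cont = rng.normal(f_cont, sig_cont, n)
    s = np.log(obs / cont)
    return s.mean(), s.std()
```

Negative strengths indicate net Sil absorption and positive strengths net Sil emission, as in the convention above.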
The measured Sil strengths are provided in Table \[tab:silicates\].
### Continuum Spectral Indices {#sec:indices}
We further used the PAHFIT spectral decomposition to produce line-free continuum spectra over $\lambda$20–30 [$\mu$m]{}. The MIPS SED data are similarly line-free, except for a few possible detections of \[O I\] ($\lambda$63 [$\mu$m]{}); see, e.g., Figure \[fig:n4151decomposition\].
The data show a range of continuum slopes, and we characterized the spectral shape by fitting a power-law model, $F_{\lambda} \propto \lambda^{\alpha}$, where $\alpha$ is the spectral index, to the rest wavelength ranges 20–30 [$\mu$m]{} and 55–90 [$\mu$m]{}. The results are listed in Table \[tab:spectralindices\]. Note that in this convention for $\alpha$, the Rayleigh-Jeans tail of the Planck spectrum would give $\alpha = -4$.
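The index fit amounts to a straight line in log-log space (a minimal sketch; uncertainty weighting of the continuum points is omitted):

```python
import numpy as np

def spectral_index(wave_um, f_lambda):
    """Fit F_lambda ∝ lambda^alpha as a straight line in log-log space
    and return the slope alpha.  In this convention, a Rayleigh-Jeans
    tail would give alpha = -4."""
    alpha, _ = np.polyfit(np.log(wave_um), np.log(f_lambda), 1)
    return alpha
```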
### Comparison with Measurements Employing Spline Approximations for the IR Continuum
@2009ApJ...701..658W adopt a different but conventional approach to the measurement of the PAH 6.2 [$\mu$m]{} and 11.2 [$\mu$m]{} features and the 10 [$\mu$m]{} Sil strength, and we next consider systematic differences with our measurements. Rather than decompose the spectrum with a dust and lines model as PAHFIT does, their approach was to define a local continuum level based on a spline fit to wavelength ranges narrowly bracketing PAH features. To measure Sil strengths, they adopted the technique of @2007ApJ...654L..49S, which requires the identification of apparently feature-free continuum points to anchor a broader spline interpolation across the Sil features.
@2007ApJ...656..770S demonstrated that, for the nearly normal galaxies in the SINGS sample, the PAH line strengths measured by PAHFIT are systematically greater, by factors of 2–3, than line strengths that are based on a spline fit to the neighboring pseudo-continuum. The reason is qualitatively illustrated in the PAHFIT decomposition of NGC 7213 (Figure \[fig:n4151decomposition\]). Both the 6.2 [$\mu$m]{} and 11.3 [$\mu$m]{} features blend with weaker, overlapping PAH features. By defining the continuum level based on neighboring spectral points without accounting for PAH blending, the continuum level is overestimated, and the line strength and eqw are underestimated. It is further evident from this decomposition that Sil strengths will be systematically affected if the influence of PAH blends and the underlying continuum shape are not reasonably accounted for; it would not be surprising that the spline technique of @2007ApJ...654L..49S produced very different Sil strengths particularly for this source.
We illustrate the systematic differences between the present analysis and that of @2009ApJ...701..658W in Figs. \[fig:wupahcomparison\] and \[fig:wusilcomparison\]. As expected, PAHFIT measures, on average, systematically higher values of PAH fluxes and eqws, because PAHFIT removes the contamination of neighboring PAH features to measurement of the local continuum. The measurements of @2009ApJ...701..658W fall somewhat below the average ratio (PAHFIT / spline continuum) reported by @2007ApJ...656..770S, but the spline measurements will be sensitive to systematic differences in how the local continuum anchor points were defined in the analysis; such a detailed reconciliation is beyond the scope of the present work.
Similarly, our PAHFIT-derived Sil strengths are systematically more positive, by $\sim 0.1$–0.3 dex, indicating weaker Sil absorption or stronger Sil emission depending on the sign of the Sil strength. Again, this result is unsurprising, because PAH blends that are not accounted for by decomposition can falsely mimic enhanced continuum surrounding the Sil 10 [$\mu$m]{} feature, pushing the Sil strength to lower (more negative) values. Recall that we also use an interpolation technique similar to that of @2007ApJ...654L..49S to measure the strength of the Sil features; the difference is that we perform the analysis on the line-subtracted continuum model produced by PAHFIT.
SEDs Averaged by Optical Classification
---------------------------------------
A key goal of this project is to identify and compare the infrared characteristics of AGNs segregated by optical classification. As a qualitative first look, we calculated average SEDs within the following classification bins: (1) S1.0-1.5 & S1n; (2) S1.8-1.9; (3) HBLR Seyfert 2s (S1h & S1i); (4) non-HBLR Seyfert 2s (S2); (5) LINERs; and (6) HII. The results are presented in Figure \[fig:avgdSEDs\]. The separation of the non-HBLR and HBLR S2s was motivated by inspection of Figure \[fig:allseds\]; non-HBLR S2s appear to have higher-eqw PAH features, for example. Recall that the non-HBLR S2s may in fact harbor a BLR that has not appeared in spectropolarimetric or infrared measurements. All but three of the 20 non-HBLR S2s have been searched for an HBLR, but the inclusion of these three sources in the non-HBLR subsample does not appear to dilute the striking differences between the averaged spectra of non-HBLR S2s and HBLR S2s.
To perform the averaging, all of the data were corrected for redshift and interpolated to a common wavelength grid. The SEDs were converted to $\lambda F_{\lambda}$ and normalized to the flux density integrated between 5 and 35 [$\mu$m]{}, $F$(5–35[$\mu$m]{}). Objects within a given classification bin were averaged, and the median absolute deviation was computed as a robust estimator of the characteristic spread of SEDs within a classification bin.
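The averaging procedure above reduces to a few array operations. The sketch below is a minimal illustration under stated assumptions (simple linear resampling onto the common grid and a trapezoid-rule band integral for $F$(5–35[$\mu$m]{})); it is not the exact reduction pipeline used for the figures.

```python
import numpy as np

C_UM_HZ = 2.998e14  # speed of light in micron * Hz

def average_seds(seds, redshifts, grid_um):
    """Average SEDs on a common rest-frame wavelength grid (micron).

    seds      : list of (wave_um, f_nu) observed-frame spectra
    redshifts : one redshift per SED
    Returns the mean of the normalized lambda*F_lambda SEDs and the median
    absolute deviation (a robust spread estimate) at each grid point.
    """
    grid_um = np.asarray(grid_um, dtype=float)
    stack = []
    for (wave, f_nu), z in zip(seds, redshifts):
        rest = np.asarray(wave, dtype=float) / (1.0 + z)       # redshift correction
        lam_f_lam = np.asarray(f_nu, dtype=float) * C_UM_HZ / rest  # nu F_nu
        resampled = np.interp(grid_um, rest, lam_f_lam)
        # Normalize by F(5-35 um) = integral of F_lambda over the band.
        band = (grid_um >= 5.0) & (grid_um <= 35.0)
        x = grid_um[band]
        y = resampled[band] / x                                # back to F_lambda
        norm = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))     # trapezoid rule
        stack.append(resampled / norm)
    stack = np.array(stack)
    mean = stack.mean(axis=0)
    mad = np.median(np.abs(stack - np.median(stack, axis=0)), axis=0)
    return mean, mad
```

By construction, identical input SEDs average to themselves with zero median absolute deviation, and the normalization makes the averages independent of each object's absolute flux scale.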
@2003ApJ...583..632T demonstrated that S1s and HBLR S2s show similar IRAS broadband colors, but non-HBLR S2s tend to show cooler IRAS colors [cf. @1997Natur.385..700H]. The present study confirms this result in some finer detail based on the averaged SEDs. From inspection of Figure \[fig:avgdSEDs\], S1s (group 1) and HBLR S2s (group 3) show the flattest infrared SEDs. They both present fine-structure emission lines of high-ionization state species, such as \[\], \[\], and \[\], with similar equivalent width. The HBLR S2s (hidden S1s) have a slightly redder SED, on average, and also show evidence for Sil 10[$\mu$m]{} absorption.
Similarly, the average SEDs of S1.8-1.9s (group 2) and non-HBLR S2s (group 4) are essentially indistinguishable. Both groups show strong PAH features, red continuum, and blue IRAC colors suggesting significant contribution of stellar photospheric emission at the short wavelength end. The SEDs of these groups most closely resemble optically classified star-forming galaxies, or starbursts (HII; group 6), except that the average HII SED for this sample shows PAH features with somewhat greater equivalent widths. The \[\] and \[\] average equivalent widths appear comparable between S1.8-1.9, non-HBLR S2, and HII galaxies.
The average LINER SED stands out by showing a bowl-shaped infrared SED. The SEDs appear to be more strongly dominated by stellar photospheres, or perhaps very hot dust, at shorter wavelengths compared to the other groups, including HII galaxies where starlight appears to dominate the IRAC bands. In this way, the LINERs in our survey are similar to the IR-faint LINERs in the larger sample studied by @2006ApJ...653L..13S. This result is somewhat tempered by the broad range of IRAC colors observed among the LINERs in this survey; from inspection of the six individual LINER SEDs, four show bowl-like SEDs resembling the average (NGC 2639, NGC 4579, NGC 4594, & NGC 5005), and two show HII-like SEDs (NGC 1097 & NGC 3079).
PAH features are commonly detected in this sample, and, even though such features appear at reduced equivalent width in S1 and HBLR S2 objects, they are sufficiently strong to obscure Sil features, especially Sil emission. We therefore repeated the SED averaging after subtracting PAH, [H$_2$]{}, and fine-structure lines based on the results of the PAHFIT decomposition (Section \[sec:pahfit\]); the results are provided in Figure \[fig:avgdLineSubs\]. Here the average SEDs of the S1s and HBLR S2s separate more clearly, with S1s showing clear 10 [$\mu$m]{} and 18 [$\mu$m]{} Sil emission features, similar to that observed in QSOs [@2005ApJ...625L..75H]. In contrast, the averaged SED of known HBLR S2s shows Sil 10 [$\mu$m]{} in absorption.
The other classes again show broadly similar SEDs after line subtraction. The notable exception is the non-HBLR S2 average, where weak 10 [$\mu$m]{} Sil absorption emerges after subtraction of the very strong PAH features. Sil features are essentially absent among intermediate Seyferts (S1.8-1.9), LINERs, and HII galaxies. It is further interesting to note that the IRAC color \[3.6\] $-$ \[4.5\] of the averaged S2, LINER, and HII SEDs is consistent with an undiluted Rayleigh-Jeans continuum. The averaged SEDs of the other classes present redder \[3.6\] $-$ \[4.5\], indicating dilution from warm dust or some flat-spectrum component.
Discussion {#sec:conclusions}
==========
We have presented the data reduction and decomposition of [ *Spitzer*]{} Space Telescope 3.6 – 90 [$\mu$m]{} spectrophotometry of active galaxies from the extended 12 [$\mu$m]{} survey. Careful attention was paid to matching 20″, circular-diameter apertures across the IRAC and IRS bands (3.6 – 36 [$\mu$m]{}) with appropriate color and extended source corrections where possible or with an evaluation of the systematic error where such corrections were not available.
We further present SEDs averaged within groups defined by optical AGN classification. We demonstrate that, within this sample, Seyfert 1s show Sil emission on average, while known HBLR Seyfert 2s show Sil absorption. This result is broadly compatible with the obscuring torus interpretation, in which case Seyfert 1s are viewed more nearly pole-on, affording a more direct view of hot, Sil emitting dust. HBLR S2s are viewed more nearly edge-on, preferentially through colder, Sil absorbing dust. That the Sil features are, on average, weak is further compatible with the clumpy torus model [@2008ApJ...685..160N; @2008ApJ...678..729S; @2008ApJ...685..147N; @2007ApJ...654L..45L].
The other classes, Seyfert 1.8-1.9, non-HBLR S2, LINER, and HII galaxies, produce very weak or absent Sil features. They further show stronger PAH features, bluer IRAC colors, and stronger far-infrared emission (relative to $F$\[5–35[$\mu$m]{}\]). Such SEDs appear to be more commonly dominated by stellar photospheres and star-forming processes. Based on the present analysis, however, we are unable to conclude whether Seyfert 1.8-1.9 and non-HBLR S2 galaxies are in fact more commonly dominated by star formation or whether this result is peculiar to the 12 [$\mu$m]{} sample owing to selection effects; for example, they may harbor less luminous AGNs, or more heavily absorbed AGNs, but the contribution from star-formation enhanced the 12 [$\mu$m]{} flux density sufficiently to be included in the 12 [$\mu$m]{} sample. On the other hand, our results are consistent with the interpretation that the host galaxy dominates the emission of non-HBLR S2s, diminishing our ability to detect the HBLR [@2001MNRAS.320L..15A].
In companion work, we present a statistical analysis of the present measurements with attention to differences and similarities between sources grouped by optical classification (Baum et al. 2009, submitted). We are also investigating a decomposition of the SEDs using gridded radiative transfer models with the goal of measuring bolometric contributions of the AGN vs. star-formation as well as constraints on clumpy torus parameters (Gallimore et al. in prep.).
The authors gratefully acknowledge the anonymous referee for a careful reading of the manuscript and very helpful comments. This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for this work at Bucknell University, the University of Rochester, and the Rochester Institute of Technology was provided by NASA through an award issued by JPL/Caltech. A. Yzaguirre received support from the National Science Foundation REU Program, grant 0097424. J. Jakoboski received support as a Bucknell Presidential Fellow.
[*Facility:*]{}
References
==========
, E. L., [D[í]{}az]{}, R. J., & [Bajaja]{}, E. 2004, , 414, 453
, D. M. 2001, , 320, L15
, T., [Lutz]{}, D., [Sturm]{}, E., [Genzel]{}, R., [Sternberg]{}, A., & [Netzer]{}, H. 2000, , 536, 710
, T., [Sturm]{}, E., [Lutz]{}, D., [Sternberg]{}, A., [Netzer]{}, H., & [Genzel]{}, R. 1999, , 512, 204
, J. R., [et al.]{} 1989, , 238, 603
, D., [Edmunds]{}, M. G., [Lindblad]{}, P. O., & [Pagel]{}, B. E. J. 1981, , 101, 377
, D., [Pelat]{}, D., [Phillips]{}, M., & [Whittle]{}, M. 1985, , 288, 205
, A., [Rieke]{}, M. J., [Rieke]{}, G. H., & [Shields]{}, J. C. 2000, , 530, 688
, R. 1993, , 31, 473
, R. R. J., & [Miller]{}, J. S. 1985, , 297, 621
, I., [Joguet]{}, B., [Kunth]{}, D., [Melnick]{}, J., & [Terlevich]{}, R. J. 1999, , 519, L123
, J., & [Joly]{}, M. 1984, , 131, 87
, J., [et al.]{} 2000, , 357, 839
, R., [Ribeiro]{}, A. L. B., [de Carvalho]{}, R. R., & [Capelato]{}, H. V. 1998, , 493, 563
, O., & [De Robertis]{}, M. M. 1988, , 67, 249
, M. H. K., [Keel]{}, W. C., [Miley]{}, G. K., [Goudfrooij]{}, P., & [Lub]{}, J. 1992, , 96, 389
, M. M., & [Osterbrock]{}, D. E. 1986, , 301, 98
, G. C., & [Griffiths]{}, R. E. 2005, , 625, L31
, A., [Rowan-Robinson]{}, M., & [Siebenmorgen]{}, R. 2000, , 313, 734
, D., [Afonso]{}, J., [Efstathiou]{}, A., [Rowan-Robinson]{}, M., [Fox]{}, M., & [Clements]{}, D. 2003, , 343, 585
, G. G., [et al.]{} 2004, , 154, 10
, J. J., [Brandt]{}, W. N., [Elvis]{}, M., [Fabian]{}, A. C., [Iwasawa]{}, K., & [Mathur]{}, S. 1999, , 510, 167
, A. V., & [Halpern]{}, J. P. 1984, , 285, 458
, A. V., & [Sargent]{}, W. L. W. 1985, , 57, 503
, J. F., [Axon]{}, D. J., [O’Dea]{}, C. P., [Baum]{}, S. A., & [Pedlar]{}, A. 2006, , 132, 546
, R., & [Cesarsky]{}, C. J. 2000, , 38, 761
, E. M., & [Stirpe]{}, G. M. 1996, , 314, 419
, R. M., [Heckman]{}, T., [Leitherer]{}, C., [Meurer]{}, G., [Krolik]{}, J., [Wilson]{}, A. S., [Kinney]{}, A., & [Koratkar]{}, A. 1998, , 505, 174
, R. M., & [Perez]{}, E. 1996, , 280, 53
, J., [Arg[ü]{}eso]{}, F., [L[ó]{}pez-Caniego]{}, M., [Toffolatti]{}, L., [Sanz]{}, J. L., [Vielva]{}, P., & [Herranz]{}, D. 2006, , 369, 1603
, R. W. 1989, , 340, 190
, R. W., [Veilleux]{}, S., & [Hill]{}, G. J. 1994, , 422, 521
, L., [et al.]{} 2005, , 625, L75
—. 2005, , 129, 1783
, T. M. 1980, , 87, 152
, C. A., [Lumsden]{}, S. L., & [Bailey]{}, J. A. 1997, , 385, 700
, C. A., [Vader]{}, J. P., & [Frogel]{}, J. A. 1989, , 97, 986
, L. C., [Filippenko]{}, A. V., & [Sargent]{}, W. L. W. 1997, , 112, 315
, K. 1986, , 98, 609
, J. R., [et al.]{} 2004, , 154, 18
, L. K., & [Malkan]{}, M. A. 1999, , 516, 660
, L. K., [Malkan]{}, M. A., [Moriondo]{}, G., & [Salvati]{}, M. 1999, , 510, 637
, M. D., [Brindle]{}, C., [Hough]{}, J. H., [Young]{}, S., [Axon]{}, D. J., [Bailey]{}, J. A., & [Ward]{}, M. J. 1993, , 263, 895
, W., [et al.]{} 2004, , 429, 47
, L. E. 1994, , 430, 196
, W. C. 1983, , 269, 466
, F., [Vriend]{}, W. J., & [Tielens]{}, A. G. G. M. 2004, , 609, 826
, D. J. M., [Beintema]{}, D. A., & [Lutz]{}, D. 2003, in ESA Special Publication, Vol. 481, The Calibration Legacy of the ISO Mission, ed. [L. Metcalfe, A. Salama, S. B. Peschke, & M. F. Kessler]{}, 375
, L. J., [Heisler]{}, C. A., [Dopita]{}, M. A., & [Lumsden]{}, S. 2001, , 132, 37
, D.-C., [Sanders]{}, D. B., [Veilleux]{}, S., [Mazzarella]{}, J. M., & [Soifer]{}, B. T. 1995, , 98, 129
, S. D., & [Steiner]{}, J. E. 1990, , 99, 1722
, W., [Biermann]{}, P., [Fricke]{}, K. J., [Huchtmeier]{}, W., & [Witzel]{}, A. 1983, , 119, 80
, W., & [Fricke]{}, K. J. 1985, , 143, 393
, N. A., [Sirocky]{}, M. M., [Hao]{}, L., [Spoon]{}, H. W. W., [Marshall]{}, J. A., [Elitzur]{}, M., & [Houck]{}, J. R. 2007, , 654, L45
, N., [et al.]{} 2008, , 120, 328
, S. L., [Heisler]{}, C. A., [Bailey]{}, J. A., [Hough]{}, J. H., & [Young]{}, S. 2001, , 327, 459
, B. F., & [Steer]{}, I. P. 2008, [Master List of Galaxy Distances]{}, <http://nedwww.ipac.caltech.edu/level5/NED1D/intro.html>
, D., & [Khan]{}, I. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 347, Astronomical Data Analysis Software and Systems XIV, ed. P. [Shopbell]{}, M. [Britton]{}, & R. [Ebert]{}, 81
, D., [Roby]{}, T., [Khan]{}, I., & [Booth]{}, H. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6274, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series
, I., [et al.]{} 2004, , 416, 475
, J. A., [Herter]{}, T. L., [Armus]{}, L., [Charmandaris]{}, V., [Spoon]{}, H. W. W., [Bernard-Salas]{}, J., & [Houck]{}, J. R. 2007, , 670, 129
, J. M., & [Balzano]{}, V. A. 1986, , 62, 751
, J. S., & [Goodrich]{}, R. W. 1990, , 355, 456
, E. C., [Barth]{}, A. J., [Kay]{}, L. E., & [Filippenko]{}, A. V. 2000, , 540, L73
, J., & [Kennicutt]{}, Jr., R. C. 2006, , 164, 81
, J. S., [Wilson]{}, A. S., & [Tsvetanov]{}, Z. 1996, , 102, 309
, N. M., [Wilson]{}, A. S., [Mulchaey]{}, J. S., & [Gallimore]{}, J. F. 1999, , 120, 209
, M., [Ivezi[ć]{}]{}, [Ž]{}., & [Elitzur]{}, M. 2002, , 570, L9
, M., [Sirocky]{}, M. M., [Ivezi[ć]{}]{}, [Ž]{}., & [Elitzur]{}, M. 2008, , 685, 147
, M., [Sirocky]{}, M. M., [Nikutta]{}, R., [Ivezi[ć]{}]{}, [Ž]{}., & [Elitzur]{}, M. 2008, , 685, 160
, V., [Henning]{}, T., & [Mathis]{}, J. S. 1992, , 261, 567
, D. E. 1977, , 215, 733
, D. E., & [Koski]{}, A. T. 1976, , 176, 61P
, D. E., & [Pogge]{}, R. W. 1985, , 297, 166
, B. M., [et al.]{} 2000, , 542, 161
, M. M., [Pagel]{}, B. E. J., [Edmunds]{}, M. G., & [Diaz]{}, A. 1984, , 210, 701
, E. A., & [Krolik]{}, J. H. 1992, , 401, 99
, P., [Osterbrock]{}, D. E., & [Pogge]{}, R. W. 1990, , 99, 53
, W. T., [et al.]{} 2005, , 117, 978
, M. W., & [Gruendl]{}, R. A. 1995, in Astronomical Society of the Pacific Conference Series, Vol. 77, Astronomical Data Analysis Software and Systems IV, ed. R. A. [Shaw]{}, H. E. [Payne]{}, & J. J. E. [Hayes]{}, 335
, J., [Kotilainen]{}, J. K., & [Prieto]{}, M. A. 2002, , 331, 154
, G. H., [et al.]{} 2004, , 154, 25
, G., [Elvis]{}, M., [Fabbiano]{}, G., [Baldi]{}, A., [Zezas]{}, A., & [Salvati]{}, M. 2007, , 659, L111
, G., [Elvis]{}, M., & [Nicastro]{}, F. 2002, , 571, 234
, B., [Malkan]{}, M. A., & [Edelson]{}, R. A. 1996, , 473, 130
, B., [Malkan]{}, M. A., & [Spinoglio]{}, L. 1993, , 89, 1
, A., & [Golay]{}, M. J. E. 1964, Analytical Chemistry, 36, 1627
, J. C., & [Filippenko]{}, A. V. 1990, , 100, 1034
, R., & [Kr[ü]{}gel]{}, E. 2007, , 461, 445
, M. M., [Levenson]{}, N. A., [Elitzur]{}, M., [Spoon]{}, H. W. W., & [Armus]{}, L. 2008, , 678, 729
, J. D. T., [et al.]{} 2007, , 119, 1133
—. 2007, , 656, 770
, J. E., [Robinson]{}, A., [Alexander]{}, D. M., [Young]{}, S., [Axon]{}, D. J., & [Corbett]{}, E. A. 2004, , 350, 140
, L., [Andreani]{}, P., & [Malkan]{}, M. A. 2002, , 572, 105
, H. W. W., [Marshall]{}, J. A., [Houck]{}, J. R., [Elitzur]{}, M., [Hao]{}, L., [Armus]{}, L., [Brandl]{}, B. R., & [Charmandaris]{}, V. 2007, , 654, L49
, T., [Kinney]{}, A. L., & [Challis]{}, P. 1995, , 98, 103
, E., [Lutz]{}, D., [Tran]{}, D., [Feuchtgruber]{}, H., [Genzel]{}, R., [Kunze]{}, D., [Moorwood]{}, A. F. M., & [Thornley]{}, M. D. 2000, , 358, 481
, E., [Lutz]{}, D., [Verma]{}, A., [Netzer]{}, H., [Sternberg]{}, A., [Moorwood]{}, A. F. M., [Oliva]{}, E., & [Genzel]{}, R. 2002, , 393, 821
, E., [et al.]{} 2006, , 653, L13
, D., [Dyson]{}, J. E., [Axon]{}, D. J., & [Pedlar]{}, A. 1989, , 240, 487
, A., [Pedlar]{}, A., [Kukula]{}, M. J., [Baum]{}, S. A., & [O’Dea]{}, C. P. 2000, , 314, 573
—. 2001, , 325, 737
, H. D. 1995, , 440, 565
—. 2001, , 554, L19
—. 2003, , 583, 632
, K. R. W., [et al.]{} 2007, , 474, 837
, R. B., & [Fisher]{}, J. R. 1988, [Catalog of Nearby Galaxies]{} (Cambridge: Cambridge University Press)
, C. M., & [Padovani]{}, P. 1995, , 107, 803
, M. S., [Viegas]{}, S. M., [Gruenwald]{}, R., & [de Souza]{}, R. E. 1997, , 114, 1345
, P. G. 2001, , 113, 1420
, S. 1988, , 95, 1695
, S., [Kim]{}, D.-C., [Sanders]{}, D. B., [Mazzarella]{}, J. M., & [Soifer]{}, B. T. 1995, , 98, 171
, M.-P., & [V[é]{}ron]{}, P. 2006, , 455, 773
, M.-P., [V[é]{}ron]{}, P., & [Gon[ç]{}alves]{}, A. C. 2001, , 372, 730
, M., [Pedlar]{}, A., [Meurs]{}, E. J. A., [Unger]{}, S. W., [Axon]{}, D. J., & [Ward]{}, M. J. 1988, , 326, 125
, H. 1988, , 234, 703
—. 1992, , 257, 677
, Y., [Charmandaris]{}, V., [Huang]{}, J., [Spinoglio]{}, L., & [Tommasin]{}, S. 2009, , 701, 658
, S., [Hough]{}, J. H., [Efstathiou]{}, A., [Wills]{}, B. J., [Bailey]{}, J. A., [Ward]{}, M. J., & [Axon]{}, D. J. 1996, , 281, 1206
[lllrrrrl]{} MRK335 &S1n & 1 & 00:06:19.53 & 20:12:10.5 & 7730 & 110.4 & 3269\
MRK938 &HII & 2 & 00:11:06.56 & $-$12:06:27.3 & 5881 & 84.0 & 3269, 3672\
E12-G21 &S1n & 3 & 00:40:45.93 & $-$79:14:24.2 & 9000 & 128.6 & 3269\
MRK348 &S1h & 4 & 00:48:47.16 & 31:57:25.2 & 4507 & 64.4 & 3269\
NGC424 &S1h & 5 & 01:11:27.66 & $-$38:05:00.0 & 3527 & 50.4 & 3269\
NGC526A &S1.9 & 6 & 01:23:54.39 & $-$35:03:55.4 & 5725 & 81.8 & 86, 3269\
NGC513 &S1h & 7 & 01:24:26.78 & 33:47:58.4 & 5859 & 83.7 & 3269\
F01475-0740 &S1h & 8 & 01:50:02.69 & $-$07:25:48.4 & 5296 & 75.7 & 3269\
NGC931 &S1.0-1.5 & 9 & 02:28:14.49 & 31:18:41.7 & 4992 & 71.3 & 3269\
NGC1056 &HII & 10 & 02:42:48.29 & 28:34:26.1 & 1545 & 22.1 & 3269\
NGC1097 &LINER & 11, 12 & 02:46:18.91 & $-$30:16:28.8 & 1271 & \*17.5 & 159, 3269\
NGC1125 &S2 & 13 & 02:51:40.44 & $-$16:39:02.4 & 3277 & 46.8 & 3269\
NGC1143-4 &S2 & 8, 10 & 02:55:11.66 & $-$00:11:03.4 & 8648 & 123.5 & 21, 3269\
M-2-8-39 &S1h & 8, 14 & 03:00:30.62 & $-$11:24:57.2 & 8962 & 128.0 & 3269\
NGC1194 &S1.8-1.9 & 15, 16 & 03:03:49.12 & $-$01:06:13.2 & 4076 & 58.2 & 3269\
NGC1241 &S2 & 8, 17 & 03:11:14.63 & $-$08:55:18.1 & 4052 & 57.9 & 3269\
NGC1320 &S2 & 8, 18 & 03:24:48.69 & $-$03:02:32.0 & 2663 & \*37.7 & 3269\
NGC1365 &S1.8 & 19 & 03:33:36.39 & $-$36:08:25.8 & 1636 & \*17.7 & 3269, 3672\
NGC1386 &S1i & 20 & 03:36:46.20 & $-$35:59:57.0 & 868 & \*16.2 &3269\
F03450+0055 &S1n & 1, 21 & 03:47:40.22 & 01:05:13.7 & 9294 & 132.8 & 3269\
NGC1566 &S1.5 & 22, 23, 24 & 04:20:00.41 & $-$54:56:16.7 & 1504 & \*11.8 & 159, 3269\
F04385-0828 &S1h & 8 & 04:40:54.96 & $-$08:22:21.9 & 4527 & 64.7 & 3269\
NGC1667 &S2 & 8 & 04:48:37.15 & $-$06:19:11.9 & 4547 & 65.0 & 3269\
E33-G2 &S2 & 25 & 04:55:58.88 & $-$75:32:28.4 & 5426 & 77.5 & 3269\
M-5-13-17 &S1.5 & 23 & 05:19:35.84 & $-$32:39:28.1 & 3790 & 54.1 & 3269\
MRK6 &S1.5 & 26, 27 & 06:52:12.35 & 74:25:37.2 & 5640 & 80.6 & 3269\
MRK79 &S1.2 & 28 & 07:42:32.84 & 49:48:34.5 & 6652 & 95.0 & 3269\
NGC2639 &LINER & 29 & 08:43:38.06 & 50:12:20.4 & 3336 & 47.7 & 3269\
MRK704 &S1.5 & 30 & 09:18:25.98 & 16:18:20.0 & 8764 & 125.2 & 3269\
NGC2992 &S1i & 31 & 09:45:41.93 & $-$14:19:34.6 & 2311 & \*30.5 & 96, 3269\
MRK1239 &S1n & 32 & 09:52:19.09 & $-$01:36:43.5 & 5974 & 85.3 & 3269\
NGC3079 &LINER & 33 & 10:01:57.85 & 55:40:46.9 & 1116 & \*19.7 & 59, 3269\
NGC3227 &S1.5 & 34 & 10:23:30.55 & 19:51:54.6 & 1157 & \*20.9 & 96, 3269\
NGC3511 &HII & 25 & 11:03:23.81 & $-$23:05:12.3 & 1109 & \*14.6 & 3269\
NGC3516 &S1.2 & 34 & 11:06:47.49 & 72:34:07.6 & 2649 & \*38.9 & 3269\
M+0-29-23 &HII & 10 & 11:21:12.27 & $-$02:59:02.5 & 7464 & 106.6 & 3269\
NGC3660 &S1.8 & 8, 35 & 11:23:32.27 & $-$08:39:30.4 & 3679 & 52.6 & 3269\
NGC3982 &S1.9 & 34 & 11:56:28.12 & 55:07:31.3 & 1109 & \*21.8 & 3269\
NGC4051 &S1n & 36 & 12:03:09.61 & 44:31:53.0 & 700 & \*17.0 &3269\
UGC7064 &S1.9 & 37 & 12:04:43.32 & 31:10:38.1 & 7494 & 107.1 & 3269\
NGC4151 &S1.5 & 34 & 12:10:32.57 & 39:24:21.0 & 995 & \*20.3 &3269\
MRK766 &S1n & 32 & 12:18:26.51 & 29:48:46.9 & 3876 & 55.4 & 3269\
NGC4388 &S1h & 38 & 12:25:46.81 & 12:39:43.3 & 2524 & \*18.1 & 3269, 20695\
NGC4501 &S2 & 8 & 12:31:59.18 & 14:25:13.3 & 2281 & \*20.7 & 3269\
NGC4579 &LINER & 34, 39 & 12:37:43.52 & 11:49:05.4 & 1519 & \*16.8 & 159, 3269\
NGC4593 &S1 & 40 & 12:39:39.44 & $-$05:20:39.0 & 2698 & \*44.0 & 3269\
NGC4594 &LINER & 34 & 12:39:59.44 & $-$11:37:22.9 & 1024 & \*10.9 & 159, 3269\
NGC4602 &HII & 40 & 12:40:36.98 & $-$05:07:58.5 & 2539 & \*34.4 & 3269\
TOL1238-364 &S1h & 8 & 12:40:52.86 & $-$36:45:21.2 & 3275 & 46.8 & 3269\
M-2-33-34 &S1n & 37 & 12:52:12.49 & $-$13:24:53.0 & 4386 & 62.7 & 3269\
NGC4941 &S2 & 5 & 13:04:13.13 & $-$05:33:05.8 & 1108 & \*13.8 & 86, 3269\
NGC4968 &S2 & 34 & 13:07:05.96 & $-$23:40:36.4 & 2957 & 42.2 & 3269\
NGC5005 &LINER & 34 & 13:10:56.29 & 37:03:32.9 & 946 & \*17.5 &3269\
NGC5033 &S1.8 & 34 & 13:13:27.49 & 36:35:37.6 & 875 & \*20.6 &159, 3269\
NGC5135 &S2 & 41 & 13:25:44.04 & $-$29:50:00.2 & 4105 & 58.6 & 3269\
M-6-30-15 &S1.5 & 23 & 13:35:53.78 & $-$34:17:44.2 & 2323 & 33.2 & 3269\
NGC5256 &S2 & 42 & 13:38:17.25 & 48:16:32.4 & 8211 & 117.3 & 3269\
IC4329A &S1.5 & 23 & 13:49:19.24 & $-$30:18:34.4 & 4813 & 68.8 & 3269, 30318\
NGC5347 &S2 & 8, 43 & 13:53:17.80 & 33:29:27.3 & 2335 & \*36.7 & 3269\
NGC5506 &S1i & 31 & 14:13:14.87 & $-$03:12:27.6 & 1853 & \*28.7 & 3269\
NGC5548 &S1.5 & 34 & 14:17:59.52 & 25:08:12.6 & 5149 & 73.6 & 69, 86, 3269\
MRK817 &S1.5 & 44 & 14:36:22.08 & 58:47:39.6 & 9430 & 134.7 & 3269\
NGC5929 &S2 & 45, 47 & 15:26:06.20 & 41:40:14.5 & 2492 & \*38.5 & 3269\
NGC5953 &S2 & 47 & 15:34:32.39 & 15:11:37.2 & 1965 & \*33.0 & 59, 3269\
M-2-40-4 &S1.9 & 48, 49, 50 & 15:48:24.96 & $-$13:45:26.9 & 7553 & 107.9 & 3269\
F15480-0344 &S1h & 8, 38 & 15:50:41.48 & $-$03:53:18.1 & 9084 & 129.8 & 3269\
NGC6810 &HII & 25 & 19:43:34.42 & $-$58:39:20.3 & 2031 & \*29.0 & 3269\
NGC6860 &S1.5 & 23 & 20:08:46.90 & $-$61:05:59.6 & 4462 & 63.7 & 3269\
NGC6890 &S2 & 5, 51 & 20:18:18.02 & $-$44:48:24.7 & 2419 & \*31.8 & 3269\
IC5063 &S1h & 52 & 20:52:02.29 & $-$57:04:07.5 & 3402 & 48.6 & 86, 3269\
UGC11680 &S2 & 49 & 21:07:41.35 & 03:52:17.9 & 7791 & 111.3 & 3269\
NGC7130 &S2 & 53, 54, 55, 56 & 21:48:19.52 & $-$34:57:04.8 & 4842 & 69.2 & 3269, 3672\
NGC7172 &S2 & 41, 57 & 22:02:01.90 & $-$31:52:10.4 & 2603 & \*33.9 & 86, 3269\
NGC7213 &S1 & 58 & 22:09:16.21 & $-$47:09:59.7 & 1750 & \*22.0 & 3269\
NGC7314 &S1i &59 & 22:35:46.21 & $-$26:03:01.5 & 1428 & \*19.0 & 86, 3269\
M-3-58-7 &S1h & 8 & 22:49:37.17 & $-$19:16:26.2 & 9432 & 134.7 & 3269\
NGC7469 &S1.5 & 44 & 23:03:15.61 & 08:52:26.3 & 4892 & 69.9 & 32, 3269\
NGC7496 &S2 & 38, 49, 60 & 23:09:47.29 & $-$43:25:40.2 & 1649 & \*20.1 & 3269\
NGC7582 &S1i & 61 & 23:18:23.63 & $-$42:22:13.1 & 1575 & \*18.8 & 3269\
NGC7590 &S2 & 60 & 23:18:54.81 & $-$42:14:20.0 & 1575 & \*23.7 & 3269\
NGC7603 &S1.5 & 44 & 23:18:56.67 & 00:14:38.1 & 8851 & 126.4 & 3269\
NGC7674 &S1h & 4, 9 & 23:27:56.72 & 08:46:44.4 & 8671 & 123.9 & 3269, 3672\
CGCG381-051 &HII & 8 & 23:48:41.74 & 02:14:23.5 & 9194 & 131.3 & 3269\
[lrrrrrrrr]{} MRK335 & 8.99 & 0.07 & 8.31 & 0.07 & 7.67 & 0.06 & 6.69 & 0.05\
MRK938 & 9.55 & 0.05 & 9.24 & 0.04 & 7.74 & 0.05 & 5.09 & 0.02\
E12-G21 & 9.63 & 0.09 & 9.14 & 0.10 & 8.31 & 0.08 & 6.22 & 0.07\
MRK348 & 9.27 & 0.08 & 8.46 & 0.08 & 7.67 & 0.06 & 6.38 & 0.04\
NGC424 & 7.83 & 0.05 & 6.94 & 0.04 & 6.11 & 0.03 & 4.95 & 0.02\
NGC526A & 9.00 & 0.07 & 8.33 & 0.07 & 7.66 & 0.07 & 6.67 & 0.06\
NGC5135 & 8.95 & 0.10 & 8.60 & 0.09 & 7.35 & 0.08 & 4.82 & 0.07\
F01475-0740 & 10.83 & 0.16 & 9.89 & 0.14 & 8.59 & 0.10 & 7.16 & 0.06\
NGC931 & 8.38 & 0.05 & 7.75 & 0.05 & 6.87 & 0.04 & 5.78 & 0.03\
NGC1056 & 9.26 & 0.11 & 9.32 & 0.12 & 8.23 & 0.12 & 5.50 & 0.10\
NGC1097 & 7.73 & 0.10 & 7.75 & 0.11 & 6.54 & 0.10 & 3.98 & 0.10\
NGC1125 & 10.05 & 0.11 & 9.67 & 0.11 & 8.45 & 0.10 & 6.51 & 0.05\
NGC1143-4 & 9.68 & 0.09 & 9.44 & 0.08 & 8.37 & 0.09 & 5.66 & 0.09\
M-2-8-39 & 10.86 & 0.14 & 10.25 & 0.15 & 9.27 & 0.13 & 7.35 & 0.07\
NGC1194 & 9.29 & 0.08 & 8.38 & 0.07 & 7.31 & 0.05 & 6.10 & 0.04\
NGC1241 & 9.87 & 0.11 & 9.86 & 0.12 & 9.56 & 0.16 & 7.11 & 0.09\
NGC1320 & 9.10 & 0.08 & 8.47 & 0.07 & 7.48 & 0.06 & 5.96 & 0.04\
NGC1365 & 7.23 & 0.07 & 6.85 & 0.06 & 5.70 & 0.08 & 3.25 & 0.09\
NGC1386 & 8.64 & 0.08 & 8.15 & 0.07 & 7.16 & 0.05 & 5.56 & 0.03\
F03450+0055 & 9.11 & 0.08 & 8.46 & 0.07 & 7.74 & 0.06 & 6.61 & 0.04\
NGC1566 & 8.67 & 0.09 & 8.59 & 0.08 & 7.94 & 0.07 & 6.14 & 0.05\
F04385-0828 & 9.09 & 0.08 & 8.04 & 0.06 & 7.04 & 0.05 & 5.64 & 0.03\
NGC1667 & 9.60 & 0.11 & 9.65 & 0.12 & 8.41 & 0.14 & 5.96 & 0.11\
E33-G2 & 9.40 & 0.09 & 8.69 & 0.08 & 7.91 & 0.07 & 6.65 & 0.05\
M-5-13-17 & 9.74 & 0.10 & 9.38 & 0.10 & 8.26 & 0.09 & 6.65 & 0.07\
MRK6 & 8.71 & 0.06 & 8.12 & 0.06 & 7.60 & 0.05 & 6.63 & 0.05\
MRK79 & 8.73 & 0.06 & 8.05 & 0.06 & 7.24 & 0.05 & 6.05 & 0.04\
NGC2639 & 9.17 & 0.11 & 9.25 & 0.12 & 8.67 & 0.15 & 7.27 & 0.12\
MRK704 & 8.84 & 0.07 & 8.06 & 0.06 & 7.35 & 0.05 & 6.08 & 0.04\
NGC2992 & 8.46 & 0.06 & 8.07 & 0.05 & 7.25 & 0.05 & 5.35 & 0.05\
MRK1239 & 7.70 & 0.04 & 7.03 & 0.04 & 6.31 & 0.03 & 5.27 & 0.03\
NGC3079 & 8.12 & 0.09 & 8.09 & 0.09 & 6.61 & 0.09 & 3.78 & 0.08\
NGC3227 & 8.48 & 0.06 & 8.16 & 0.06 & 7.18 & 0.06 & 5.36 & 0.04\
NGC3511 & 10.03 & 0.12 & 10.13 & 0.13 & 9.41 & 0.21 & 6.20 & 0.11\
NGC3516 & 8.42 & 0.06 & 7.96 & 0.06 & 7.27 & 0.06 & 6.16 & 0.04\
M+0-29-23 & 9.93 & 0.11 & 9.65 & 0.11 & 8.30 & 0.09 & 5.59 & 0.06\
NGC3660 & 10.68 & 0.13 & 10.66 & 0.16 & 10.35 & 0.34 & 7.65 & 0.12\
NGC3982 & 9.91 & 0.11 & 9.95 & 0.12 & 8.81 & 0.14 & 6.36 & 0.10\
NGC4051 & 8.53 & 0.06 & 7.94 & 0.06 & 7.10 & 0.04 & 5.56 & 0.03\
UGC7064 & 9.98 & 0.10 & 9.67 & 0.11 & 8.88 & 0.09 & 6.62 & 0.12\
NGC4151 & 7.42 & 0.05 & 6.67 & 0.03 & 5.86 & 0.03 & 4.52 & 0.02\
MRK766 & 9.06 & 0.07 & 8.32 & 0.07 & 7.60 & 0.06 & 5.99 & 0.04\
NGC4388 & 8.79 & 0.07 & 8.35 & 0.06 & 7.11 & 0.05 & 5.49 & 0.08\
NGC4501 & 8.53 & 0.10 & 8.61 & 0.11 & 8.42 & 0.14 & 6.98 & 0.11\
NGC4579 & 8.25 & 0.09 & 8.23 & 0.09 & 7.87 & 0.08 & 7.12 & 0.06\
NGC4593 & 8.38 & 0.07 & 7.82 & 0.06 & 7.12 & 0.05 & 5.86 & 0.03\
NGC4594 & 7.04 & 0.10 & 7.16 & 0.11 & 6.93 & 0.10 & 6.95 & 0.09\
NGC4602 & 10.23 & 0.12 & 10.26 & 0.13 & 9.38 & 0.19 & 7.37 & 0.10\
TOL1238-364 & 9.36 & 0.09 & 8.88 & 0.08 & 7.77 & 0.07 & 5.47 & 0.10\
M-2-33-34 & 10.12 & 0.11 & 9.87 & 0.12 & 9.06 & 0.13 & 7.39 & 0.08\
NGC4941 & 9.65 & 0.10 & 9.58 & 0.11 & 9.02 & 0.28 & 8.01 & 0.13\
NGC4968 & 9.66 & 0.09 & 9.04 & 0.09 & 8.02 & 0.07 & 6.20 & 0.04\
NGC5005 & 7.81 & 0.10 & 7.85 & 0.10 & 7.35 & 0.10 & 5.63 & 0.09\
NGC5033 & 8.26 & 0.10 & 8.30 & 0.10 & 7.59 & 0.10 & 5.45 & 0.10\
NGC513 & 9.68 & 0.10 & 9.45 & 0.10 & 8.72 & 0.13 & 6.32 & 0.09\
M-6-30-15 & 8.55 & 0.06 & 7.87 & 0.06 & 7.16 & 0.05 & 6.07 & 0.04\
NGC5256 & 10.19 & 0.12 & 10.01 & 0.12 & 8.56 & 0.11 & 5.67 & 0.06\
IC4329A & 7.38 & 0.03 & 6.68 & 0.04 & 5.94 & 0.03 & 4.93 & 0.02\
NGC5347 & 9.87 & 0.10 & 9.21 & 0.10 & 8.18 & 0.07 & 6.56 & 0.05\
NGC5506 & 7.02 & 0.03 & 6.25 & 0.03 & 5.49 & 0.02 & 4.41 & 0.02\
NGC5548 & 9.49 & 0.05 & 8.85 & 0.04 & 7.96 & 0.03 & 6.48 & 0.02\
MRK817 & 9.10 & 0.07 & 8.45 & 0.07 & 7.61 & 0.06 & 6.33 & 0.04\
NGC5929 & 10.33 & 0.12 & 10.36 & 0.14 & 9.59 & 0.20 & 8.52 & 0.14\
NGC5953 & 9.13 & 0.10 & 9.24 & 0.10 & 7.70 & 0.11 & 4.85 & 0.10\
M-2-40-4 & 8.51 & 0.06 & 7.86 & 0.06 & 7.12 & 0.04 & 5.58 & 0.04\
F15480-0344 & 10.38 & 0.12 & 9.66 & 0.12 & 8.75 & 0.10 & 7.03 & 0.07\
NGC6810 & 8.14 & 0.09 & 8.17 & 0.10 & 7.22 & 0.09 & 4.49 & 0.08\
NGC6860 & 9.06 & 0.07 & 8.44 & 0.07 & 7.57 & 0.06 & 6.36 & 0.04\
NGC6890 & 9.50 & 0.10 & 9.21 & 0.10 & 8.35 & 0.09 & 6.35 & 0.06\
IC5063 & 8.79 & 0.07 & 7.90 & 0.06 & 6.80 & 0.05 & 5.14 & 0.03\
UGC11680 & 10.02 & 0.10 & 9.64 & 0.11 & 9.09 & 0.11 & 7.28 & 0.10\
NGC7130 & 9.43 & 0.08 & 9.16 & 0.07 & 7.92 & 0.07 & 5.15 & 0.06\
NGC7172 & 8.36 & 0.07 & 7.88 & 0.06 & 7.00 & 0.07 & 5.31 & 0.06\
NGC7213 & 8.02 & 0.08 & 7.79 & 0.08 & 7.31 & 0.07 & 6.43 & 0.06\
NGC7314 & 9.94 & 0.10 & 9.53 & 0.11 & 8.87 & 0.19 & 7.37 & 0.08\
M-3-58-7 & 8.94 & 0.07 & 8.25 & 0.07 & 7.49 & 0.05 & 6.04 & 0.04\
NGC7469 & 8.18 & 0.05 & 7.69 & 0.04 & 6.57 & 0.05 & 4.33 & 0.04\
NGC7496 & 9.99 & 0.11 & 9.83 & 0.11 & 8.46 & 0.10 & 5.93 & 0.06\
NGC7582 & 7.57 & 0.05 & 7.00 & 0.04 & 5.87 & 0.05 & 3.73 & 0.05\
NGC7590 & 9.44 & 0.11 & 9.55 & 0.12 & 8.85 & 0.16 & 6.37 & 0.11\
NGC7603 & 8.23 & 0.05 & 7.65 & 0.05 & 7.02 & 0.04 & 5.83 & 0.03\
NGC7674 & 8.99 & 0.04 & 8.20 & 0.03 & 7.23 & 0.03 & 5.40 & 0.02\
CGCG381-051 & 11.14 & 0.15 & 10.83 & 0.17 & 9.67 & 0.20 & 6.91 & 0.07\
[lrrrrrrrrr]{}\
H$_2$S(1) & 17.05 & 51 & (5) & 51 & (4) & 137 & (14) & 135 & (10)\
$[$NeII$]$ & 12.81 & 183 & (11) & 190 & (12) & 345 & (22) & 352 & (22)\
$[$NeIII$]$ & 15.56 & 22 & (6) & 23 & (6) & 41 & (12) & 44 & (11)\
$[$OIV$]$ & 25.91 & 30 & (7) & 31 & (6) & 38 & (8) & 39 & (8)\
$[$SIII$]$ & 18.71 & 18 & (3) & 16 & (3) & 51 & (7) & 44 & (7)\
$[$SIII$]$ & 33.50 & 63 & (14) & 61 & (15) & 43 & (10) & 42 & (11)\
$[$SiII$]$ & 34.82 & 260 & (27) & 262 & (26) & 165 & (17) & 166 & (16)\
PAH & 6.22 & 2520 & (110) & 2570 & (120) & 3890 & (170) & 3950 & (180)\
PAH & 11.33 & 1380 & (50) & 1370 & (51) & 5200 & (190) & 5170 & (190)\
\
$[$NeII$]$ & 12.81 & 16 & (1) & 16 & (1) & 44 & (4) & 42 & (2)\
$[$NeIII$]$ & 15.56 & 11 & (1) & 10 & (1) & 28 & (4) & 25 & (2)\
$[$NeV$]$ & 14.32 & $< 4$ & & 2 & (1) & $<10$ & & 6 & (2)\
$[$SIII$]$ & 18.71 & $< 4$ & & 5 & (1) & $<9$ & & 11 & (3)\
PAH & 6.22 & $< 56$ & & 40 & (9) & $<160$ & & 113 & (26)\
PAH & 11.33 & 70 & (7) & 69 & (5) & 165 & (17) & 161 & (11)\
[lrrrrrrrrrrrrrrrrrrrrrr]{} MRK335 & $< 29 $ & & $< 105 $ & & $< 29 $ & & $ 58 $ & (10) & $ 37 $ & (11) & $ 28 $ & (8) & $< 7 $ & & $ 22 $ & (6) & $ 27 $ & (6) & $ 31 $ & (4) & $< 18 $ &\
MRK938 & $ 735 $ & (34) & $ 1470 $ & (150) & $ 789 $ & (45) & $ 943 $ & (40) & $ 415 $ & (29) & $ 451 $ & (21) & $ 152 $ & (4) & $ 378 $ & (12) & $ 223 $ & (15) & $ 487 $ & (20) & $ 225 $ & (26)\
E12-G21 & $ 208 $ & (21) & $< 201 $ & & $ 258 $ & (18) & $ 263 $ & (17) & $ 57 $ & (11) & $ 148 $ & (9) & $ 33 $ & (4) & $ 190 $ & (10) & $ 67 $ & (8) & $ 142 $ & (15) & $ 96 $ & (8)\
MRK348 & $< 37 $ & & $ 244 $ & (31) & $< 44 $ & & $ 113 $ & (16) & $< 39 $ & & $ 58 $ & (9) & $ 14 $ & (3) & $ 51 $ & (6) & $< 28 $ & & $ 24 $ & (7) & $< 46 $ &\
NGC424 & $ 41 $ & (12) & $ 509 $ & (76) & $ 137 $ & (33) & $ 189 $ & (38) & $ 145 $ & (24) & $ 88 $ & (19) & $< 21 $ & & $ 55 $ & (14) & $ 57 $ & (14) & $< 56 $ & & $< 60 $ &\
NGC526A & $ 22 $ & (7) & $ 207 $ & (29) & $ 35 $ & (8) & $ 37 $ & (11) & $ 43 $ & (9) & $ 26 $ & (6) & $< 7 $ & & $ 30 $ & (6) & $ 45 $ & (5) & $ 26 $ & (5) & $< 31 $ &\
NGC513 & $ 186 $ & (20) & $< 312 $ & & $ 355 $ & (31) & $ 236 $ & (31) & $< 84 $ & & $ 153 $ & (19) & $ 99 $ & (3) & $ 150 $ & (10) & $ 76 $ & (12) & $ 105 $ & (12) & $ 109 $ & (11)\
F01475-0740 & $ 40 $ & (9) & $ 90 $ & (29) & $ 103 $ & (10) & $< 33 $ & & $< 23 $ & & $ 30 $ & (6) & $ 13 $ & (2) & $ 69 $ & (5) & $ 60 $ & (6) & $ 66 $ & (6) & $< 27 $ &\
NGC931 & $ 78 $ & (10) & $< 176 $ & & $ 73 $ & (22) & $ 118 $ & (26) & $< 50 $ & & $ 39 $ & (12) & $ 16 $ & (4) & $ 112 $ & (7) & $ 35 $ & (9) & $ 36 $ & (9) & $ 36 $ & (12)\
NGC1056 & $ 648 $ & (33) & $ 665 $ & (88) & $ 898 $ & (42) & $ 716 $ & (48) & $ 225 $ & (34) & $ 468 $ & (12) & $ 202 $ & (11) & $ 395 $ & (30) & $ 157 $ & (11) & $ 320 $ & (16) & $ 205 $ & (18)\
NGC1097 & $ 2219 $ & (96) & $ 2710 $ & (240) & $ 2850 $ & (150) & $ 3190 $ & (180) & $ 1050 $ & (130) & $ 1625 $ & (36) & $ 642 $ & (30) & $ 1703 $ & (78) & $ 868 $ & (41) & $ 1810 $ & (140) & $ 548 $ & (42)\
NGC1125 & $ 167 $ & (15) & $ 435 $ & (82) & $ 206 $ & (26) & $ 192 $ & (21) & $ 93 $ & (8) & $ 79 $ & (6) & $ 51 $ & (3) & $ 130 $ & (7) & $ 52 $ & (7) & $ 117 $ & (8) & $ 56 $ & (8)\
NGC1143-4 & $ 364 $ & (24) & $ 402 $ & (48) & $ 484 $ & (14) & $ 474 $ & (12) & $ 125 $ & (7) & $ 227 $ & (6) & $ 83 $ & (4) & $ 252 $ & (11) & $ 110 $ & (4) & $ 204 $ & (10) & $ 142 $ & (9)\
M-2-8-39 & $< 27 $ & & $< 188 $ & & $< 53 $ & & $< 50 $ & & $< 31 $ & & $ 23 $ & (7) & $< 5 $ & & $< 15 $ & & $< 20 $ & & $ 13 $ & (4) & $ 84 $ & (14)\
NGC1194 & $ 32 $ & (8) & $< 138 $ & & $ 113 $ & (17) & $ 96 $ & (18) & $< 46 $ & & $< 34 $ & & $< 8 $ & & $ 50 $ & (7) & $ 48 $ & (12) & $ 35 $ & (7) & $ 64 $ & (10)\
NGC1241 & $ 112 $ & (16) & $ 338 $ & (93) & $< 104 $ & & $ 190 $ & (38) & $ 63 $ & (18) & $ 62 $ & (12) & $ 31 $ & (3) & $ 79 $ & (8) & $ 32 $ & (10) & $ 45 $ & (5) & $ 81 $ & (9)\
NGC1320 & $ 118 $ & (10) & $< 255 $ & & $ 213 $ & (29) & $ 195 $ & (32) & $ 61 $ & (17) & $ 95 $ & (13) & $ 39 $ & (4) & $ 98 $ & (8) & $ 25 $ & (7) & $ 76 $ & (10) & $< 37 $ &\
NGC1365 & $ 3730 $ & (180) & $ 4330 $ & (580) & $ 4890 $ & (260) & $ 5610 $ & (310) & $ 1720 $ & (190) & $ 2436 $ & (84) & $ 839 $ & (42) & $ 2510 $ & (110) & $ 1243 $ & (64) & $ 2590 $ & (180) & $ 1220 $ & (82)\
NGC1386 & $ 85 $ & (15) & $< 264 $ & & $ 270 $ & (28) & $ 274 $ & (30) & $ 82 $ & (16) & $ 93 $ & (10) & $ 42 $ & (4) & $ 220 $ & (9) & $ 108 $ & (14) & $ 148 $ & (14) & $ 146 $ & (21)\
F03450+0055 & $ 32 $ & (6) & $ 249 $ & (36) & $< 24 $ & & $ 69 $ & (7) & $ 76 $ & (6) & $ 28 $ & (5) & $< 6 $ & & $< 16 $ & & $ 33 $ & (6) & $ 36 $ & (6) & $< 30 $ &\
NGC1566 & $ 160 $ & (14) & $ 286 $ & (53) & $ 218 $ & (17) & $ 303 $ & (20) & $ 106 $ & (14) & $ 122 $ & (9) & $ 79 $ & (3) & $ 227 $ & (9) & $ 85 $ & (6) & $ 142 $ & (8) & $ 159 $ & (7)\
F04385-0828 & $ 54 $ & (11) & $< 134 $ & & $ 103 $ & (18) & $< 51 $ & & $< 44 $ & & $< 33 $ & & $< 10 $ & & $ 104 $ & (9) & $ 91 $ & (12) & $ 119 $ & (11) & $< 45 $ &\
NGC1667 & $ 387 $ & (29) & $ 563 $ & (85) & $ 436 $ & (35) & $ 518 $ & (40) & $ 121 $ & (29) & $ 290 $ & (20) & $ 106 $ & (5) & $ 279 $ & (14) & $ 102 $ & (12) & $ 230 $ & (11) & $ 159 $ & (17)\
E33-G2 & $< 40 $ & & $ 335 $ & (54) & $< 50 $ & & $< 47 $ & & $ 55 $ & (8) & $ 39 $ & (7) & $< 8 $ & & $ 33 $ & (8) & $< 33 $ & & $< 18 $ & & $< 21 $ &\
M-5-13-17 & $ 114 $ & (9) & $< 227 $ & & $ 96 $ & (23) & $ 143 $ & (24) & $ 44 $ & (12) & $ 102 $ & (9) & $ 56 $ & (3) & $ 103 $ & (7) & $ 35 $ & (6) & $ 47 $ & (6) & $ 61 $ & (9)\
MRK6 & $ 30 $ & (7) & $< 132 $ & & $ 51 $ & (13) & $< 24 $ & & $< 30 $ & & $< 28 $ & & $ 12 $ & (2) & $< 16 $ & & $< 15 $ & & $< 22 $ & & $ 73 $ & (14)\
MRK79 & $ 29 $ & (6) & $ 335 $ & (74) & $< 62 $ & & $ 63 $ & (14) & $ 57 $ & (9) & $< 21 $ & & $< 9 $ & & $ 51 $ & (7) & $ 40 $ & (7) & $ 50 $ & (10) & $< 47 $ &\
NGC2639 & $ 68 $ & (17) & $< 249 $ & & $ 119 $ & (28) & $ 110 $ & (27) & $< 66 $ & & $< 51 $ & & $ 31 $ & (3) & $ 80 $ & (9) & $< 36 $ & & $ 39 $ & (8) & $ 61 $ & (10)\
MRK704 & $< 29 $ & & $< 92 $ & & $< 39 $ & & $ 62 $ & (16) & $< 36 $ & & $< 24 $ & & $< 6 $ & & $< 18 $ & & $< 22 $ & & $< 23 $ & & $< 30 $ &\
NGC2992 & $ 352 $ & (26) & $ 706 $ & (96) & $ 434 $ & (29) & $ 521 $ & (34) & $ 193 $ & (18) & $ 258 $ & (13) & $ 119 $ & (5) & $ 330 $ & (14) & $ 142 $ & (12) & $ 187 $ & (27) & $ 274 $ & (25)\
MRK1239 & $ 45 $ & (13) & $ 246 $ & (58) & $ 133 $ & (28) & $ 134 $ & (17) & $ 90 $ & (15) & $ 74 $ & (13) & $ 24 $ & (6) & $ 54 $ & (13) & $ 59 $ & (12) & $ 78 $ & (11) & $< 42 $ &\
NGC3079 & $ 2570 $ & (120) & $ 4140 $ & (340) & $ 3350 $ & (150) & $ 3960 $ & (190) & $ 1330 $ & (140) & $ 1233 $ & (73) & $ 361 $ & (18) & $ 1372 $ & (51) & $ 859 $ & (45) & $ 1804 $ & (95) & $ 949 $ & (45)\
NGC3227 & $ 329 $ & (16) & $ 572 $ & (62) & $ 370 $ & (30) & $ 506 $ & (22) & $ 164 $ & (14) & $ 239 $ & (12) & $ 120 $ & (9) & $ 391 $ & (24) & $ 171 $ & (13) & $ 216 $ & (18) & $ 328 $ & (27)\
NGC3511 & $ 321 $ & (22) & $ 315 $ & (69) & $ 420 $ & (25) & $ 407 $ & (29) & $ 143 $ & (25) & $ 230 $ & (15) & $ 103 $ & (4) & $ 213 $ & (11) & $ 78 $ & (8) & $ 170 $ & (9) & $ 144 $ & (11)\
NGC3516 & $< 38 $ & & $< 256 $ & & $ 75 $ & (24) & $< 67 $ & & $ 58 $ & (14) & $< 28 $ & & $ 11 $ & (3) & $ 31 $ & (6) & $< 28 $ & & $< 27 $ & & $< 37 $ &\
M+0-29-23 & $ 409 $ & (23) & $ 787 $ & (69) & $ 487 $ & (22) & $ 538 $ & (25) & $ 202 $ & (14) & $ 300 $ & (12) & $ 112 $ & (4) & $ 305 $ & (8) & $ 138 $ & (6) & $ 281 $ & (14) & $ 193 $ & (13)\
NGC3660 & $ 57 $ & (15) & $ 201 $ & (47) & $< 64 $ & & $ 175 $ & (23) & $ 45 $ & (10) & $ 36 $ & (8) & $ 23 $ & (2) & $ 27 $ & (5) & $ 21 $ & (6) & $ 43 $ & (6) & $< 32 $ &\
NGC3982 & $ 258 $ & (32) & $ 480 $ & (110) & $ 443 $ & (46) & $ 352 $ & (52) & $ 122 $ & (33) & $ 218 $ & (12) & $ 102 $ & (4) & $ 187 $ & (10) & $ 67 $ & (10) & $ 130 $ & (13) & $ 153 $ & (11)\
NGC4051 & $ 139 $ & (10) & $< 176 $ & & $ 260 $ & (20) & $ 214 $ & (22) & $ 77 $ & (14) & $ 96 $ & (10) & $ 66 $ & (5) & $ 171 $ & (12) & $ 75 $ & (10) & $ 143 $ & (9) & $ 133 $ & (16)\
UGC7064 & $ 122 $ & (16) & $< 254 $ & & $ 229 $ & (26) & $ 134 $ & (21) & $ 44 $ & (11) & $ 132 $ & (9) & $ 35 $ & (4) & $ 110 $ & (12) & $ 46 $ & (10) & $ 70 $ & (6) & $ 66 $ & (10)\
NGC4151 & $ 55 $ & (14) & $< 465 $ & & $ 287 $ & (55) & $ 472 $ & (64) & $ 276 $ & (47) & $ 235 $ & (41) & $ 45 $ & (14) & $< 78 $ & & $< 77 $ & & $ 168 $ & (40) & $< 208 $ &\
MRK766 & $ 102 $ & (13) & $ 594 $ & (48) & $ 56 $ & (17) & $ 192 $ & (19) & $ 99 $ & (14) & $ 118 $ & (10) & $ 35 $ & (4) & $ 103 $ & (8) & $ 56 $ & (9) & $< 40 $ & & $ 150 $ & (22)\
NGC4388 & $ 233 $ & (9) & $ 440 $ & (140) & $ 393 $ & (53) & $ 451 $ & (53) & $ 134 $ & (16) & $ 186 $ & (14) & $ 82 $ & (7) & $ 350 $ & (19) & $ 180 $ & (24) & $ 270 $ & (29) & $ 244 $ & (24)\
NGC4501 & $< 75 $ & & $< 197 $ & & $ 79 $ & (25) & $ 102 $ & (22) & $< 52 $ & & $< 37 $ & & $ 36 $ & (4) & $ 124 $ & (9) & $ 58 $ & (12) & $ 83 $ & (10) & $ 90 $ & (7)\
NGC4579 & $ 63 $ & (11) & $ 213 $ & (41) & $< 43 $ & & $ 55 $ & (15) & $< 31 $ & & $ 23 $ & (6) & $ 24 $ & (2) & $ 95 $ & (4) & $ 53 $ & (7) & $ 55 $ & (10) & $ 63 $ & (5)\
NGC4593 & $ 90 $ & (10) & $< 171 $ & & $ 145 $ & (20) & $ 170 $ & (20) & $ 76 $ & (15) & $ 72 $ & (12) & $ 30 $ & (4) & $ 94 $ & (9) & $ 27 $ & (9) & $ 58 $ & (11) & $< 47 $ &\
NGC4594 & $ 34 $ & (10) & $< 172 $ & & $< 51 $ & & $ 84 $ & (20) & $< 33 $ & & $< 16 $ & & $ 7 $ & (2) & $ 16 $ & (3) & $< 10 $ & & $< 17 $ & & $< 7 $ &\
TOL1238-364 & $ 335 $ & (19) & $ 550 $ & (89) & $ 399 $ & (35) & $ 394 $ & (40) & $ 94 $ & (15) & $ 221 $ & (10) & $ 97 $ & (6) & $ 324 $ & (16) & $ 168 $ & (13) & $ 253 $ & (18) & $< 58 $ &\
NGC4602 & $ 107 $ & (11) & $ 353 $ & (64) & $< 74 $ & & $ 145 $ & (21) & $< 52 $ & & $ 104 $ & (14) & $ 39 $ & (3) & $ 82 $ & (6) & $ 47 $ & (8) & $ 65 $ & (8) & $ 59 $ & (6)\
M-2-33-34 & $ 40 $ & (7) & $< 229 $ & & $ 76 $ & (22) & $ 56 $ & (16) & $< 33 $ & & $< 33 $ & & $ 24 $ & (2) & $ 67 $ & (4) & $ 25 $ & (4) & $ 55 $ & (5) & $ 57 $ & (8)\
NGC4941 & $ 36 $ & (5) & $ 173 $ & (22) & $ 35 $ & (8) & $< 24 $ & & $ 28 $ & (4) & $ 20 $ & (2) & $< 9 $ & & $ 40 $ & (6) & $ 30 $ & (6) & $ 26 $ & (7) & $< 29 $ &\
NGC4968 & $ 133 $ & (9) & $ 364 $ & (41) & $ 181 $ & (17) & $ 156 $ & (11) & $ 108 $ & (16) & $ 92 $ & (12) & $ 41 $ & (4) & $ 145 $ & (9) & $ 77 $ & (9) & $ 83 $ & (9) & $ 59 $ & (16)\
NGC5005 & $ 326 $ & (22) & $ 722 $ & (76) & $ 350 $ & (29) & $ 534 $ & (34) & $ 194 $ & (29) & $ 246 $ & (17) & $ 195 $ & (10) & $ 525 $ & (27) & $ 226 $ & (13) & $ 386 $ & (28) & $ 358 $ & (13)\
NGC5033 & $ 483 $ & (28) & $ 579 $ & (94) & $ 650 $ & (39) & $ 751 $ & (44) & $ 243 $ & (27) & $ 357 $ & (12) & $ 179 $ & (6) & $ 472 $ & (16) & $ 187 $ & (12) & $ 383 $ & (18) & $ 319 $ & (15)\
NGC5135 & $ 854 $ & (26) & $ 1150 $ & (130) & $ 1085 $ & (38) & $ 1205 $ & (43) & $ 387 $ & (29) & $ 645 $ & (20) & $ 276 $ & (12) & $ 691 $ & (31) & $ 349 $ & (16) & $ 638 $ & (51) & $ 368 $ & (33)\
M-6-30-15 & $ 30 $ & (9) & $ 223 $ & (60) & $< 57 $ & & $ 84 $ & (15) & $ 43 $ & (9) & $ 29 $ & (6) & $< 10 $ & & $ 61 $ & (8) & $ 44 $ & (6) & $ 25 $ & (6) & $< 29 $ &\
NGC5256 & $ 364 $ & (23) & $ 661 $ & (62) & $ 570 $ & (18) & $ 471 $ & (22) & $ 139 $ & (11) & $ 241 $ & (8) & $ 76 $ & (4) & $ 250 $ & (12) & $ 137 $ & (13) & $ 248 $ & (24) & $ 160 $ & (16)\
IC4329A & $ 53 $ & (14) & $ 572 $ & (70) & $< 92 $ & & $ 141 $ & (32) & $ 168 $ & (22) & $ 98 $ & (17) & $< 17 $ & & $ 65 $ & (14) & $ 114 $ & (15) & $< 66 $ & & $< 106 $ &\
NGC5347 & $ 51 $ & (10) & $< 180 $ & & $< 52 $ & & $ 70 $ & (15) & $ 48 $ & (9) & $ 27 $ & (7) & $ 12 $ & (3) & $ 73 $ & (6) & $ 62 $ & (7) & $ 85 $ & (11) & $< 34 $ &\
NGC5506 & $ 161 $ & (21) & $< 443 $ & & $ 429 $ & (50) & $ 580 $ & (41) & $< 92 $ & & $ 82 $ & (21) & $< 39 $ & & $ 352 $ & (38) & $ 295 $ & (49) & $ 329 $ & (42) & $ 132 $ & (34)\
NGC5548 & $ 59 $ & (8) & $ 149 $ & (39) & $< 41 $ & & $ 65 $ & (15) & $< 40 $ & & $ 48 $ & (12) & $ 15 $ & (3) & $ 92 $ & (6) & $< 26 $ & & $ 68 $ & (6) & $< 31 $ &\
MRK817 & $ 71 $ & (13) & $ 234 $ & (60) & $ 97 $ & (16) & $ 147 $ & (11) & $< 38 $ & & $ 60 $ & (10) & $ 12 $ & (3) & $ 54 $ & (7) & $ 47 $ & (8) & $ 46 $ & (8) & $< 37 $ &\
NGC5929 & $ 41 $ & (9) & $< 213 $ & & $< 77 $ & & $< 69 $ & & $< 33 $ & & $< 24 $ & & $ 20 $ & (3) & $ 53 $ & (7) & $< 17 $ & & $ 25 $ & (7) & $< 22 $ &\
NGC5953 & $ 1030 $ & (56) & $ 1080 $ & (200) & $ 1437 $ & (82) & $ 1424 $ & (96) & $ 439 $ & (56) & $ 749 $ & (22) & $ 322 $ & (17) & $ 748 $ & (44) & $ 301 $ & (21) & $ 604 $ & (48) & $ 431 $ & (36)\
M-2-40-4 & $ 133 $ & (12) & $< 291 $ & & $ 282 $ & (32) & $ 193 $ & (26) & $ 37 $ & (11) & $ 121 $ & (9) & $ 50 $ & (5) & $ 186 $ & (9) & $ 69 $ & (8) & $ 145 $ & (11) & $ 57 $ & (14)\
F15480-0344 & $< 37 $ & & $< 202 $ & & $< 67 $ & & $ 75 $ & (14) & $< 21 $ & & $< 17 $ & & $ 22 $ & (3) & $ 47 $ & (6) & $ 38 $ & (5) & $ 30 $ & (6) & $< 39 $ &\
NGC6810 & $ 1238 $ & (58) & $ 1600 $ & (170) & $ 1600 $ & (66) & $ 1699 $ & (80) & $ 484 $ & (37) & $ 819 $ & (25) & $ 385 $ & (19) & $ 895 $ & (49) & $ 408 $ & (20) & $ 837 $ & (55) & $ 420 $ & (64)\
NGC6860 & $ 87 $ & (11) & $ 256 $ & (62) & $< 63 $ & & $ 134 $ & (10) & $ 56 $ & (14) & $ 63 $ & (12) & $ 16 $ & (3) & $ 100 $ & (6) & $ 36 $ & (11) & $ 62 $ & (10) & $ 57 $ & (10)\
NGC6890 & $ 134 $ & (13) & $< 74 $ & & $ 254 $ & (27) & $ 296 $ & (30) & $ 64 $ & (12) & $ 85 $ & (10) & $ 55 $ & (4) & $ 158 $ & (11) & $ 76 $ & (6) & $ 101 $ & (8) & $ 118 $ & (12)\
IC5063 & $< 31 $ & & $ 397 $ & (73) & $< 114 $ & & $ 129 $ & (30) & $< 55 $ & & $ 67 $ & (15) & $< 17 $ & & $ 106 $ & (16) & $ 118 $ & (21) & $ 85 $ & (26) & $< 78 $ &\
UGC11680 & $ 59 $ & (14) & $< 252 $ & & $< 85 $ & & $ 125 $ & (23) & $ 66 $ & (12) & $< 28 $ & & $ 16 $ & (3) & $ 49 $ & (7) & $ 28 $ & (6) & $ 19 $ & (5) & $ 33 $ & (8)\
NGC7130 & $ 630 $ & (20) & $ 1070 $ & (120) & $ 752 $ & (39) & $ 907 $ & (30) & $ 319 $ & (16) & $ 486 $ & (12) & $ 188 $ & (6) & $ 500 $ & (16) & $ 248 $ & (13) & $ 445 $ & (24) & $ 258 $ & (20)\
NGC7172 & $ 262 $ & (23) & $ 898 $ & (81) & $ 210 $ & (55) & $ 452 $ & (62) & $< 153 $ & & $< 47 $ & & $ 27 $ & (4) & $ 175 $ & (12) & $ 127 $ & (16) & $ 211 $ & (11) & $ 217 $ & (14)\
NGC7213 & $ 76 $ & (11) & $ 299 $ & (72) & $< 76 $ & & $ 90 $ & (24) & $< 43 $ & & $ 78 $ & (11) & $ 30 $ & (4) & $ 159 $ & (10) & $ 103 $ & (7) & $ 72 $ & (11) & $< 37 $ &\
NGC7314 & $ 25 $ & (6) & $< 120 $ & & $< 72 $ & & $ 53 $ & (10) & $< 24 $ & & $< 19 $ & & $< 8 $ & & $ 31 $ & (5) & $< 14 $ & & $< 19 $ & & $ 57 $ & (6)\
M-3-58-7 & $ 87 $ & (9) & $ 283 $ & (32) & $ 122 $ & (17) & $ 197 $ & (11) & $ 124 $ & (12) & $ 53 $ & (10) & $ 28 $ & (3) & $ 99 $ & (6) & $ 42 $ & (6) & $ 66 $ & (7) & $ 56 $ & (14)\
NGC7469 & $ 1209 $ & (39) & $ 1780 $ & (130) & $ 1447 $ & (44) & $ 1565 $ & (32) & $ 471 $ & (20) & $ 806 $ & (17) & $ 316 $ & (12) & $ 858 $ & (31) & $ 382 $ & (19) & $ 675 $ & (37) & $ 383 $ & (56)\
NGC7496 & $ 329 $ & (19) & $ 335 $ & (59) & $ 434 $ & (18) & $ 361 $ & (17) & $ 136 $ & (17) & $ 182 $ & (15) & $ 81 $ & (4) & $ 230 $ & (11) & $ 89 $ & (28) & $ 195 $ & (26) & $ 146 $ & (11)\
NGC7582 & $ 2042 $ & (88) & $ 2900 $ & (270) & $ 2860 $ & (110) & $ 3300 $ & (130) & $ 1162 $ & (84) & $ 1390 $ & (46) & $ 517 $ & (27) & $ 1625 $ & (74) & $ 935 $ & (48) & $ 1780 $ & (110) & $ 730 $ & (58)\
NGC7590 & $ 252 $ & (22) & $ 380 $ & (100) & $ 343 $ & (32) & $ 293 $ & (30) & $ 91 $ & (21) & $ 191 $ & (13) & $ 85 $ & (6) & $ 191 $ & (14) & $ 61 $ & (15) & $ 125 $ & (8) & $ 129 $ & (10)\
NGC7603 & $ 123 $ & (12) & $< 186 $ & & $ 100 $ & (20) & $ 177 $ & (11) & $ 64 $ & (11) & $ 59 $ & (8) & $ 44 $ & (3) & $ 138 $ & (7) & $ 84 $ & (8) & $ 108 $ & (8) & $ 72 $ & (9)\
NGC7674 & $ 230 $ & (13) & $ 248 $ & (69) & $ 398 $ & (22) & $ 375 $ & (20) & $ 113 $ & (20) & $ 167 $ & (14) & $ 63 $ & (5) & $ 195 $ & (11) & $ 85 $ & (10) & $ 152 $ & (13) & $ 126 $ & (25)\
CGCG381-051 & $ 111 $ & (15) & $ 215 $ & (57) & $ 114 $ & (15) & $ 112 $ & (10) & $< 32 $ & & $ 110 $ & (8) & $ 29 $ & (3) & $ 66 $ & (7) & $ 34 $ & (7) & $ 64 $ & (8) & $< 22 $ &\
[lrrrrrrrrrrrrrrrrrrrrrr]{}

MRK335 & $< 32 $ & & $< 167 $ & & $< 48 $ & & $ 102 $ & (18) & $ 71 $ & (21) & $ 56 $ & (16) & $< 14 $ & & $ 50 $ & (13) & $ 71 $ & (15) & $ 91 $ & (11) & $< 65 $ &\
MRK938 & $ 2700 $ & (130) & $ 5880 $ & (590) & $ 3150 $ & (180) & $ 3730 $ & (160) & $ 1620 $ & (110) & $ 1797 $ & (82) & $ 596 $ & (17) & $ 1429 $ & (44) & $ 622 $ & (41) & $ 1133 $ & (48) & $ 388 $ & (45)\
E12-G21 & $ 500 $ & (51) & $< 554 $ & & $ 723 $ & (50) & $ 757 $ & (50) & $ 174 $ & (35) & $ 465 $ & (27) & $ 139 $ & (15) & $ 803 $ & (42) & $ 303 $ & (36) & $ 682 $ & (74) & $ 640 $ & (54)\
MRK348 & $< 44 $ & & $ 362 $ & (46) & $< 67 $ & & $ 181 $ & (25) & $< 66 $ & & $ 104 $ & (16) & $ 25 $ & (5) & $ 96 $ & (12) & $< 52 $ & & $ 44 $ & (13) & $< 85 $ &\
NGC424 & $ 12 $ & (3) & $ 191 $ & (29) & $ 53 $ & (13) & $ 76 $ & (15) & $ 63 $ & (10) & $ 38 $ & (8) & $< 10 $ & & $ 26 $ & (6) & $ 30 $ & (7) & $< 31 $ & & $< 41 $ &\
NGC526A & $ 33 $ & (9) & $ 381 $ & (53) & $ 66 $ & (14) & $ 72 $ & (22) & $ 87 $ & (19) & $ 51 $ & (13) & $< 13 $ & & $ 57 $ & (11) & $ 96 $ & (11) & $ 58 $ & (11) & $< 87 $ &\
NGC513 & $ 613 $ & (67) & $< 1145 $ & & $ 1330 $ & (120) & $ 900 $ & (120) & $< 337 $ & & $ 632 $ & (79) & $ 541 $ & (18) & $ 832 $ & (55) & $ 445 $ & (72) & $ 648 $ & (71) & $ 804 $ & (80)\
F01475-0740 & $ 113 $ & (26) & $ 251 $ & (81) & $ 287 $ & (27) & $< 92 $ & & $< 65 $ & & $ 84 $ & (16) & $ 30 $ & (5) & $ 161 $ & (11) & $ 153 $ & (15) & $ 173 $ & (16) & $< 62 $ &\
NGC931 & $ 57 $ & (7) & $< 146 $ & & $ 62 $ & (19) & $ 102 $ & (23) & $< 44 $ & & $ 35 $ & (11) & $ 17 $ & (4) & $ 124 $ & (8) & $ 41 $ & (10) & $ 44 $ & (10) & $ 56 $ & (18)\
NGC1056 & $ 2340 $ & (120) & $ 2630 $ & (350) & $ 3580 $ & (170) & $ 2880 $ & (190) & $ 920 $ & (140) & $ 1943 $ & (50) & $ 998 $ & (56) & $ 1960 $ & (150) & $ 821 $ & (56) & $ 1745 $ & (86) & $ 1230 $ & (110)\
NGC1097 & $ 1734 $ & (75) & $ 2370 $ & (210) & $ 2530 $ & (130) & $ 2880 $ & (160) & $ 970 $ & (120) & $ 1490 $ & (33) & $ 668 $ & (31) & $ 1794 $ & (82) & $ 1010 $ & (48) & $ 2280 $ & (180) & $ 650 $ & (50)\
NGC1125 & $ 767 $ & (71) & $ 2620 $ & (490) & $ 1270 $ & (160) & $ 1220 $ & (130) & $ 615 $ & (55) & $ 541 $ & (41) & $ 296 $ & (16) & $ 733 $ & (40) & $ 231 $ & (32) & $ 448 $ & (30) & $ 167 $ & (25)\
NGC1143-4 & $ 1342 $ & (88) & $ 1600 $ & (190) & $ 1940 $ & (57) & $ 1924 $ & (49) & $ 530 $ & (28) & $ 1009 $ & (27) & $ 563 $ & (28) & $ 1688 $ & (73) & $ 642 $ & (25) & $ 1121 $ & (58) & $ 812 $ & (49)\
M-2-8-39 & $< 87 $ & & $< 599 $ & & $< 167 $ & & $< 158 $ & & $< 98 $ & & $ 72 $ & (21) & $< 14 $ & & $< 41 $ & & $< 55 $ & & $ 36 $ & (11) & $ 256 $ & (42)\
NGC1194 & $ 30 $ & (7) & $< 155 $ & & $ 131 $ & (20) & $ 115 $ & (21) & $< 61 $ & & $< 52 $ & & $< 19 $ & & $ 121 $ & (16) & $ 98 $ & (26) & $ 64 $ & (13) & $ 164 $ & (25)\
NGC1241 & $ 750 $ & (100) & $ 2910 $ & (810) & $< 919 $ & & $ 1730 $ & (340) & $ 590 $ & (170) & $ 600 $ & (120) & $ 285 $ & (26) & $ 725 $ & (72) & $ 275 $ & (82) & $ 372 $ & (38) & $ 661 $ & (70)\
NGC1320 & $ 125 $ & (11) & $< 290 $ & & $ 246 $ & (33) & $ 229 $ & (37) & $ 74 $ & (20) & $ 116 $ & (16) & $ 48 $ & (4) & $ 121 $ & (10) & $ 32 $ & (9) & $ 101 $ & (13) & $< 53 $ &\
NGC1365 & $ 1235 $ & (58) & $ 1420 $ & (190) & $ 1600 $ & (84) & $ 1830 $ & (100) & $ 566 $ & (63) & $ 817 $ & (28) & $ 341 $ & (17) & $ 1016 $ & (46) & $ 482 $ & (25) & $ 985 $ & (68) & $ 446 $ & (30)\
NGC1386 & $ 68 $ & (12) & $< 216 $ & & $ 223 $ & (23) & $ 230 $ & (25) & $ 73 $ & (14) & $ 91 $ & (9) & $ 50 $ & (5) & $ 264 $ & (10) & $ 115 $ & (15) & $ 149 $ & (14) & $ 185 $ & (27)\
F03450+0055 & $ 42 $ & (8) & $ 431 $ & (63) & $< 42 $ & & $ 127 $ & (13) & $ 144 $ & (12) & $ 51 $ & (10) & $< 10 $ & & $< 27 $ & & $ 64 $ & (12) & $ 75 $ & (12) & $< 71 $ &\
NGC1566 & $ 303 $ & (26) & $ 640 $ & (120) & $ 500 $ & (40) & $ 720 $ & (47) & $ 268 $ & (35) & $ 321 $ & (24) & $ 270 $ & (12) & $ 788 $ & (31) & $ 307 $ & (21) & $ 534 $ & (30) & $ 719 $ & (33)\
F04385-0828 & $ 33 $ & (6) & $< 84 $ & & $ 66 $ & (12) & $< 33 $ & & $< 32 $ & & $< 26 $ & & $< 13 $ & & $ 137 $ & (12) & $ 105 $ & (14) & $ 128 $ & (12) & $< 55 $ &\
NGC1667 & $ 2450 $ & (180) & $ 3890 $ & (590) & $ 3000 $ & (240) & $ 3550 $ & (280) & $ 820 $ & (200) & $ 1950 $ & (130) & $ 745 $ & (37) & $ 1975 $ & (97) & $ 731 $ & (83) & $ 1693 $ & (81) & $ 1300 $ & (140)\
E33-G2 & $< 55 $ & & $ 550 $ & (88) & $< 84 $ & & $< 82 $ & & $ 102 $ & (15) & $ 72 $ & (13) & $< 16 $ & & $ 68 $ & (15) & $< 71 $ & & $< 42 $ & & $< 59 $ &\
M-5-13-17 & $ 327 $ & (25) & $< 718 $ & & $ 309 $ & (75) & $ 466 $ & (77) & $ 148 $ & (41) & $ 352 $ & (32) & $ 182 $ & (9) & $ 337 $ & (23) & $ 120 $ & (22) & $ 163 $ & (20) & $ 199 $ & (28)\
MRK6 & $ 34 $ & (8) & $< 209 $ & & $ 85 $ & (22) & $< 41 $ & & $< 58 $ & & $< 56 $ & & $ 23 $ & (4) & $< 32 $ & & $< 32 $ & & $< 48 $ & & $ 146 $ & (27)\
MRK79 & $ 24 $ & (4) & $ 363 $ & (80) & $< 69 $ & & $ 74 $ & (17) & $ 71 $ & (12) & $< 26 $ & & $< 12 $ & & $ 72 $ & (10) & $ 60 $ & (11) & $ 81 $ & (16) & $< 89 $ &\
NGC2639 & $ 470 $ & (120) & $< 2493 $ & & $ 1220 $ & (280) & $ 1150 $ & (280) & $< 715 $ & & $< 559 $ & & $ 378 $ & (39) & $ 990 $ & (110) & $< 465 $ & & $ 540 $ & (120) & $ 1220 $ & (200)\
MRK704 & $< 22 $ & & $< 86 $ & & $< 37 $ & & $ 63 $ & (16) & $< 38 $ & & $< 26 $ & & $< 8 $ & & $< 22 $ & & $< 31 $ & & $< 34 $ & & $< 61 $ &\
NGC2992 & $ 382 $ & (28) & $ 940 $ & (130) & $ 588 $ & (39) & $ 723 $ & (47) & $ 275 $ & (26) & $ 368 $ & (19) & $ 152 $ & (7) & $ 420 $ & (18) & $ 175 $ & (15) & $ 223 $ & (32) & $ 345 $ & (32)\
MRK1239 & $ 15 $ & (4) & $ 112 $ & (26) & $ 63 $ & (13) & $ 66 $ & (8) & $ 47 $ & (7) & $ 39 $ & (6) & $ 15 $ & (3) & $ 34 $ & (8) & $ 42 $ & (8) & $ 64 $ & (9) & $< 47 $ &\
NGC3079 & $ 3950 $ & (180) & $ 9210 $ & (750) & $ 7650 $ & (340) & $ 9310 $ & (450) & $ 3330 $ & (350) & $ 3400 $ & (200) & $ 1417 $ & (72) & $ 5170 $ & (190) & $ 2180 $ & (110) & $ 3600 $ & (190) & $ 2540 $ & (120)\
NGC3227 & $ 294 $ & (14) & $ 562 $ & (60) & $ 370 $ & (30) & $ 517 $ & (22) & $ 174 $ & (15) & $ 259 $ & (12) & $ 137 $ & (10) & $ 447 $ & (27) & $ 195 $ & (15) & $ 245 $ & (20) & $ 340 $ & (28)\
NGC3511 & $ 2980 $ & (210) & $ 3040 $ & (670) & $ 4020 $ & (240) & $ 3850 $ & (270) & $ 1320 $ & (230) & $ 2100 $ & (130) & $ 1006 $ & (41) & $ 2070 $ & (110) & $ 769 $ & (83) & $ 1742 $ & (94) & $ 1940 $ & (140)\
NGC3516 & $< 29 $ & & $< 243 $ & & $ 73 $ & (23) & $< 68 $ & & $ 64 $ & (15) & $< 32 $ & & $ 15 $ & (4) & $ 43 $ & (7) & $< 43 $ & & $< 42 $ & & $< 63 $ &\
M+0-29-23 & $ 1314 $ & (74) & $ 2530 $ & (220) & $ 1569 $ & (71) & $ 1740 $ & (81) & $ 666 $ & (48) & $ 1023 $ & (40) & $ 465 $ & (15) & $ 1247 $ & (33) & $ 487 $ & (22) & $ 920 $ & (44) & $ 634 $ & (43)\
NGC3660 & $ 710 $ & (190) & $ 2800 $ & (650) & $< 893 $ & & $ 2460 $ & (320) & $ 630 $ & (140) & $ 480 $ & (110) & $ 311 $ & (26) & $ 361 $ & (73) & $ 309 $ & (90) & $ 688 $ & (96) & $< 481 $ &\
NGC3982 & $ 1850 $ & (230) & $ 3360 $ & (770) & $ 3020 $ & (320) & $ 2350 $ & (350) & $ 780 $ & (210) & $ 1382 $ & (77) & $ 636 $ & (25) & $ 1166 $ & (65) & $ 412 $ & (62) & $ 796 $ & (81) & $ 924 $ & (67)\
NGC4051 & $ 99 $ & (7) & $< 134 $ & & $ 201 $ & (15) & $ 168 $ & (17) & $ 63 $ & (12) & $ 79 $ & (8) & $ 58 $ & (4) & $ 153 $ & (11) & $ 71 $ & (9) & $ 144 $ & (8) & $ 155 $ & (19)\
UGC7064 & $ 521 $ & (68) & $< 1222 $ & & $ 1120 $ & (120) & $ 670 $ & (100) & $ 227 $ & (56) & $ 685 $ & (48) & $ 172 $ & (21) & $ 544 $ & (58) & $ 235 $ & (52) & $ 365 $ & (34) & $ 328 $ & (48)\
NGC4151 & $ 13 $ & (3) & $< 123 $ & & $ 77 $ & (15) & $ 129 $ & (17) & $ 76 $ & (13) & $ 65 $ & (11) & $ 10 $ & (3) & $< 18 $ & & $< 19 $ & & $ 43 $ & (10) & $< 57 $ &\
MRK766 & $ 114 $ & (15) & $ 855 $ & (69) & $ 83 $ & (25) & $ 294 $ & (29) & $ 157 $ & (22) & $ 187 $ & (16) & $ 46 $ & (4) & $ 137 $ & (10) & $ 75 $ & (12) & $< 52 $ & & $ 178 $ & (26)\
NGC4388 & $ 184 $ & (7) & $ 360 $ & (110) & $ 330 $ & (44) & $ 388 $ & (46) & $ 125 $ & (14) & $ 188 $ & (15) & $ 109 $ & (8) & $ 458 $ & (24) & $ 201 $ & (27) & $ 272 $ & (29) & $ 217 $ & (22)\
NGC4501 & $< 225 $ & & $< 856 $ & & $ 360 $ & (110) & $ 490 $ & (110) & $< 273 $ & & $< 205 $ & & $ 335 $ & (34) & $ 1168 $ & (87) & $ 560 $ & (120) & $ 850 $ & (110) & $ 1580 $ & (130)\
NGC4579 & $ 124 $ & (22) & $ 630 $ & (120) & $< 131 $ & & $ 179 $ & (48) & $< 111 $ & & $ 83 $ & (21) & $ 104 $ & (7) & $ 429 $ & (20) & $ 293 $ & (41) & $ 338 $ & (64) & $ 455 $ & (33)\
NGC4593 & $ 69 $ & (7) & $< 152 $ & & $ 132 $ & (19) & $ 157 $ & (18) & $ 72 $ & (14) & $ 69 $ & (11) & $ 32 $ & (4) & $ 101 $ & (10) & $ 33 $ & (10) & $ 75 $ & (14) & $< 82 $ &\
NGC4594 & $ 26 $ & (8) & $< 240 $ & & $< 76 $ & & $ 139 $ & (33) & $< 64 $ & & $< 33 $ & & $ 26 $ & (6) & $ 65 $ & (13) & $< 48 $ & & $< 100 $ & & $< 71 $ &\
TOL1238-364 & $ 453 $ & (26) & $ 750 $ & (120) & $ 548 $ & (49) & $ 545 $ & (55) & $ 133 $ & (21) & $ 316 $ & (14) & $ 115 $ & (7) & $ 384 $ & (18) & $ 191 $ & (15) & $ 276 $ & (19) & $< 50 $ &\
NGC4602 & $ 1230 $ & (130) & $ 4570 $ & (830) & $< 955 $ & & $ 1870 $ & (280) & $< 659 $ & & $ 1310 $ & (170) & $ 506 $ & (32) & $ 1052 $ & (71) & $ 620 $ & (110) & $ 880 $ & (110) & $ 826 $ & (80)\
M-2-33-34 & $ 177 $ & (31) & $< 1134 $ & & $ 380 $ & (110) & $ 291 $ & (84) & $< 180 $ & & $< 183 $ & & $ 151 $ & (11) & $ 422 $ & (26) & $ 156 $ & (25) & $ 345 $ & (32) & $ 334 $ & (48)\
NGC4941 & $ 158 $ & (20) & $ 930 $ & (120) & $ 190 $ & (41) & $< 134 $ & & $ 168 $ & (26) & $ 120 $ & (12) & $< 55 $ & & $ 256 $ & (36) & $ 185 $ & (34) & $ 160 $ & (40) & $< 153 $ &\
NGC4968 & $ 220 $ & (14) & $ 648 $ & (73) & $ 325 $ & (31) & $ 283 $ & (21) & $ 199 $ & (29) & $ 170 $ & (22) & $ 64 $ & (5) & $ 226 $ & (14) & $ 117 $ & (14) & $ 123 $ & (13) & $ 81 $ & (22)\
NGC5005 & $ 373 $ & (25) & $ 1120 $ & (120) & $ 564 $ & (46) & $ 908 $ & (57) & $ 368 $ & (55) & $ 504 $ & (34) & $ 717 $ & (36) & $ 1928 $ & (98) & $ 809 $ & (47) & $ 1410 $ & (100) & $ 1860 $ & (65)\
NGC5033 & $ 844 $ & (49) & $ 1220 $ & (200) & $ 1402 $ & (85) & $ 1663 $ & (98) & $ 568 $ & (62) & $ 870 $ & (30) & $ 638 $ & (21) & $ 1693 $ & (58) & $ 680 $ & (45) & $ 1443 $ & (69) & $ 1684 $ & (80)\
NGC5135 & $ 1194 $ & (37) & $ 1690 $ & (180) & $ 1604 $ & (56) & $ 1802 $ & (64) & $ 593 $ & (45) & $ 1010 $ & (31) & $ 475 $ & (20) & $ 1177 $ & (52) & $ 549 $ & (26) & $ 946 $ & (76) & $ 437 $ & (39)\
M-6-30-15 & $ 25 $ & (7) & $ 237 $ & (64) & $< 62 $ & & $ 95 $ & (17) & $ 51 $ & (11) & $ 35 $ & (7) & $< 12 $ & & $ 78 $ & (9) & $ 61 $ & (8) & $ 37 $ & (8) & $< 49 $ &\
NGC5256 & $ 2160 $ & (140) & $ 4000 $ & (380) & $ 3440 $ & (110) & $ 2830 $ & (130) & $ 841 $ & (65) & $ 1478 $ & (49) & $ 493 $ & (28) & $ 1587 $ & (73) & $ 730 $ & (68) & $ 1180 $ & (110) & $ 573 $ & (59)\
IC4329A & $ 14 $ & (4) & $ 198 $ & (24) & $< 32 $ & & $ 52 $ & (12) & $ 65 $ & (8) & $ 38 $ & (6) & $< 6 $ & & $ 26 $ & (6) & $ 50 $ & (6) & $< 30 $ & & $< 56 $ &\
NGC5347 & $ 97 $ & (19) & $< 324 $ & & $< 93 $ & & $ 124 $ & (27) & $ 84 $ & (16) & $ 47 $ & (12) & $ 19 $ & (4) & $ 116 $ & (10) & $ 98 $ & (11) & $ 137 $ & (18) & $< 57 $ &\
NGC5506 & $ 30 $ & (3) & $< 104 $ & & $ 104 $ & (12) & $ 147 $ & (10) & $< 26 $ & & $ 26 $ & (6) & $< 20 $ & & $ 181 $ & (20) & $ 132 $ & (22) & $ 138 $ & (18) & $ 70 $ & (18)\
NGC5548 & $ 92 $ & (12) & $ 267 $ & (70) & $< 74 $ & & $ 121 $ & (27) & $< 77 $ & & $ 92 $ & (23) & $ 25 $ & (5) & $ 160 $ & (11) & $< 47 $ & & $ 133 $ & (12) & $< 64 $ &\
MRK817 & $ 96 $ & (18) & $ 400 $ & (100) & $ 167 $ & (28) & $ 259 $ & (19) & $< 67 $ & & $ 103 $ & (18) & $ 17 $ & (4) & $ 82 $ & (11) & $ 77 $ & (13) & $ 78 $ & (13) & $< 59 $ &\
NGC5929 & $ 480 $ & (100) & $< 2941 $ & & $< 1085 $ & & $< 995 $ & & $< 500 $ & & $< 371 $ & & $ 417 $ & (58) & $ 1100 $ & (150) & $< 367 $ & & $ 570 $ & (150) & $< 483 $ &\
NGC5953 & $ 3170 $ & (170) & $ 3310 $ & (610) & $ 4370 $ & (250) & $ 4270 $ & (290) & $ 1290 $ & (160) & $ 2184 $ & (65) & $ 1000 $ & (51) & $ 2330 $ & (140) & $ 942 $ & (67) & $ 1920 $ & (150) & $ 1420 $ & (120)\
M-2-40-4 & $ 89 $ & (8) & $< 210 $ & & $ 208 $ & (23) & $ 146 $ & (20) & $ 30 $ & (9) & $ 103 $ & (7) & $ 56 $ & (5) & $ 210 $ & (11) & $ 80 $ & (9) & $ 181 $ & (14) & $ 94 $ & (22)\
F15480-0344 & $< 90 $ & & $< 521 $ & & $< 175 $ & & $ 200 $ & (36) & $< 57 $ & & $< 48 $ & & $ 50 $ & (5) & $ 110 $ & (14) & $ 95 $ & (12) & $ 75 $ & (14) & $< 89 $ &\
NGC6810 & $ 1485 $ & (70) & $ 1960 $ & (210) & $ 1955 $ & (81) & $ 2063 $ & (97) & $ 580 $ & (44) & $ 974 $ & (30) & $ 408 $ & (20) & $ 944 $ & (52) & $ 425 $ & (21) & $ 861 $ & (56) & $ 310 $ & (48)\
NGC6860 & $ 98 $ & (13) & $ 374 $ & (90) & $< 95 $ & & $ 213 $ & (16) & $ 96 $ & (24) & $ 112 $ & (21) & $ 34 $ & (5) & $ 214 $ & (12) & $ 84 $ & (27) & $ 154 $ & (24) & $ 169 $ & (31)\
NGC6890 & $ 280 $ & (27) & $< 164 $ & & $ 588 $ & (64) & $ 701 $ & (72) & $ 159 $ & (31) & $ 215 $ & (24) & $ 164 $ & (13) & $ 474 $ & (32) & $ 231 $ & (18) & $ 312 $ & (24) & $ 381 $ & (39)\
IC5063 & $< 13 $ & & $ 160 $ & (30) & $< 46 $ & & $ 53 $ & (12) & $< 23 $ & & $ 29 $ & (6) & $< 8 $ & & $ 49 $ & (7) & $ 54 $ & (9) & $ 38 $ & (12) & $< 35 $ &\
UGC11680 & $ 219 $ & (54) & $< 1114 $ & & $< 386 $ & & $ 590 $ & (110) & $ 326 $ & (59) & $< 139 $ & & $ 83 $ & (15) & $ 258 $ & (35) & $ 157 $ & (35) & $ 113 $ & (31) & $ 207 $ & (49)\
NGC7130 & $ 1614 $ & (52) & $ 2740 $ & (310) & $ 1914 $ & (99) & $ 2285 $ & (75) & $ 784 $ & (39) & $ 1176 $ & (30) & $ 363 $ & (12) & $ 957 $ & (30) & $ 442 $ & (22) & $ 747 $ & (40) & $ 332 $ & (26)\
NGC7172 & $ 278 $ & (24) & $ 1210 $ & (110) & $ 290 $ & (76) & $ 642 $ & (88) & $< 238 $ & & $< 84 $ & & $ 126 $ & (17) & $ 780 $ & (52) & $ 350 $ & (43) & $ 470 $ & (25) & $ 853 $ & (56)\
NGC7213 & $ 87 $ & (13) & $ 510 $ & (120) & $< 134 $ & & $ 170 $ & (45) & $< 87 $ & & $ 156 $ & (23) & $ 44 $ & (5) & $ 242 $ & (15) & $ 193 $ & (12) & $ 151 $ & (24) & $< 77 $ &\
NGC7314 & $ 116 $ & (25) & $< 638 $ & & $< 387 $ & & $ 286 $ & (57) & $< 132 $ & & $< 105 $ & & $< 40 $ & & $ 159 $ & (28) & $< 68 $ & & $< 85 $ & & $ 285 $ & (28)\
M-3-58-7 & $ 92 $ & (9) & $ 387 $ & (44) & $ 171 $ & (24) & $ 284 $ & (16) & $ 184 $ & (18) & $ 78 $ & (14) & $ 43 $ & (4) & $ 153 $ & (8) & $ 68 $ & (9) & $ 112 $ & (12) & $ 99 $ & (24)\
NGC7469 & $ 733 $ & (24) & $ 1142 $ & (83) & $ 934 $ & (29) & $ 1019 $ & (21) & $ 309 $ & (13) & $ 526 $ & (11) & $ 178 $ & (6) & $ 483 $ & (18) & $ 214 $ & (11) & $ 370 $ & (20) & $ 160 $ & (24)\
NGC7496 & $ 1533 $ & (87) & $ 1270 $ & (220) & $ 1600 $ & (68) & $ 1282 $ & (59) & $ 455 $ & (58) & $ 597 $ & (49) & $ 241 $ & (13) & $ 679 $ & (34) & $ 249 $ & (77) & $ 516 $ & (69) & $ 281 $ & (21)\
NGC7582 & $ 757 $ & (33) & $ 1240 $ & (120) & $ 1248 $ & (47) & $ 1484 $ & (57) & $ 563 $ & (41) & $ 731 $ & (24) & $ 369 $ & (20) & $ 1137 $ & (52) & $ 549 $ & (28) & $ 943 $ & (56) & $ 345 $ & (28)\
NGC7590 & $ 1590 $ & (140) & $ 2910 $ & (780) & $ 2690 $ & (250) & $ 2320 $ & (230) & $ 730 $ & (170) & $ 1530 $ & (100) & $ 693 $ & (47) & $ 1580 $ & (120) & $ 530 $ & (130) & $ 1156 $ & (69) & $ 1490 $ & (110)\
NGC7603 & $ 83 $ & (8) & $< 166 $ & & $ 93 $ & (18) & $ 173 $ & (11) & $ 69 $ & (12) & $ 65 $ & (9) & $ 69 $ & (5) & $ 220 $ & (11) & $ 159 $ & (14) & $ 229 $ & (17) & $ 242 $ & (30)\
NGC7674 & $ 184 $ & (10) & $ 225 $ & (63) & $ 367 $ & (20) & $ 356 $ & (19) & $ 113 $ & (20) & $ 173 $ & (15) & $ 61 $ & (4) & $ 191 $ & (11) & $ 83 $ & (9) & $ 151 $ & (13) & $ 124 $ & (25)\
CGCG381-051 & $ 750 $ & (100) & $ 1330 $ & (360) & $ 698 $ & (89) & $ 674 $ & (62) & $< 178 $ & & $ 571 $ & (43) & $ 120 $ & (11) & $ 279 $ & (29) & $ 163 $ & (32) & $ 327 $ & (42) & $< 85 $ &\
[lrrrrrrrrrrrrrrrr]{}

MRK335 & $< 72 $ & & $< 88 $ & & $< 64 $ & & $< 55 $ & & $ 53 $ & (12) & $ 21 $ & (6) & $ 17 $ & (5) & $< 18 $ &\
MRK938 & $ 160 $ & (32) & $< 205 $ & & $ 151 $ & (22) & $ 249 $ & (57) & $ 128 $ & (13) & $ 143 $ & (22) & $ 195 $ & (22) & $< 153 $ &\
E12-G21 & $ 55 $ & (15) & $< 132 $ & & $< 67 $ & & $< 83 $ & & $ 40 $ & (11) & $< 34 $ & & $< 21 $ & & $< 22 $ &\
MRK348 & $< 79 $ & & $< 85 $ & & $ 62 $ & (17) & $< 74 $ & & $< 26 $ & & $< 28 $ & & $< 39 $ & & $ 23 $ & (6)\
NGC424 & $< 96 $ & & $< 76 $ & & $< 68 $ & & $< 177 $ & & $< 55 $ & & $< 54 $ & & $< 35 $ & & $< 56 $ &\
NGC526A & $ 76 $ & (19) & $ 40 $ & (11) & $ 73 $ & (10) & $ 55 $ & (11) & $ 29 $ & (9) & $ 22 $ & (4) & $ 33 $ & (9) & $< 28 $ &\
NGC513 & $< 94 $ & & $< 135 $ & & $< 75 $ & & $< 137 $ & & $ 81 $ & (18) & $< 47 $ & & $ 41 $ & (9) & $< 26 $ &\
F01475-0740 & $< 111 $ & & $< 54 $ & & $ 47 $ & (11) & $< 49 $ & & $< 24 $ & & $ 29 $ & (8) & $< 23 $ & & $< 43 $ &\
NGC931 & $< 130 $ & & $< 78 $ & & $ 65 $ & (20) & $< 112 $ & & $< 47 $ & & $< 25 $ & & $< 29 $ & & $< 42 $ &\
NGC1056 & $< 144 $ & & $< 196 $ & & $< 97 $ & & $< 197 $ & & $ 85 $ & (17) & $< 51 $ & & $< 47 $ & & $< 32 $ &\
NGC1097 & $< 235 $ & & $< 595 $ & & $ 186 $ & (43) & $< 772 $ & & $ 319 $ & (31) & $< 503 $ & & $ 374 $ & (36) & $< 82 $ &\
NGC1125 & $< 81 $ & & $< 79 $ & & $< 52 $ & & $< 44 $ & & $< 22 $ & & $< 30 $ & & $< 21 $ & & $< 52 $ &\
NGC1143-4 & $< 47 $ & & $< 160 $ & & $< 41 $ & & $< 55 $ & & $ 62 $ & (7) & $ 51 $ & (13) & $ 91 $ & (7) & $< 45 $ &\
M-2-8-39 & $< 150 $ & & $ 65 $ & (20) & $< 81 $ & & $ 82 $ & (26) & $< 39 $ & & $< 22 $ & & $< 35 $ & & $ 20 $ & (6)\
NGC1194 & $< 57 $ & & $< 54 $ & & $< 50 $ & & $< 67 $ & & $< 23 $ & & $< 40 $ & & $< 25 $ & & $< 31 $ &\
NGC1241 & $< 171 $ & & $< 92 $ & & $< 78 $ & & $< 144 $ & & $< 34 $ & & $ 42 $ & (9) & $ 47 $ & (8) & $< 18 $ &\
NGC1320 & $< 96 $ & & $< 68 $ & & $< 51 $ & & $< 83 $ & & $< 41 $ & & $< 30 $ & & $< 27 $ & & $< 56 $ &\
NGC1365 & $< 259 $ & & $< 1070 $ & & $< 351 $ & & $< 1203 $ & & $ 382 $ & (49) & $< 465 $ & & $< 377 $ & & $< 342 $ &\
NGC1386 & $< 181 $ & & $< 89 $ & & $ 82 $ & (24) & $< 75 $ & & $ 82 $ & (21) & $ 96 $ & (19) & $< 54 $ & & $< 88 $ &\
F03450+0055 & $ 60 $ & (15) & $< 51 $ & & $< 62 $ & & $< 37 $ & & $< 25 $ & & $ 23 $ & (7) & $< 26 $ & & $< 25 $ &\
NGC1566 & $< 80 $ & & $< 85 $ & & $< 35 $ & & $< 78 $ & & $ 121 $ & (11) & $ 77 $ & (10) & $ 149 $ & (10) & $< 22 $ &\
F04385-0828 & $< 90 $ & & $ 66 $ & (19) & $< 76 $ & & $< 82 $ & & $< 34 $ & & $ 55 $ & (15) & $< 38 $ & & $< 58 $ &\
NGC1667 & $< 148 $ & & $< 188 $ & & $< 82 $ & & $< 192 $ & & $< 60 $ & & $ 73 $ & (15) & $ 79 $ & (15) & $< 20 $ &\
E33-G2 & $< 103 $ & & $< 96 $ & & $< 60 $ & & $< 70 $ & & $< 40 $ & & $ 46 $ & (9) & $ 20 $ & (6) & $< 20 $ &\
M-5-13-17 & $< 90 $ & & $< 60 $ & & $< 51 $ & & $< 52 $ & & $ 37 $ & (9) & $< 26 $ & & $< 22 $ & & $< 29 $ &\
MRK6 & $< 77 $ & & $< 43 $ & & $ 71 $ & (16) & $< 40 $ & & $ 59 $ & (10) & $ 27 $ & (7) & $< 35 $ & & $< 52 $ &\
MRK79 & $< 105 $ & & $< 58 $ & & $< 69 $ & & $< 62 $ & & $ 61 $ & (17) & $< 25 $ & & $ 54 $ & (13) & $< 28 $ &\
NGC2639 & $< 103 $ & & $< 100 $ & & $< 100 $ & & $< 121 $ & & $< 74 $ & & $< 41 $ & & $< 25 $ & & $ 17 $ & (5)\
MRK704 & $< 94 $ & & $< 54 $ & & $< 48 $ & & $< 68 $ & & $< 39 $ & & $< 27 $ & & $< 23 $ & & $< 26 $ &\
NGC2992 & $< 176 $ & & $< 154 $ & & $ 100 $ & (24) & $< 129 $ & & $ 108 $ & (13) & $< 106 $ & & $ 187 $ & (26) & $< 76 $ &\
MRK1239 & $< 86 $ & & $< 102 $ & & $< 90 $ & & $< 39 $ & & $< 53 $ & & $< 31 $ & & $< 35 $ & & $< 51 $ &\
NGC3079 & $< 254 $ & & $< 612 $ & & $ 666 $ & (45) & $< 770 $ & & $ 365 $ & (21) & $ 450 $ & (110) & $ 506 $ & (39) & $< 179 $ &\
NGC3227 & $< 93 $ & & $< 97 $ & & $ 240 $ & (38) & $ 134 $ & (42) & $ 197 $ & (24) & $ 112 $ & (24) & $ 277 $ & (31) & $< 87 $ &\
NGC3511 & $< 166 $ & & $< 140 $ & & $< 86 $ & & $< 126 $ & & $< 64 $ & & $< 34 $ & & $ 29 $ & (9) & $ 26 $ & (7)\
NGC3516 & $< 100 $ & & $< 94 $ & & $ 78 $ & (22) & $ 105 $ & (29) & $< 49 $ & & $ 42 $ & (11) & $< 31 $ & & $< 49 $ &\
M+0-29-23 & $< 71 $ & & $< 142 $ & & $ 98 $ & (24) & $< 114 $ & & $ 34 $ & (10) & $ 56 $ & (11) & $ 70 $ & (13) & $< 52 $ &\
NGC3660 & $ 108 $ & (28) & $< 97 $ & & $< 88 $ & & $< 106 $ & & $< 44 $ & & $< 24 $ & & $< 27 $ & & $ 19 $ & (4)\
NGC3982 & $< 152 $ & & $< 191 $ & & $< 92 $ & & $< 201 $ & & $< 58 $ & & $< 46 $ & & $< 28 $ & & $< 25 $ &\
NGC4051 & $< 64 $ & & $< 66 $ & & $< 45 $ & & $< 89 $ & & $ 123 $ & (18) & $< 28 $ & & $< 41 $ & & $< 77 $ &\
UGC7064 & $< 77 $ & & $< 96 $ & & $< 100 $ & & $< 89 $ & & $< 32 $ & & $< 30 $ & & $< 25 $ & & $< 29 $ &\
NGC4151 & $< 228 $ & & $< 65 $ & & $< 59 $ & & $< 233 $ & & $< 129 $ & & $< 150 $ & & $< 178 $ & & $ 200 $ & (60)\
MRK766 & $< 108 $ & & $< 89 $ & & $ 40 $ & (9) & $< 71 $ & & $< 28 $ & & $< 49 $ & & $< 56 $ & & $< 71 $ &\
NGC4388 & $< 199 $ & & $ 57 $ & (17) & $ 121 $ & (25) & $< 210 $ & & $ 143 $ & (14) & $ 172 $ & (38) & $ 135 $ & (30) & $< 127 $ &\
NGC4501 & $< 121 $ & & $< 146 $ & & $ 83 $ & (23) & $ 177 $ & (32) & $ 129 $ & (20) & $ 76 $ & (15) & $ 192 $ & (9) & $ 16 $ & (5)\
NGC4579 & $< 70 $ & & $ 87 $ & (23) & $ 191 $ & (16) & $ 115 $ & (21) & $ 290 $ & (14) & $ 115 $ & (12) & $ 201 $ & (6) & $< 12 $ &\
NGC4593 & $< 76 $ & & $< 61 $ & & $ 64 $ & (15) & $< 81 $ & & $ 55 $ & (16) & $< 22 $ & & $ 59 $ & (17) & $< 52 $ &\
NGC4594 & $< 70 $ & & $< 53 $ & & $< 54 $ & & $< 91 $ & & $< 26 $ & & $< 21 $ & & $ 5 $ & (1) & $< 8 $ &\
TOL1238-364 & $< 76 $ & & $< 108 $ & & $< 48 $ & & $< 152 $ & & $ 55 $ & (16) & $< 64 $ & & $< 46 $ & & $< 99 $ &\
NGC4602 & $< 173 $ & & $< 66 $ & & $ 74 $ & (24) & $< 94 $ & & $< 60 $ & & $< 25 $ & & $ 33 $ & (7) & $ 23 $ & (6)\
M-2-33-34 & $< 88 $ & & $< 49 $ & & $< 61 $ & & $< 44 $ & & $< 22 $ & & $< 17 $ & & $< 21 $ & & $< 44 $ &\
NGC4941 & $ 108 $ & (11) & $ 72 $ & (7) & $ 77 $ & (8) & $ 51 $ & (8) & $ 45 $ & (10) & $ 22 $ & (6) & $< 25 $ & & $< 32 $ &\
NGC4968 & $< 135 $ & & $< 53 $ & & $< 55 $ & & $< 80 $ & & $< 41 $ & & $ 54 $ & (11) & $< 40 $ & & $< 45 $ &\
NGC5005 & $ 187 $ & (52) & $< 127 $ & & $ 431 $ & (28) & $< 167 $ & & $ 376 $ & (21) & $ 187 $ & (36) & $ 393 $ & (15) & $ 61 $ & (19)\
NGC5033 & $< 87 $ & & $< 167 $ & & $ 95 $ & (24) & $< 162 $ & & $ 118 $ & (11) & $ 85 $ & (18) & $ 137 $ & (13) & $ 30 $ & (10)\
NGC5135 & $< 138 $ & & $< 172 $ & & $< 143 $ & & $< 186 $ & & $ 116 $ & (19) & $< 167 $ & & $ 262 $ & (29) & $< 120 $ &\
M-6-30-15 & $< 88 $ & & $< 60 $ & & $< 46 $ & & $< 61 $ & & $< 23 $ & & $< 19 $ & & $< 25 $ & & $< 29 $ &\
NGC5256 & $< 121 $ & & $< 139 $ & & $< 75 $ & & $< 94 $ & & $ 96 $ & (12) & $ 90 $ & (25) & $ 87 $ & (14) & $< 81 $ &\
IC4329A & $< 181 $ & & $< 106 $ & & $< 47 $ & & $< 159 $ & & $< 64 $ & & $< 44 $ & & $< 90 $ & & $< 91 $ &\
NGC5347 & $< 65 $ & & $< 51 $ & & $ 59 $ & (16) & $ 64 $ & (16) & $< 38 $ & & $< 31 $ & & $< 29 $ & & $< 37 $ &\
NGC5506 & $< 332 $ & & $< 127 $ & & $< 58 $ & & $< 157 $ & & $< 77 $ & & $ 166 $ & (45) & $< 55 $ & & $< 167 $ &\
NGC5548 & $< 51 $ & & $ 67 $ & (17) & $ 44 $ & (14) & $< 70 $ & & $ 71 $ & (17) & $ 44 $ & (10) & $ 44 $ & (14) & $< 37 $ &\
MRK817 & $< 155 $ & & $< 97 $ & & $< 64 $ & & $< 56 $ & & $< 63 $ & & $< 33 $ & & $< 32 $ & & $< 52 $ &\
NGC5929 & $< 88 $ & & $ 61 $ & (16) & $ 57 $ & (16) & $< 85 $ & & $ 74 $ & (11) & $ 34 $ & (7) & $ 61 $ & (6) & $< 12 $ &\
NGC5953 & $< 131 $ & & $< 329 $ & & $< 90 $ & & $< 361 $ & & $ 112 $ & (25) & $< 125 $ & & $ 141 $ & (31) & $< 51 $ &\
M-2-40-4 & $< 95 $ & & $< 64 $ & & $< 60 $ & & $ 101 $ & (26) & $< 32 $ & & $ 62 $ & (13) & $< 34 $ & & $< 43 $ &\
F15480-0344 & $< 66 $ & & $< 74 $ & & $ 52 $ & (7) & $< 57 $ & & $< 31 $ & & $< 16 $ & & $ 46 $ & (11) & $ 29 $ & (9)\
NGC6810 & $< 177 $ & & $< 352 $ & & $< 146 $ & & $< 290 $ & & $ 168 $ & (18) & $< 143 $ & & $ 359 $ & (55) & $< 178 $ &\
NGC6860 & $< 126 $ & & $< 56 $ & & $ 69 $ & (20) & $< 44 $ & & $< 33 $ & & $< 40 $ & & $ 41 $ & (12) & $< 21 $ &\
NGC6890 & $< 95 $ & & $< 77 $ & & $< 62 $ & & $< 116 $ & & $< 31 $ & & $< 32 $ & & $ 46 $ & (13) & $< 39 $ &\
IC5063 & $< 95 $ & & $< 61 $ & & $ 88 $ & (16) & $< 138 $ & & $< 67 $ & & $< 58 $ & & $< 67 $ & & $< 129 $ &\
UGC11680 & $< 126 $ & & $< 85 $ & & $< 90 $ & & $< 106 $ & & $< 57 $ & & $< 18 $ & & $ 23 $ & (7) & $< 19 $ &\
NGC7130 & $< 71 $ & & $< 111 $ & & $< 114 $ & & $ 134 $ & (37) & $ 117 $ & (19) & $ 94 $ & (24) & $ 122 $ & (24) & $< 137 $ &\
NGC7172 & $< 80 $ & & $< 135 $ & & $ 72 $ & (15) & $< 273 $ & & $ 47 $ & (4) & $ 56 $ & (17) & $ 51 $ & (12) & $< 36 $ &\
NGC7213 & $< 122 $ & & $< 85 $ & & $< 71 $ & & $< 73 $ & & $ 88 $ & (20) & $< 27 $ & & $ 98 $ & (13) & $< 30 $ &\
NGC7314 & $ 77 $ & (12) & $ 44 $ & (6) & $ 62 $ & (7) & $ 42 $ & (12) & $ 26 $ & (4) & $ 18 $ & (3) & $ 15 $ & (4) & $< 27 $ &\
M-3-58-7 & $ 56 $ & (17) & $< 61 $ & & $< 71 $ & & $ 74 $ & (22) & $< 32 $ & & $< 27 $ & & $< 35 $ & & $< 53 $ &\
NGC7469 & $< 149 $ & & $< 248 $ & & $< 143 $ & & $< 119 $ & & $ 252 $ & (31) & $ 184 $ & (36) & $ 189 $ & (60) & $< 342 $ &\
NGC7496 & $< 78 $ & & $< 109 $ & & $ 99 $ & (26) & $ 99 $ & (25) & $< 160 $ & & $ 238 $ & (35) & $ 62 $ & (17) & $< 75 $ &\
NGC7582 & $< 228 $ & & $< 548 $ & & $< 145 $ & & $< 495 $ & & $ 242 $ & (31) & $ 350 $ & (100) & $ 310 $ & (60) & $< 341 $ &\
NGC7590 & $< 179 $ & & $< 128 $ & & $< 99 $ & & $< 124 $ & & $< 59 $ & & $< 40 $ & & $ 38 $ & (8) & $ 27 $ & (6)\
NGC7603 & $ 71 $ & (11) & $< 66 $ & & $< 55 $ & & $< 62 $ & & $ 57 $ & (14) & $ 31 $ & (9) & $ 40 $ & (12) & $ 22 $ & (6)\
NGC7674 & $< 87 $ & & $< 78 $ & & $ 74 $ & (22) & $< 107 $ & & $ 93 $ & (16) & $< 49 $ & & $< 64 $ & & $< 101 $ &\
CGCG381-051 & $< 75 $ & & $< 98 $ & & $ 76 $ & (24) & $ 64 $ & (20) & $ 22 $ & (6) & $< 27 $ & & $ 33 $ & (11) & $< 31 $ &\
[lrrrrrrrrrrrrrrrr]{}

MRK335 & $< 7 $ & & $< 10 $ & & $< 9 $ & & $< 10 $ & & $ 9 $ & (2) & $ 6 $ & (2) & $ 6 $ & (2) & $< 15 $ &\
MRK938 & $ 50 $ & (10) & $< 74 $ & & $ 60 $ & (9) & $ 98 $ & (22) & $ 77 $ & (8) & $ 37 $ & (6) & $ 34 $ & (4) & $< 11 $ &\
E12-G21 & $ 12 $ & (3) & $< 31 $ & & $< 18 $ & & $< 24 $ & & $ 14 $ & (4) & $< 16 $ & & $< 14 $ & & $< 19 $ &\
MRK348 & $< 8 $ & & $< 10 $ & & $ 8 $ & (2) & $< 12 $ & & $< 5 $ & & $< 5 $ & & $< 7 $ & & $ 10 $ & (3)\
NGC424 & $< 3 $ & & $< 2 $ & & $< 2 $ & & $< 7 $ & & $< 2 $ & & $< 3 $ & & $< 2 $ & & $< 11 $ &\
NGC526A & $ 9 $ & (2) & $ 6 $ & (2) & $ 12 $ & (2) & $ 11 $ & (2) & $ 5 $ & (2) & $ 5 $ & (1) & $ 9 $ & (3) & $< 27 $ &\
NGC513 & $< 28 $ & & $< 44 $ & & $< 26 $ & & $< 54 $ & & $ 38 $ & (9) & $< 28 $ & & $ 30 $ & (7) & $< 23 $ &\
F01475-0740 & $< 35 $ & & $< 16 $ & & $ 13 $ & (3) & $< 14 $ & & $< 6 $ & & $ 8 $ & (2) & $< 5 $ & & $< 18 $ &\
NGC931 & $< 8 $ & & $< 6 $ & & $ 5 $ & (2) & $< 10 $ & & $< 5 $ & & $< 3 $ & & $< 5 $ & & $< 12 $ &\
NGC1056 & $< 45 $ & & $< 70 $ & & $< 37 $ & & $< 80 $ & & $ 37 $ & (8) & $< 27 $ & & $< 28 $ & & $< 15 $ &\
NGC1097 & $< 16 $ & & $< 46 $ & & $ 16 $ & (4) & $< 70 $ & & $ 28 $ & (3) & $< 61 $ & & $ 44 $ & (4) & $< 7 $ &\
NGC1125 & $< 30 $ & & $< 35 $ & & $< 28 $ & & $< 28 $ & & $< 19 $ & & $< 12 $ & & $< 6 $ & & $< 13 $ &\
NGC1143-4 & $< 15 $ & & $< 58 $ & & $< 16 $ & & $< 23 $ & & $ 45 $ & (5) & $ 29 $ & (7) & $ 52 $ & (4) & $< 20 $ &\
M-2-8-39 & $< 53 $ & & $ 22 $ & (7) & $< 26 $ & & $ 26 $ & (8) & $< 12 $ & & $< 6 $ & & $< 11 $ & & $ 17 $ & (5)\
NGC1194 & $< 5 $ & & $< 5 $ & & $< 5 $ & & $< 8 $ & & $< 7 $ & & $< 8 $ & & $< 6 $ & & $< 14 $ &\
NGC1241 & $< 90 $ & & $< 59 $ & & $< 61 $ & & $< 133 $ & & $< 36 $ & & $ 35 $ & (8) & $ 39 $ & (6) & $< 16 $ &\
NGC1320 & $< 10 $ & & $< 7 $ & & $< 6 $ & & $< 10 $ & & $< 5 $ & & $< 4 $ & & $< 4 $ & & $< 13 $ &\
NGC1365 & $< 8 $ & & $< 35 $ & & $< 12 $ & & $< 39 $ & & $ 16 $ & (2) & $< 18 $ & & $< 14 $ & & $< 9 $ &\
NGC1386 & $< 14 $ & & $< 7 $ & & $ 7 $ & (2) & $< 6 $ & & $ 12 $ & (3) & $ 10 $ & (2) & $< 7 $ & & $< 15 $ &\
F03450+0055 & $ 7 $ & (2) & $< 7 $ & & $< 10 $ & & $< 7 $ & & $< 4 $ & & $ 5 $ & (1) & $< 6 $ & & $< 13 $ &\
NGC1566 & $< 13 $ & & $< 16 $ & & $< 7 $ & & $< 19 $ & & $ 36 $ & (3) & $ 28 $ & (4) & $ 68 $ & (4) & $< 13 $ &\
F04385-0828 & $< 6 $ & & $ 4 $ & (1) & $< 5 $ & & $< 6 $ & & $< 6 $ & & $ 6 $ & (2) & $< 5 $ & & $< 9 $ &\
NGC1667 & $< 75 $ & & $< 117 $ & & $< 56 $ & & $< 131 $ & & $< 41 $ & & $ 53 $ & (11) & $ 65 $ & (12) & $< 14 $ &\
E33-G2 & $< 13 $ & & $< 13 $ & & $< 9 $ & & $< 13 $ & & $< 7 $ & & $ 10 $ & (2) & $ 6 $ & (2) & $< 14 $ &\
M-5-13-17 & $< 24 $ & & $< 17 $ & & $< 16 $ & & $< 17 $ & & $ 13 $ & (3) & $< 9 $ & & $< 7 $ & & $< 14 $ &\
MRK6 & $< 7 $ & & $< 5 $ & & $ 10 $ & (2) & $< 7 $ & & $ 12 $ & (2) & $ 6 $ & (2) & $< 7 $ & & $< 22 $ &\
MRK79 & $< 8 $ & & $< 5 $ & & $< 7 $ & & $< 7 $ & & $ 8 $ & (2) & $< 4 $ & & $ 10 $ & (3) & $< 10 $ &\
NGC2639 & $< 46 $ & & $< 66 $ & & $< 90 $ & & $< 129 $ & & $< 84 $ & & $< 54 $ & & $< 50 $ & & $ 41 $ & (12)\
MRK704 & $< 7 $ & & $< 4 $ & & $< 4 $ & & $< 7 $ & & $< 4 $ & & $< 4 $ & & $< 5 $ & & $< 15 $ &\
NGC2992 & $< 16 $ & & $< 16 $ & & $ 12 $ & (3) & $< 18 $ & & $ 14 $ & (2) & $< 13 $ & & $ 24 $ & (3) & $< 14 $ &\
MRK1239 & $< 3 $ & & $< 4 $ & & $< 4 $ & & $< 2 $ & & $< 3 $ & & $< 2 $ & & $< 4 $ & & $< 15 $ &\
NGC3079 & $< 27 $ & & $< 89 $ & & $ 133 $ & (9) & $< 184 $ & & $ 216 $ & (12) & $ 100 $ & (25) & $ 135 $ & (10) & $< 19 $ &\
NGC3227 & $< 8 $ & & $< 9 $ & & $ 23 $ & (4) & $ 14 $ & (4) & $ 22 $ & (3) & $ 13 $ & (3) & $ 29 $ & (3) & $< 13 $ &\
NGC3511 & $< 125 $ & & $< 127 $ & & $< 85 $ & & $< 118 $ & & $< 60 $ & & $< 34 $ & & $ 39 $ & (12) & $ 34 $ & (9)\
NGC3516 & $< 7 $ & & $< 7 $ & & $ 7 $ & (2) & $ 11 $ & (3) & $< 6 $ & & $ 6 $ & (2) & $< 5 $ & & $< 15 $ &\
M+0-29-23 & $< 22 $ & & $< 46 $ & & $ 32 $ & (8) & $< 37 $ & & $ 16 $ & (5) & $ 19 $ & (4) & $ 23 $ & (4) & $< 13 $ &\
NGC3660 & $ 116 $ & (30) & $< 119 $ & & $< 118 $ & & $< 149 $ & & $< 53 $ & & $< 37 $ & & $< 41 $ & & $ 28 $ & (6)\
NGC3982 & $< 94 $ & & $< 136 $ & & $< 67 $ & & $< 132 $ & & $< 38 $ & & $< 28 $ & & $< 17 $ & & $< 15 $ &\
NGC4051 & $< 4 $ & & $< 5 $ & & $< 3 $ & & $< 7 $ & & $ 10 $ & (2) & $< 3 $ & & $< 5 $ & & $< 16 $ &\
UGC7064 & $< 30 $ & & $< 41 $ & & $< 46 $ & & $< 45 $ & & $< 15 $ & & $< 16 $ & & $< 12 $ & & $< 20 $ &\
NGC4151 & $< 5 $ & & $< 2 $ & & $< 2 $ & & $< 6 $ & & $< 3 $ & & $< 4 $ & & $< 5 $ & & $ 14 $ & (4)\
MRK766 & $< 10 $ & & $< 10 $ & & $ 5 $ & (1) & $< 11 $ & & $< 4 $ & & $< 7 $ & & $< 7 $ & & $< 12 $ &\
NGC4388 & $< 16 $ & & $ 5 $ & (1) & $ 10 $ & (2) & $< 18 $ & & $ 24 $ & (2) & $ 18 $ & (4) & $ 12 $ & (3) & $< 12 $ &\
NGC4501 & $< 25 $ & & $< 41 $ & & $ 32 $ & (9) & $ 88 $ & (16) & $ 102 $ & (16) & $ 75 $ & (15) & $ 338 $ & (15) & $ 39 $ & (12)\
NGC4579 & $< 10 $ & & $ 16 $ & (4) & $ 48 $ & (4) & $ 39 $ & (7) & $ 91 $ & (4) & $ 67 $ & (7) & $ 146 $ & (4) & $< 12 $ &\
NGC4593 & $< 5 $ & & $< 5 $ & & $ 5 $ & (1) & $< 8 $ & & $ 5 $ & (2) & $< 3 $ & & $ 10 $ & (3) & $< 16 $ &\
NGC4594 & $< 4 $ & & $< 4 $ & & $< 6 $ & & $< 16 $ & & $< 7 $ & & $< 11 $ & & $ 5 $ & (2) & $< 17 $ &\
TOL1238-364 & $< 10 $ & & $< 15 $ & & $< 7 $ & & $< 21 $ & & $ 8 $ & (2) & $< 7 $ & & $< 4 $ & & $< 11 $ &\
NGC4602 & $< 156 $ & & $< 74 $ & & $ 95 $ & (30) & $< 120 $ & & $< 75 $ & & $< 33 $ & & $ 46 $ & (10) & $ 30 $ & (8)\
M-2-33-34 & $< 36 $ & & $< 22 $ & & $< 29 $ & & $< 23 $ & & $< 13 $ & & $< 11 $ & & $< 12 $ & & $< 34 $ &\
NGC4941 & $ 41 $ & (4) & $ 31 $ & (3) & $ 38 $ & (4) & $ 30 $ & (5) & $ 29 $ & (7) & $ 13 $ & (3) & $< 13 $ & & $< 23 $ &\
NGC4968 & $< 22 $ & & $< 9 $ & & $< 10 $ & & $< 15 $ & & $< 7 $ & & $ 8 $ & (2) & $< 6 $ & & $< 12 $ &\
NGC5005 & $ 17 $ & (5) & $< 14 $ & & $ 59 $ & (4) & $< 30 $ & & $ 121 $ & (7) & $ 68 $ & (13) & $ 205 $ & (8) & $ 20 $ & (6)\
NGC5033 & $< 12 $ & & $< 29 $ & & $ 19 $ & (5) & $< 36 $ & & $ 37 $ & (3) & $ 31 $ & (7) & $ 73 $ & (7) & $ 16 $ & (5)\
NGC5135 & $< 18 $ & & $< 24 $ & & $< 21 $ & & $< 28 $ & & $ 21 $ & (3) & $< 26 $ & & $ 31 $ & (3) & $< 11 $ &\
M-6-30-15 & $< 6 $ & & $< 5 $ & & $< 4 $ & & $< 7 $ & & $< 3 $ & & $< 3 $ & & $< 4 $ & & $< 10 $ &\
NGC5256 & $< 66 $ & & $< 82 $ & & $< 46 $ & & $< 56 $ & & $ 76 $ & (9) & $ 45 $ & (12) & $ 31 $ & (5) & $< 18 $ &\
IC4329A & $< 4 $ & & $< 3 $ & & $< 2 $ & & $< 6 $ & & $< 2 $ & & $< 2 $ & & $< 5 $ & & $< 12 $ &\
NGC5347 & $< 13 $ & & $< 10 $ & & $ 11 $ & (3) & $ 11 $ & (3) & $< 7 $ & & $< 5 $ & & $< 5 $ & & $< 11 $ &\
NGC5506 & $< 5 $ & & $< 2 $ & & $< 1 $ & & $< 4 $ & & $< 5 $ & & $ 7 $ & (2) & $< 3 $ & & $< 12 $ &\
NGC5548 & $< 7 $ & & $ 11 $ & (3) & $ 7 $ & (2) & $< 13 $ & & $ 12 $ & (3) & $ 8 $ & (2) & $ 9 $ & (3) & $< 16 $ &\
MRK817 & $< 18 $ & & $< 13 $ & & $< 10 $ & & $< 10 $ & & $< 9 $ & & $< 6 $ & & $< 5 $ & & $< 12 $ &\
NGC5929 & $< 83 $ & & $ 70 $ & (19) & $ 74 $ & (21) & $< 126 $ & & $ 133 $ & (20) & $ 75 $ & (16) & $ 135 $ & (14) & $< 23 $ &\
NGC5953 & $< 35 $ & & $< 100 $ & & $< 28 $ & & $< 107 $ & & $ 34 $ & (8) & $< 39 $ & & $ 47 $ & (10) & $< 12 $ &\
M-2-40-4 & $< 6 $ & & $< 4 $ & & $< 4 $ & & $ 8 $ & (2) & $< 4 $ & & $ 7 $ & (2) & $< 6 $ & & $< 11 $ &\
F15480-0344 & $< 16 $ & & $< 18 $ & & $ 13 $ & (2) & $< 15 $ & & $< 8 $ & & $< 4 $ & & $ 10 $ & (3) & $ 11 $ & (3)\
NGC6810 & $< 19 $ & & $< 42 $ & & $< 18 $ & & $< 35 $ & & $ 19 $ & (2) & $< 15 $ & & $ 27 $ & (4) & $< 14 $ &\
NGC6860 & $< 13 $ & & $< 6 $ & & $ 9 $ & (3) & $< 7 $ & & $< 6 $ & & $< 10 $ & & $ 12 $ & (4) & $< 16 $ &\
NGC6890 & $< 19 $ & & $< 16 $ & & $< 14 $ & & $< 28 $ & & $< 9 $ & & $< 10 $ & & $ 15 $ & (4) & $< 17 $ &\
IC5063 & $< 4 $ & & $< 3 $ & & $ 4 $ & (1) & $< 6 $ & & $< 4 $ & & $< 3 $ & & $< 3 $ & & $< 9 $ &\
UGC11680 & $< 42 $ & & $< 31 $ & & $< 37 $ & & $< 51 $ & & $< 27 $ & & $< 10 $ & & $ 15 $ & (5) & $< 21 $ &\
NGC7130 & $< 17 $ & & $< 28 $ & & $< 29 $ & & $ 34 $ & (9) & $ 27 $ & (4) & $ 16 $ & (4) & $ 16 $ & (3) & $< 14 $ &\
NGC7172 & $< 6 $ & & $< 14 $ & & $ 9 $ & (2) & $< 40 $ & & $ 36 $ & (3) & $ 14 $ & (4) & $ 20 $ & (5) & $< 13 $ &\
NGC7213 & $< 10 $ & & $< 9 $ & & $< 10 $ & & $< 14 $ & & $ 12 $ & (3) & $< 5 $ & & $ 21 $ & (3) & $< 17 $ &\
NGC7314 & $ 30 $ & (5) & $ 20 $ & (3) & $ 32 $ & (3) & $ 23 $ & (6) & $ 17 $ & (3) & $ 8 $ & (2) & $ 8 $ & (2) & $< 20 $ &\
M-3-58-7 & $ 5 $ & (2) & $< 6 $ & & $< 9 $ & & $ 11 $ & (3) & $< 4 $ & & $< 5 $ & & $< 6 $ & & $< 14 $ &\
NGC7469 & $< 9 $ & & $< 15 $ & & $< 9 $ & & $< 8 $ & & $ 15 $ & (2) & $ 10 $ & (2) & $ 8 $ & (3) & $< 14 $ &\
NGC7496 & $< 37 $ & & $< 51 $ & & $ 41 $ & (11) & $ 34 $ & (9) & $< 52 $ & & $ 65 $ & (10) & $ 12 $ & (3) & $< 10 $ &\
NGC7582 & $< 7 $ & & $< 20 $ & & $< 6 $ & & $< 23 $ & & $ 23 $ & (3) & $ 20 $ & (6) & $ 15 $ & (3) & $< 11 $ &\
NGC7590 & $< 83 $ & & $< 78 $ & & $< 73 $ & & $< 99 $ & & $< 46 $ & & $< 36 $ & & $ 45 $ & (9) & $ 30 $ & (7)\
NGC7603 & $ 4 $ & (1) & $< 4 $ & & $< 4 $ & & $< 6 $ & & $ 7 $ & (2) & $ 6 $ & (2) & $ 13 $ & (4) & $ 18 $ & (5)\
NGC7674 & $< 7 $ & & $< 6 $ & & $ 6 $ & (2) & $< 10 $ & & $ 10 $ & (2) & $< 5 $ & & $< 6 $ & & $< 15 $ &\
CGCG381-051 & $< 55 $ & & $< 67 $ & & $ 48 $ & (15) & $ 38 $ & (12) & $ 9 $ & (2) & $< 13 $ & & $ 13 $ & (4) & $< 14 $ &\
[lrrrrrrrrrrrrrrrrrr]{}

MRK335 & $ 32 $ & (7) & $< 80 $ & & $ 24 $ & (5) & $ 44 $ & (9) & $< 19 $ & & $< 32 $ & & $< 87 $ & & $ 27 $ & (8) & $< 70 $ &\
MRK938 & $< 126 $ & & $< 796 $ & & $ 563 $ & (25) & $< 82 $ & & $< 56 $ & & $ 52 $ & (15) & $< 895 $ & & $< 55 $ & & $ 209 $ & (30)\
E12-G21 & $ 202 $ & (13) & $ 169 $ & (52) & $ 140 $ & (19) & $ 61 $ & (8) & $ 52 $ & (9) & $ 90 $ & (9) & $< 174 $ & & $ 47 $ & (12) & $< 69 $ &\
MRK348 & $ 179 $ & (14) & $ 131 $ & (35) & $ 131 $ & (10) & $ 178 $ & (16) & $ 54 $ & (11) & $ 42 $ & (12) & $< 110 $ & & $ 62 $ & (9) & $ 69 $ & (14)\
NGC424 & $ 145 $ & (29) & $< 244 $ & & $ 73 $ & (22) & $ 136 $ & (30) & $< 58 $ & & $< 53 $ & & $< 244 $ & & $< 54 $ & & $< 68 $ &\
NGC526A & $ 203 $ & (10) & $< 155 $ & & $ 51 $ & (11) & $ 135 $ & (13) & $< 25 $ & & $ 56 $ & (10) & $ 66 $ & (19) & $ 61 $ & (12) & $ 57 $ & (10)\
NGC513 & $ 110 $ & (8) & $ 308 $ & (36) & $ 237 $ & (15) & $ 93 $ & (9) & $ 24 $ & (8) & $ 100 $ & (8) & $ 217 $ & (38) & $< 48 $ & & $< 94 $ &\
F01475-0740 & $< 65 $ & & $< 182 $ & & $ 157 $ & (9) & $ 96 $ & (10) & $ 21 $ & (6) & $ 47 $ & (11) & $< 185 $ & & $< 30 $ & & $ 62 $ & (19)\
NGC931 & $ 365 $ & (23) & $< 124 $ & & $ 67 $ & (12) & $ 141 $ & (17) & $ 78 $ & (15) & $< 37 $ & & $ 167 $ & (48) & $ 156 $ & (20) & $< 56 $ &\
NGC1056 & $ 52 $ & (10) & $ 715 $ & (67) & $ 503 $ & (20) & $ 127 $ & (22) & $< 30 $ & & $ 218 $ & (14) & $ 490 $ & (32) & $< 83 $ & & $ 94 $ & (30)\
NGC1097 & $< 97 $ & & $ 2900 $ & (240) & $ 3200 $ & (170) & $ 90 $ & (25) & $< 183 $ & & $ 593 $ & (31) & $ 1030 $ & (250) & $< 170 $ & & $ 1281 $ & (57)\
NGC1125 & $ 252 $ & (21) & $ 223 $ & (55) & $ 311 $ & (9) & $ 243 $ & (11) & $ 77 $ & (10) & $ 136 $ & (10) & $ 220 $ & (53) & $ 107 $ & (8) & $ 95 $ & (17)\
NGC1143-4 & $ 74 $ & (11) & $ 614 $ & (78) & $ 499 $ & (14) & $ 110 $ & (9) & $ 26 $ & (7) & $ 267 $ & (8) & $ 454 $ & (75) & $ 30 $ & (9) & $ 216 $ & (16)\
M-2-8-39 & $ 138 $ & (13) & $ 64 $ & (19) & $ 66 $ & (6) & $ 134 $ & (17) & $ 62 $ & (11) & $< 36 $ & & $< 48 $ & & $ 44 $ & (8) & $< 90 $ &\
NGC1194 & $ 115 $ & (13) & $< 63 $ & & $ 31 $ & (9) & $ 94 $ & (15) & $ 41 $ & (11) & $< 41 $ & & $< 90 $ & & $ 62 $ & (8) & $ 101 $ & (17)\
NGC1241 & $ 69 $ & (7) & $ 208 $ & (24) & $ 139 $ & (7) & $ 107 $ & (10) & $< 19 $ & & $ 43 $ & (6) & $ 138 $ & (30) & $ 65 $ & (17) & $< 80 $ &\
NGC1320 & $ 347 $ & (23) & $< 127 $ & & $ 87 $ & (11) & $ 71 $ & (16) & $ 69 $ & (16) & $< 46 $ & & $< 134 $ & & $ 96 $ & (12) & $< 49 $ &\
NGC1365 & $ 1000 $ & (160) & $ 6520 $ & (790) & $ 5000 $ & (220) & $ 440 $ & (110) & $ 255 $ & (74) & $ 1669 $ & (85) & $ 2800 $ & (500) & $ 446 $ & (48) & $ 2130 $ & (140)\
NGC1386 & $ 891 $ & (52) & $ 450 $ & (140) & $ 192 $ & (17) & $ 419 $ & (23) & $ 277 $ & (26) & $ 196 $ & (25) & $< 199 $ & & $ 285 $ & (14) & $ 156 $ & (22)\
F03450+0055 & $< 25 $ & & $< 78 $ & & $< 21 $ & & $ 37 $ & (11) & $< 27 $ & & $< 26 $ & & $< 79 $ & & $< 27 $ & & $< 66 $ &\
NGC1566 & $ 78 $ & (8) & $ 194 $ & (18) & $ 168 $ & (9) & $ 86 $ & (7) & $< 15 $ & & $ 64 $ & (5) & $ 100 $ & (16) & $ 30 $ & (6) & $ 61 $ & (14)\
F04385-0828 & $< 60 $ & & $< 301 $ & & $ 120 $ & (13) & $< 50 $ & & $< 38 $ & & $< 38 $ & & $< 316 $ & & $< 22 $ & & $< 81 $ &\
NGC1667 & $ 87 $ & (8) & $ 79 $ & (21) & $ 364 $ & (14) & $ 66 $ & (15) & $< 24 $ & & $ 159 $ & (12) & $< 68 $ & & $< 54 $ & & $ 191 $ & (31)\
E33-G2 & $ 159 $ & (11) & $< 77 $ & & $ 30 $ & (8) & $ 57 $ & (11) & $ 25 $ & (7) & $< 34 $ & & $< 83 $ & & $ 113 $ & (15) & $< 54 $ &\
M-5-13-17 & $ 130 $ & (12) & $ 131 $ & (24) & $ 119 $ & (7) & $ 71 $ & (9) & $< 25 $ & & $< 26 $ & & $ 85 $ & (25) & $ 34 $ & (10) & $< 59 $ &\
MRK6 & $ 302 $ & (17) & $ 334 $ & (41) & $ 240 $ & (10) & $ 463 $ & (23) & $ 75 $ & (10) & $ 140 $ & (12) & $ 165 $ & (44) & $ 172 $ & (11) & $ 113 $ & (16)\
MRK79 & $ 447 $ & (18) & $ 161 $ & (43) & $ 96 $ & (14) & $ 204 $ & (14) & $ 82 $ & (13) & $ 53 $ & (13) & $ 166 $ & (47) & $ 114 $ & (12) & $< 77 $ &\
NGC2639 & $ 23 $ & (6) & $ 111 $ & (20) & $ 135 $ & (11) & $ 42 $ & (11) & $< 20 $ & & $ 48 $ & (9) & $ 54 $ & (13) & $< 57 $ & & $< 96 $ &\
MRK704 & $ 102 $ & (11) & $ 53 $ & (14) & $< 32 $ & & $ 75 $ & (12) & $< 43 $ & & $< 34 $ & & $< 46 $ & & $< 29 $ & & $< 47 $ &\
NGC2992 & $ 1258 $ & (40) & $ 1090 $ & (100) & $ 657 $ & (32) & $ 756 $ & (37) & $ 249 $ & (25) & $ 419 $ & (20) & $ 710 $ & (110) & $ 344 $ & (14) & $ 276 $ & (25)\
MRK1239 & $ 83 $ & (17) & $< 157 $ & & $ 66 $ & (18) & $ 118 $ & (19) & $< 37 $ & & $< 50 $ & & $< 171 $ & & $< 42 $ & & $< 102 $ &\
NGC3079 & $ 311 $ & (64) & $ 2620 $ & (260) & $ 1860 $ & (120) & $ 232 $ & (57) & $< 115 $ & & $ 155 $ & (25) & $ 610 $ & (150) & $ 112 $ & (28) & $ 589 $ & (48)\
NGC3227 & $ 573 $ & (50) & $ 740 $ & (150) & $ 752 $ & (24) & $ 619 $ & (30) & $ 135 $ & (22) & $ 253 $ & (24) & $< 391 $ & & $ 268 $ & (31) & $ 294 $ & (34)\
NGC3511 & $< 23 $ & & $ 402 $ & (29) & $ 248 $ & (13) & $< 34 $ & & $< 17 $ & & $ 95 $ & (8) & $ 151 $ & (28) & $< 50 $ & & $< 78 $ &\
NGC3516 & $ 591 $ & (23) & $ 147 $ & (29) & $ 65 $ & (10) & $ 189 $ & (18) & $ 72 $ & (12) & $ 81 $ & (13) & $ 160 $ & (29) & $ 159 $ & (12) & $< 72 $ &\
M+0-29-23 & $< 39 $ & & $ 446 $ & (56) & $ 407 $ & (18) & $ 71 $ & (13) & $< 28 $ & & $ 109 $ & (12) & $< 186 $ & & $< 43 $ & & $ 150 $ & (22)\
NGC3660 & $ 41 $ & (9) & $ 74 $ & (17) & $ 51 $ & (7) & $ 38 $ & (9) & $< 16 $ & & $< 27 $ & & $< 62 $ & & $ 45 $ & (11) & $< 87 $ &\
NGC3982 & $ 39 $ & (8) & $ 531 $ & (34) & $ 301 $ & (16) & $ 67 $ & (15) & $< 30 $ & & $ 133 $ & (10) & $ 238 $ & (33) & $ 74 $ & (16) & $ 130 $ & (34)\
NGC4051 & $ 297 $ & (32) & $ 295 $ & (94) & $ 184 $ & (16) & $ 107 $ & (18) & $< 28 $ & & $ 85 $ & (20) & $< 152 $ & & $ 64 $ & (16) & $ 88 $ & (20)\
UGC7064 & $ 117 $ & (13) & $< 214 $ & & $ 117 $ & (8) & $ 57 $ & (9) & $ 47 $ & (10) & $ 75 $ & (11) & $< 220 $ & & $< 36 $ & & $< 90 $ &\
NGC4151 & $ 1860 $ & (110) & $ 1230 $ & (300) & $ 1459 $ & (62) & $ 1528 $ & (82) & $ 353 $ & (58) & $ 761 $ & (68) & $< 623 $ & & $ 892 $ & (58) & $ 441 $ & (42)\
MRK766 & $ 387 $ & (29) & $< 181 $ & & $ 210 $ & (15) & $ 231 $ & (37) & $ 121 $ & (24) & $ 98 $ & (24) & $ 265 $ & (66) & $ 92 $ & (14) & $ 48 $ & (8)\
NGC4388 & $ 3401 $ & (74) & $ 1200 $ & (210) & $ 845 $ & (35) & $ 1201 $ & (83) & $ 482 $ & (32) & $ 526 $ & (26) & $ 730 $ & (150) & $ 695 $ & (35) & $ 270 $ & (23)\
NGC4501 & $ 30 $ & (5) & $ 185 $ & (16) & $ 104 $ & (13) & $ 87 $ & (8) & $< 20 $ & & $ 28 $ & (7) & $ 38 $ & (12) & $< 55 $ & & $< 68 $ &\
NGC4579 & $ 68 $ & (5) & $ 371 $ & (24) & $ 196 $ & (12) & $ 99 $ & (4) & $< 8 $ & & $ 44 $ & (4) & $ 58 $ & (11) & $ 26 $ & (5) & $ 173 $ & (15)\
NGC4593 & $ 80 $ & (17) & $< 117 $ & & $ 75 $ & (11) & $< 42 $ & & $< 37 $ & & $< 32 $ & & $< 123 $ & & $< 30 $ & & $< 68 $ &\
NGC4594 & $ 35 $ & (2) & $ 271 $ & (21) & $ 129 $ & (7) & $ 144 $ & (6) & $< 6 $ & & $ 56 $ & (4) & $ 94 $ & (23) & $< 19 $ & & $ 91 $ & (22)\
TOL1238-364 & $< 131 $ & & $< 438 $ & & $ 553 $ & (21) & $ 268 $ & (31) & $< 41 $ & & $ 170 $ & (23) & $< 413 $ & & $ 93 $ & (12) & $ 167 $ & (25)\
NGC4602 & $< 16 $ & & $ 184 $ & (21) & $ 127 $ & (9) & $< 13 $ & & $< 13 $ & & $ 72 $ & (8) & $ 123 $ & (22) & $ 28 $ & (8) & $ 82 $ & (26)\
M-2-33-34 & $ 692 $ & (32) & $ 352 $ & (24) & $ 138 $ & (7) & $ 320 $ & (12) & $ 131 $ & (9) & $ 137 $ & (9) & $ 233 $ & (27) & $ 218 $ & (8) & $ 75 $ & (19)\
NGC4941 & $ 203 $ & (12) & $ 180 $ & (25) & $ 160 $ & (9) & $ 202 $ & (8) & $ 43 $ & (8) & $ 93 $ & (9) & $< 72 $ & & $ 135 $ & (12) & $ 71 $ & (8)\
NGC4968 & $ 293 $ & (24) & $ 188 $ & (42) & $ 247 $ & (12) & $ 252 $ & (17) & $ 95 $ & (15) & $ 64 $ & (15) & $ 161 $ & (38) & $ 107 $ & (13) & $ 61 $ & (19)\
NGC5005 & $ 127 $ & (22) & $ 920 $ & (100) & $ 615 $ & (36) & $ 183 $ & (14) & $< 39 $ & & $ 59 $ & (11) & $< 163 $ & & $< 67 $ & & $ 169 $ & (29)\
NGC5033 & $ 179 $ & (8) & $ 949 $ & (48) & $ 534 $ & (23) & $ 140 $ & (13) & $ 33 $ & (7) & $ 148 $ & (7) & $ 255 $ & (54) & $< 45 $ & & $ 192 $ & (24)\
NGC5135 & $ 680 $ & (50) & $ 1450 $ & (170) & $ 1421 $ & (64) & $ 620 $ & (33) & $ 130 $ & (26) & $ 391 $ & (21) & $ 960 $ & (190) & $ 260 $ & (21) & $ 596 $ & (53)\
M-6-30-15 & $ 123 $ & (15) & $< 79 $ & & $ 60 $ & (10) & $< 34 $ & & $< 26 $ & & $< 37 $ & & $< 87 $ & & $ 66 $ & (14) & $ 45 $ & (10)\
NGC5256 & $ 636 $ & (32) & $ 720 $ & (86) & $ 616 $ & (29) & $ 398 $ & (17) & $ 63 $ & (18) & $ 265 $ & (14) & $ 452 $ & (79) & $ 156 $ & (12) & $ 218 $ & (27)\
IC4329A & $ 1233 $ & (71) & $ 302 $ & (93) & $ 219 $ & (33) & $ 512 $ & (51) & $ 365 $ & (46) & $< 78 $ & & $ 380 $ & (97) & $ 308 $ & (43) & $ 59 $ & (15)\
NGC5347 & $< 50 $ & & $< 113 $ & & $ 71 $ & (13) & $< 35 $ & & $< 23 $ & & $< 34 $ & & $< 128 $ & & $ 31 $ & (9) & $ 100 $ & (17)\
NGC5506 & $ 2324 $ & (99) & $ 1170 $ & (170) & $ 826 $ & (53) & $ 1192 $ & (81) & $ 343 $ & (41) & $ 501 $ & (35) & $ 700 $ & (180) & $ 772 $ & (40) & $ 331 $ & (37)\
NGC5548 & $ 80 $ & (11) & $< 129 $ & & $ 99 $ & (8) & $ 97 $ & (12) & $< 30 $ & & $ 48 $ & (11) & $< 137 $ & & $< 42 $ & & $< 53 $ &\
MRK817 & $< 45 $ & & $< 114 $ & & $ 76 $ & (10) & $ 70 $ & (13) & $< 40 $ & & $< 39 $ & & $< 128 $ & & $< 42 $ & & $< 60 $ &\
NGC5929 & $ 53 $ & (6) & $ 205 $ & (23) & $ 99 $ & (9) & $ 93 $ & (9) & $< 17 $ & & $ 64 $ & (7) & $ 82 $ & (20) & $< 37 $ & & $ 68 $ & (17)\
NGC5953 & $ 233 $ & (18) & $ 1690 $ & (120) & $ 1303 $ & (60) & $ 227 $ & (35) & $ 52 $ & (14) & $ 449 $ & (17) & $ 700 $ & (120) & $< 104 $ & & $ 347 $ & (33)\
M-2-40-4 & $ 158 $ & (20) & $ 222 $ & (57) & $ 204 $ & (13) & $ 150 $ & (18) & $ 60 $ & (15) & $ 52 $ & (13) & $< 184 $ & & $< 45 $ & & $< 63 $ &\
F15480-0344 & $ 426 $ & (17) & $< 115 $ & & $ 80 $ & (8) & $ 161 $ & (17) & $ 133 $ & (15) & $< 37 $ & & $< 120 $ & & $ 96 $ & (12) & $< 48 $ &\
NGC6810 & $< 148 $ & & $ 1620 $ & (200) & $ 1479 $ & (68) & $< 119 $ & & $< 59 $ & & $ 567 $ & (32) & $ 810 $ & (190) & $< 79 $ & & $ 522 $ & (48)\
NGC6860 & $ 110 $ & (9) & $ 118 $ & (33) & $ 77 $ & (9) & $ 81 $ & (9) & $ 35 $ & (10) & $ 33 $ & (8) & $< 100 $ & & $< 28 $ & & $< 72 $ &\
NGC6890 & $ 88 $ & (14) & $ 207 $ & (30) & $ 185 $ & (11) & $ 61 $ & (13) & $ 33 $ & (8) & $ 92 $ & (10) & $ 145 $ & (23) & $< 56 $ & & $ 63 $ & (17)\
IC5063 & $ 904 $ & (90) & $< 713 $ & & $ 213 $ & (34) & $ 620 $ & (46) & $< 106 $ & & $ 155 $ & (43) & $< 721 $ & & $ 529 $ & (34) & $ 78 $ & (16)\
UGC11680 & $ 23 $ & (6) & $< 88 $ & & $ 55 $ & (7) & $ 55 $ & (9) & $< 20 $ & & $< 18 $ & & $< 49 $ & & $< 48 $ & & $< 95 $ &\
NGC7130 & $ 124 $ & (37) & $ 890 $ & (140) & $ 1010 $ & (28) & $ 308 $ & (27) & $ 111 $ & (18) & $ 232 $ & (19) & $ 680 $ & (160) & $ 107 $ & (15) & $ 415 $ & (50)\
NGC7172 & $ 549 $ & (17) & $ 612 $ & (46) & $ 211 $ & (14) & $ 184 $ & (15) & $ 72 $ & (14) & $ 89 $ & (10) & $ 308 $ & (38) & $ 89 $ & (10) & $ 141 $ & (34)\
NGC7213 & $< 33 $ & & $ 207 $ & (31) & $ 227 $ & (13) & $ 129 $ & (10) & $< 23 $ & & $ 36 $ & (11) & $< 99 $ & & $< 51 $ & & $ 116 $ & (17)\
NGC7314 & $ 521 $ & (20) & $ 114 $ & (26) & $ 74 $ & (8) & $ 246 $ & (16) & $ 108 $ & (8) & $ 89 $ & (8) & $ 94 $ & (20) & $ 188 $ & (10) & $ 68 $ & (7)\
M-3-58-7 & $< 58 $ & & $< 169 $ & & $ 94 $ & (9) & $ 73 $ & (13) & $ 57 $ & (13) & $< 35 $ & & $< 188 $ & & $ 39 $ & (10) & $ 74 $ & (23)\
NGC7469 & $< 187 $ & & $ 1660 $ & (340) & $ 2005 $ & (46) & $ 242 $ & (75) & $ 200 $ & (44) & $ 638 $ & (51) & $ 1590 $ & (360) & $ 172 $ & (26) & $ 824 $ & (61)\
NGC7496 & $< 73 $ & & $< 486 $ & & $ 424 $ & (32) & $ 34 $ & (11) & $< 26 $ & & $ 232 $ & (15) & $ 377 $ & (96) & $ 48 $ & (15) & $ 215 $ & (22)\
NGC7582 & $ 2210 $ & (150) & $ 2790 $ & (650) & $ 2840 $ & (130) & $ 1153 $ & (70) & $ 274 $ & (67) & $ 904 $ & (55) & $ 1700 $ & (540) & $ 458 $ & (36) & $ 1057 $ & (62)\
NGC7590 & $ 39 $ & (6) & $ 492 $ & (20) & $ 174 $ & (10) & $ 59 $ & (12) & $< 25 $ & & $ 107 $ & (9) & $ 242 $ & (23) & $< 59 $ & & $< 112 $ &\
NGC7603 & $ 21 $ & (6) & $ 130 $ & (32) & $ 127 $ & (9) & $ 70 $ & (11) & $< 23 $ & & $< 23 $ & & $ 64 $ & (17) & $< 30 $ & & $< 61 $ &\
NGC7674 & $ 492 $ & (40) & $< 298 $ & & $ 287 $ & (15) & $ 380 $ & (24) & $ 181 $ & (22) & $ 175 $ & (21) & $< 313 $ & & $ 156 $ & (15) & $ 179 $ & (26)\
CGCG381-051 & $< 41 $ & & $ 127 $ & (36) & $ 184 $ & (11) & $< 28 $ & & $ 18 $ & (5) & $ 102 $ & (11) & $< 115 $ & & $< 23 $ & & $ 86 $ & (27)\
[lrrrrrrrrrrrrrrrrrr]{}

MRK335 & $ 22 $ & (5) & $< 101 $ & & $ 7 $ & (2) & $ 15 $ & (3) & $< 6 $ & & $< 12 $ & & $< 102 $ & & $ 5 $ & (2) & $< 9 $ &\
MRK938 & $< 9 $ & & $< 46 $ & & $ 125 $ & (6) & $< 13 $ & & $< 10 $ & & $ 8 $ & (2) & $< 53 $ & & $< 30 $ & & $ 83 $ & (12)\
E12-G21 & $ 170 $ & (11) & $ 158 $ & (49) & $ 68 $ & (9) & $ 36 $ & (5) & $ 28 $ & (5) & $ 64 $ & (6) & $< 163 $ & & $ 18 $ & (5) & $< 18 $ &\
MRK348 & $ 65 $ & (5) & $ 83 $ & (22) & $ 24 $ & (2) & $ 33 $ & (3) & $ 10 $ & (2) & $ 8 $ & (3) & $< 64 $ & & $ 12 $ & (2) & $ 9 $ & (2)\
NGC424 & $ 23 $ & (5) & $< 70 $ & & $ 4 $ & (1) & $ 8 $ & (2) & $< 3 $ & & $< 4 $ & & $< 65 $ & & $< 2 $ & & $< 2 $ &\
NGC526A & $ 148 $ & (7) & $< 290 $ & & $ 11 $ & (3) & $ 35 $ & (3) & $< 6 $ & & $ 18 $ & (3) & $ 108 $ & (31) & $ 10 $ & (2) & $ 9 $ & (2)\
NGC513 & $ 94 $ & (7) & $ 256 $ & (30) & $ 149 $ & (9) & $ 66 $ & (7) & $ 16 $ & (5) & $ 75 $ & (6) & $ 184 $ & (32) & $< 24 $ & & $< 33 $ &\
F01475-0740 & $< 23 $ & & $< 100 $ & & $ 41 $ & (2) & $ 24 $ & (3) & $ 5 $ & (2) & $ 11 $ & (3) & $< 96 $ & & $< 6 $ & & $ 17 $ & (5)\
NGC931 & $ 89 $ & (6) & $< 48 $ & & $ 8 $ & (2) & $ 20 $ & (2) & $ 10 $ & (2) & $< 6 $ & & $ 61 $ & (18) & $ 16 $ & (2) & $< 4 $ &\
NGC1056 & $ 25 $ & (5) & $ 272 $ & (26) & $ 278 $ & (11) & $ 77 $ & (13) & $< 17 $ & & $ 127 $ & (8) & $ 192 $ & (12) & $< 38 $ & & $ 36 $ & (12)\
NGC1097 & $< 8 $ & & $ 196 $ & (16) & $ 412 $ & (22) & $ 11 $ & (3) & $< 24 $ & & $ 65 $ & (3) & $ 72 $ & (17) & $< 16 $ & & $ 108 $ & (5)\
NGC1125 & $ 62 $ & (5) & $ 57 $ & (14) & $ 115 $ & (4) & $ 72 $ & (3) & $ 24 $ & (3) & $ 39 $ & (3) & $ 56 $ & (14) & $ 79 $ & (6) & $ 53 $ & (10)\
NGC1143-4 & $ 33 $ & (5) & $ 228 $ & (29) & $ 270 $ & (7) & $ 58 $ & (5) & $ 13 $ & (4) & $ 147 $ & (5) & $ 176 $ & (29) & $ 22 $ & (7) & $ 84 $ & (6)\
M-2-8-39 & $ 96 $ & (9) & $ 91 $ & (27) & $ 18 $ & (2) & $ 39 $ & (5) & $ 17 $ & (3) & $< 12 $ & & $< 61 $ & & $ 12 $ & (2) & $< 28 $ &\
NGC1194 & $ 45 $ & (5) & $< 36 $ & & $ 5 $ & (2) & $ 20 $ & (3) & $ 7 $ & (2) & $< 11 $ & & $< 50 $ & & $ 16 $ & (2) & $ 10 $ & (2)\
NGC1241 & $ 60 $ & (6) & $ 180 $ & (21) & $ 112 $ & (6) & $ 83 $ & (8) & $< 15 $ & & $ 36 $ & (5) & $ 120 $ & (26) & $ 66 $ & (17) & $< 64 $ &\
NGC1320 & $ 75 $ & (5) & $< 39 $ & & $ 11 $ & (2) & $ 10 $ & (2) & $ 9 $ & (2) & $< 7 $ & & $< 39 $ & & $ 12 $ & (1) & $< 5 $ &\
NGC1365 & $ 26 $ & (4) & $ 144 $ & (18) & $ 189 $ & (8) & $ 16 $ & (4) & $ 9 $ & (3) & $ 58 $ & (3) & $ 64 $ & (11) & $ 18 $ & (2) & $ 70 $ & (5)\
NGC1386 & $ 142 $ & (8) & $ 83 $ & (25) & $ 19 $ & (2) & $ 47 $ & (3) & $ 28 $ & (3) & $ 27 $ & (4) & $< 35 $ & & $ 39 $ & (2) & $ 12 $ & (2)\
F03450+0055 & $< 11 $ & & $< 66 $ & & $< 4 $ & & $ 8 $ & (3) & $< 6 $ & & $< 6 $ & & $< 61 $ & & $< 4 $ & & $< 10 $ &\
NGC1566 & $ 45 $ & (5) & $ 115 $ & (11) & $ 63 $ & (4) & $ 36 $ & (3) & $< 5 $ & & $ 30 $ & (2) & $ 59 $ & (9) & $ 9 $ & (2) & $ 12 $ & (3)\
F04385-0828 & $< 8 $ & & $< 54 $ & & $ 12 $ & (1) & $< 5 $ & & $< 4 $ & & $< 5 $ & & $< 55 $ & & $< 3 $ & & $< 5 $ &\
NGC1667 & $ 63 $ & (6) & $ 43 $ & (11) & $ 270 $ & (11) & $ 53 $ & (12) & $< 19 $ & & $ 129 $ & (9) & $< 42 $ & & $< 37 $ & & $ 131 $ & (21)\
E33-G2 & $ 92 $ & (7) & $< 76 $ & & $ 7 $ & (2) & $ 14 $ & (3) & $ 6 $ & (2) & $< 10 $ & & $< 77 $ & & $ 21 $ & (3) & $< 8 $ &\
M-5-13-17 & $ 55 $ & (5) & $ 77 $ & (14) & $ 41 $ & (3) & $ 23 $ & (3) & $< 8 $ & & $< 8 $ & & $ 48 $ & (14) & $ 11 $ & (3) & $< 18 $ &\
MRK6 & $ 110 $ & (6) & $ 192 $ & (24) & $ 53 $ & (2) & $ 100 $ & (5) & $ 16 $ & (2) & $ 29 $ & (3) & $ 90 $ & (24) & $ 32 $ & (2) & $ 16 $ & (2)\
MRK79 & $ 138 $ & (5) & $ 72 $ & (19) & $ 15 $ & (2) & $ 36 $ & (3) & $ 14 $ & (2) & $ 11 $ & (3) & $ 71 $ & (20) & $ 14 $ & (2) & $< 7 $ &\
NGC2639 & $ 54 $ & (13) & $ 239 $ & (43) & $ 189 $ & (15) & $ 76 $ & (19) & $< 32 $ & & $ 104 $ & (20) & $ 119 $ & (28) & $< 66 $ & & $< 87 $ &\
MRK704 & $ 49 $ & (5) & $ 52 $ & (14) & $< 4 $ & & $ 13 $ & (2) & $< 7 $ & & $< 8 $ & & $< 41 $ & & $< 3 $ & & $< 4 $ &\
NGC2992 & $ 230 $ & (7) & $ 200 $ & (19) & $ 77 $ & (4) & $ 88 $ & (4) & $ 28 $ & (3) & $ 57 $ & (3) & $ 132 $ & (20) & $ 44 $ & (2) & $ 34 $ & (3)\
MRK1239 & $ 20 $ & (4) & $< 64 $ & & $ 5 $ & (2) & $ 12 $ & (2) & $< 3 $ & & $< 6 $ & & $< 65 $ & & $< 2 $ & & $< 4 $ &\
NGC3079 & $ 39 $ & (8) & $ 166 $ & (16) & $ 352 $ & (22) & $ 44 $ & (11) & $< 19 $ & & $ 44 $ & (7) & $ 42 $ & (11) & $ 61 $ & (15) & $ 120 $ & (10)\
NGC3227 & $ 76 $ & (7) & $ 112 $ & (23) & $ 85 $ & (3) & $ 65 $ & (3) & $ 14 $ & (2) & $ 26 $ & (3) & $< 59 $ & & $ 30 $ & (4) & $ 27 $ & (3)\
NGC3511 & $< 31 $ & & $ 489 $ & (35) & $ 257 $ & (13) & $< 42 $ & & $< 19 $ & & $ 132 $ & (11) & $ 188 $ & (35) & $< 48 $ & & $< 76 $ &\
NGC3516 & $ 165 $ & (6) & $ 58 $ & (11) & $ 10 $ & (2) & $ 32 $ & (3) & $ 12 $ & (2) & $ 14 $ & (2) & $ 61 $ & (11) & $ 21 $ & (2) & $< 6 $ &\
M+0-29-23 & $< 10 $ & & $ 105 $ & (13) & $ 130 $ & (6) & $ 21 $ & (4) & $< 8 $ & & $ 35 $ & (4) & $< 44 $ & & $< 20 $ & & $ 48 $ & (7)\
NGC3660 & $ 60 $ & (13) & $ 105 $ & (24) & $ 83 $ & (11) & $ 62 $ & (14) & $< 27 $ & & $< 40 $ & & $< 88 $ & & $ 56 $ & (13) & $< 118 $ &\
NGC3982 & $ 23 $ & (5) & $ 303 $ & (19) & $ 184 $ & (10) & $ 40 $ & (9) & $< 18 $ & & $ 78 $ & (6) & $ 139 $ & (19) & $ 47 $ & (10) & $ 94 $ & (25)\
NGC4051 & $ 56 $ & (6) & $ 79 $ & (25) & $ 18 $ & (2) & $ 12 $ & (2) & $< 3 $ & & $ 10 $ & (3) & $< 39 $ & & $ 5 $ & (1) & $ 6 $ & (2)\
UGC7064 & $ 76 $ & (9) & $< 170 $ & & $ 61 $ & (4) & $ 28 $ & (5) & $ 24 $ & (5) & $ 38 $ & (6) & $< 170 $ & & $< 17 $ & & $< 41 $ &\
NGC4151 & $ 108 $ & (6) & $ 119 $ & (29) & $ 38 $ & (2) & $ 42 $ & (2) & $ 9 $ & (2) & $ 23 $ & (2) & $< 56 $ & & $ 20 $ & (1) & $ 11 $ & (1)\
MRK766 & $ 62 $ & (5) & $< 35 $ & & $ 27 $ & (2) & $ 27 $ & (4) & $ 15 $ & (3) & $ 12 $ & (3) & $ 51 $ & (13) & $ 12 $ & (2) & $ 6 $ & (1)\
NGC4388 & $ 305 $ & (7) & $ 138 $ & (24) & $ 82 $ & (4) & $ 104 $ & (7) & $ 42 $ & (3) & $ 45 $ & (2) & $ 81 $ & (17) & $ 106 $ & (5) & $ 21 $ & (2)\
NGC4501 & $ 68 $ & (12) & $ 387 $ & (34) & $ 109 $ & (14) & $ 127 $ & (12) & $< 24 $ & & $ 55 $ & (15) & $ 84 $ & (27) & $< 49 $ & & $< 26 $ &\
NGC4579 & $ 69 $ & (5) & $ 350 $ & (23) & $ 123 $ & (8) & $ 72 $ & (3) & $< 5 $ & & $ 34 $ & (3) & $ 56 $ & (11) & $ 9 $ & (2) & $ 44 $ & (4)\
NGC4593 & $ 22 $ & (5) & $< 44 $ & & $ 10 $ & (2) & $< 6 $ & & $< 5 $ & & $< 6 $ & & $< 45 $ & & $< 2 $ & & $< 5 $ &\
NGC4594 & $ 70 $ & (5) & $ 471 $ & (36) & $ 81 $ & (4) & $ 144 $ & (6) & $< 5 $ & & $ 72 $ & (5) & $ 170 $ & (41) & $< 5 $ & & $ 10 $ & (3)\
TOL1238-364 & $< 13 $ & & $< 60 $ & & $ 59 $ & (2) & $ 24 $ & (3) & $< 4 $ & & $ 14 $ & (2) & $< 55 $ & & $ 11 $ & (2) & $ 22 $ & (3)\
NGC4602 & $< 21 $ & & $ 249 $ & (29) & $ 173 $ & (12) & $< 17 $ & & $< 18 $ & & $ 98 $ & (11) & $ 165 $ & (29) & $ 35 $ & (10) & $ 105 $ & (34)\
M-2-33-34 & $ 508 $ & (23) & $ 297 $ & (20) & $ 86 $ & (5) & $ 190 $ & (7) & $ 79 $ & (5) & $ 82 $ & (5) & $ 196 $ & (23) & $ 135 $ & (5) & $ 35 $ & (9)\
NGC4941 & $ 133 $ & (8) & $ 153 $ & (21) & $ 95 $ & (6) & $ 109 $ & (4) & $ 24 $ & (5) & $ 49 $ & (5) & $< 59 $ & & $ 87 $ & (8) & $ 35 $ & (4)\
NGC4968 & $ 67 $ & (6) & $ 62 $ & (14) & $ 36 $ & (2) & $ 35 $ & (2) & $ 13 $ & (2) & $ 9 $ & (2) & $ 51 $ & (12) & $ 17 $ & (2) & $ 10 $ & (3)\
NGC5005 & $ 47 $ & (8) & $ 204 $ & (23) & $ 226 $ & (13) & $ 83 $ & (7) & $< 15 $ & & $ 31 $ & (6) & $< 38 $ & & $< 24 $ & & $ 23 $ & (4)\
NGC5033 & $ 99 $ & (5) & $ 438 $ & (22) & $ 204 $ & (9) & $ 66 $ & (6) & $ 14 $ & (3) & $ 82 $ & (4) & $ 122 $ & (26) & $< 15 $ & & $ 38 $ & (5)\
NGC5135 & $ 61 $ & (5) & $ 115 $ & (14) & $ 207 $ & (9) & $ 76 $ & (4) & $ 17 $ & (4) & $ 43 $ & (2) & $ 78 $ & (16) & $ 48 $ & (4) & $ 85 $ & (8)\
M-6-30-15 & $ 38 $ & (5) & $< 40 $ & & $ 9 $ & (1) & $< 5 $ & & $< 4 $ & & $< 6 $ & & $< 41 $ & & $ 7 $ & (2) & $ 4 $ & (1)\
NGC5256 & $ 147 $ & (8) & $ 133 $ & (16) & $ 283 $ & (14) & $ 144 $ & (6) & $ 24 $ & (7) & $ 87 $ & (5) & $ 86 $ & (15) & $ 119 $ & (9) & $ 133 $ & (16)\
IC4329A & $ 140 $ & (8) & $ 63 $ & (20) & $ 10 $ & (2) & $ 26 $ & (3) & $ 17 $ & (2) & $< 4 $ & & $ 74 $ & (19) & $ 12 $ & (2) & $ 1 $ & (1)\
NGC5347 & $< 13 $ & & $< 52 $ & & $ 11 $ & (2) & $< 5 $ & & $< 3 $ & & $< 6 $ & & $< 54 $ & & $ 5 $ & (1) & $ 18 $ & (3)\
NGC5506 & $ 151 $ & (6) & $ 96 $ & (14) & $ 34 $ & (2) & $ 56 $ & (4) & $ 15 $ & (2) & $ 28 $ & (2) & $ 56 $ & (14) & $ 45 $ & (2) & $ 7 $ & (1)\
NGC5548 & $ 29 $ & (4) & $< 77 $ & & $ 19 $ & (2) & $ 20 $ & (3) & $< 6 $ & & $ 10 $ & (3) & $< 78 $ & & $< 7 $ & & $< 9 $ &\
MRK817 & $< 9 $ & & $< 31 $ & & $ 13 $ & (2) & $ 11 $ & (2) & $< 7 $ & & $< 6 $ & & $< 33 $ & & $< 6 $ & & $< 9 $ &\
NGC5929 & $ 102 $ & (12) & $ 338 $ & (38) & $ 224 $ & (21) & $ 212 $ & (21) & $< 40 $ & & $ 134 $ & (14) & $ 139 $ & (35) & $< 72 $ & & $ 89 $ & (23)\
NGC5953 & $ 56 $ & (4) & $ 344 $ & (25) & $ 417 $ & (19) & $ 76 $ & (12) & $ 17 $ & (5) & $ 139 $ & (5) & $ 149 $ & (26) & $< 32 $ & & $ 109 $ & (10)\
M-2-40-4 & $ 36 $ & (5) & $ 61 $ & (16) & $ 25 $ & (2) & $ 23 $ & (3) & $ 8 $ & (2) & $ 9 $ & (2) & $< 50 $ & & $< 5 $ & & $< 4 $ &\
F15480-0344 & $ 146 $ & (6) & $< 56 $ & & $ 20 $ & (2) & $ 38 $ & (4) & $ 32 $ & (4) & $< 8 $ & & $< 56 $ & & $ 22 $ & (3) & $< 12 $ &\
NGC6810 & $< 11 $ & & $ 130 $ & (16) & $ 151 $ & (7) & $< 10 $ & & $< 5 $ & & $ 39 $ & (2) & $ 65 $ & (15) & $< 8 $ & & $ 64 $ & (6)\
NGC6860 & $ 69 $ & (6) & $ 122 $ & (34) & $ 19 $ & (2) & $ 22 $ & (3) & $ 9 $ & (3) & $ 11 $ & (3) & $< 98 $ & & $< 5 $ & & $< 9 $ &\
NGC6890 & $ 36 $ & (6) & $ 95 $ & (14) & $ 57 $ & (3) & $ 19 $ & (4) & $ 10 $ & (3) & $ 30 $ & (3) & $ 67 $ & (11) & $< 16 $ & & $ 13 $ & (4)\
IC5063 & $ 55 $ & (6) & $< 62 $ & & $ 9 $ & (2) & $ 27 $ & (2) & $< 4 $ & & $ 7 $ & (2) & $< 59 $ & & $ 26 $ & (2) & $ 3 $ & (1)\
UGC11680 & $ 22 $ & (6) & $< 125 $ & & $ 32 $ & (4) & $ 33 $ & (6) & $< 12 $ & & $< 12 $ & & $< 67 $ & & $< 23 $ & & $< 39 $ &\
NGC7130 & $ 12 $ & (4) & $ 79 $ & (13) & $ 166 $ & (5) & $ 42 $ & (4) & $ 16 $ & (3) & $ 28 $ & (2) & $ 62 $ & (14) & $ 22 $ & (3) & $ 107 $ & (13)\
NGC7172 & $ 198 $ & (6) & $ 191 $ & (14) & $ 45 $ & (3) & $ 50 $ & (4) & $ 16 $ & (3) & $ 38 $ & (4) & $ 99 $ & (12) & $ 66 $ & (7) & $ 18 $ & (4)\
NGC7213 & $< 16 $ & & $ 173 $ & (26) & $ 49 $ & (3) & $ 29 $ & (2) & $< 5 $ & & $ 8 $ & (3) & $< 77 $ & & $< 6 $ & & $ 17 $ & (3)\
NGC7314 & $ 357 $ & (14) & $ 90 $ & (20) & $ 33 $ & (4) & $ 110 $ & (7) & $ 47 $ & (4) & $ 48 $ & (5) & $ 75 $ & (15) & $ 108 $ & (6) & $ 35 $ & (3)\
M-3-58-7 & $< 13 $ & & $< 51 $ & & $ 16 $ & (2) & $ 12 $ & (2) & $ 10 $ & (2) & $< 6 $ & & $< 55 $ & & $ 5 $ & (1) & $ 9 $ & (3)\
NGC7469 & $< 7 $ & & $ 66 $ & (14) & $ 109 $ & (3) & $ 11 $ & (3) & $ 9 $ & (2) & $ 25 $ & (2) & $ 64 $ & (15) & $ 9 $ & (2) & $ 51 $ & (4)\
NGC7496 & $< 10 $ & & $< 62 $ & & $ 110 $ & (8) & $ 7 $ & (2) & $< 6 $ & & $ 40 $ & (3) & $ 49 $ & (12) & $ 15 $ & (5) & $ 88 $ & (9)\
NGC7582 & $ 72 $ & (5) & $ 77 $ & (18) & $ 146 $ & (7) & $ 52 $ & (3) & $ 12 $ & (3) & $ 41 $ & (3) & $ 48 $ & (15) & $ 39 $ & (3) & $ 43 $ & (3)\
NGC7590 & $ 45 $ & (7) & $ 504 $ & (21) & $ 164 $ & (9) & $ 68 $ & (14) & $< 26 $ & & $ 127 $ & (11) & $ 253 $ & (24) & $< 46 $ & & $< 83 $ &\
NGC7603 & $ 14 $ & (4) & $ 133 $ & (33) & $ 27 $ & (2) & $ 21 $ & (3) & $< 6 $ & & $< 8 $ & & $ 64 $ & (16) & $< 3 $ & & $< 5 $ &\
NGC7674 & $ 67 $ & (6) & $< 53 $ & & $ 28 $ & (2) & $ 37 $ & (2) & $ 17 $ & (2) & $ 18 $ & (2) & $< 54 $ & & $ 15 $ & (2) & $ 15 $ & (2)\
CGCG381-051 & $< 16 $ & & $ 71 $ & (20) & $ 96 $ & (6) & $< 12 $ & & $ 8 $ & (2) & $ 37 $ & (4) & $< 60 $ & & $< 9 $ & & $ 54 $ & (17)\
[llrrrr]{}
MRK335 & Screen & 0.10 & (0.07) & 1.00 &\
MRK938 & Screen & 0.91 & (0.03) & 1.27 & (0.02)\
E12-G21 & Mixed & 1.21 & (0.26) & 1.00 &\
MRK348 & Screen & 0.22 & (0.08) & 1.66 & (0.06)\
NGC424 & & 0.00 & & 1.14 & (0.07)\
NGC526A & & 0.00 & & 1.81 &\
NGC513 & Screen & 0.24 & (0.02) & 1.23 & (0.02)\
F01475-0740 & Screen & 0.09 & (0.10) & 1.00 &\
NGC931 & & 0.00 & & 1.24 & (0.03)\
NGC1056 & & 0.00 & & 1.38 & (0.02)\
NGC1097 & Screen & 0.04 & (0.08) & 1.81 &\
NGC1125 & Mixed & 1.79 & (0.53) & 1.40 & (0.05)\
NGC1143-4 & Screen & 0.70 & (0.02) & 1.00 &\
M-2-8-39 & Mixed & 4.01 & (0.52) & 1.03 & (0.04)\
NGC1194 & Screen & 2.35 & (0.06) & 1.14 & (0.08)\
NGC1241 & Mixed & 0.31 & (0.09) & 1.81 &\
NGC1320 & Screen & 0.69 & (0.13) & 1.24 & (0.05)\
NGC1365 & Mixed & 0.83 & (0.07) & 1.30 & (0.02)\
NGC1386 & Mixed & 9.41 & (1.44) & 1.23 & (0.06)\
F03450+0055 & Screen & 0.58 & (0.11) & &\
NGC1566 && 0.00 & & 1.42 & (0.07)\
F04385-0828 & Screen & 1.67 & (0.03) & 1.14 & (0.03)\
NGC1667 & Mixed & 0.09 & (0.08) & 1.81 &\
E33-G2 & Screen & 0.34 & (0.09) & 1.46 & (0.07)\
M-5-13-17 & Screen & 0.01 & (0.04) & 1.22 & (0.05)\
MRK6 && 0.00 & (0.01) & 1.00 &\
MRK79 & Screen & 0.41 & (0.11) & 1.14 & (0.04)\
NGC2639 & & 0.00 & & 1.81 &\
MRK704 & Screen & 1.07 & (0.08) & 1.07 & (0.04)\
NGC2992 & Mixed & 0.52 & (0.11) & 1.62 & (0.05)\
MRK1239 & Screen & 0.64 & (0.17) & 1.29 & (0.05)\
NGC3079 & Mixed & 6.19 & (0.19) & &\
NGC3227 & & 0.00 & (0.05) & 1.12 & (0.02)\
NGC3511 & Mixed & 0.20 & (0.09) & 1.81 &\
NGC3516 & & 0.00 & & 1.00 &\
M+0-29-23 & Mixed & 0.26 & (0.17) & 1.21 & (0.09)\
NGC3660 & & 0.00 & & 1.39 & (0.11)\
NGC3982 & Mixed & 0.19 & (0.07) & 1.81 &\
NGC4051 & Mixed & 1.33 & (0.28) & 1.33 & (0.06)\
UGC7064 & & 0.00 & & 1.63 & (0.12)\
NGC4151 && 0.00 & & 1.01 & (0.02)\
MRK766 & & 0.00 & & 1.26 & (0.04)\
NGC4388 & Mixed & 3.32 & (0.13) & 1.55 & (0.02)\
NGC4501 & Mixed & 0.62 & (0.10) & 1.81 &\
NGC4579 & & 0.00 & & 1.24 & (0.02)\
NGC4593 & Screen & 0.35 & (0.21) & 1.65 & (0.09)\
NGC4594 & & 0.00 & & 1.00 &\
NGC4602 && 0.00 & (0.09) & 1.81 &\
TOL1238-364 & Mixed & 1.34 & (0.14) & 1.14 & (0.02)\
M-2-33-34 & Mixed & 1.30 & (0.07) & 1.11 & (0.03)\
NGC4941 & & 0.00 & & 1.00 &\
NGC4968 & Screen & 0.17 & (0.11) & 1.06 & (0.05)\
NGC5005 & Mixed & 0.96 & (0.06) & 1.70 & (0.04)\
NGC5033 & Mixed & 0.54 & (0.04) & 1.00 &\
NGC5135 && 0.00 & & 1.38 & (0.08)\
M-6-30-15 & Mixed & 1.50 & (0.32) & 1.00 &\
NGC5256 & Mixed & 1.17 & (0.08) & 1.25 & (0.03)\
IC4329A & & 0.00 & & 1.00 &\
NGC5347 & Screen & 1.11 & (0.19) & 1.45 & (0.08)\
NGC5506 & Screen & 2.16 & (0.04) & 1.20 & (0.03)\
NGC5548 & Screen & 0.56 & (0.11) & 1.00 &\
MRK817 & Screen & 0.05 & (0.07) & 1.25 & (0.05)\
NGC5929 & Mixed & 0.08 & (0.14) & 1.81 &\
NGC5953 & Screen & 0.09 & (0.02) & &\
M-2-40-4 & & 0.00 & & 1.00 &\
F15480-0344 & Screen & 1.10 & (0.09) & 1.00 &\
NGC6810 & Screen & 0.02 & (0.05) & 1.31 & (0.02)\
NGC6860 & & 0.00 & & 1.00 &\
NGC6890 && 0.00 & & 1.34 & (0.03)\
IC5063 & Screen & 0.74 & (0.09) & 1.00 &\
UGC11680 & Mixed & 0.30 & (0.22) & 1.05 & (0.06)\
NGC7130 & Mixed & 0.08 & (0.04) & 1.34 & (0.02)\
NGC7172 & Screen & 2.65 & (0.04) & 1.23 & (0.04)\
NGC7213 && 0.00 & & 1.03 & (0.06)\
NGC7314 & Screen & 1.17 & (0.15) & 1.81 &\
M-3-58-7 & & 0.00 & & 1.00 &\
NGC7469 & & 0.00 & & 1.26 & (0.02)\
NGC7496 & Mixed & 0.24 & (0.04) & 1.35 & (0.03)\
NGC7582 & Screen & 1.39 & (0.06) & 1.22 & (0.03)\
NGC7590 && 0.00 & & 1.81 &\
NGC7603 & Screen & 0.75 & (0.23) & 1.00 &\
NGC7674 & Screen & 0.97 & (0.12) & 1.21 & (0.04)\
CGCG381-051 && 0.00 & & 1.44 & (0.05)\
[lrrrr]{}
MRK335 & $ 0.295$ & (0.028)& $ 0.089$ & (0.009)\
MRK938 & $-0.536$ & (0.046)& $-0.268$ & (0.009)\
E12-G21 & $ 0.036$ & (0.050)& $-0.001$ & (0.019)\
MRK348 & $-0.120$ & (0.022)& $ 0.079$ & (0.011)\
NGC424 & $ 0.111$ & (0.018)& $ 0.085$ & (0.011)\
NGC526A & $ 0.252$ & (0.031)& $ 0.098$ & (0.011)\
NGC513 & $ 0.030$ & (0.067)& $ 0.089$ & (0.095)\
F01475-0740 & $ 0.143$ & (0.017)& $ 0.163$ & (0.007)\
NGC931 & $ 0.013$ & (0.010)& $ 0.013$ & (0.007)\
NGC1056 & $ 0.061$ & (0.030)& $-0.050$ & (0.006)\
NGC1097 & $ 0.207$ & (0.054)& $ 0.015$ & (0.014)\
NGC1125 & $-0.458$ & (0.070)& $-0.114$ & (0.016)\
NGC1143-4 & $-0.465$ & (0.042)& $-0.144$ & (0.009)\
M-2-8-39 & $ 0.014$ & (0.021)& $ 0.080$ & (0.012)\
NGC1194 & $-0.797$ & (0.010)& $-0.145$ & (0.012)\
NGC1241 & $-0.104$ & (0.066)& $-0.053$ & (0.018)\
NGC1320 & $-0.020$ & (0.028)& $ 0.036$ & (0.010)\
NGC1365 & $-0.169$ & (0.019)& $-0.066$ & (0.012)\
NGC1386 & $-0.534$ & (0.023)& $-0.079$ & (0.013)\
F03450+0055 & $ 0.328$ & (0.024)& $ 0.068$ & (0.011)\
NGC1566 & $-0.045$ & (0.020)& $-0.015$ & (0.006)\
F04385-0828 & $-0.769$ & (0.017)& $-0.102$ & (0.011)\
NGC1667 & $ 0.049$ & (0.058)& $-0.091$ & (0.009)\
E33-G2 & $ 0.109$ & (0.028)& $ 0.013$ & (0.008)\
M-5-13-17 & $ 0.007$ & (0.033)& $-0.013$ & (0.013)\
MRK6 & $ 0.137$ & (0.015)& $ 0.197$ & (0.009)\
MRK79 & $ 0.175$ & (0.034)& $ 0.052$ & (0.014)\
NGC2639 & $ 0.065$ & (0.088)& $-0.002$ & (0.033)\
MRK704 & $ 0.082$ & (0.020)& $-0.016$ & (0.012)\
NGC2992 & $ 0.042$ & (0.023)& $-0.022$ & (0.010)\
MRK1239 & $ 0.238$ & (0.021)& $ 0.135$ & (0.014)\
NGC3079 & $-0.923$ & (0.058)& $-0.561$ & (0.012)\
NGC3227 & $-0.032$ & (0.027)& $ 0.052$ & (0.015)\
NGC3511 & $ 0.015$ & (0.076)& $-0.109$ & (0.012)\
NGC3516 & $-0.007$ & (0.012)& $ 0.052$ & (0.006)\
M+0-29-23 & $-0.366$ & (0.029)& $-0.187$ & (0.012)\
NGC3660 & $ 0.315$ & (0.075)& $ 0.194$ & (0.042)\
NGC3982 & $-0.065$ & (0.056)& $ 0.007$ & (0.014)\
NGC4051 & $ 0.030$ & (0.018)& $ 0.049$ & (0.011)\
UGC7064 & $ 0.044$ & (0.040)& $ 0.086$ & (0.018)\
NGC4151 & $ 0.117$ & (0.009)& $ 0.145$ & (0.016)\
MRK766 & $ 0.167$ & (0.024)& $ 0.038$ & (0.010)\
NGC4388 & $-0.603$ & (0.015)& $-0.089$ & (0.010)\
NGC4501 & $-0.202$ & (0.065)& $-0.122$ & (0.016)\
NGC4579 & $ 0.400$ & (0.036)& $ 0.105$ & (0.010)\
NGC4593 & $ 0.229$ & (0.074)& $ 0.080$ & (0.036)\
NGC4594 & $ 0.347$ & (0.016)& $ 0.196$ & (0.013)\
TOL1238-364 & $-0.127$ & (0.016)& $ 0.033$ & (0.011)\
NGC4602 & $ 0.095$ & (0.088)& $-0.049$ & (0.011)\
M-2-33-34 & $-0.039$ & (0.018)& $-0.020$ & (0.006)\
NGC4941 & $-0.026$ & (0.043)& $-0.016$ & (0.015)\
NGC4968 & $-0.071$ & (0.024)& $ 0.056$ & (0.008)\
NGC5005 & $-0.320$ & (0.037)& $-0.108$ & (0.011)\
NGC5033 & $-0.127$ & (0.026)& $-0.108$ & (0.008)\
NGC5135 & $-0.183$ & (0.016)& $-0.056$ & (0.009)\
M-6-30-15 & $ 0.143$ & (0.021)& $ 0.049$ & (0.006)\
NGC5256 & $-0.352$ & (0.070)& $-0.137$ & (0.011)\
IC4329A & $ 0.114$ & (0.017)& $ 0.042$ & (0.014)\
NGC5347 & $-0.069$ & (0.021)& $ 0.019$ & (0.007)\
NGC5506 & $-0.703$ & (0.034)& $-0.046$ & (0.070)\
NGC5548 & $ 0.136$ & (0.021)& $ 0.081$ & (0.007)\
MRK817 & $ 0.271$ & (0.033)& $ 0.071$ & (0.011)\
NGC5929 & $-0.051$ & (0.146)& $ 0.013$ & (0.032)\
NGC5953 & $ 0.015$ & (0.027)& $-0.079$ & (0.010)\
M-2-40-4 & $-0.221$ & (0.014)& $ 0.032$ & (0.007)\
F15480-0344 & $-0.009$ & (0.025)& $ 0.095$ & (0.007)\
NGC6810 & $ 0.034$ & (0.025)& $ 0.105$ & (0.023)\
NGC6860 & $ 0.171$ & (0.019)& $ 0.054$ & (0.007)\
NGC6890 & $-0.037$ & (0.015)& $ 0.018$ & (0.009)\
IC5063 & $-0.237$ & (0.014)& $-0.014$ & (0.010)\
UGC11680 & $ 0.067$ & (0.038)& $ 0.087$ & (0.018)\
NGC7130 & $-0.036$ & (0.028)& $-0.001$ & (0.010)\
NGC7172 & $-1.679$ & (0.008)& $-0.387$ & (0.012)\
NGC7213 & $ 0.587$ & (0.023)& $ 0.236$ & (0.007)\
NGC7314 & $-0.205$ & (0.056)& $-0.118$ & (0.018)\
M-3-58-7 & $ 0.189$ & (0.027)& $ 0.021$ & (0.010)\
NGC7469 & $ 0.068$ & (0.018)& $ 0.048$ & (0.010)\
NGC7496 & $-0.091$ & (0.037)& $-0.024$ & (0.008)\
NGC7582 & $-0.657$ & (0.026)& $-0.156$ & (0.010)\
NGC7590 & $ 0.213$ & (0.085)& $ 0.086$ & (0.068)\
NGC7603 & $ 0.205$ & (0.020)& $ 0.033$ & (0.015)\
NGC7674 & $-0.098$ & (0.019)& $ 0.025$ & (0.012)\
CGCG381-051 & $ 0.381$ & (0.018)& $ 0.126$ & (0.006)\
[lrrrr]{}
MRK335 & $-$1.75 & (0.03) & $-$1.49 & (0.29)\
MRK938 & 1.58 & (0.02) & $-$2.03 & (0.03)\
E12-G21 & $-$0.62 & (0.02) & $-$0.59 & (0.10)\
MRK348 & $-$1.87 & (0.01) & $-$2.62 & (0.23)\
NGC424 & $-$2.23 & (0.02) & $-$2.60 & (0.11)\
NGC526A & $-$2.91 & (0.03) & $-$1.50 & (0.44)\
NGC513 & $-$0.40 & (0.03) & $-$0.72 & (0.06)\
F01475-0740 & $-$1.36 & (0.03) & $-$2.23 & (0.19)\
NGC931 & $-$1.45 & (0.02) & $-$1.61 & (0.07)\
NGC1056 & 0.62 & (0.03) & $-$1.08 & (0.06)\
NGC1097 & 0.69 & (0.01) & $-$1.05 & (0.02)\
NGC1125 & 0.21 & (0.03) & $-$2.08 & (0.06)\
NGC1143-4 & 0.62 & (0.01) & $-$0.60 & (0.05)\
M-2-8-39 & $-$2.36 & (0.02) & $-$2.80 & (0.43)\
NGC1194 & $-$1.13 & (0.02) & $-$2.82 & (0.20)\
NGC1241 & 0.00 & (0.03) & $-$0.47 & (0.06)\
NGC1320 & $-$1.06 & (0.02) & $-$1.79 & (0.06)\
NGC1365 & 0.59 & (0.01) & $-$1.05 & (0.03)\
NGC1386 & $-$0.45 & (0.02) & $-$1.52 & (0.03)\
F03450+0055 & $-$1.89 & (0.03) & &\
NGC1566 & $-$0.50 & (0.02) & $-$0.29 & (0.07)\
F04385-0828 & $-$0.50 & (0.03) & $-$2.85 & (0.07)\
NGC1667 & 0.42 & (0.03) & $-$0.37 & (0.06)\
E33-G2 & $-$1.96 & (0.02) & $-$1.39 & (0.21)\
M-5-13-17 & $-$0.98 & (0.03) & $-$1.30 & (0.12)\
MRK6 & $-$1.72 & (0.02) & $-$1.95 & (0.14)\
MRK79 & $-$1.32 & (0.02) & $-$1.54 & (0.12)\
NGC2639 & 0.04 & (0.06) & 0.36 & (0.07)\
MRK704 & $-$2.29 & (0.02) & $-$3.87 & (0.38)\
NGC2992 & $-$0.73 & (0.03) & $-$1.19 & (0.04)\
MRK1239 & $-$1.82 & (0.02) & $-$2.45 & (0.11)\
NGC3079 & 2.45 & (0.04) & &\
NGC3227 & $-$0.83 & (0.02) & $-$1.11 & (0.04)\
NGC3511 & 0.24 & (0.05) & $-$0.12 & (0.06)\
NGC3516 & $-$1.35 & (0.02) & $-$1.97 & (0.11)\
M+0-29-23 & 0.41 & (0.01) & $-$0.98 & (0.04)\
NGC3660 & 0.11 & (0.04) & $-$0.79 & (0.15)\
NGC3982 & $-$0.02 & (0.04) & $-$0.59 & (0.07)\
NGC4051 & $-$1.26 & (0.03) & $-$1.16 & (0.08)\
UGC7064 & $-$0.75 & (0.04) & $-$1.05 & (0.12)\
NGC4151 & $-$2.14 & (0.02) & $-$2.50 & (0.05)\
MRK766 & $-$0.81 & (0.02) & $-$2.03 & (0.04)\
NGC4388 & $-$0.53 & (0.03) & $-$1.29 & (0.09)\
NGC4501 & $-$0.28 & (0.05) & 0.34 & (0.08)\
NGC4579 & $-$0.67 & (0.02) & $-$0.36 & (0.08)\
NGC4593 & $-$1.11 & (0.02) & $-$1.33 & (0.09)\
NGC4594 & $-$0.74 & (0.06) & $-$0.12 & (0.46)\
NGC4602 & 0.04 & (0.05) & $-$0.19 & (0.13)\
TOL1238-364 & $-$0.90 & (0.03) & $-$1.51 & (0.06)\
M-2-33-34 & $-$0.72 & (0.04) & $-$1.30 & (0.12)\
NGC4941 & $-$0.90 & (0.03) & $-$1.33 & (0.14)\
NGC4968 & $-$1.47 & (0.02) & $-$1.58 & (0.10)\
NGC5005 & 1.45 & (0.03) & $-$0.45 & (0.04)\
NGC5033 & 0.10 & (0.02) & $-$0.19 & (0.18)\
NGC5135 & 0.44 & (0.02) & $-$0.72 & (0.06)\
M-6-30-15 & $-$1.74 & (0.02) & $-$1.67 & (0.16)\
NGC5256 & 0.85 & (0.02) & $-$1.39 & (0.04)\
IC4329A & $-$2.14 & (0.02) & $-$2.70 & (0.10)\
NGC5347 & $-$1.51 & (0.04) & $-$1.67 & (0.12)\
NGC5506 & $-$0.67 & (0.02) & $-$2.11 & (0.05)\
NGC5548 & $-$1.65 & (0.03) & $-$1.88 & (0.13)\
MRK817 & $-$0.80 & (0.02) & $-$2.51 & (0.08)\
NGC5929 & 0.45 & (0.07) & $-$1.26 & (0.09)\
NGC5953 & 0.51 & (0.03) & &\
M-2-40-4 & $-$0.77 & (0.01) & $-$1.28 & (0.05)\
F15480-0344 & $-$1.21 & (0.02) & $-$1.82 & (0.10)\
NGC6810 & $-$0.42 & (0.02) & $-$1.07 & (0.03)\
NGC6860 & $-$2.06 & (0.02) & $-$0.61 & (0.13)\
NGC6890 & $-$0.66 & (0.02) & $-$0.66 & (0.05)\
IC5063 & $-$1.01 & (0.02) & $-$2.70 & (0.06)\
UGC11680 & $-$1.39 & (0.03) & $-$0.59 & (0.27)\
NGC7130 & 0.44 & (0.02) & $-$1.24 & (0.03)\
NGC7172 & 0.40 & (0.02) & $-$0.65 & (0.04)\
NGC7213 & $-$2.30 & (0.03) & $-$0.76 & (0.10)\
NGC7314 & $-$0.77 & (0.02) & $-$0.82 & (0.13)\
M-3-58-7 & $-$0.87 & (0.02) & $-$1.69 & (0.05)\
NGC7469 & $-$0.11 & (0.01) & $-$1.55 & (0.03)\
NGC7496 & 0.47 & (0.02) & $-$1.57 & (0.04)\
NGC7582 & 0.71 & (0.02) & $-$1.38 & (0.03)\
NGC7590 & 0.10 & (0.03) & $-$0.34 & (0.06)\
NGC7603 & $-$1.75 & (0.02) & $-$1.01 & (0.16)\
NGC7674 & $-$0.94 & (0.02) & $-$1.66 & (0.04)\
CGCG381-051 & $-$0.77 & (0.02) & $-$1.69 & (0.08)\
Figure \[fig:allseds\]
[^1]: Details are provided in the IRAC Data Handbook, available at http://ssc.spitzer.caltech.edu/irac/dh/.
[^2]: http://ssc.spitzer.caltech.edu/irac/calib
[^3]: http://ssc.spitzer.caltech.edu/postbcd/mopex.html
[^4]: http://ssc.spitzer.caltech.edu/irac/calib/
[^5]: http://ssc.spitzer.caltech.edu/mips/dh
[^6]: http://ssc.spitzer.caltech.edu/postbcd/spice.html.
[^7]: http://ssc.spitzer.caltech.edu/mips/dh
---
abstract: 'We use a result of van Geemen [@vG2] to determine the endomorphism algebra of the Kuga–Satake variety of a K3 surface with real multiplication. This is applied to prove the Hodge conjecture for self-products of double covers of ${\mathbb{P}}^2$ which are ramified along six lines.'
address: 'Mathematisches Institut der Universit[ä]{}t Bonn, Endenicher Allee 60, 53115 Bonn, Germany'
author:
- Ulrich Schlickewei
title: 'The Hodge conjecture for self-products of certain K3 surfaces'
---
[Introduction]{}

Let $S$ be a complex K3 surface, i.e. a smooth, projective surface over ${\mathbb{C}}$ satisfying $H^1(S,{\mathcal{O}}_S) = 0$ and $\omega_S \simeq {\mathcal{O}}_S$. Let $T(S) \subset H^2(S,{\mathbb{Q}})$ be the rational transcendental lattice of $S$, defined as the orthogonal complement of the Néron–Severi group with respect to the intersection form. The algebra $E_S := \operatorname{End}_{\operatorname{Hdg}}(T(S))$ of endomorphisms of $T(S)$ which preserve the Hodge decomposition can be interpreted as a subspace of the space of (2,2)-classes on the self-product $S \times S$. The Hodge conjecture for $S \times S$ predicts that $E_S$ consists of linear combinations of fundamental classes of algebraic surfaces in $S \times S$. Using the Lefschetz theorem on (1,1)-classes, it is easily seen that conversely the Hodge conjecture for $S \times S$ holds if $E_S$ is generated by algebraic classes.
Mukai [@Mu] used his theory of moduli spaces of sheaves to prove that if the Picard number of $S$ is at least 11, then any $\varphi \in E_S$ which preserves the intersection form on $T(S)$ can be represented as a linear combination of fundamental classes of algebraic cycles. Later this result was improved by Nikulin [@N], on the basis of lattice-theoretic arguments, to the case where the Picard number of $S$ is at least 5. In [@Mu2], Mukai announced that using the theory of moduli spaces of twisted sheaves, the hypothesis on the Picard number could be omitted.
But how many isometries exist in the algebra $E_S$? Results of Zarhin [@Zarhin] imply that $E_S$ is an algebraic number field, which is either totally real (we say that $S$ has *real multiplication*) or a CM field ($S$ has *complex multiplication*). Isometries of $T(S)$ correspond to elements of norm 1 in $E_S$. If $S$ has complex multiplication, one can use the fact that CM fields are generated as ${\mathbb{Q}}$-vector spaces by elements of norm 1 to see that $E_S$ is generated by isometries. In combination with Mukai’s results, this proves the Hodge conjecture for self-products of K3 surfaces with complex multiplication and with Picard number at least 5. This was noticed by Ramón-Marí [@Ma]. If $S$ has real multiplication, the only Hodge isometries in $E_S$ are plus or minus the identity. Thus, Mukai’s results are no longer sufficient to prove the algebraicity of interesting classes in $E_S$.
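As a concrete sanity check of the norm-one claim (our own illustration, not taken from the paper; the field ${\mathbb{Q}}(i)$ and the elements $i$ and $(3+4i)/5$ are chosen for simplicity), one can verify that a CM field is spanned over ${\mathbb{Q}}$ by elements of norm 1:

```python
from fractions import Fraction

# Represent a + b*i in the CM field Q(i) as the pair (a, b) of rationals.
# Here "norm 1" means e * conj(e) = a^2 + b^2 = 1.
u = (Fraction(0, 1), Fraction(1, 1))   # i
v = (Fraction(3, 5), Fraction(4, 5))   # (3 + 4i)/5

for a, b in (u, v):
    assert a**2 + b**2 == 1  # both elements have norm 1

# u and v span Q(i) as a Q-vector space: their coordinate matrix is invertible.
det = u[0] * v[1] - u[1] * v[0]
assert det != 0
```

For a totally real field, by contrast, the only norm-one elements are $\pm 1$, which is exactly why the real-multiplication case resists this argument.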
In order to approach the case of real multiplication one passes from K3 surfaces to Abelian varieties by associating to a K3 surface $S$ its Kuga–Satake Abelian variety $A$. By construction, see [@KS], there exists an inclusion of Hodge structures $T(S)\subset H^2(A\times A,{\mathbb{Q}})$. Van Geemen [@vG2] studied the Kuga–Satake variety of a K3 surface with real multiplication. He discovered that the corestriction of a certain Clifford algebra over $E_S$ plays an important role for the Kuga–Satake variety of $S$. We rephrase and slightly improve his result which then reads as follows:
\[ThmZerlegungKS\] Let $S$ be a K3 surface with real multiplication by a totally real number field $E_S$ of degree $d$ over ${\mathbb{Q}}$. Let $A$ be a Kuga–Satake variety of $S$.
Then there exists an Abelian variety $B$ such that $A$ is isogenous to $B^{2^{d-1}}$. The endomorphism algebra of $B$ is $$\operatorname{End}_{{\mathbb{Q}}}(B) = \operatorname{Cores}_{E /{\mathbb{Q}}} C^0(Q).$$
Here, $Q: T \times T \to E_S$ is a quadratic form on $T$ which already appeared in Zarhin’s paper [@Zarhin] and which will be reintroduced in Section \[SectionSMTZarhin\], $C^0(Q)$ is the even Clifford algebra of $Q$ over $E_S$ and $\operatorname{Cores}_{E/{\mathbb{Q}}}C^0(Q)$ is the corestriction of this algebra. The corestriction of algebras will be reviewed in Section \[SectionCorestriction\].
Theorem \[ThmZerlegungKS\] leads to a better understanding of the phenomenon of real multiplication for K3 surfaces by allowing us to calculate the endomorphism algebra of the corresponding Kuga–Satake varieties. However, since the Kuga–Satake construction is purely Hodge-theoretic, this still gives no geometric explanation. Therefore, we focus on one of the few families of K3 surfaces for which the Kuga–Satake correspondence has been understood geometrically. This is the family of double covers of ${\mathbb{P}}^2$ ramified along six lines. Paranjape [@P] found an explicit cycle on $S \times A \times A$ which realizes the inclusion of Hodge structures $T(S) \subset H^2(A \times A,{\mathbb{Q}})$. Building on the decomposition result for Kuga–Satake varieties we derive
\[ThmHCDC\] Let $S$ be a K3 surface which is a double cover of ${\mathbb{P}}^2$ ramified along six lines. Then the Hodge conjecture is true for $S \times S$.
As pointed out by van Geemen [@vG2], there are one-dimensional sub-families of the family of such double covers with real multiplication by a totally real quadratic number field. In conjunction with our Theorem \[ThmHCDC\], this allows us to produce examples of K3 surfaces $S$ with non-trivial real multiplication for which $\operatorname{End}_{\operatorname{Hdg}}(T(S))$ is generated by algebraic classes. We could not find examples of this type in the existing literature.
The *plan of the paper* is as follows: In Section \[SectionHS\] we review Zarhin’s results on the endomorphism algebra and on the special Mumford–Tate group of an irreducible Hodge structure of K3 type. Also, we recall from [@vG2] how a Hodge structure of K3 type with real multiplication splits over a finite extension of ${\mathbb{Q}}$.
Section \[SectionKS\] is devoted to the proof of Theorem \[ThmZerlegungKS\]. After reviewing the definition of the corestriction of algebras, we explain in detail how the Galois group of a normal closure of $E_S$ acts on the Kuga–Satake Hodge structure. This is the key to the proof.
In the final Section \[SectionDoubleCovers\] we study double covers of ${\mathbb{P}}^2$ ramified along six lines. We review results of Lombardo [@L] on the Kuga–Satake variety of such K3 surfaces, of Schoen [@S] and van Geemen [@vG3] on the Hodge conjecture for certain Abelian varieties of Weil type and of course Paranjape’s [@P] result on the algebraicity of the Kuga–Satake correspondence. Together with Theorem \[ThmZerlegungKS\], they lead to the proof of Theorem \[ThmHCDC\].
*Acknowledgements.* This work is a part of my Ph.D. thesis prepared at the University of Bonn. It is a great pleasure to thank my advisor Daniel Huybrechts for suggesting this interesting topic and for constantly supporting me.
During a four week stay at the University of Milano I had many fruitful discussions with Bert van Geemen. I am most grateful to him for his insights.
[Hodge structures of K3 type with real multiplication]{} \[SectionHS\]
[Hodge structures of K3 type and their endomorphisms]{}

Let $\mathrm{U}(1)$ be the one-dimensional unitary group, regarded as a real algebraic group. To fix notation, we recall that a Hodge structure of weight $k$ is a finite-dimensional ${\mathbb{Q}}$-vector space $T$ together with a morphism of real algebraic groups $h: \mathrm{U}(1) \to \mathrm{GL}(T)_{{\mathbb{R}}}$ such that for $z \in \mathrm{U}(1)({\mathbb{R}})\subset {\mathbb{C}}$ the ${\mathbb{C}}$-linear extension of the endomorphism $h(z)$ is diagonalizable with eigenvalues $z^p \overline{z}^q$ where $p+q= k$ and $p,q \ge 0$ (cf. e.g. [@vG2 1.1]). The eigenspace for $z^p \overline{z}^q$ is denoted by $T^{p,q} \subset T_{{\mathbb{C}}}$.
A polarization of a weight $k$ Hodge structure $(T,h)$ is a bilinear form $q: T \times T \to {\mathbb{Q}}$ which is $\mathrm{U}(1)$-invariant and which has the property that $(-1)^{k (k-1)/2}q(*,h(i)*): T_{{\mathbb{R}}} \times T_{{\mathbb{R}}} \to {\mathbb{R}}$ is a symmetric, positive definite bilinear form.
A *Hodge structure of K3 type $(T,h,q)$* consists of a ${\mathbb{Q}}$-Hodge structure $(T, h: \mathrm{U}(1) \to \mathrm{GL}(T)_{{\mathbb{R}}})$ of weight 2 with $\dim_{{\mathbb{C}}} T^{2,0} = 1$ together with a polarization $q : T \times T \to {\mathbb{Q}}$.
*Examples.* The second primitive (rational) cohomology and the (rational) transcendental lattice of a projective K3 surface yield examples of Hodge structures of K3 type. More generally, the second primitive cohomology and the Beauville–Bogomolov orthogonal complement of the Néron–Severi group of an irreducible symplectic variety are Hodge structures of K3 type [@GHJ Part III].
Consider the Hodge decomposition $$T_{{\mathbb{C}}} := T \otimes_{{\mathbb{Q}}} {\mathbb{C}}= T^{2,0} \oplus T^{1,1} \oplus T^{0,2}.$$ Since the quadratic form $q$ is a polarization, this decomposition is $q$-orthogonal. Moreover, $q$ is positive definite on $(T^{2,0} \oplus T^{0,2})\cap T_{{\mathbb{R}}}$ and negative definite on $T^{1,1} \cap T_{{\mathbb{R}}}$.
Assume that $T$ is an irreducible Hodge structure. Let $E:= \operatorname{End}_{\operatorname{Hdg}}(T)$ be the division algebra of endomorphisms of Hodge structures of $T$. Let $':E \to E$ be the involution given by adjunction with respect to $q$ and let $E_0 \subset E$ be the subalgebra of $E$ formed by $q$-self-adjoint endomorphisms.
\[ThmZarhin1\] The map $$\epsilon: E \to {\mathbb{C}}, \; \; \; e \mapsto \text{eigenvalue of} \; e \; \text{on the eigenspace} \; T^{2,0}$$ identifies $E$ with a subfield of ${\mathbb{C}}$. Moreover, $E_0$ is a totally real number field and the following two cases are possible:
$\bullet$ $E_0 = E$ (in this case we say that $T$ has *real multiplication*) or
$\bullet$ $E_0 \subset E$ is a purely imaginary, quadratic extension and $'$ is the restriction of complex conjugation to $E$ (we say that $T$ has *complex multiplication*).
[Splitting of Hodge structures of K3 type with real multiplication]{} \[SplittingExtensions\]

(For this and the next section see [@vG2], 2.4 and 2.5.) Let $(T,h,q)$ be an irreducible Hodge structure of K3 type and assume that $E = \operatorname{End}_{\operatorname{Hdg}}(T)$ is a totally real number field. Note that by Theorem \[ThmZarhin1\], all endomorphisms in $E$ are $q$-self-adjoint.
By the theorem of the primitive element, there exists $\alpha \in E$ such that $E = {\mathbb{Q}}(\alpha)$. Let $d = [E: {\mathbb{Q}}]$. Let $P$ be the minimal polynomial of $\alpha$ over ${\mathbb{Q}}$, denote by $\widetilde{E}$ the splitting field of $P$ in ${\mathbb{R}}$. Let $G = \mathrm{Gal}(\widetilde{E}/{\mathbb{Q}})$ and $H = \mathrm{Gal}(\widetilde{E}/E)$. Choose $\sigma_1 = \operatorname{id}, \sigma_2, \ldots, \sigma_d \in G$ such that $$G = \sigma_1 H \sqcup \ldots \sqcup \sigma_d H.$$ Note that each coset $\sigma_i H$ induces a well-defined embedding $E \hookrightarrow \widetilde{E}$. In $\widetilde{E}[X]$ we get $$P(X) = \prod_{i=1}^d (X - \sigma_i(\alpha))$$ and consequently $$\begin{aligned} E \otimes_{{\mathbb{Q}}} \widetilde{E} = & {\mathbb{Q}}[X]/(P) \otimes_{{\mathbb{Q}}} \widetilde{E} \\
\simeq & \bigoplus_{i=1}^d \widetilde{E}[X] / (X - \sigma_i (\alpha)) \\
\simeq & \bigoplus_{i=1}^d E_{\sigma_i}.
\end{aligned}$$ The symbol $E_{\sigma_i}$ stands for the field $\widetilde{E}$, the index $\sigma_i$ keeps track of the fact that the $\widetilde{E}$-linear extension of $E \subset \operatorname{End}_{{\mathbb{Q}}}(E)$ acts on $E_{\sigma_i}$ via $e(x) = \sigma_i(e) \cdot x$. See Section \[SectionCorestriction\] for another interpretation of $E_{\sigma_i}$.
In the same way, since $T$ is a finite-dimensional $E$-vector space we get a decomposition $$T_{\widetilde{E}} = T \otimes_{{\mathbb{Q}}} \widetilde{E} = \bigoplus_{i=1}^d T_{\sigma_i}.$$ This is the decomposition of $T_{\widetilde{E}}$ into eigenspaces of the $\widetilde{E}$-linear extension of the $E$-action on $T$, $T_{\sigma_i}$ being the eigenspace of $e_{\widetilde{E}}$ for the eigenvalue $\sigma_i (e)$ for $e \in E$. Since each $e \in E$ is $q$-self-adjoint (that is, $e'= e$), the decomposition is orthogonal. Let $q_{\widetilde{E}}$ be the $\widetilde{E}$-bilinear extension of $q$ to $T_{\widetilde{E}} \times T_{\widetilde{E}}$. Using the notation $$T_i := T_{\sigma_i} \; \text{and} \; q_i = (q_{\widetilde{E}})_{| T_i \times T_i},$$ we have an orthogonal decomposition $$\label{ZerlegungT}
(T_{\widetilde{E}}, q_{\widetilde{E}}) = \bigoplus_{i=1}^d (T_i, q_i).$$
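The algebra splitting $E \otimes_{{\mathbb{Q}}} \widetilde{E} \simeq \bigoplus_i E_{\sigma_i}$ underlying this decomposition can be checked symbolically in the simplest nontrivial case $E = \widetilde{E} = {\mathbb{Q}}(\sqrt{2})$, $d = 2$ (a minimal sketch of our own, not part of the paper): the idempotents cutting out $E_{\sigma_1}$ and $E_{\sigma_2}$ in $\widetilde{E}[X]/(X^2-2)$ are $e_\pm = \tfrac12(1 \pm X/\sqrt{2})$, and $\alpha \otimes 1 = X$ acts on the $e_\pm$-component as $\pm\sqrt{2}$:

```python
import sympy as sp

X = sp.symbols('X')
s2 = sp.sqrt(2)
P = X**2 - 2  # minimal polynomial of alpha = sqrt(2) over Q

def red(expr):
    # reduce mod P: computations take place in E tensor_Q Etilde = Etilde[X]/(P)
    return sp.rem(sp.expand(expr), P, X)

# CRT idempotents cutting out the two summands E_{sigma_1} and E_{sigma_2}
e_plus  = sp.Rational(1, 2) * (1 + X / s2)
e_minus = sp.Rational(1, 2) * (1 - X / s2)

assert sp.simplify(red(e_plus**2) - e_plus) == 0   # idempotent
assert sp.simplify(red(e_plus * e_minus)) == 0     # orthogonal
assert sp.simplify(e_plus + e_minus - 1) == 0      # partition of unity
# alpha tensor 1 = X acts on the e_plus-component as sigma_1(alpha) = +sqrt(2) ...
assert sp.simplify(red(X * e_plus) - red(s2 * e_plus)) == 0
# ... and on the e_minus-component as sigma_2(alpha) = -sqrt(2)
assert sp.simplify(red(X * e_minus) - red(-s2 * e_minus)) == 0
```

The same computation, applied coefficient-wise, exhibits $T_1$ and $T_2$ as the images of $T_{\widetilde{E}}$ under $e_+$ and $e_-$.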
[Galois action on $T$]{} \[GaloisT\]

Letting $G$ act in the natural way on $\widetilde{E}$, we get an (only ${\mathbb{Q}}$-linear) Galois action on $T_{\widetilde{E}} = T \otimes_{{\mathbb{Q}}} \widetilde{E}$. Under this action, for $\tau \in G$ we have $$\label{GalPerm1}
\tau T_{\sigma_i} = T_{ \tau \sigma_i}.$$ This is because the Galois action commutes with the $\widetilde{E}$-linear extension of any endomorphism $e \in E \subset \operatorname{End}_{{\mathbb{Q}}}(T)$, the latter being defined over ${\mathbb{Q}}$, and because for $t_i \in T_{\sigma_i}$ and $e \in E$ $$e_{\widetilde{E}} (\tau (t_i)) =
\tau (e_{\widetilde{E}} (t_i)) = \tau (\sigma_i(e) t_i) = \tau(\sigma_i (e)) \tau(t_i)
= (\tau \sigma_i (e)) \tau (t_i),$$ which means that $\tau$ permutes the eigenspaces of $e_{\widetilde{E}}$ precisely in the way we claimed. Define a homomorphism $$\label{Perm1-d} \gamma: G \to \; \mathfrak{S}_d, \; \; \;
\tau \mapsto \; \{ i \mapsto \tau (i) \; \text{where} \; (\tau \sigma_i) H = \sigma_{\tau(i)} H \}.$$ (This describes the action of $G$ on $G/H$.) With this notation, (\[GalPerm1\]) reads $$\label{GalPerm}
\tau T_i = T_{\tau(i)}.$$
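For orientation, here is how this decomposition and the permutation action (\[GalPerm\]) look in the simplest nontrivial case; the choice $E = {\mathbb{Q}}(\sqrt{2})$ is our own illustration and not taken from the text.

```latex
% Illustrative example (assumption: E = Q(sqrt 2), which is Galois over Q,
% so that Etilde = E, G = {1, tau} with tau(sqrt 2) = -sqrt 2, and H = {1}).
% Taking e in E to be the endomorphism "multiplication by sqrt 2", the
% eigenspace decomposition of T_{Etilde} = T (x)_Q E reads
T_{\widetilde{E}} \;=\; T_1 \oplus T_2, \qquad
T_1 = \ker\!\big(e_{\widetilde{E}} - \sqrt{2}\big), \quad
T_2 = \ker\!\big(e_{\widetilde{E}} + \sqrt{2}\big),
% and since tau exchanges the eigenvalues +sqrt 2 and -sqrt 2, formula
% (GalPerm) specializes to  tau T_1 = T_2  and  tau T_2 = T_1.
```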
Interpret $T$ as a subspace of $T_{\widetilde{E}}$ via the natural inclusion $T \hookrightarrow T_{\widetilde{E}}, \; t \mapsto t \otimes 1$. Denote by $\pi_i$ the projection to $T_i$. For $t \in T$ and $\tau \in G$ we have $t = \tau (t)$. Write $t_i := \pi_i (t \otimes 1)$, then $t = \sum_i t_i$. Using (\[GalPerm\]) we see that $$\label{t_i=sigma_i}
t_{\tau i} = \tau (t_i).$$ It follows that $$\label{iota}
\iota_i: T \to \; T_i, \; \; \; t \mapsto \; \pi_i (t \otimes 1)$$ is an injective map of $E$-vector spaces ($E$ acting on $T_i$ via $\sigma_i : E \hookrightarrow \widetilde{E}$). Equation (\[t\_i=sigma\_i\]) can be rephrased as $$\label{iota_i=sigma_i}
\iota_{\tau i} = \tau \circ \iota_i.$$
Since $q$ is defined over ${\mathbb{Q}}$, we have for $t \in T_{\widetilde{E}}$ and $\tau \in G$ $$q_{\widetilde{E}} (\tau t) = \tau q_{\widetilde{E}}(t).$$ This implies that for $t \in T$ $$\label{QuadratischeFormGalois}
q_i (\iota_i(t)) = \sigma_i q_1(\iota_1(t)).$$
[The special Mumford–Tate group of a Hodge structure of K3 type with real multiplication]{} \[SectionSMTZarhin\] Zarhin [@Zarhin] also computed the special Mumford–Tate group of an irreducible Hodge structure of K3 type. Recall that for a Hodge structure $(W, h: \mathrm{U}(1) \to \mathrm{GL}(W)_{{\mathbb{R}}})$ the special Mumford–Tate group $\mathrm{SMT}(W)$ is the smallest linear algebraic subgroup of $\mathrm{GL}(W)$ defined over ${\mathbb{Q}}$ with $h(\mathrm{U}(1)) \subset \mathrm{SMT}(W)_{{\mathbb{R}}}$ (cf. [@Go]).
Assume that $(T,h,q)$ is an irreducible Hodge structure of K3 type with real multiplication by $E = \operatorname{End}_{\operatorname{Hdg}}(T)$. We continue to use the notations of Section \[SplittingExtensions\]. Denote by $Q$ the restriction of $q_1$ to $T \subset T_1$ (use the inclusion $\iota_1$ of (\[iota\])). Being $H$-invariant, $Q$ takes values in $E$; it is a non-degenerate, symmetric bilinear form on the $E$-vector space $T$. Denote by $\operatorname{SO}(Q)$ the $E$-linear algebraic group of $Q$-orthogonal, $E$-linear transformations of $T$ with determinant $1$.
Recall that for an $E$-variety $Y$ the *Weil restriction* $\operatorname{Res}_{E /{\mathbb{Q}}}(Y)$ is the ${\mathbb{Q}}$-variety whose $K$-rational points are the $E \otimes_{{\mathbb{Q}}} K$-rational points of $Y$ for any extension field ${\mathbb{Q}}\subset K$ (cf. [@BLR]).
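A standard example of a Weil restriction (our own illustration, not specific to this paper) is the restriction of the multiplicative group along ${\mathbb{C}}/{\mathbb{R}}$:

```latex
% For any extension field R \subset K one has
%   Res_{C/R}(G_m)(K) = G_m(C (x)_R K) = (C (x)_R K)^*,
% so in particular the R-rational points are C^*. As an R-variety,
% writing t = x + iy with norm x^2 + y^2,
\operatorname{Res}_{\mathbb{C}/\mathbb{R}}(\mathbb{G}_m)
  \;\simeq\; \operatorname{Spec} \mathbb{R}\big[x,\, y,\, (x^2+y^2)^{-1}\big],
% which after base change to C splits as G_m x G_m.
```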
\[Thm2Zarhin\] The special Mumford–Tate group of the Hodge structure $(T,h,q)$ with real multiplication by $E$ is $$\operatorname{SMT}(T) = \operatorname{Res}_{E/{\mathbb{Q}}}(\operatorname{SO}(Q)).$$ Its representation on $T$ is the natural one, where we regard $T$ as a ${\mathbb{Q}}$-vector space and use that any $E$-linear endomorphism of $T$ is in particular ${\mathbb{Q}}$-linear. After base change to $\widetilde{E}$ $$\operatorname{SMT}(T)_{\widetilde{E}} = \prod_i \operatorname{SO}(T_i, q_i),$$ its representation on $T_{\widetilde{E}} = \bigoplus_i T_i$ is the product of the standard representations.
[Kuga–Satake varieties and real multiplication]{} \[SectionKS\]
[Kuga–Satake varieties]{} Let $(T,h,q)$ be a Hodge structure of K3 type. Kuga and Satake [@KS] found a way to associate to this a polarizable ${\mathbb{Q}}$-Hodge structure of weight one $(V, h_s : \mathrm{U}(1) \to \mathrm{GL}(V)_{{\mathbb{R}}})$, in other words an isogeny class of Abelian varieties, together with an inclusion of Hodge structures $$\label{EmbeddingTransKS}
T \subset V \otimes V.$$ Set $V:= C^0(q)$ where $C^0(q)$ is the even Clifford algebra of $q$. Define a weight one Hodge structure on $V$ in the following way: Choose $f_1, f_2 \in (T^{2,0} \oplus T^{0,2})_{{\mathbb{R}}}$ such that ${\mathbb{C}}(f_1 + i f_2) = T^{2,0}$ and $q(f_i, f_j) = \delta_{i,j}$ (recall that $q_{|(T^{2,0} \oplus T^{0,2})_{{\mathbb{R}}}}$ is positive definite). Define $J: V \to V, \; v \mapsto f_1 f_2 v$; then $J^2 = - \operatorname{id}$, since $f_1, f_2$ are $q$-orthonormal, so that in $C(q)$ we have $(f_1 f_2)^2 = - f_1^2 f_2^2 = -1$. Now we can define a homomorphism of algebraic groups $$h_s: {\mathrm{U}}(1) \to \mathrm{GL}(V)_{{\mathbb{R}}}, \; \; \exp(x i) \mapsto \exp(x J),$$ and this induces the Kuga–Satake Hodge structure. One can check that $h_s$ is independent of the choice of $f_1, f_2$ (see [@vG1 Lemma 5.5]).
It can be shown that the Kuga–Satake Hodge structure admits a polarization (cf. [@vG1 Prop. 5.9]) and that there is an embedding of Hodge structures as in (\[EmbeddingTransKS\]) (see [@vG1 Prop. 6.3]).
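For orientation, we record a standard dimension count (assumed here; it is not stated explicitly in the text):

```latex
% If n = dim_Q T, then the even Clifford algebra has
\dim_{\mathbb{Q}} V \;=\; \dim_{\mathbb{Q}} C^0(q) \;=\; 2^{\,n-1},
% so the Kuga–Satake Abelian variety has complex dimension 2^{n-2}.
% E.g. for a K3 surface of Picard rank 1 the transcendental lattice
% has rank n = 21, and the Kuga–Satake variety has dimension 2^{19}.
```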
[Corestriction of algebras]{} \[SectionCorestriction\] Let $E/K$ be a finite, separable extension of fields of degree $d$ and let $A$ be an $E$-algebra. We use the notations of Section \[SplittingExtensions\], so $\widetilde{E}$ is a normal closure of $E$ over $K$, $\sigma_1, \ldots ,\sigma_d$ is a set of representatives of $G/H$ where $G = \mathrm{Gal}(\widetilde{E}/K)$ and $H = \mathrm{Gal}(\widetilde{E}/E)$.
For $\sigma \in G$ define the twisted $\widetilde{E}$-algebra as the ring $$A_{\sigma} := A \otimes_E \widetilde{E}$$ which carries an $\widetilde{E}$-algebra structure given by $$\lambda \cdot (a \otimes e) = a \otimes \sigma^{-1} (\lambda) e.$$ Note that $A_{\sigma} \simeq A \otimes_E E_{\sigma}$.
Let $V$ be an $E$-vector space and $W$ an $\widetilde{E}$-vector space, let $\sigma \in G$. A homomorphism of $K$-vector spaces $\varphi: V \to W$ is called *$\sigma$-linear* if $\varphi (\lambda v) = \sigma (\lambda) \varphi (v)$ for all $v \in V$ and $\lambda \in E$. If both $V$ and $W$ are $\widetilde{E}$-vector spaces, there is a similar notion of a $\sigma$-linear homomorphism.
\[universal\_twisted\] The map $$\kappa_{\sigma}: A \to A_{\sigma}, \; \; a \mapsto a \otimes 1$$ is a $\sigma$-linear ring homomorphism and the pair $(A_{\sigma}, \kappa_{\sigma})$ has the following universal property: For all $\widetilde{E}$-algebras $B$ and for all $\sigma$-linear ring homomorphisms $\varphi: A \to B$ there exists a unique $\widetilde{E}$-algebra homomorphism $\widetilde{\varphi}: A_{\sigma} \to B$ making the diagram $$\xymatrix{ A \ar[r]^-{\kappa_{\sigma}} \ar[dr]_{\varphi} & A_{\sigma} \ar[d]^{\widetilde{\varphi}} \\
& B }$$ commutative.
We only check the universal property. To give a $K$-linear map $\alpha : A \otimes_E \widetilde{E} \to B$ is the same as to give a $K$-bilinear map $\beta: A \times \widetilde{E} \to B$ satisfying $$\label{beta_lambda}
\beta(\lambda a , e) = \beta(a ,\lambda e)$$ for all $a \in A, e \in \widetilde{E}$ and $\lambda \in E$. These maps are related by the condition $$\alpha (a \otimes e) = \beta(a,e).$$
Now given $\varphi$ as in the lemma, we define $$\psi: A \times \widetilde{E} \to B, \; \; (a,e) \mapsto \sigma(e) \varphi(a).$$ This is a $K$-bilinear map which satisfies (\[beta\_lambda\]) and therefore, it induces a $K$-linear map $$\widetilde{\varphi}: A \otimes_E \widetilde{E} \to B, \; \; a \otimes e \mapsto \sigma(e) \varphi(a).$$ It is clear that $\widetilde{\varphi}$ is a ring homomorphism and that it respects the $\widetilde{E}$-algebra structures if we interpret $\widetilde{\varphi}$ as a map $\widetilde{\varphi}: A_{\sigma} \to B$. The uniqueness of this map is immediate.
[*Remark.*]{} (i) The lemma shows that up to unique $\widetilde{E}$-algebra isomorphism, the twisted algebra $ A_{\sigma_i}$ depends only on the coset $\sigma_i H$. Indeed, for $\sigma \in \sigma_i H $ the inclusion $A \hookrightarrow A_{\sigma_i} $ is $\sigma$-linear because $\sigma$ and $\sigma_i$ induce the same embedding of $E$ into $\widetilde{E}$. By the lemma, there exists an $\widetilde{E}$-algebra isomorphism $\alpha_{\sigma, \sigma_i} : A_{\sigma} \stackrel{\sim}{\to} A_{\sigma_i},
\; a \otimes e \mapsto \sigma(e) \cdot (a \otimes 1) = a \otimes \sigma_i^{-1} \sigma (e)$.
\(ii) In Section \[SplittingExtensions\] we were in the situation $E = {\mathbb{Q}}(\alpha)$. There we discussed the splitting $E \otimes_{{\mathbb{Q}}} \widetilde{E} \simeq \bigoplus_i
\widetilde{E}[X] / (X - \sigma_i(\alpha)) \simeq \bigoplus_i E_{\sigma_i}$ and we used the symbol $E_{\sigma_i}$ for the field $\widetilde{E}$ with $E$-action via $e(x) = \sigma_i (e) \cdot x$. This is precisely our twisted $\widetilde{E}$-algebra $E_{\sigma_i}$ on which $E$ acts via the inclusion $\kappa_{\sigma_i}$.
For $\tau \in G$ there is a unique $\tau$-linear ring isomorphism $\tau: A_{\sigma_i} \to A_{\sigma_{\tau i}}$ which extends the identity on $A \subset A_{\sigma_i}$ (in the sense that $\tau \circ \kappa_{\sigma_i} = \kappa_{\sigma_{\tau i}}$). This map is given as the composition of the following two maps: First apply the identity map $ A_{\sigma_i} \to A_{\tau \sigma_i}, \; a \otimes e \mapsto a \otimes e$ which is a $\tau$-linear ring isomorphism. Then apply the isomorphism $\alpha_{\tau \sigma_i, \sigma_{\tau i}}$ (by definition of the $G$-action on $\{1, \ldots, d\}$ we have $\tau \sigma_i \in \sigma_{\tau i} H$). On simple tensors the map $\tau$ takes the form $$\label{taulinear}
a \otimes e \mapsto a \otimes \sigma_{\tau i}^{-1} \tau \sigma_i (e).$$
These maps induce a natural action of $G$ on $$Z_G(A) := A_{\sigma_1} \otimes_{\widetilde{E}} \ldots \otimes_{\widetilde{E}} A_{\sigma_d}$$ where $$\begin{gathered}
\label{GWirkungCores}
\tau ((a_1 \otimes e_1) \otimes \ldots \otimes (a_d \otimes e_d)) \\
= \big( a_{\tau^{-1}1} \otimes \sigma_1^{-1} \tau \sigma_{\tau^{-1}1} (e_{\tau^{-1}1}) \big) \otimes \ldots
\otimes \big( a_{\tau^{-1}d} \otimes \sigma_d^{-1} \tau \sigma_{\tau^{-1}d} (e_{\tau^{-1} d}) \big).\end{gathered}$$
The *corestriction of $A$ to $K$* is the $K$-algebra of $G$-invariants in $Z_G(A)$ $$\operatorname{Cores}_{E/K} (A): = Z_G(A)^G.$$
[*Remark.*]{} (i) By [@D §8, Cor. 1] there is a natural isomorphism $$\operatorname{Cores}_{E/K} (A) \otimes_{K} \widetilde{E} \simeq Z_G(A).$$ In particular, with $d = [E: K]$ one gets $\dim_K \operatorname{Cores}_{E/K} (A) = (\dim_E(A))^d$.
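A hypothetical example illustrating the dimension formula in (i) (our own, with a shape of $Q$ chosen only for concreteness):

```latex
% Assume E is real quadratic (d = 2) and Q is a quadratic form of rank 3
% over E. Then C^0(Q) is a quaternion algebra over E, so
%   dim_E C^0(Q) = 2^{3-1} = 4,
% and the formula of (i) gives
\dim_{\mathbb{Q}} \operatorname{Cores}_{E/\mathbb{Q}}\big(C^0(Q)\big)
  \;=\; \big(\dim_E C^0(Q)\big)^{d} \;=\; 4^{2} \;=\; 16 .
```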
\(ii) Let $X = \mathrm{Spec}(A)$ for a commutative $E$-algebra $A$. Then for any $K$-algebra $B$ we get a chain of isomorphisms, functorial in $B$ $$\begin{aligned}
\operatorname{Hom}_{K-\mathrm{Alg}} (\operatorname{Cores}_{E/K}(A), B) & \simeq \;
\left( \operatorname{Hom}_{\widetilde{E}-\mathrm{Alg}} (Z_G(A), B \otimes_K \widetilde{E}) \right)^G \\
& \simeq \;
\operatorname{Hom}_{E-\mathrm{Alg}}(A, B \otimes_K E). \end{aligned}$$ Here, the last isomorphism is given by composing $f \in \left( \operatorname{Hom}_{\widetilde{E}-\mathrm{Alg}} (Z_G(A), B \otimes_K \widetilde{E}) \right)^G$ with the inclusion $j: A \hookrightarrow Z_G(A), \; a \mapsto \kappa_{\sigma_1}(a) \otimes 1 \otimes \ldots
\otimes 1$. (The image of this composition is contained in the $H$-invariant part of $B \otimes_K \widetilde{E}$ which is $B \otimes_K E$.) This map is an isomorphism, since $Z_G(A)$ is generated as an $\widetilde{E}$-algebra by elements of the form $\sigma \circ j (a)$ with $a \in A$ and $\sigma \in G$.
It follows that $$\mathrm{Res}_{E/K} (\mathrm{Spec}(A)) \simeq \mathrm{Spec}(\operatorname{Cores}_{E/K} (A)),$$ i.e. the Weil restriction of affine $E$-schemes is the same as the corestriction of commutative $E$-algebras.
[The decomposition theorem]{} From now until the end of this section we assume that $(T,h,q)$ is an irreducible Hodge structure of K3 type such that $E=\operatorname{End}_{\operatorname{Hdg}}(T)$ is a totally real number field.
Recall that in this case $T$ is an $E$-vector space which carries a natural $E$-valued quadratic form $Q$ (see \[SectionSMTZarhin\]). Let $C^0(Q)$ be the even Clifford algebra of $Q$ over $E$. It was van Geemen (see [@vG2 Prop. 6.3]) who discovered that the algebra $\operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q))$ appears as a sub-Hodge structure in the Kuga–Satake Hodge structure of $(T,h,q)$. We are going to show that this contains all information on the Kuga–Satake Hodge structure.
\[Satz\] Denote by $(V,h_s)$ the Kuga–Satake Hodge structure of $(T,h,q)$.
*(i)* The special Mumford–Tate group of $(V,h_s)$ is the image of $\operatorname{Res}_{E/{\mathbb{Q}}} (\operatorname{Spin}(Q))$ in $\operatorname{Spin}(q)$ under a morphism $m$ of rational algebraic groups which after base change to $\widetilde{E}$ becomes $$m_{\widetilde{E}}:
\operatorname{Spin}(q_1) \times \ldots \times \operatorname{Spin}(q_d) \to \operatorname{Spin}(q)_{\widetilde{E}}, \; \; \;
(v_1, \ldots, v_d) \mapsto v_1 \cdot \ldots \cdot v_d.$$ *(ii)* Let $W := \operatorname{Cores}_{E/{\mathbb{Q}}}(C^0(Q))$. Then $W$ can be canonically embedded in $V$ and the image is $\operatorname{SMT}(V)$-stable and therefore, it is a sub-Hodge structure. Furthermore, there is a (non-canonical) isomorphism of Hodge structures $$V \simeq W^{2^{d-1}}.$$
*(iii)* We have $$\operatorname{End}_{\operatorname{Hdg}}(W) = \operatorname{Cores}_{E/{\mathbb{Q}}}(C^0(Q))$$ and consequently $$\operatorname{End}_{\operatorname{Hdg}}(V) = \mathrm{Mat}_{2^{d-1}} \big( \operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q)) \big).$$
The proof will be given in Section \[proof\]. The theorem tells us that the Kuga–Satake variety $A$ of $(T,h,q)$ is isogenous to a self-product $B^{2^{d-1}}$ of an Abelian variety $B$ with $\operatorname{End}_{{\mathbb{Q}}}(B) = \operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q))$ and therefore, it proves Theorem \[ThmZerlegungKS\].
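As a sketch of a consistency check for (ii) (our own bookkeeping, using only the standard fact $\dim_E C^0(Q) = 2^{m-1}$ for $m = \dim_E T$):

```latex
% With m = dim_E T and d = [E : Q], so that dim_Q T = md, one has
\dim_{\mathbb{Q}} W \;=\; \big(2^{\,m-1}\big)^{d} \;=\; 2^{\,d(m-1)},
\qquad
\dim_{\mathbb{Q}} V \;=\; 2^{\,md-1},
% and the dimensions match the isomorphism V ~ W^{2^{d-1}}:
2^{\,d-1} \cdot 2^{\,d(m-1)} \;=\; 2^{\,(d-1)+d(m-1)} \;=\; 2^{\,md-1}.
```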
Note that $B$ is not simple in general. We will see examples below where $B$ decomposes further.
[Galois action on C(q)]{} By Section \[SplittingExtensions\] there is a decomposition $$(T,q)_{\widetilde{E}} = \bigoplus_{i=1}^d (T_i, q_i).$$ This in turn yields an isomorphism $$C(q)_{\widetilde{E}} \simeq C (q_{\widetilde{E}}) \simeq
C(q_1) \widehat{\otimes}_{\widetilde{E}} \ldots \widehat{\otimes}_{\widetilde{E}} C(q_d).$$ Here, the symbol $\widehat{\otimes}$ denotes the graded tensor product of algebras, which on the level of vector spaces is just the usual tensor product, but which twists the algebra structure by a suitable sign (see [@vG1 ??]).
Decompose $C(q_i) = C^0(q_i) \oplus C^1(q_i)$ in the even and the odd part. If we forget the algebra structure and only look at $\widetilde{E}$-vector spaces, we get $$C(q)_{\widetilde{E}} =
\bigoplus_{\mathbf{a} \in \{0,1 \}^d} C^{a_1} (q_1) \otimes_{\widetilde{E}} \ldots \otimes_{\widetilde{E}} C^{a_d} (q_d).$$ For $\mathbf{a} = (a_1,\ldots, a_d) \in \{0,1 \}^d$ define $$C^{\mathbf{a}}(q) = C^{a_1}(q_1) \otimes \ldots \otimes C^{a_d}(q_d).$$ We introduced an action of $G = \mathrm{Gal}(\widetilde{E}/{\mathbb{Q}})$ on $\{1, \ldots , d\}$ (see (\[Perm1-d\])). This induces an action $$\label{WirkungG0,1^d}
G \times \{0,1 \}^d \to \{ 0,1 \}^d, \; \; (\tau, (a_1, \ldots, a_d)) \mapsto (a_{\tau^{-1} 1}, \ldots, a_{\tau^{-1} d}).$$ The next lemma describes the Galois action on $C(q)_{\widetilde{E}}$.
\[GaloisClifford\] *(i)* Via the map $$C(q_i) \subset C(q)_{\widetilde{E}}, \; v_i \mapsto 1 \otimes \ldots \otimes v_i \otimes
\ldots \otimes 1$$ we interpret $C(q_i)$ as a subalgebra of $C(q)_{\widetilde{E}}$. Then the restriction of $\tau \in G$ to $C(q_i)$ induces an isomorphism of ${\mathbb{Z}}/ 2{\mathbb{Z}}$-graded ${\mathbb{Q}}$-algebras $$\tau: C(q_i) \stackrel{\sim}{\to} C(q_{\tau i}).$$ *(ii)* For $\tau \in G$ and $\mathbf{a} \in \{ 0,1 \}^d$ we get $$\tau (C^{\mathbf{a}}(q)) = C^{\tau \mathbf{a}} (q).$$
Tensor the natural inclusion $T \hookrightarrow C(q)$ with $\widetilde{E}$ to get a $G$-equivariant inclusion $$T \otimes_{{\mathbb{Q}}} \widetilde{E} = \bigoplus_{i=1}^d T_i \to C(q)_{\widetilde{E}}.$$ Using (\[GalPerm\]), we find for $t_i \in T_i$ that $\tau (t_i) \in T_{\tau(i)} \subset C(q_{\tau(i)})$. Now, $C(q_i)$ is spanned as a ${\mathbb{Q}}$-algebra by products of the form $$t_1 \cdot \ldots \cdot t_k
= \pm (1 \otimes \ldots \otimes t_1 \otimes \ldots \otimes 1) \cdot \ldots \cdot (1 \otimes \ldots \otimes t_k \otimes
\ldots \otimes 1)$$ for $t_1, \ldots, t_k \in T_i$. Since $G$ acts by ${\mathbb{Q}}$-algebra homomorphisms on $C(q)_{\widetilde{E}}$, this implies (i).
Item (ii) is an immediate consequence of (i): The space $C^{\mathbf{a}}(q)$ is spanned as ${\mathbb{Q}}$-vector space by products of the form $v_1 \cdot \ldots \cdot v_d = \pm v_1 \otimes \ldots \otimes v_d$ with $v_i \in C^{a_i}(q_i)$. Then use again, that $G$ acts by ${\mathbb{Q}}$-algebra homomorphisms.
\[twistedC(Q)\] For $i \in \{1, \ldots, d \}$ the twisted algebra $C^0(Q)_{\sigma_i}$ is canonically isomorphic as an $\widetilde{E}$-algebra to $C^0(q_i)$. Thus $$Z_G(C^0(Q)) \simeq C^0(q_1) \otimes_{\widetilde{E}} \ldots \otimes_{\widetilde{E}} C^0(q_d).$$ On both sides there are natural $G$-actions: On the left hand side $G$ acts via the action introduced in (\[GWirkungCores\]), whereas on the right hand side it acts via the restriction of its action on $C(q)_{\widetilde{E}}$ (use Lemma \[GaloisClifford\]). Then the above isomorphism is $G$-equivariant.
Fix $i \in \{1, \ldots, d\}$. The composition of the canonical inclusion $C^0(Q) \subset C^0(q_1) \simeq C^0(Q)_{\widetilde{E}}$ with the restriction to $C^0(Q)$ of the map $\sigma_i : C(q_1) \to C(q_i)$ from Lemma \[GaloisClifford\] induces a $\sigma_i$-linear ring homomorphism $$\varphi_i : C^0(Q) \hookrightarrow C^0(q_i).$$ By Lemma \[universal\_twisted\] we get an $\widetilde{E}$-algebra homomorphism $$\widetilde{\varphi}_i : C^0(Q)_{\sigma_i} \to C^0(q_i).$$ Recall that there are inclusions $\iota_i: T \hookrightarrow T_i$ (see (\[iota\])) which satisfy $\tau \circ \iota_i = \iota_{\tau i}$ (see (\[iota\_i=sigma\_i\])). Let $t_1, \ldots, t_m \in T$ such that $\iota_1(t_1), \ldots, \iota_1(t_m)$ form a $q_1$-orthogonal basis of $T_1$. Then the vectors $\iota_i (t_1), \ldots, \iota_i(t_m)$ form a $q_i$-orthogonal basis of $T_i$ (use (\[QuadratischeFormGalois\])). By definition of $\widetilde{\varphi_i}$ $$\label{WirkungTildePhi}
\widetilde{\varphi}_i \left( \iota_1 (t_1)^{i_1} \cdot \ldots \cdot \iota_1 (t_m)^{i_m} \right) =
\iota_i(t_1)^{i_1} \cdot \ldots \cdot \iota_i (t_m)^{i_m}.$$ This implies that $\widetilde{\varphi}_i$ maps an $\widetilde{E}$-basis of $C^0(Q)_{\sigma_i}$ onto an $\widetilde{E}$-basis of $C^0(q_i)$, whence $\widetilde{\varphi}_i$ is an isomorphism of $\widetilde{E}$-algebras.
As for the $G$-equivariance, we have to check that for all $\tau \in G$ the diagram $$\begin{CD}
C^0(Q)_{\sigma_i} @>\widetilde{\varphi}_i>> C^0(q_i) \\
@V{\tau}VV @VV{\tau}V \\
C^0(Q)_{\sigma_{\tau i}} @>\widetilde{\varphi}_{\tau i}>> C^0(q_{\tau i})
\end{CD}$$ is commutative. It is enough to check this on an $\widetilde{E}$-basis of $C^0(Q)_{\sigma_i}$ because the vertical maps are both $\tau$-linear whereas the horizontal ones are $\widetilde{E}$-linear. Since $\tau: C^0(Q)_{\sigma_i} \to C^0(Q)_{\sigma_{\tau i}}$ was defined as the extension of the identity map on $C^0(Q) \subset C^0(Q)_{\sigma_i}$, we have $$\begin{aligned}
\widetilde{\varphi}_{\tau i} \circ \tau \left( \iota_1 (t_1)^{i_1} \cdot \ldots \cdot \iota_1(t_m)^{i_m} \right) = &
\widetilde{\varphi}_{\tau i} \left( \iota_1 (t_1)^{i_1} \cdot \ldots \cdot \iota_1(t_m)^{i_m} \right) \\
= & \; \iota_{\tau i}(t_1)^{i_1} \cdot \ldots \cdot \iota_{\tau i} (t_m)^{i_m} \\
= & \; (\tau \circ \iota_i)(t_1)^{i_1} \cdot \ldots \cdot (\tau \circ \iota_i) (t_m)^{i_m} \\
= & \; \tau \left( \iota_i (t_1)^{i_1} \cdot \ldots \cdot \iota_i (t_m)^{i_m} \right) \\
= & \; \tau \circ \widetilde{\varphi}_i \left( \iota_1 (t_1)^{i_1} \cdot \ldots \cdot \iota_1 (t_m)^{i_m} \right).
\end{aligned}$$ This completes the proof of the lemma.
[Proof of the decomposition theorem]{} \[proof\] Let $K$ be a field and $(U,r)$ be a quadratic $K$-vector space. Recall that the spin group of $r$ comes with two natural representations:
First there is the covering representation $\rho: \operatorname{Spin}(r) \to \operatorname{SO}(r)$ which over an extension field $K \subset L$ maps $x \in \operatorname{Spin}(r)(L) = \{ x \in (C^0(r) \otimes_{K} L)^* \; | \; x \iota(x) = 1 \; \text{and} \;
x U x^{-1} \subset U\}$ to the endomorphism $U \to U, \; u \mapsto x u x^{-1}$. Here, $\iota : C(r) \to C(r)$ is the natural involution of the Clifford algebra.
Secondly, the spin representation realizes $\operatorname{Spin}(r)$ as a subgroup of $\mathrm{GL}(C^0(r))$ by sending $y \in \operatorname{Spin}(r)(L)$ to the endomorphism of $C^0(r)$ given by $x \mapsto y \cdot x$.
*Proof of (i). By [@vG1 Prop. 6.3], there is a commutative diagram $$\label{DiagrammHodge}
\begin{CD} {\mathrm{U}}(1) @>h_s>> \operatorname{Spin}(q)_{{\mathbb{R}}} @>>> \mathrm{GL}(C^0(q))_{{\mathbb{R}}} \\
@| @VV{\rho}V \\
{\mathrm{U}}(1) @>>{h}> \operatorname{SO}(q)_{{\mathbb{R}}} @>>> \mathrm{GL}(T)_{{\mathbb{R}}}.
\end{CD}$$ (Van Geemen works with the Mumford–Tate group, therefore he gets a factor $t^2$ in 6.3.2. This factor is 1 if one restricts attention to the special Mumford–Tate group; moreover, it is then clear that $h_s({\mathbb{C}}^*) \subset \mathrm{CSpin}(q) = \{ v \in C^0(q)^* \; | \; v T v^{-1} \subset T \}$ implies $h_s({\mathrm{U}}(1)) \subset \operatorname{Spin}(q)$.)*
**Claim: There is a Cartesian diagram $$\begin{CD} \operatorname{SMT}(V) @>>> \operatorname{Spin}(q) \\
@V{\rho_{|\operatorname{SMT}(V)}}VV @VV{\rho}V \\
\operatorname{SMT}(T) @>>> \operatorname{SO}(q).
\end{CD}$$ where the horizontal maps are appropriate factorizations of the inclusions $\operatorname{SMT}\subset \mathrm{GL}$ whose existence is guaranteed by (\[DiagrammHodge\]).**
*Proof of the claim. It is clear by looking at (\[DiagrammHodge\]) and at the definition of the special Mumford–Tate group that $$\operatorname{SMT}(V) \subset \operatorname{SMT}(T) \times_{\operatorname{SO}(q)} \operatorname{Spin}(q).$$ In the same way we see that $$\operatorname{SMT}(T) \subset \rho(\operatorname{SMT}(V))$$ and hence we have a chain of inclusions $$\operatorname{SMT}(V) \subset \operatorname{SMT}(T) \times_{\operatorname{SO}(q)} \operatorname{Spin}(q) \subset \rho(\operatorname{SMT}(V)) \times_{\operatorname{SO}(q)} \operatorname{Spin}(q).$$ But over any field, the kernel of $\rho$ consists of $\{ \pm 1 \} \subset \operatorname{SMT}(V)$ (because $h_s(-1) = -1$) and thus $$\operatorname{SMT}(V) = \rho(\operatorname{SMT}(V)) \times_{\operatorname{SO}(q)} \operatorname{Spin}(q).$$ This proves the claim. $\mathrm{(Claim)} \Box$*
To continue the proof of (i) we have to define the morphism of rational algebraic groups $$m: \operatorname{Res}_{E/{\mathbb{Q}}}(\operatorname{Spin}(Q)) \to \operatorname{Spin}(q).$$ For that sake, note first that there is a natural isomorphism of $\widetilde{E}$-algebras $$\label{C(Q)ext} \begin{aligned}
C^0(Q) \otimes_{{\mathbb{Q}}} \widetilde{E} \simeq & \;C^0(Q) \otimes_E (E \otimes_{{\mathbb{Q}}} \widetilde{E}) \\
\simeq & \; \bigoplus_i C^0(Q) \otimes_E E_{\sigma_i} \\
\simeq & \; \bigoplus_i C^0(Q)_{\sigma_i} \\ \simeq & \; C^0(q_1) \oplus \ldots \oplus C^0(q_d) \end{aligned}$$ where we use the notations of Section \[SectionCorestriction\] and for the last identification Lemma \[twistedC(Q)\]. Consider the natural $G$-action on $C^0(q_1) \oplus \ldots \oplus C^0(q_d)$ given by $$(\tau, (v_1, \ldots, v_d)) \mapsto (\tau v_{\tau^{-1} 1} , \ldots , \tau v_{\tau^{-1}d}).$$ On $C^0(Q)\otimes_{{\mathbb{Q}}} \widetilde{E}$, the Galois group $G$ acts by its natural action on $\widetilde{E}$. Then the identification made in (\[C(Q)ext\]) is $G$-equivariant and we get an isomorphism of ${\mathbb{Q}}$-vector spaces $$C^0(Q) \simeq \left( C^0(q_1) \oplus \ldots \oplus C^0(q_d) \right)^G, \; \; \;
v \mapsto (\sigma_1(v), \ldots, \sigma_d(v)).$$ Now, look at the morphism of $\widetilde{E}$-affine spaces $$C^0(q_1) \oplus \ldots \oplus C^0(q_d) \to C^0(q)_{\widetilde{E}}, \; \; \; (v_1, \ldots, v_d) \mapsto v_1 \cdot \ldots \cdot v_d.$$ This morphism is $G$-equivariant on the $\widetilde{E}$-points and hence it comes from a morphism of ${\mathbb{Q}}$-varieties $$\operatorname{Res}_{E/{\mathbb{Q}}} C^0(Q) \to C^0(q).$$ The restriction of this latter to $\operatorname{Res}_{E/{\mathbb{Q}}} (\operatorname{Spin}(Q))$ is the morphism $m$ we are looking for. It is a morphism of algebraic groups which after base change to $\widetilde{E}$ takes the form $$m_{\widetilde{E}}: \operatorname{Res}_{E/{\mathbb{Q}}}(\operatorname{Spin}(Q))_{\widetilde{E}} \simeq \operatorname{Spin}(q_1) \times \ldots \times \operatorname{Spin}(q_d)
\to \operatorname{Spin}(q)_{\widetilde{E}}, \; \; \; (v_1, \ldots, v_d) \mapsto v_1 \cdot \ldots \cdot v_d.$$ It remains to show that the image of $m$ in $\operatorname{Spin}(q)$ is $\operatorname{SMT}(V)$. Using the claim we have to show that the following diagram exists and that it is Cartesian $$\label{DiagrammSMT}
\begin{CD}
\operatorname{im}(m) @>>> \operatorname{Spin}(q) \\
@V{\rho_{| \operatorname{im}(m)}}VV @VV{\rho}V \\
\operatorname{Res}_{E / {\mathbb{Q}}}(\operatorname{SO}(Q)) @>>> \operatorname{SO}(q).
\end{CD}$$ Here, the lower horizontal map is the one coming from Zarhin’s Theorem \[Thm2Zarhin\].
It is enough to study (\[DiagrammSMT\]) on $\overline{{\mathbb{Q}}}$-points. It is easily seen that over $\widetilde{E} \subset \overline{{\mathbb{Q}}}$ the composition $\rho \circ m$ factorizes over $$\rho_1 \times \ldots \times \rho_d : \operatorname{Spin}(q_1) \times \ldots \times \operatorname{Spin}(q_d) \to \operatorname{SO}(q_1) \times \ldots \times \operatorname{SO}(q_d) \simeq \operatorname{Res}_{E/{\mathbb{Q}}}(\operatorname{SO}(Q))_{\widetilde{E}} \subset \operatorname{SO}(q)_{\widetilde{E}}.$$ This shows that (\[DiagrammSMT\]) exists. Moreover we see that $\rho_{|\operatorname{im}(m)}$ surjects onto $\operatorname{SMT}(T)(\overline{{\mathbb{Q}}})$ because $\rho_1 \times \ldots \times \rho_d$ does so. Since $\ker(\rho) = \{ \pm 1 \} \subset \operatorname{im}(m)$, the diagram (\[DiagrammSMT\]) is Cartesian. This completes the proof of (i). $\mathrm{(i)} \Box$
*Proof of (ii). Choose $\mathbf{a}_0 = (0, \ldots, 0), \ldots, \mathbf{a}_r \in \{0,1 \}^d$ such that $$\left\{ \mathbf{a} \in \{ 0,1\}^d \; | \; \sum_i a_i \equiv 0 \; (2) \right\} =
G \mathbf{a}_0 \sqcup \ldots \sqcup G \mathbf{a}_r,$$ where $G$ acts on $\{ 0,1 \}^d$ via the action introduced in (\[WirkungG0,1\^d\]). Let $G_{\mathbf{a}_j} \subset G$ be the stabilizer of $\mathbf{a}_j$. Then $$\label{C0=PlusDa} \begin{aligned}
C^0(q)_{\widetilde{E}} = & \; \bigoplus_{j=0}^r \left( \bigoplus_{[\tau] \in G / G_{\mathbf{a}_j}} C^{\tau \mathbf{a}_j}(q)
\right) \\
= & \; \bigoplus_{j=0}^r D^{\mathbf{a}_j} \end{aligned}$$ with $D^{\mathbf{a}_j} = \bigoplus_{[\tau] \in G / G_{\mathbf{a}_j}} C^{\tau \mathbf{a}_j}(q)$.*
By Lemma \[GaloisClifford\] this is a decomposition of $G$-modules. Moreover, recall that $\operatorname{Spin}(q_1) \times \ldots \times \operatorname{Spin}(q_d)$ acts on $C^0(q)_{\widetilde{E}}$ by sending $(v_1, \ldots, v_d)$ to the endomorphism of $C^0(q)_{\widetilde{E}}$ given by left multiplication with $m(v_1, \ldots, v_d) = v_1 \cdot \ldots \cdot v_d$. Under this action each $C^{\mathbf{a}}(q)$ is $(\operatorname{Spin}(q_1)\times \ldots \times \operatorname{Spin}(q_d))$-stable. Thus, by (i) the decomposition (\[C0=PlusDa\]) is also a decomposition of $\operatorname{SMT}(V)(\widetilde{E})$-modules. Hence, by passing to $G$-invariants, (\[C0=PlusDa\]) leads to a decomposition of Hodge structures.
Denote by $$R:= D^{\mathbf{a}_0} = C^{\mathbf{a}_0}(q) = C^0(q_1) \otimes_{\widetilde{E}} \ldots \otimes_{\widetilde{E}} C^0(q_d).$$ By Lemma \[twistedC(Q)\], using the notations of Section \[SectionCorestriction\], we have $$R = Z_G(C^0(Q))$$ as $G$-modules and hence $R^G = \operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q))$. Thus we have recovered $$W= \operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q)) \subset C^0(q) = V$$ as a sub-Hodge structure. We now prove that after passing to $G$-invariants, the remaining summands in (\[C0=PlusDa\]) are isomorphic to sums of copies of $W$.
Denote by $d_j = \sharp (G/ G_{\mathbf{a}_j})$ and choose a set of representatives $\mu_1, \ldots , \mu_{d_j}$ of $G/G_{\mathbf{a}_j}$ in $G$. We consider three group actions on $R^{\oplus d_j}$:
$\bullet$ First there is a natural $(\operatorname{Spin}(q_1) \times \ldots \times \operatorname{Spin}(q_d))$-action which is just the diagonal action of the one on $R$.
$\bullet$ Let $\alpha : G \times R^{\oplus d_j} \to R^{\oplus d_j}$ be the diagonal action of the $G$-action on $R$.
$\bullet$ Finally define the $G$-action $\beta$ by $$\beta: \left\{ \begin{aligned}
G \times \bigoplus_{l=1}^{d_j} R_{[\mu_l]} \to & \;
\bigoplus_{l=1}^{d_j} R_{[\mu_l]} \\
(\tau, (r_{[\mu_1]}, \ldots, r_{[\mu_{d_j}]})) \mapsto & \; (\tau r_{[\tau^{-1} \mu_1]}, \ldots,
\tau r_{[\tau^{-1} \mu_{d_j}]}). \end{aligned} \right.$$
Now we will proceed in two steps:
\(a) We show that $D^{\mathbf{a}_j}$ is isomorphic as $G$-module and as $(\operatorname{Spin}(q_1) \times
\ldots \times \operatorname{Spin}(q_d))$-module to $R^{\oplus d_j}$ where $G$ acts on the latter via $\beta$.
\(b) We show that $R^{\oplus d_j}$ is isomorphic as $G$-module and as $(\operatorname{Spin}(q_1) \times
\ldots \times \operatorname{Spin}(q_d))$-module with $G$ acting via $\alpha$ to $R^{\oplus d_j}$ with $G$ acting via $\beta$.
Note that neither of these two isomorphisms is canonical. Once (a) and (b) are proved, we have an isomorphism $$V_{\widetilde{E}} = C^0(q)_{\widetilde{E}} \simeq R^{\oplus 2^{d-1}}$$ of $G$-modules and of $\operatorname{SMT}(V)(\widetilde{E})$-modules, $G$ acting diagonally on the right hand side. Here we use that $$\sum_j d_j = \sharp \left\{ \mathbf{a} \in \{ 0,1\}^d \; | \; \sum_i a_i \equiv 0 \; (2) \right\} = 2^{d-1}.$$ The proof of (ii) is then accomplished by passing to $G$-invariants.
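A small worked instance of this count (our own illustration):

```latex
% Take d = 3, with G acting on {1,2,3} through the full symmetric group.
% The even-weight vectors of {0,1}^3 fall into r + 1 = 2 orbits:
G\mathbf{a}_0 = \{(0,0,0)\}, \qquad
G\mathbf{a}_1 = \{(1,1,0),\,(1,0,1),\,(0,1,1)\} \;\;
  \text{with } \mathbf{a}_1 = (1,1,0),
% of sizes d_0 = 1 and d_1 = 3, so that indeed
d_0 + d_1 \;=\; 1 + 3 \;=\; 4 \;=\; 2^{\,3-1}.
```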
*Proof of (a). Denote by $F_j$ the field $\widetilde{E}^{G_{\mathbf{a}_j}}$. As $C^{\mathbf{a}_j}(q) \subset D^{\mathbf{a}_j}$ is $G_{\mathbf{a}_j}$-stable, $C^{\mathbf{a}_j}(q) = W_j \otimes_{F_j} \widetilde{E}$ for some $F_j$-vector space $W_j$. Since $C^{\mathbf{a}_j}(q)$ contains units in $C(q)_{\widetilde{E}}$, so does $W_j \subset C^{\mathbf{a}_j}(q)$. (Very formally: There is a linear map $C^{\mathbf{a}_j}(q) \to
\operatorname{End}(C(q)_{\widetilde{E}}), \; w \mapsto \{ v \mapsto v \cdot w \}$ which is defined over $F_{j}$. The image of this map over $\widetilde{E}$ intersects the Zariski-open subset of automorphisms of $C(q)_{\widetilde{E}}$, hence this must happen already over $F_j$.)*
Choose a unit $w_j \in W_j$. Then for $\tau \in G$, since $w_j$ is $G_{\mathbf{a}_j}$-invariant, $\tau w_j \in C^{\tau {\mathbf{a}_j}}(q)$ depends only on the coset $\tau G_{\mathbf{a}_j}$ and is again a unit in $C(q)_{\widetilde{E}}$.
Define an isomorphism of $\widetilde{E}$-vector spaces $$\varphi: \left\{ \begin{aligned}
D^{\mathbf{a}_j} = \bigoplus_{l=1}^{d_j} C^{\mu_l \mathbf{a}_j} (q) \to
& \; \bigoplus_{l=1}^{d_j} R_{[\mu_l]} \\
(v_{[\mu_1]}, \ldots , v_{[\mu_{d_j}]}) \mapsto & \; (v_{[\mu_1]} \cdot \mu_1(w_j), \ldots , v_{[\mu_{d_j}]} \cdot \mu_{d_j} (w_j)).
\end{aligned} \right.$$ This map is clearly $(\operatorname{Spin}(q_1) \times \ldots \times \operatorname{Spin}(q_d))$-equivariant since this group acts by multiplication on the left whereas we multiply on the right.
As for the $G$-equivariance ($G$ acting via $\beta$ on the right hand side), we find for\
$(v_{[\mu_1]}, \ldots , v_{[\mu_{d_j}]}) \in D^{\mathbf{a}_j}$ and $\tau \in G$: $$\begin{aligned}
\varphi \big( \tau(v_{[\mu_1]}, \ldots, v_{[\mu_{d_j}]}) \big) = & \;
= & \; \varphi \big( \tau v_{[\tau^{-1} \mu_1]}, \ldots , \tau v_{[\tau^{-1} \mu_{d_j}]} \big) \\
= & \; \big( \tau v_{[\tau^{-1} \mu_1]} \cdot \mu_1 w_j , \ldots , \tau v_{[\tau^{-1} \mu_{d_j}]}
\cdot \mu_{d_j} w_j \big) \\
= & \; \big( \tau ( v_{[\tau^{-1} \mu_1]} \cdot \tau^{-1} \mu_1 w_j ), \ldots,
\tau ( v_{[\tau^{-1} \mu_{d_j}]} \cdot \tau^{-1} \mu_{d_j} w_j ) \big) \\
= & \; \beta \big( \tau, ( v_{[\mu_1]} \cdot \mu_1 w_j, \ldots , v_{[\mu_{d_j}]} \cdot \mu_{d_j} w_j ) \big) \\
= & \; \beta \big( \tau, \varphi(v_{[\mu_1]}, \ldots , v_{[\mu_{d_j}]}) \big).
\end{aligned}$$ Here we used in the penultimate equality that $\sigma w_j$ depends only on the coset $\sigma G_{\mathbf{a}_j}$. This proves (a). $\mathrm{(a)} \Box$
*Proof of (b). Choose a ${\mathbb{Q}}$-basis $f_1, \ldots, f_{d_j}$ of $F_j$. For $i=1, \ldots, d_j$ define an $\widetilde{E}$-vector space homomorphism by $$\psi_i: \left\{ \begin{aligned}
R \hookrightarrow & \; \bigoplus_{l=1}^{d_j} R_{[\mu_l]} \\
r \mapsto & \; \big( \mu_1 (f_i) \cdot r , \ldots , \mu_{d_j} (f_i) \cdot r \big). \end{aligned} \right.$$ As $(\operatorname{Spin}(q_1) \times \ldots \times \operatorname{Spin}(q_d))(\widetilde{E})$ acts by $\widetilde{E}$-linear automorphisms on $R$, the $\psi_i$ are equivariant for the Spin-action.*
Let’s show that $\psi_i$ is $G$-equivariant, $G$ acting on the right hand side via $\beta$. For $\tau \in G$ and $r \in R$ we get $$\begin{aligned}
\psi_i( \tau r) = & \;\big(\mu_1 (f_i) \cdot \tau r, \ldots, \mu_{d_j} (f_i) \cdot \tau r \big) \\
= & \; \big( \tau (\tau^{-1} \mu_1 (f_i) \cdot r) , \ldots , \tau ( \tau^{-1} \mu_{d_j}(f_i) \cdot r) \big) \\
= & \; \beta \big( \tau, (\mu_1 (f_i) \cdot r, \ldots, \mu_{d_j} (f_i) \cdot r) \big) \\
= & \; \beta(\tau, \psi_i (r)).
\end{aligned}$$ Once more, we used the fact that $\sigma f_i$ depends only on the coset $\sigma G_{\mathbf{a}_j}$.
Finally, using Artin’s independence of characters (see [@La Thm. VI.4.1]), we get $$\det( (\mu_l (f_i))_{l,i}) \neq 0.$$ Consequently, the map $$\oplus_{i=1}^{d_j} \psi_i : R^{\oplus d_j} \to R^{\oplus d_j}$$ is an isomorphism which has the equivariance properties we want and (b) is proved. $\text{(ii)} \Box$
*Proof of (iii). Using that endomorphisms of Hodge structures are precisely those endomorphisms which commute with the special Mumford–Tate group, we have to show that $$\operatorname{End}_{\operatorname{SMT}(V)} (W) = \operatorname{Cores}_{E/{\mathbb{Q}}}(C^0(Q)).$$ Denote by $\mathfrak{g}$ the Lie algebra of $\operatorname{SMT}(V)$. Then $$\begin{aligned}
\operatorname{End}_{\operatorname{SMT}(V)} (W) = & \; \operatorname{End}_{\mathfrak{g}}(W) \\
= & \; \{ f \in \operatorname{End}_{{\mathbb{Q}}}(W) \; | \; Xf - fX = 0 \; \text{for all} \; X \in \mathfrak{g} \}.
\end{aligned}$$ Since for any field extension $K /{\mathbb{Q}}$ we have $\mathrm{Lie}(\operatorname{SMT}(V)_K) = \mathfrak{g} \otimes_{\mathbb{Q}}K$ this implies that $$\label{TensK}
\operatorname{End}_{\operatorname{SMT}(V)_K} (W_K) = \operatorname{End}_{\operatorname{SMT}(V)} (W) \otimes_{{\mathbb{Q}}} K.$$*
Now $\operatorname{SMT}(V)(\widetilde{E}) = \operatorname{Spin}(q_1) \times \ldots \times \operatorname{Spin}(q_d) (\widetilde{E})$ acts on $W_{\widetilde{E}} = C^0(q_1) \otimes \ldots \otimes C^0(q_d)$ by factorwise left multiplication: $$\big( (v_1, \ldots, v_d), w_1 \otimes \ldots \otimes w_d \big) \mapsto
(v_1 \cdot w_1) \otimes \ldots \otimes (v_d \cdot w_d).$$ Therefore, using multiplication on the right, we get an inclusion $$\big( C^0(q_1) \otimes \ldots \otimes C^0(q_d) \big)^{\mathrm{op}} \hookrightarrow
\operatorname{End}_{\operatorname{SMT}(V)(\widetilde{E})} (W_{\widetilde{E}}), \; \; \;
w \mapsto \{ w' \mapsto w' \cdot w \}.$$ Now, $( C^0(q_1) \otimes \ldots \otimes C^0(q_d) )^{\mathrm{op}} \simeq C^0(q_1)^{\mathrm{op}}
\otimes \ldots \otimes C^0(q_d)^{\mathrm{op}} \simeq C^0(q_1) \otimes \ldots \otimes C^0(q_d)$ and hence passing to $G$-invariants we have an inclusion $$\label{inclusion}
\operatorname{Cores}_{E /{\mathbb{Q}}} (C^0(Q)) \hookrightarrow \operatorname{End}_{\operatorname{SMT}(V)({\mathbb{Q}})} (W).$$
We will now show that this is an isomorphism over $\widetilde{E}$. Using (\[TensK\]) and comparing dimensions this will prove (iii).
To show that (\[inclusion\]) is an isomorphism over $\widetilde{E}$ we have to determine the $\operatorname{Spin}(q_1) \times \ldots \times \operatorname{Spin}(q_d)$-invariants in $$\operatorname{End}_{\widetilde{E}}\left( C^0(q_{1}) \otimes \ldots \otimes C^0(q_d) \right)
= \operatorname{End}_{\widetilde{E}}C^0(q_1) \otimes \ldots \otimes \operatorname{End}_{\widetilde{E}}C^0(q_d).$$ Using the next lemma inductively, this is equal to $$\operatorname{End}_{\operatorname{Spin}(q_1)} C^0(q_{1}) \otimes \ldots \otimes \operatorname{End}_{\operatorname{Spin}(q_d)} C^0(q_{d}).$$ Now by [@vG1 Lemma 6.5], $\operatorname{End}_{\operatorname{Spin}(q_i)} C^0(q_{i}) = C^0(q_{i})$. This proves (iii). $\Box$
Let $G$ and $H$ be two reductive linear algebraic groups over a field $K$ of characteristic $0$. Let $M$ resp. $N$ be finite-dimensional representations over $K$ of $G$ resp. $H$. Then $$(M \otimes_{K} N)^{G \times H} = M^G \otimes_{K} N^H.$$
Decompose $M = \bigoplus_i M_i$ and $N = \bigoplus_j N_j$ in irreducible representations. Then $M_i \otimes N_j$ is an irreducible representation of $G \times H$ since fixing $0 \neq m_0 \in M_i$ and $0 \neq n_0 \in N_i$ the orbit $(G \times H) m_0 \otimes n_0$ generates $M_i \otimes N_j$.
To conclude the proof note that the space of invariants is the direct sum of trivial, one-dimensional sub representations.
[The Brauer–Hasse–Noether theorem]{} Let $k$ be a field of characteristic $\neq 2$, let $A$ be a central simple $k$-algebra (i.e. a finite-dimensional $k$-algebra with center $k$ which has no non-trivial two-sided ideals). By Wedderburn’s theorem, there exists a central division algebra $D$ over $k$ and an integer $n > 0$ such that $A \simeq \mathrm{Mat}_n(D)$. Let $d^2$ be the dimension of $D$ over $k$ (this is a square because after base change, $D$ becomes isomorphic to a matrix algebra). Then $d$ is the *index of $A$*, denoted by $i(A)$. The class of $A$ in the Brauer group of $k$ has finite order. This integer is called the *exponent of $A$*, it is denoted by $e(A)$. In general, we have $e(A) | i(A)$.
Let $K / k$ be a cyclic extension of degree $n$, let $\sigma$ be a generator of the Galois group $\mathrm{Gal}(K/k)$, let $a \in k^*$. There is a central simple $k$-algebra $(\sigma, a, K/k)$ which as a $k$-algebra is generated by $K$ and an element $y \in (\sigma, a, K/k)$ such that $$y^n = a \; \; \text{and} \; \; r \cdot y = y \cdot \sigma(r) \; \text{for} \; r \in K.$$ This algebra is called the *cyclic algebra associated with $\sigma, a$ and $K/k$*. A cyclic algebra over $k$ of dimension 4 is a quaternion algebra.
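As a concrete illustration of these notions (an added example, not part of the original text), Hamilton's quaternions over ${\mathbb{Q}}$ realize the smallest nontrivial case:

```latex
% Example (added): A = (-1,-1)_Q, Hamilton's quaternions over Q.
% Take K = Q(i), a cyclic extension of degree n = 2, let \sigma be
% complex conjugation and a = -1. The cyclic algebra (\sigma, -1, K/Q)
% is generated by K and an element y with
%   y^2 = -1  and  r \cdot y = y \cdot \sigma(r)  for  r \in K,
% so y plays the role of the quaternion unit j, and
\[
  (\sigma, -1, {\mathbb{Q}}(i)/{\mathbb{Q}}) \;\simeq\; (-1,-1)_{{\mathbb{Q}}}.
\]
% The norm form x_0^2 + x_1^2 + x_2^2 + x_3^2 is anisotropic over Q,
% so A is a division algebra and i(A) = 2. Since A \otimes_Q A is a
% matrix algebra, the class of A in Br(Q) has order 2, i.e. e(A) = 2.
% Thus e(A) = i(A), as the Brauer--Hasse--Noether theorem predicts.
```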
\[BrauerHasseNoether\] Let $k$ be an algebraic number field. Then any central division algebra $A$ over $k$ is a cyclic algebra (for an appropriate cyclic extension $K/k$ and $\sigma$ and $a$ as above). Moreover, the exponent and the index of $A$ coincide. In particular, a central division algebra of exponent 2 is a quaternion algebra.
[An example]{} We continue to assume that $(T,h,q)$ is a Hodge structure of K3 type with $E = \operatorname{End}_{\operatorname{Hdg}}(T)$ a totally real number field of degree $d$ over ${\mathbb{Q}}$. By [@vG2 Prop. 3.2] we have $\dim_E T \ge 3$. We will consider now the case that $\dim_E T = 3$.
Then $T_1$ is a $3$-dimensional $\widetilde{E}$-vector space with quadratic form $q_1$ of signature $(2+, 1-)$. The $3$-dimensional quadratic spaces $(T_2,q_2), \ldots, (T_d,q_d)$ are negative definite. This implies that $$\begin{aligned}
C^0(q_1)_{{\mathbb{R}}} & = \mathrm{Mat}_2 ({\mathbb{R}}) \; \text{and} \\
C^0(q_i)_{{\mathbb{R}}}& = {\mathbb{H}}\; \text{for} \; i \ge 2 \end{aligned}$$ (see [@vG1 Thm. 7.7]). Since $$\operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q)) \otimes_{{\mathbb{Q}}} \widetilde{E} = Z_G(C^0(Q)) = C^0(q_1) \otimes_{\widetilde{E}} \ldots \otimes_{\widetilde{E}} C^0(q_d)$$ we get $$\operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q)) \otimes_{{\mathbb{Q}}} {\mathbb{R}}= \mathrm{Mat}_{2} ({\mathbb{R}}) \otimes_{{\mathbb{R}}} {\mathbb{H}}\otimes_{{\mathbb{R}}} \ldots \otimes_{{\mathbb{R}}}
{\mathbb{H}}.$$ Now, since ${\mathbb{H}}\otimes {\mathbb{H}}\simeq \mathrm{Mat}_4({\mathbb{R}})$ this becomes $$\label{CoresTensR}
\operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q)) \otimes_{{\mathbb{Q}}} {\mathbb{R}}\simeq \left\{
\begin{aligned} & \mathrm{Mat}_{2^{d-1}} ({\mathbb{H}}) \; \text{for even} \; d \\
& \mathrm{Mat}_{2^{d}} ({\mathbb{R}}) \; \text{for odd} \; d.
\end{aligned} \right.$$ On the other hand, the corestriction induces a homomorphism of Brauer groups $$\mathrm{cores} : \mathrm{Br}(E) \to \mathrm{Br}({\mathbb{Q}})$$ (cf. [@D §9, Thm. 5]). Therefore, the exponent of $\operatorname{Cores}_{E/{\mathbb{Q}}}(C^0(Q))$ in the Brauer group of ${\mathbb{Q}}$ is 2. By the Brauer–Hasse–Noether Theorem \[BrauerHasseNoether\] there exists a (possibly split) quaternion algebra $D$ over ${\mathbb{Q}}$ with $$\label{Cores=D}
\operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q)) \simeq \mathrm{Mat}_{2^{d-1}} (D).$$ Combining (\[CoresTensR\]) with (\[Cores=D\]) we see that $D$ is a definite quaternion algebra over ${\mathbb{Q}}$ in case $d$ is even and an indefinite quaternion algebra in case $d$ is odd. The endomorphism algebra of a Kuga–Satake variety of $(T,h,q)$ is $\mathrm{Mat}_{2^{2d-2}}(D)$. Since the dimension of a Kuga–Satake variety is $2^{\dim_{{\mathbb{Q}}} (T) -2} = 2^{3d-2}$, we have proved
\[KorRM\] Let $(T,q,h)$ be a Hodge structure of K3 type with $E = \operatorname{End}_{\operatorname{Hdg}}(T)$ a totally real number field of degree $d$ over ${\mathbb{Q}}$. Assume that $\dim_E(T)=3$. Then for any Kuga–Satake variety $A$ of $(T,h,q)$ there exists an isogeny $$A \sim B^{2^{2d-2}}$$ where $B$ is a $2^d$-dimensional Abelian variety.
If $d$ is even, $B$ is a simple Abelian variety of type III, i.e. $\operatorname{End}_{{\mathbb{Q}}} (B) = D$ for a definite quaternion algebra $D$ over ${\mathbb{Q}}$.
If $d$ is odd, $B$ has endomorphism algebra $\operatorname{End}_{{\mathbb{Q}}}(B) =D$ for an indefinite (possibly split) quaternion algebra $D$ over ${\mathbb{Q}}$.
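The dimension bookkeeping behind the corollary can be checked mechanically. The sketch below (added, not from the original) verifies that a Kuga–Satake variety of dimension $2^{3d-2}$ splits into $2^{2d-2}$ copies of a $2^d$-dimensional factor:

```python
def kuga_satake_dim(d):
    """dim A = 2^(dim_Q(T) - 2) with dim_Q(T) = 3d (since dim_E T = 3)."""
    return 2 ** (3 * d - 2)

def factor_dim(d):
    """Dimension of B in the isogeny A ~ B^(2^(2d-2))."""
    copies = 2 ** (2 * d - 2)
    dim_a = kuga_satake_dim(d)
    assert dim_a % copies == 0
    return dim_a // copies

# For every degree d, the factor B is 2^d-dimensional, as stated.
for d in range(1, 10):
    assert factor_dim(d) == 2 ** d
```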
[*Remark.*]{} (i) In the case $d = 2$ and $\dim_E(T) =3$, van Geemen showed in [@vG2 Prop. 5.7] that the Kuga–Satake variety of $T$ is isogenous to a self-product of an Abelian fourfold with definite quaternion multiplication and Picard number 1. It is this case which will be of interest in the next section.
\(ii) The case $d=\dim_E (T) =3$ was also treated by van Geemen (see [@vG2 5.8 and 6.4]). He considers the case $D \simeq \mathrm{Mat}_2({\mathbb{Q}})$ and relates this to work of Mumford and Galluzzi. Note that in this case the Abelian variety $B$ of the corollary is not simple.
[*Example.*]{} In [@vG2 3.4], van Geemen constructs a one-dimensional family of six-dimensional K3 type Hodge structures with real multiplication by a quadratic field $E= {\mathbb{Q}}(\sqrt{d})$ for some square-free integer $d> 0$ which can be written in the form $d = c^2 + e^2$ for rational $c,e>0$. These Hodge structures are realized as the transcendental lattice of certain K3 surfaces which are double covers of ${\mathbb{P}}^2$, see Section \[SectionDoubleCovers\]. Pick a member $S$ of this family. Then $T(S) \otimes_{{\mathbb{Q}}} E$ splits in the direct sum of two three-dimensional $E$-vector spaces $T_1$ and $T_2$. It turns out that the quadratic space $(T_1, q_1) = (T_1,Q)$ is isometric to $(E^3, \sqrt{d} X_1^2 + \sqrt{d} X_2^2 - (d-\sqrt{d}c) X_3^2)$. Consequently $$C^0(Q) = (-d, \sqrt{d} (d - \sqrt{d}c))_E \simeq (-1, \sqrt{d}- c)_E.$$ Here for $a,b \in E^*$, the symbol $(a,b)_E$ denotes the quaternion algebra over $E$ generated by elements $1, i$ and $j$ subject to the relations $i^2 = a, j^2 =b$ and $ij = -ji$ (see [@vG1 Ex. 7.5]).
The projection formula for central simple algebras (see [@T Thm. 3.2]) implies that $$\begin{aligned}
\operatorname{Cores}_{E/{\mathbb{Q}}} (C^0(Q)) & \simeq (-1, N_{E/{\mathbb{Q}}}( \sqrt{d} - c))_{{\mathbb{Q}}} \\
& \simeq (-1, c^2 - d)_{{\mathbb{Q}}} \simeq (-1, - e^2)_{{\mathbb{Q}}} \simeq (-1, -1)_{{\mathbb{Q}}} \end{aligned}$$ which are simply Hamilton’s quaternions over ${\mathbb{Q}}$. Here, $N_{E/{\mathbb{Q}}}: E \to {\mathbb{Q}}$ is the norm map. Hence, a Kuga–Satake variety for $T(S)$ is isogenous to a self-product $B^4$ where $B$ is a simple Abelian fourfold with $\operatorname{End}_{{\mathbb{Q}}}(B) = (-1, -1)_{{\mathbb{Q}}}$.
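The norm computation above can be verified numerically. The sketch below (added; the sample values $c=2$, $e=1$, hence $d=5$, are illustrative and not from the text) checks that $N_{E/{\mathbb{Q}}}(\sqrt{d}-c) = c^2 - d = -e^2$:

```python
import math

# Illustrative parameters with d = c^2 + e^2 square-free: c = 2, e = 1, d = 5.
c, e = 2.0, 1.0
d = c * c + e * e

# For E = Q(sqrt(d)), the norm of sqrt(d) - c is the product over the
# two embeddings sqrt(d) |-> +/- sqrt(d):
norm = (math.sqrt(d) - c) * (-math.sqrt(d) - c)

# This is exactly c^2 - d = -e^2, as used in the projection-formula step.
assert math.isclose(norm, c * c - d)
assert math.isclose(norm, -e * e)
```

Since $-e^2$ differs from $-1$ only by the square $e^2$, the quaternion symbol is unchanged, $(-1, -e^2)_{{\mathbb{Q}}} \simeq (-1,-1)_{{\mathbb{Q}}}$, which is the last step of the displayed computation.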
[Double covers of P2 branched along six lines]{} \[SectionDoubleCovers\] Let $S$ be a K3 surface which admits a morphism $p: S \to {\mathbb{P}}^2$ such that the branch locus of $p$ is the union of six lines.
In this section we use the decomposition theorem to prove Theorem \[ThmHCDC\] which states that the Hodge conjecture holds for $S \times S$.
[Abelian varieties of Weil type]{} \[AVWeilType\] By a result of Lombardo [@L], the Kuga–Satake variety of $S$ is of Weil type. We briefly recall what this means.
Let $K = {\mathbb{Q}}(\sqrt{-d})$ for some square-free $d \in {\mathbb{N}}$. A polarized Abelian variety $(A,H)$ of dimension $2n$ is said to be of *Weil type for $K$* if there is an inclusion $K \subset \operatorname{End}_{{\mathbb{Q}}}(A)$ mapping $\sqrt{-d}$ to $\varphi$ such that
$\bullet$ the restriction of $\varphi^*: H^1(A,{\mathbb{C}}) \to H^1(A,{\mathbb{C}})$ to $H^{1,0}(A)$ is diagonalizable with eigenvalues $\sqrt{-d}$ and $- \sqrt{-d}$, both appearing with multiplicity $n$,
$\bullet$ $\varphi^* H = d H$.
There is a natural $K$-valued Hermitian form on the $K$-vector space $H^1(A,{\mathbb{Q}})$ which is defined by $$\widetilde{H}: H^1(A,{\mathbb{Q}}) \times H^1(A,{\mathbb{Q}}) \to K, \; \; \;
(v,w) \mapsto H(\varphi^* v, w) + \sqrt{-d} H (v,w).$$ By definition, the discriminant of a polarized Abelian variety of Weil type $(A,H,K)$ is $$\mathrm{disc}(A,H,K) = \mathrm{disc} (\widetilde{H}) \in {\mathbb{Q}}^* / N_{K/{\mathbb{Q}}}(K^*)$$ where $N_{K/{\mathbb{Q}}}: K \to {\mathbb{Q}}$ is the norm map.
Polarized Abelian varieties of Weil type come in $n^2$-dimensional families (see [@vG3 5.3]).
Weil introduced such varieties as examples of Abelian varieties which carry interesting Hodge classes. He constructs a two-dimensional space, called the space of *Weil cycles* $$W_K \subset H^{n,n}(A,{\mathbb{Q}}).$$ For the definition of $W_K$ see [@vG3 5.2]. In general, the algebraicity of the classes in $W_K$ is not known. Nonetheless there are some positive results. Here we mention one which we will use below.
\[ThmvG\] Let $(A,H)$ be a polarized Abelian fourfold of Weil type for the field ${\mathbb{Q}}(i)$. Assume that the discriminant of $(A,H,{\mathbb{Q}}(i))$ is $1$. Then the space of Weil cycles $W_{{\mathbb{Q}}(i)}$ is spanned by classes of algebraic cycles.
Van Geemen uses a six-dimensional eigenspace in the complete linear system of the unique totally symmetric line bundle $\mathcal{L}$ with $\operatorname{c}_1(\mathcal{L}) = H$ to get a rational (2:1) map of $A$ onto a quadric $Q \subset {\mathbb{P}}^5$. Then the projection on $W_{{\mathbb{Q}}(i)}$ of the classes of the pullbacks of the two rulings of $Q$ generate the space $W_{{\mathbb{Q}}(i)}$.
[Abelian varieties with quaternion multiplication]{} Let $D$ be a definite quaternion algebra over ${\mathbb{Q}}$. Such a $D$ admits an involution $x \mapsto \overline{x}$ which after tensoring with ${\mathbb{R}}$ becomes the natural involution on Hamilton’s quaternions ${\mathbb{H}}$.
A polarized Abelian variety $(A,H)$ of dimension $2n$ has *quaternion multiplication by $D$* if there is an inclusion $D \subset \operatorname{End}_{{\mathbb{Q}}}(A)$ such that
$\bullet$ $H^1(A,{\mathbb{Q}})$ becomes a $D$-vector space and
$\bullet$ for $x \in D$ we have $x^* H = x \overline{x} H$.
We say that $(A,H,D)$ is an Abelian variety of definite quaternion type. Polarized Abelian varieties of dimension $2n$ with quaternion multiplication by the same quaternion algebra come in $n (n-1)/2$-dimensional families (cf. [@BL Sect. 9.5]).
Let $K \subset D$ be a quadratic extension field of ${\mathbb{Q}}$. Then $K$ is a CM field and $(A,H,K)$ is a polarized Abelian variety of Weil type (see [@vGV Lemma 4.5]). The space of quaternion Weil cycles of $(A,H,D)$ $$W_D \subset H^{n,n}(A,{\mathbb{Q}})$$ is defined as the span of $x^* W_K$ where $x$ runs over $D$. It can be shown that this is independent of the choice of $K$ (see [@vGV Prop. 4.7]). For the general member of the family of polarized Abelian varieties with quaternion multiplication these are essentially all Hodge classes:
\[ThmA\] Let $(A,H,D)$ be a general Abelian variety of quaternion type. Then the space of Hodge classes on any self-product of $A$ is generated by products of divisor classes and quaternion Weil cycles on $A$.
In particular, if for one quadratic extension field $K \subset D$ the space of Weil cycles $W_K$ is known to be algebraic, then the Hodge conjecture holds for any self-product of $A$.
In Abdulali’s theorem, a triple $(A,H,D)$ is general if the special Mumford–Tate group of $H^1(A,{\mathbb{Q}})$ is the maximal one. In the moduli space of triples $(A,H,D)$ the locus of general triples is everything but a countable union of proper, closed subsets.
[The transcendental lattice of S]{} We now turn back to our K3 surface $S$. Let $p: S \to {\mathbb{P}}^2$ be the (2:1) morphism which is ramified over six lines.
The Néron–Severi group of $S$ contains the 15 classes $e_1, \ldots, e_{15}$ corresponding to the exceptional divisors over the intersection points of the six lines. Let $h$ be the class of the pullback of ${\mathcal{O}}_{{\mathbb{P}}^2}(1)$.
Define $\widetilde{T}(S):= \langle e_1, \ldots, e_{15} ,h \rangle^{\perp} \subset H^2(S ,{\mathbb{Q}})$. The (rational) transcendental lattice of $S$ is defined to be $T(S): = \operatorname{NS}(S)^{\perp} \subset
H^2(S,{\mathbb{Q}})$. Then we have $$T(S) \subset \widetilde{T}(S).$$ Both, $T(S)$ and $\widetilde{T}(S)$ are Hodge structures of K3 type. In addition, $T(S)$ is irreducible. Since the second Betti number of $S$ is 22, the ${\mathbb{Q}}$-dimension of $\widetilde{T}(S)$ is 6.
[The Kuga–Satake variety of T(S)]{} Denote by $A$ the Kuga–Satake variety associated with $\widetilde{T}(S)$.
\[ThmLombardo\] There is an isogeny $$A \sim B^4$$ where $B$ is an Abelian fourfold with ${\mathbb{Q}}(i) \subset \operatorname{End}_{{\mathbb{Q}}}(B)$. Moreover, $B$ admits a polarization $H$ such that $(B,H,{\mathbb{Q}}(i))$ is a polarized Abelian variety of Weil type with $\mathrm{disc}(B,H,{\mathbb{Q}}(i))=1$.
Paranjape [@P] explains in a very nice way how this variety $B$ is geometrically related to $S$. He shows that there exists a triple $$(C, E, f: C \to E)$$ where $C$ is a genus five curve, $E$ an elliptic curve and $f$ a $(4:1)$ map such that $$\mathrm{Prym}(f) = B.$$ Then $S$ can be obtained as the resolution of a certain quotient of $C \times C$. It is noteworthy that Paranjape does not construct explicitly a triple $(C,E,f)$ starting with a K3 surface $S$ in the family $\pi$. His proof goes the other way round. He associates to any triple a K3 surface and shows then that letting vary the triple he obtains all surfaces in the family $\pi$.
Paranjape’s construction establishes that the Kuga–Satake inclusion $$\label{Paranjape}
\widetilde{T}(S) \hookrightarrow H^2(B^4 \times B^4, {\mathbb{Q}})$$ is given by an algebraic cycle on $S \times B^4 \times B^4$.
[Proof of Theorem 2]{} As pointed out in the introduction, we have to prove that $E_S := \operatorname{End}_{\operatorname{Hdg}}(T(S))$ is spanned by algebraic classes. Since the Picard number of $S$ is at least 16, we can apply Ramón-Marí’s corollary [@Ma] of Mukai’s theorem [@Mu] which proves the assertion in the case that $S$ has complex multiplication.
Therefore, we may assume that $S$ has real multiplication. Note that $T(S)$ is an $E_S$-vector space and that $\dim_{E_S} T(S) \cdot [E_S :{\mathbb{Q}}] = \dim_{{\mathbb{Q}}} T(S) \le 6$. On the other hand, by [@vG2 Lemma 3.2], we know that $\dim_{E_S} T(S) \ge 3$. It follows that either $E_S ={\mathbb{Q}}$ or $E_S = {\mathbb{Q}}(\sqrt{d})$ for some square-free $d \in {\mathbb{Q}}_{>0}$. In the first case we use the fact, that the class of the diagonal $\Delta \subset S \times S$ induces the identity on the cohomology and that the Künneth projectors are algebraic on surfaces so that ${\mathbb{Q}}\operatorname{id}\subset E_S$ is spanned by an algebraic class.
It remains to study the case $E_S = {\mathbb{Q}}(\sqrt{d})$. The idea is to consider the Kuga–Satake variety $A(S)$ of $\widetilde{T}(S) = T(S)$. By Paranjape’s theorem the inclusion $$\widetilde{T}(S) \subset H^2(A(S) \times A(S),{\mathbb{Q}})$$ is algebraic. It follows that there is an algebraic projection $\pi: H^2(A(S) \times A(S),{\mathbb{Q}}) \to \widetilde{T}(S)$ (see [@Kl Cor. 3.14]) and therefore it is enough to show that there is an algebraic class $$\alpha \in H^2(A(S) \times A(S),{\mathbb{Q}}) \otimes H^2(A(S) \times A(S),{\mathbb{Q}}) \subset H^4(A(S)^4, {\mathbb{Q}})$$ with $\pi \otimes \pi (\alpha) = \sqrt{d}$.
Combining Corollary \[KorRM\] with Lombardo’s theorem \[ThmLombardo\] we see that $A(S) \sim B^4$ where $B$ is an Abelian fourfold with $\operatorname{End}_{{\mathbb{Q}}}(B) = D$ for a definite quaternion algebra and ${\mathbb{Q}}(i) \subset D$. Moreover, there is a polarization $H$ of $B$ such that $(B,H,{\mathbb{Q}}(i))$ is a polarized Abelian variety of Weil type of discriminant 1. Since by [@BL Prop. 5.5.7], the Picard number of $B$ is 1, $(B,H,D)$ is a polarized Abelian variety of quaternion type.
There is a one-dimensional family $(B,H,D)_t$ of deformations of $(B,H,D)$ and this corresponds to a one-dimensional family $S_t$ of deformations of $S$ which parametrizes K3 surfaces with real multiplication by the same class. By Abdulali’s Theorem \[ThmA\], for $t$ general the space of Hodge classes on $(B_t)^{16} \sim A(S_t)^4$ is generated by products of divisors and quaternion Weil cycles, that is by products of $H$ and classes in $W_D$. Denote the span of these products in $H^4(A(S_t)^{4},{\mathbb{Q}})$ by $F_t$.
Since the class corresponding to $\sqrt{d} \in \widetilde{T}(S_t) \otimes \widetilde{T}(S_t)$, the projection $\pi: H^2(A(S_t)^2,{\mathbb{Q}}) \to \widetilde{T}(S_t)$ and the space $F_t$ are locally constant, there exists a locally constant class $\alpha_t \in H^4(A(S_t)^4,{\mathbb{Q}})$ with the properties:
$\bullet$ for all $t$ we have $\pi \otimes \pi (\alpha_t) = \sqrt{d}$,
$\bullet$ for all $t$ we have $\alpha_t \in F_t$.
Now by Schoen’s and van Geemen’s Theorem \[ThmvG\] the space of Weil cycles $W_{{\mathbb{Q}}(i)}$ is generated by algebraic classes on any $B_t$. It follows that $W_D$ is generated by algebraic classes and consequently $F_t$ is generated by algebraic classes for any $t$. In particular, $\alpha_t \in F_t$ is algebraic. This proves the theorem. $\Box$
[00000]{}
S. Abdulali, *Abelian varieties of type III and the Hodge conjecture*, Int. J. Math. [**10**]{}, no. 6, (1999), 667-675.
C. Birkenhake, H. Lange, *Complex abelian varieties. Second edition*, Grundlehren, Band 302, Springer (2004).
S. Bosch, W. Lütkebohmert, M. Raynaud, *Néron models*, Erg. Math. 3. Folge, Band 21, Springer (1990).
R. Brauer, H. Hasse, E. Noether, *Beweis eines Hauptsatzes in der Theorie der Algebren*, J. reine angew. Math. [**167**]{} (1932), 399-404.
P. Draxl, *Skew fields*, LMS Lect. Note Series 31, Cambridge University Press (1983).
B. van Geemen, *Theta functions and cycles on some abelian fourfolds*, Math. Zeit. (1996), 617-631.
B. van Geemen, *An introduction to the Hodge conjecture for Abelian varieties*, in: Algebraic Cycles and Hodge Theory (Torino, 1993), LNM 1594, Springer (1994), 233-252.
B. van Geemen, *Kuga–Satake varieties and the Hodge conjecture*, in: The arithmetic and geometry of algebraic cycles (Banff, AB, 1998), Kluwer, Dordrecht (2000), 51-82.
B. van Geemen, *Real multiplication on K3 surfaces and Kuga–Satake varieties*, Mich. Math. J. [**56**]{} (2008), 375-399.
B. van Geemen, A. Verra, *Quaternionic Pryms and Hodge classes*, Topology [**42**]{} (2003), 35-53.
B. Gordon, *A Survey of the Hodge Conjecture for Abelian Varieties*, Appendix B in: J. Lewis, A Survey of the Hodge Conjecture, CRM Monograph Series 10, 2nd ed. (1999).
M. Gross, D. Huybrechts, D. Joyce, *Calabi–Yau manifolds and related geometries*, Springer Universitext (2002).
S. Kleiman, *Algebraic cycles and the Weil conjectures*, in: Dix exposés sur la cohomologie des schémas, North-Holland, Amsterdam (1968), 359-386.
M. Kuga, I. Satake, *Abelian varieties attached to polarized K3 surfaces*, Math. Ann. [**169**]{} (1967), 239-242.
S. Lang, *Algebra*, 3rd ed., Addison-Wesley (1993).
G. Lombardo, *Abelian varieties of Weil type and Kuga–Satake varieties*, Tôhoku Math. J. [**53**]{} (2001), 453-466.
K. Matsumoto, T. Sasaki, M. Yoshida, *The monodromy of the period map of a 4-parameter family of K3 surfaces and the hypergeometric function of type (3,6)*, Int. J. Math. [**3**]{} (1992), 1-164.
S. Mukai, *On the moduli space of bundles on K3 surfaces, I*, in: Vector Bundles on Algebraic Varieties, Bombay (1984), 341-413.
S. Mukai, *Vector bundles on a K3 surface*, Proc. of the ICM, Vol. II, (Beijing, 2002), 495-502.
V. Nikulin, *On correspondences between surfaces of K3 type*, (in Russian), translation in: Math. USSR-Izv. [**30**]{} (1988), 375-383.
K. Paranjape, *Abelian varieties associated to certain K3 surfaces*, Comp. Math. [**68**]{} (1988), 11-22.
J. Ramón-Marí, *On the Hodge conjecture for products of certain surfaces*, eprint [math.AG/055357]{}.
C. Schoen, *Hodge classes on self-products of a variety with an automorphism*, Comp. Math. [**65**]{} (1988), 3-32.
J.-P. Tignol, *On the Corestriction of Central Simple Algebras*, Math. Zeit. [**194**]{} (1987), 267-274.
Y. Zarhin, *Hodge groups of K3 surfaces*, J. reine angew. Math. [**341**]{} (1983), 193-220.
---
abstract: 'The Seyfert galaxy NGC 5515 has double-peaked narrow-line emission in its optical spectrum, and it has been suggested that this could indicate that it has two active nuclei. We observed the source with high resolution Very Long Baseline Interferometry (VLBI) at two radio frequencies, reduced archival Very Large Array data, and re-analysed its optical spectrum. We detected a single, compact radio source at the position of NGC 5515, with no additional radio emission in its vicinity. The optical spectrum of the source shows that the blue and red components of the double-peaked lines have very similar characteristics. While we cannot rule out unambiguously that NGC 5515 harbours a dual AGN, the assumption of a single AGN provides a more plausible explanation for the radio observations and the optical spectrum.'
author:
- |
K.É. Gabányi$^{1,2}$[^1], S. Frey$^{3}$, T. Xiao$^{4,5}\thanks{LAMOST fellow}$, Z. Paragi$^{6}$, T. An$^{5,7}$, E. Kun$^{1}$ and L.Á. Gergely$^{1,8}$\
$^{1}$Departments of Theoretical and Experimental Physics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary\
$^{2}$Konkoly Observatory, MTA Research Centre for Astronomy and Earth Sciences, P.O. Box 67, H-1525 Budapest, Hungary\
$^{3}$FÖMI Satellite Geodetic Observatory, P.O. Box 585, H-1592 Budapest, Hungary\
$^{4}$Partner Group of Max Planck Institute for Astrophysics and Key Laboratory for Research in Galaxies and Cosmology of Chinese Academy\
of Sciences, P.R. China\
$^{5}$Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, 200030 Shanghai, P.R. China\
$^{6}$Joint Institute for VLBI in Europe, Postbus 2, 7990 AA Dwingeloo, The Netherlands\
$^{7}$Key Laboratory of Radio Astronomy, Chinese Academy of Sciences, P.R. China\
$^{8}$Department of Physics, Tokyo University of Science, Shinjuku-ku, Tokyo, Japan
date: 'Accepted 2014 June 16. Received 2014 June 12; in original form 2014 May 14'
title: 'A single radio-emitting nucleus in the dual AGN candidate NGC 5515'
---
\[firstpage\]
galaxies: active – galaxies: Seyfert – galaxies: individual: NGC 5515 – radio continuum: galaxies – techniques: interferometric – techniques: spectroscopic.
Introduction
============
It is widely accepted that most massive galaxies harbour supermassive black holes (SMBHs) in their centres [@salpeter; @lbell]. In hierarchical structure formation models, interactions and mergers between galaxies play an important role in their evolution and consequently in the growth of their central SMBHs. Thus, systems with dual SMBHs, corresponding to a particular phase in the merging process, are expected to exist in the Universe. In such systems, one or both of the SMBHs may be active; several studies suggest that the merging process can cause enhanced accretion onto the central SMBHs and thus initiate “activity” [e.g. @Dima2005]. High-resolution particle hydrodynamical simulations [@VW] suggest that simultaneous activity is mostly expected at the late phases of mergers, at or below 10 kpc-scale separations. Therefore dual active galactic nuclei (AGN, with two active SMBHs in a merger system) are expected to be observed. So far, only a few convincing cases of dual AGN are known [e.g. @Komossa; @rodriguez06; @Bondi; @Liu_disc; @Shen2011; @Fu2012]. In some cases [e.g. @Liu_disc], high spatial-resolution optical photometry and spectroscopy or high-resolution Very Long Baseline Interferometry (VLBI) provided unambiguous evidence of the dual system.
Originally, it was thought that the presence of double-peaked narrow optical emission lines is indicative of the existence of dual AGN, as these lines may originate from the two distinct narrow line regions (NLR) of the two AGN [@nlr_wang]. However, several studies already in the eighties [@heckman81; @heckman84] showed that double-peaked narrow emission lines can arise due to peculiar kinematics and jet–cloud interaction in a single NLR. As of now, there is no known observational approach which could be used to select a large sample of compelling dual AGN candidates. Therefore, it is crucial to check with independent methods whether the candidate sources are indeed dual systems.
Recently, [@Beni13a] reported the serendipitous discovery of two new dual AGN candidates. They studied a sample of ten close-by, intermediate-type Seyfert galaxies chosen from the Sloan Digital Sky Survey [SDSS, @sdssdr4] database. The intermediate-type Seyfert galaxies (spanning from Sy 1.2 to Sy 1.9) belong to the Seyfert 1 class, as they show broad emission lines, albeit with decreasing intensity [@o81]. The role of this kind of AGN within the unified scheme is not clear. Surprisingly, half of the sample studied by [@Beni13a] showed narrow double-peaked emission lines. Based upon the line ratios (O[iii]{}/H$\beta$, N[ii]{}/H$\alpha$, S[ii]{}/H$\alpha$ and O[i]{}/H$\alpha$), [@Beni13a] concluded that among the five double-peaked narrow-line emitters, two (NGC 5515 and Mrk 1469) are good candidates for being dual AGN.
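The diagnostic line ratios mentioned above are conventionally placed on the BPT diagram. The following sketch (added; it uses the standard Kewley et al. 2001 and Kauffmann et al. 2003 demarcation curves, which are not taken from this paper, and hypothetical example ratios) shows how a narrow-line component would be classified:

```python
def kewley01(x):
    """Theoretical 'maximum starburst' curve, valid for log([NII]/Ha) < 0.47."""
    return 0.61 / (x - 0.47) + 1.19

def kauffmann03(x):
    """Empirical star-forming boundary, valid for log([NII]/Ha) < 0.05."""
    return 0.61 / (x - 0.05) + 1.30

def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a point on the [NII]-based BPT diagram.

    Only meaningful for log([NII]/Ha) < 0.05, left of both asymptotes.
    """
    if log_oiii_hb > kewley01(log_nii_ha):
        return "AGN"
    if log_oiii_hb > kauffmann03(log_nii_ha):
        return "composite"
    return "star-forming"

# Hypothetical ratios for two narrow-line components:
print(bpt_class(-0.5, 1.0))   # -> AGN
print(bpt_class(-1.0, 0.0))   # -> star-forming
```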
Both of the sources are radio emitters. We conducted high-resolution radio imaging observations of the brighter radio source, NGC 5515, with the European VLBI Network (EVN). The best resolution achieved was $\sim$2 milli-arcseconds (mas), which corresponds to a projected linear distance of $1.04$ pc in the rest frame of the source, at a redshift of $z$=0.0257, assuming a flat $\Lambda$CDM cosmological model with $H_0$=70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m}$=0.27, and $\Omega_\Lambda$=0.73 [@Wrig06]. In this model, the luminosity distance of NGC 5515 is $D_{\rm L}$=112.3 Mpc.
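The quoted scales follow directly from the adopted cosmology. The sketch below (added, not from the paper) reproduces them with a simple trapezoidal integration of the flat $\Lambda$CDM comoving distance:

```python
import math

C_KM_S = 299792.458                      # speed of light [km/s]
H0, OM, OL = 70.0, 0.27, 0.73            # adopted cosmological parameters
Z = 0.0257                               # redshift of NGC 5515

def E(z):
    """Dimensionless Hubble parameter for a flat Lambda-CDM model."""
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def comoving_distance_mpc(z, steps=10000):
    """Line-of-sight comoving distance [Mpc], trapezoidal rule."""
    h = z / steps
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    s += sum(1.0 / E(i * h) for i in range(1, steps))
    return (C_KM_S / H0) * h * s

d_c = comoving_distance_mpc(Z)
d_l = (1.0 + Z) * d_c                    # luminosity distance, ~112.3 Mpc
d_a = d_c / (1.0 + Z)                    # angular diameter distance
mas_in_rad = math.radians(1.0 / 3.6e6)   # 1 mas in radians
pc_per_mas = d_a * 1.0e6 * mas_in_rad    # projected scale, ~0.52 pc/mas
```

With these numbers a $\sim$2 mas beam corresponds to $\approx$1.04 pc, matching the values quoted in the text.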
Our observations of NGC 5515 and the details of data reduction are given in Section \[observations\], including the analysis of archival multi-frequency data obtained with the US National Radio Astronomy Observatory (NRAO) Very Large Array (VLA). The results are presented in Section \[results\] and further discussed, together with the related optical data, in Section \[discussion\]. The summary is given in Section \[sum\].
Observations and data reduction {#observations}
===============================
VLBI observations
-----------------
We observed the nucleus of NGC 5515 with the EVN at 1.7 and 5 GHz frequencies (project codes EG070A and EG070B). The observations were conducted in e-VLBI mode [@Szom08] where the participating radio telescopes are connected to the central EVN data processor at the Joint Institute for VLBI in Europe (JIVE, Dwingeloo, the Netherlands) via optical fibre networks, to allow correlation in real time. The maximum data transmission rate per station was 1024 Mbit s$^{-1}$, resulting in a total bandwidth of 128 MHz in both left and right circular polarizations, using 2-bit sampling. Both experiments lasted for 4 h. Their dates (2013 April 16 at 1.7 GHz and 2013 June 18 at 5 GHz) were chosen close to each other to minimize the effects of potential long-term source variability. At 1.7 GHz, the following 7 radio telescopes provided useful data: Effelsberg (Germany), Medicina (Italy), Onsala (Sweden), Toruń (Poland), Hartebeesthoek (South Africa), Sheshan (China), and the phased array of the Westerbork Synthesis Radio Telescope (WSRT, the Netherlands). At 5 GHz, the successfully participating 7 radio telescopes were Effelsberg, Medicina, Toruń, Hartebeesthoek, Sheshan, the WSRT, and Yebes (Spain).
NGC 5515 was observed in phase-reference mode [e.g., @Beas95]. By regularly changing the pointing direction of the telescopes between the target source and a sufficiently bright and compact nearby calibrator, the coherent integration time on the target can be extended and thus the imaging sensitivity improved. The quasar J1419+3821 was selected as the phase-reference calibrator, at $1\fdg68$ angular separation from NGC 5515. The calibrator is one of the defining sources of the current 2nd realization of the International Celestial Reference Frame [ICRF2, @Fey09], at right ascension $\alpha_0$=$14^{\rm h}19^{\rm m}46\fs6137607$ and declination $\delta_0$=$38\degr21\arcmin48\farcs475093$, with a formal uncertainty of 0.04 mas in each coordinate. Within each target–reference cycle of $\sim$5 min, NGC 5515 was observed for 3.3 min, accumulating a total on-source integration time of $\sim$2.4 h at both frequencies. In an ideal case with no loss of data, the expected image thermal noise level was 14 $\mu$Jy beam$^{-1}$ and 16 $\mu$Jy beam$^{-1}$ at 1.7 and 5 GHz, respectively[^2]. The noise levels achieved in practice depend on various factors, e.g. downtimes, actual system temperatures, data rate limitations, and radio-frequency interference at the telescope sites.
The NRAO Astronomical Image Processing System [[AIPS]{}, @Grei03] was used for the data calibration in a standard way [e.g. @Diam95]. The visibility amplitudes were calibrated using system temperatures and antenna gains measured at the telescope sites. Fringe-fitting was performed for the calibrator source J1419+3821. The calibrator data were then exported to the [Difmap]{} package [@Shep94] for imaging. The conventional hybrid mapping procedure involving several iterations of CLEANing [@Hogb74] and phase (then amplitude) self-calibration resulted in an image and a brightness distribution model for the practically unresolved calibrator. Overall antenna gain correction factors (typically less than 10 per cent) were determined in [Difmap]{} and applied to the visibility amplitudes in [AIPS]{}. Fringe-fitting was repeated for the phase-reference calibrator in [AIPS]{}, now taking its CLEAN component model into account in order to compensate for small residual phases resulting from its structure. The solutions obtained were interpolated and applied to the NGC 5515 data. The calibrated and phase-referenced visibility data of NGC 5515 were imaged in [Difmap]{}.
Because the a priori position of the central radio source in NGC 5515 used for correlation was accurate to only $\sim$$0\farcs5$, the phase centre was shifted to the location of the brightness peak. After obtaining an initial CLEAN component model, a phase-only self-calibration was performed for 15-min solution intervals, to correct for long-period phase variations at the antennas. The source appeared the most resolved on the baselines to Sheshan, therefore data from this telescope were fixed before another CLEANing and phase self-calibration was performed, now with 5-min solution intervals. Finally, only the phases at the two most sensitive antennas, Effelsberg and the WSRT were allowed to vary when the self-calibration solution interval was set to zero. No amplitude self-calibration was done on the target source. The total intensity images restored with the final CLEAN component models are displayed in Fig. \[image\]. The weights of data points were made inversely proportional to the amplitude errors, by setting [*uvweight 0,$-$1*]{} (natural weighting) in [Difmap]{}.
Archival VLA data
-----------------
We also reduced VLA observations of NGC 5515 found in the NRAO archive. The source was observed in the most extended A configuration at L, C, and X bands (at 1.5, 5, and 8 GHz) in snapshot mode on 1991 Aug 24, 1992 Oct 20, and 1995 Aug 14, respectively. (The project codes were: AC301, AF233 and AM484). Additionally, the source was observed in snapshot mode at L band in B configuration on 1993 Apr 26 (project id.: AT149) as well. The on-source integration times were less than 5 min in all cases.
NGC 5515 was detected as a point source in all three bands with the following flux densities: in A-configuration $S_\mathrm{L}^\mathrm{A}=(16\pm 1)$mJy, $S_\mathrm{C}=(16 \pm 1)$mJy, and $S_\mathrm{X}=(29\pm 1)$mJy; and in B-configuration $S_\mathrm{L}^\mathrm{B}=(26\pm 3)$mJy.
The source was also observed with the VLA at L band in the NRAO VLA Sky Survey [NVSS, @nvss] and the Faint Images of the Radio Sky at Twenty-Centimeters [FIRST, @first] surveys. The NVSS observation was conducted in D configuration in April 1995, the flux density is $(28.8\pm1)$mJy. The FIRST observation was conducted in B configuration in 1994, the flux density is $(19.19\pm0.14)$mJy.
Results
=======
Our EVN images in Fig. \[image\] show a single compact mas-scale radio source. Based on the slight asymmetry of the contours, there is a hint of a somewhat more extended structure in about the east–west direction. This notion is supported by the fact that the interferometer phases on the baselines from the European antennas to Hartebeesthoek (i.e. north–south direction) appeared less noisy than on the baselines to Sheshan (east–west direction, at about the same baseline length), therefore the source seems more resolved in the latter direction.
To quantitatively characterize the brightness distribution of NGC 5515, we fitted Gaussian model components directly to the self-calibrated VLBI visibility data in [Difmap]{}. The parameters of the best-fitting elliptical Gaussian model components are given in Table \[modelfit\]. The statistical errors are estimated according to @Foma99, assuming an additional 5 per cent flux density calibration uncertainty. The sizes of the fitted model components exceed the values obtained for the minimum resolvable angular size [e.g. @Kova05] in our experiments. Consistently with our previous remark on the possible extension, the major axes of the Gaussians at both frequencies closely align with the east–west direction (i.e. position angle $\sim$90\degr; position angles are measured from north through east). Notably, these are almost coincident with the major axis direction of the disk and the pseudobulge of the host galaxy [103\degr–104\degr, @Beni13b].
  ------------- ---------------- --------------------- --------------------- ------------------------ ------------------------
    Frequency     Flux density        Major axis            Minor axis              Major axis           Brightness temperature
   $\nu$ (GHz)     $S$ (mJy)      $\vartheta_1$ (mas)   $\vartheta_2$ (mas)   position angle (\degr)   $T_{\rm B}$ ($10^9$ K)
       1.7       12.07$\pm$0.61     2.241$\pm$0.003       1.136$\pm$0.003               107                 2.14$\pm$0.12
        5        16.52$\pm$0.83     0.940$\pm$0.001       0.120$\pm$0.001               86                  7.36$\pm$0.44
  ------------- ---------------- --------------------- --------------------- ------------------------ ------------------------
Based on the fitted VLBI component flux densities, the two-point spectral index is $\alpha_{1.7}^{5}=0.29$ (where the spectral index is defined as $S\propto\nu^{\alpha}$). This indicates a slightly inverted radio spectrum, unless the source was strongly variable between the two observing epochs separated by $\sim$2 months.
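The two-point spectral index follows directly from the definition $S\propto\nu^{\alpha}$; a one-line check (our own illustration) recovers the quoted value from the fitted flux densities in Table \[modelfit\]:

```python
import math

# Fitted VLBI flux densities (mJy) at the two EVN frequencies (GHz).
s1, nu1 = 12.07, 1.7
s2, nu2 = 16.52, 5.0

# Two-point spectral index with the convention S ∝ nu^alpha.
alpha = math.log(s2 / s1) / math.log(nu2 / nu1)  # ~0.29
```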
At 5 GHz, the WSRT data taken in parallel with the EVN observation were also analyzed. The recovered flux density is $19.6\pm 0.09$mJy, close to the value obtained with the EVN (Table \[modelfit\]). Thus, the source is dominated by the emission from the compact core; the large-scale structure resolved out by the EVN observation accounts for $\sim 3$mJy ($\sim$15 per cent).
According to the archival VLA A-configuration observations, the source is significantly brighter at 8 GHz than at lower frequencies. However, the spectral index is compatible with both a flat and an inverted spectral shape ($\alpha=0.39 \pm 0.31$). Moreover, there is a large temporal gap (three years) between the observations, therefore intrinsic source variability may hinder the estimation of the spectral index of the source. Indeed, the variability is apparent when comparing observations performed at the same resolution and frequency (VLA B-configuration at 1.4 GHz): the source was significantly brighter (by more than $30$ per cent) in April 1993 than in the FIRST survey observation conducted in 1994.
Even though the spectral index cannot be calculated reliably from the archival VLA observations, the measurements agree with the source most likely having a flat or slightly inverted spectrum, and are thus in accord with the spectral index determined from our EVN measurements. The displayed variability and the spectral shape are consistent with the radio emission coming from a compact, partially self-absorbed synchrotron source [@bk].
The astrometric position of the 5-GHz radio brightness peak in NGC 5515 (right ascension $\alpha$=$14^{\rm h}12^{\rm m}38\fs15423$ and declination $\delta$=$39\degr18\arcmin36\farcs8162$) was derived using the [MAXFIT]{} verb in [AIPS]{}. We estimate that each coordinate is accurate to within 1 mas. The sources of the positional error are the thermal noise of the interferometer phases, the error of the phase-reference calibrator position, and the systematic error of phase-referencing observations mainly originating from the ionospheric and tropospheric fluctuations. In our case, the latter is by far the most dominant error component. The position of the brightness peak at 1.7 GHz coincides with the 5-GHz position well within the uncertainties.
A large window of $8\farcs2 \times 8\farcs2$ around the brightness peak, within the undistorted field of view[^3], was checked for additional radio emission. This size corresponds to a region of 4.25 kpc $\times$ 4.25 kpc at the distance of NGC 5515. No other compact radio component was found above the $\sim$6$\sigma$ image noise level of 0.13 mJy beam$^{-1}$ at 1.7 GHz. We also checked the field of view of the archival VLA data for possible radio sources. According to the optical SDSS image, the size of the NGC 5515 galaxy is roughly $80\arcsec \times 60\arcsec$. Within this range we did not detect any additional radio emitting source in the VLA images above the $\sim 6\sigma$ image noise level ($1$–$2$mJy beam$^{-1}$).
Discussion
==========
We calculated the rest-frame brightness temperature of the radio source in NGC 5515, $$T_{\rm B} = 1.22\times 10^{12} \frac{S}{\vartheta_1 \vartheta_2 \nu^2} (1+z) \,\,{\rm K},$$ ($S$ is given in Jy, $\vartheta_1$ and $\vartheta_2$ in mas, and $\nu$ in GHz) using the Gaussian model parameters listed in Table \[modelfit\]. The values obtained (higher than $10^9$ K, see Table \[modelfit\]) clearly prove the AGN-related non-thermal synchrotron origin of the radio emission since the brightness temperatures for thermally-dominated “normal” galaxies (i.e. the ones with no central active nucleus) do not exceed $\sim$$10^{5}$ K [e.g. @Cond92].
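Plugging the Gaussian parameters from Table \[modelfit\] into the formula above gives a quick numerical check (our own sketch; the 1.7-GHz value lands a few per cent below the tabulated one, presumably because the tabulated fit parameters are rounded):

```python
def t_b(s_jy, theta1_mas, theta2_mas, nu_ghz, z=0.0257):
    """Rest-frame brightness temperature (K) of an elliptical Gaussian component."""
    return 1.22e12 * s_jy / (theta1_mas * theta2_mas * nu_ghz ** 2) * (1.0 + z)

tb_17 = t_b(0.01207, 2.241, 1.136, 1.7)  # ~2.1e9 K at 1.7 GHz
tb_5 = t_b(0.01652, 0.940, 0.120, 5.0)   # ~7.3e9 K at 5 GHz
```

Both values are well above the $\sim$$10^5$ K ceiling for thermally dominated galaxies.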
Our non-detection of any additional compact source in the field of view implies an upper limit on the radio power of $P\la 2 \times 10^{20}$ W Hz$^{-1}$. According to e.g. [@Kewl00] and [@Midd11], AGN have high-luminosity cores, with power exceeding $2 \times 10^{21}\mathrm{\,W\,Hz}^{-1}$, therefore we can rule out the existence of another radio-emitting AGN in the field of view. We can thus conclude that there is no dual radio-emitting AGN in the centre of NGC 5515.
The coordinates of the detected radio source agree with the position of the optical galaxy in SDSS DR9 within the errors. (The distance between the optical and radio positions is $\sim 0\farcs22$.) According to the SDSS, there is an optical source $\sim9\arcsec$ away from the radio position, SDSS J141237.38+391835.2, at a similar redshift ($z=0.026$). We checked the L-band and C-band EVN maps at the position of this source and we did not find any radio source down to a brightness limit of $150\mathrm{\,}\mu\mathrm{Jy\,beam}^{-1}$ and $200\mathrm{\,}\mu\mathrm{Jy\,beam}^{-1}$ in L band and C band, respectively. These brightness limits were calculated by taking into account the bandwidth and time-average smearing effects [@bs]. There is also an X-ray source at a distance of $\sim 7\arcsec$ from the radio position in the ROSAT All-Sky Survey Faint Source Catalog [RASS-FSC, @rosat_faint]. Since its positional uncertainty is $11\arcsec$, it can be related to either of the two optical sources. However, given that our radio observations indicate an AGN in the nucleus of NGC 5515, the X-ray emission is most likely associated with that object.
We re-analyzed the SDSS spectrum of NGC 5515. Similarly to [@Beni13a], we found that the narrow H$\beta$, H$\alpha$, O[iii]{}, N[ii]{}, and S[ii]{} emission lines are double-peaked. The double peaks are only marginally resolved in the SDSS spectrum. A reasonable fit was achieved by applying to the other lines the same fitting scheme as derived for the S[ii]{} line. The velocities of the blueshifted and redshifted lines are $119\mathrm{\,km\,s}^{-1}$ and $-145\mathrm{\,km\,s}^{-1}$, respectively.
According to [@Nelson], the width $W$ of the O[iii]{} line at 5007 Å can be used as a surrogate for the stellar velocity dispersion via $\sigma_*=W/2.35$. Therefore one can use the widths of the two Gaussian profiles fitted to the O[iii]{} line to estimate the masses of the two putative black holes separately in a dual AGN [e.g. @peng]. In the case of NGC 5515, the velocity dispersions for the blue and red lines are $\sigma_\mathrm{b}=(140.63 \pm 15.13)\,\mathrm{km\,s}^{-1}$ and $\sigma_\mathrm{r}=(131.74 \pm 7.9)\,\mathrm{km\,s}^{-1}$, respectively. Thus, using the coefficients derived for pseudobulges by [@Gultekin], the assumed black hole masses are $\log{(M^\mathrm{b}_\mathrm{BH} M^{-1}_\mathrm{\odot})}=7.29 \pm 0.5$ and $\log{(M^\mathrm{r}_\mathrm{BH} M^{-1}_\mathrm{\odot})}=7.17 \pm 0.44$, derived from the blue- and redshifted lines, respectively.
We compare the sum of these assumed two black hole masses with the mass estimates of [@Beni13b] who derived the black hole mass for NGC 5515 as a single black hole system using the $M_\mathrm{BH}-\sigma_*$ relation for galaxies containing pseudobulges [@Hu], and with the scaling relation between bolometric luminosity and black hole mass [@VP]. The two methods yielded the following consistent values: $\log{(M^{\sigma_*}_\mathrm{BH} M^{-1}_\mathrm{\odot})}=7.45 \pm 0.25$ and $\log{(M_\mathrm{BH} M^{-1}_\mathrm{\odot})}=7.01 \pm 0.18$. Using the slightly different values for pseudobulges given by [@Gultekin], a higher black hole estimate can be obtained from the $M_\mathrm{BH}-\sigma_*$ relation: $\log{(M^{\sigma_*}_\mathrm{BH} M^{-1}_\mathrm{\odot})}=7.94 \pm 0.24$. Our sum of the assumed two black hole masses derived above ($\log{(M^\mathrm{sum}_\mathrm{BH} M^{-1}_\mathrm{\odot})}=7.54 \pm 0.48$) agrees well with the values calculated with the assumption that only one supermassive black hole resides in NGC 5515.
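The quoted combined mass is simply the logarithm of the two linear masses added together; a minimal check (our own illustration):

```python
import math

# log10 masses (solar units) inferred from the blue- and redshifted O III components.
log_mb, log_mr = 7.29, 7.17
log_sum = math.log10(10.0 ** log_mb + 10.0 ** log_mr)  # ~7.54
```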
Knowing the masses, we can derive the Eddington luminosities following [@RL]: $$L_\mathrm{Edd}=1.38 \times 10^{38}\, M_\mathrm{BH} M^{-1}_\odot \,\mathrm{erg\,s}^{-1}.$$ In the scenario with two separate black holes, $L^\mathrm{b}_\mathrm{Edd}=2.7 \times 10^{45}\mathrm{\,erg\,s}^{-1}$ and $L^\mathrm{r}_\mathrm{Edd}=2.0 \times 10^{45}\mathrm{\,erg\,s}^{-1}$. If we assume that the two black holes with similar masses contribute evenly to the measured bolometric luminosity, $L_\mathrm{bol}=(5.19\pm1.38) \times 10^{42}\mathrm{\,erg\,s}^{-1}$ [@Beni13a], we obtain an Eddington ratio of $\sim 10^{-3}$ for each. Assuming instead a scenario where a single black hole is responsible for the observed bolometric luminosity, the implied Eddington ratio is $\sim 10^{-3} - 10^{-4}$. All these values are within the range of Eddington ratios derived for Seyfert galaxies [$10^{-4}-10^{-2}$ and $10^{-4}-10^{-1}$, @zhang09; @Sy_Singh respectively]. Thus, based upon the mass estimates, we cannot exclude either the single or the dual black hole scenario.
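For the two component masses this gives, as a numerical check of the quoted values (the even split of $L_\mathrm{bol}$ between the two putative black holes is the assumption stated in the text; all code here is our own sketch):

```python
L_BOL = 5.19e42  # erg/s, bolometric luminosity from Benitez et al. (2013a)

def l_edd(log_m):
    """Eddington luminosity (erg/s) for a given log10(M_BH / M_sun)."""
    return 1.38e38 * 10.0 ** log_m

le_b = l_edd(7.29)   # ~2.7e45 erg/s (blueshifted component)
le_r = l_edd(7.17)   # ~2.0e45 erg/s (redshifted component)

# Even split of the bolometric luminosity between the two black holes.
ratio_b = (L_BOL / 2.0) / le_b   # ~1e-3
ratio_r = (L_BOL / 2.0) / le_r   # ~1e-3
```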
One possible explanation for the double-peaked narrow emission lines is dual AGN, but such a spectrum can also be explained as originating from biconic outflows or rotating disks on kpc scales or otherwise disturbed NLR kinematics [e.g., @Liu2010; @An13 and references therein]. In that case, it is the same single AGN which illuminates the NLR, therefore it is expected that the blue- and redshifted lines have similar characteristics. In the case of NGC 5515, both the line widths and the line ratios (O[iii]{}/H$\beta$, N[ii]{}/H$\alpha$, S[ii]{}/H$\alpha$) are very similar, equal within the uncertainties, for the blue- and redshifted components. Therefore the double-peaked emission lines can be explained in a straightforward way if both components are ionized by the same source.
Recent studies [e.g., @comerford] also show that the double-peaked emission-line diagnostics alone is an inefficient way of identifying real dual AGN, but proposed that in combination with other methods such as long-slit spectroscopy and X-ray and/or radio imaging observations, dual AGN candidates can be chosen more reliably.
Summary {#sum}
=======
Based upon the optical spectrum of NGC 5515, [@Beni13a] claimed that this galaxy is a good candidate for hosting dual AGN. We investigated this source using available multi-frequency radio data and new high-resolution VLBI observations conducted with the EVN. Our EVN observations revealed a compact radio emitting source with an inverted spectrum, and a brightness temperature exceeding $10^9$ K. This radio emission clearly originates from a non-thermal synchrotron source associated with an AGN. The AGN nature of the radio emission is also in agreement with the long-term radio variability suggested by archival VLA data.
The position of the radio emitting source in our EVN maps agrees within the errors with the optical position of NGC 5515. We did not detect any additional radio source within $\sim 2$kpc of the nucleus. According to archival VLA observations, there is no additional radio emitting feature down to a brightness level of $1$–$2 \mathrm{\,mJy\,beam}^{-1}$ in the entire region covered by the galaxy in the SDSS optical image. There is an optical source at a distance of $\sim 9\arcsec$ from the radio position of NGC 5515. We did not detect any radio emission in our EVN maps at this position, either. Thus, we can exclude the possibility of having two radio-emitting AGN in the Seyfert galaxy NGC 5515. However, we cannot exclude from our VLBI data that a secondary radio-quiet AGN resides in the galaxy.
We re-analysed the SDSS spectrum of NGC 5515, and fitted the H$\alpha$, H$\beta$, O[iii]{}, N[ii]{}, and S[ii]{} emission lines with double-peaked profiles. The parameters obtained from the fitting of the red and blue components are very similar, and the line ratios are the same for the blue and red components. Assuming a scenario with dual AGN, we estimated the black hole masses from the blue and red components of the O[iii]{} line, and found that the sum of the inferred two black hole masses is in agreement with the total mass estimates given by [@Beni13b]. Thus, the double-peaked narrow lines in the spectrum of NGC 5515 can be explained either by assuming two SMBHs with very similar masses and ionizing properties, of which only one is radio-emitting, or more plausibly by assuming one common ionizing source residing in the NLR, a single radio-emitting AGN.
Double-peaked narrow emission lines were originally thought to be a promising tool to select dual AGN; however, their usefulness is severely questioned, as other explanations (outflows, disturbed NLR kinematics, or a rotating disk) can equally well account for the observed spectral shapes in several cases.
Acknowledgments {#acknowledgments .unnumbered}
===============
The EVN is a joint facility of European, Chinese, South African, and other radio astronomy institutes funded by their national research councils. The e-VLBI research infrastructure in Europe was supported by the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement RI-261525 NEXPReS. The research leading to these results has received funding from the European Commission Seventh Framework Programme (FP/2007-2013) under grant agreement No. 283393 (RadioNet3). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. For this research, K.É.G. was supported by the European Union and the State of Hungary, co-financed by the European Social Fund in the framework of TÁMOP-4.2.4.A/2-11/1-2012-0001 “National Excellence Program”. S.F. was supported by the Hungarian Scientific Research Fund (OTKA, K104539). S.F. and T.A. thank the China–Hungary Collaboration and Exchange Programme by the International Cooperation Bureau of the Chinese Academy of Sciences (CAS) for support. T.X. is supported by NSFC under Grant No. 11203056. E.K. and Z.P. acknowledge financial support from the International Space Science Institute. T.A. was supported by the 973 Program (No. 2013CB837900), NSFC (No. 11261140641), and CAS grant (No. KJZD-EW-T01). L.Á.G. was supported by the Japan Society for the Promotion of Science. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
[99]{}
[Adelman-McCarthy]{} J. K. et al. 2006, ApJS, 162, 38
An T. et al. 2013, MNRAS, 433, 1161
Beasley A.J., Conway J.E., 1995, in Zensus J.A., Diamond P.J., Napier P.J., eds, ASP Conf. Ser. 82, Very Long Baseline Interferometry and the VLBA. Astron. Soc. Pac., San Francisco, p. 327
Ben[í]{}tez E. et al. 2013a, ApJ, 763, 36
Ben[í]{}tez E. et al. 2013b, ApJ, 763, 136
[Blandford]{} R. D., [K[ö]{}nigl]{} A. 1979, ApJ, 232, 34
[Bondi]{} M., [P[é]{}rez-Torres]{} M.-A. 2010, ApJ, 714, L271
[Bridle]{} A. H., [Schwab]{} F. R. 1989, in Perley R. A., Schwab F. R., Bridle A. H., eds, ASP Conf. Series Vol. 6. Synthesis Imaging in Radio Astronomy. Astron. Soc. Pac., San Francisco, p. 247
[Comerford]{} J. M., [Gerke]{} B. F., [Stern]{} D., [Cooper]{} M. C., [Weiner]{} B. J., [Newman]{} J. A., [Madsen]{} K., [Barrows]{} R. S. 2012, ApJ, 753, 42
Condon J.J., Cotton W.D., Greisen E.W., Yin Q.F., Perley R.A., Taylor G.B., Broderick J.J. 1998, AJ, 115, 1693
Condon J.J. 1992, ARAA, 30, 573
[Di Matteo]{} T., [Springel]{} V., [Hernquist]{} L. 2005, Nature, 433, 604
Diamond P.J., 1995, in Zensus J.A., Diamond P.J., Napier P.J., eds, ASP Conf. Ser. 82, Very Long Baseline Interferometry and the VLBA. Astron. Soc. Pac., San Francisco, p. 227
Fey A.L., Gordon D., Jacobs C.S. (eds.) 2009, IERS Technical Note 35 (Verlag des BKG, Frankfurt am Main)
Fomalont E.B., 1999, in Taylor G.B., Carilli C.L., Perley R.A., eds, ASP Conf. Ser. 180, Synthesis Imaging in Radio Astronomy II. Astron. Soc. Pac., San Francisco, p. 301
[Fu]{} H., [Yan]{} L., [Myers]{} A. D., [Stockton]{} A., [Djorgovski]{} S. G., [Aldering]{} G., [Rich]{} J. A. 2012, ApJ, 745, 67
Greisen E.W. 2003, in Heck A., ed, Information Handling in Astronomy – Historical Vistas, Astrophys. Space Sci. Lib., 285, 109
[G[ü]{}ltekin]{} K., et al. 2009, ApJ, 698, 198
[Heckman]{} T. M., [Miley]{} G. K., [van Breugel]{} W. J. M., [Butcher]{} H. R. 1981, ApJ, 247, 403
[Heckman]{} T. M., [Miley]{} G. K., [Green]{} R. F. 1984, ApJ, 281, 525
Högbom J.A. 1974, A&AS, 15, 417
Hu J. 2008, , 386, 2242
Kewley L.J., Heisler C.A., Dopita M.A., Sutherland R., Norris R.P., Reynolds J., Lumsden S. 2000, ApJ, 530, 704
[Komossa]{} S., [Burwitz]{} V., [Hasinger]{} G., [Predehl]{} P., [Kaastra]{} J. S., [Ikebe]{} Y. 2003, ApJ, 582, L15
Kovalev Y.Y. et al. 2005, AJ, 130, 2473
Liu X., [Greene]{} J. E., [Shen]{} Y., [Strauss]{} M. A. 2010a, , 715, L30
Liu X., [Shen]{} Y., [Strauss]{} M. A., [Greene]{} J. E. 2010b, , 708, 427
Lynden-Bell D. 1969, Nature, 223, 690
Middelberg E. et al. 2011, A&A, 526, A74
Nelson C. H. 2000, ApJ, 544, L91
Osterbrock D. E. 1981, ApJ, 249, 462
[Peng]{} Z.-X., [Chen]{} Y.-M., [Gu]{} Q.-S., [Hu]{} C. 2011, Res. Astron. Astrophys., 11, 411
[Rodriguez]{} C., [Taylor]{} G. B., [Zavala]{} R. T., [Peck]{} A. B., Pollack L. K., Romani R. W. 2006, ApJ, 646, 49
[Rybicki]{} G. B., [Lightman]{} A. P. 1979, Radiative Processes in Astrophysics. New York: Wiley
[Salpeter]{} E. E. 1964, ApJ, 140, 796
[Shen]{} Y., [Liu]{} X., [Greene]{} J. E., [Strauss]{} M. A. 2011, ApJ, 735, 48
Shepherd M.C., Pearson T.J., Taylor G.B. 1994, BAAS, 26, 987
Singh V., Shastri P., Risaliti G. 2011, A&A, 533, A128
Szomoru A. 2008, Proceedings of Science, PoS(IX EVN Symposium)040
[Van Wassenhove]{} S., [Volonteri]{} M., [Mayer]{} L., [Dotti]{} M., [Bellovary]{} J., [Callegari]{} S. 2012, ApJ, 748, L7
[Vestergaard]{} M., [Peterson]{} B. M. 2006, ApJ, 641, 689
[Voges]{} W., et al. 2000, IAU Circ., 7432, 1
[Wang]{} J.-M., [Chen]{} Y.-M., [Hu]{} C., [Mao]{} W.-M., [Zhang]{} S., [Bian]{} W.-H. 2009, ApJ, 705, L76
[White]{} R. L., [Becker]{} R. H., [Helfand]{} D. J., [Gregg]{} M. D. 1997, ApJ, 475, 479
Wright E.L. 2006, PASP, 118, 1711
[Zhang]{} W. M., [Soria]{} R., [Zhang]{} S. N., [Swartz]{} D. A., [Liu]{} J. F. 2009, ApJ, 699, 281
\[lastpage\]
[^1]: E-mail: [email protected]
[^2]: EVN Calculator: http://www.evlbi.org/cgi-bin/EVNcalc
[^3]: The undistorted field of view is defined as an area where the expected brightness loss for a point source is less than 10 per cent with respect to the pointing centre.
---
address: |
Department of Mathematics\
University of Bristol\
University Walk\
Bristol BS8 1TW\
United Kingdom\
author:
-
title: 'Discussion of: A statistical analysis of multiple temperature proxies: Are reconstructions of surface temperatures over the last 1000 years reliable?'
---
The authors are to be congratulated on the clarity of their paper, which gives discussants and readers much to sink their teeth into. My comments are somewhat critical, but this should in no way devalue this paper as an important contribution to the ongoing debate concerning the information about historical climates that is recoverable from proxies. Figure 14, in particular, provides much food for thought.
In Section 3.2, comparing the proxy-based reconstruction of climate to measures based on actual climate (in-sample mean and ARMA model) is not very helpful for assessing the performance of the proxy—in fact, it confirms information already presented about the nature of the climate process and the relative variability of the proxies. This distracts from the more pertinent finding in Section 3.3 that the proxy-based reconstruction seems to perform no better than various random proxies. Again, though, this result is not necessarily detrimental to the proxy. If one generates 1138 random sequences of length 149 with roughly the right time-series properties, one should not be surprised to find that a 1139th sequence is near the span of a small subset, and it is a testament to the Lasso procedure that it seems to be doing a good job at picking this subset out. Hold-outs at the end of the calibration period would provide a more powerful test; for hold-outs in the middle, one can be fairly confident that if the Lasso finds a match at both ends, then the middle will fit reasonably well. In Section 3.5, the finding that large numbers of pseudo-proxies are selected can be explained in the same way. Moreover, the Lasso procedure will have a bias against selecting actual proxies, if they are correlated with each other. Overall, I do not think that Section 3 presents evidence against the proxies.
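The spanning point can be illustrated with a toy experiment. The sketch below is our own, using ordinary least squares on a random subset of predictors as a crude stand-in for the Lasso's data-driven subset selection, and an AR(1) generator (with arbitrary parameters of our choosing) as the "roughly the right time-series properties": with 1138 random sequences of length 149, any 149 linearly independent columns already reproduce an independent 1139th sequence exactly in-sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 149, 1138  # calibration-period length and number of random "proxies"

def ar1(n, phi=0.4):
    """AR(1) noise, a crude stand-in for plausible time-series structure."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

X = np.column_stack([ar1(n) for _ in range(p)])  # 1138 random sequences
target = ar1(n)                                  # an independent 1139th sequence

# Any n linearly independent columns span R^n, so even a *random* subset of
# the predictors fits the target exactly in-sample.
cols = rng.choice(p, size=n, replace=False)
beta, *_ = np.linalg.lstsq(X[:, cols], target, rcond=None)
resid = np.linalg.norm(target - X[:, cols] @ beta) / np.linalg.norm(target)
```

The in-sample residual is numerically zero, which is why out-of-sample hold-outs, not in-sample fit, are the informative test.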
I am bemused by Section 5. First, let us be very clear that this is not a “fully Bayesian” analysis. What we have here is a normalised likelihood function over $\beta$ and $\sigma$ masquerading as a posterior distribution, in order to implement a sampling procedure over the model parameters. This seems a perfectly reasonable ad-hockery \[although a Normal Inverse Gamma conjugate analysis would be more conventional; see @ohf04, Chapter 11\], but to call it “fully Bayesian” is stretching the point. No attempt has been made to write down a joint probability distribution over the observations and the predictands, notably one that accounts for the possibility of auto-correlated error in the proxy reconstruction. Furthermore, the reconstructions are clearly not conditional on the calibration data, which is what the authors assert in Section 5.3. If they were, then there would be no reconstruction uncertainty over the calibration period.
Then there is Figure 15, which is referred to repeatedly to show the poor performance of the proxy-based reconstruction over the calibration period, particularly the 1990s. The statistical model for this figure is initialised with temperatures from 1999 and 2000. But 1998 was probably the warmest year of the millennium, as the authors themselves cite in Section 1, and so the two initialisation values are going to start the reconstruction curve too low. What we may have here is an artifact of a somewhat arbitrary choice of initialisation period. The authors must present evidence that the curve is robust to these choices.
Finally, I have a deeper concern, not about the authors’ paper in particular, but about the general principles of reconstruction discussed here. There is a rich literature on statistical methods for reconstructions; @braak95 provides a review. In this literature, a distinction is made between the “classical” approach, in which the proxies $X$ are regressed on climate quantities $Y$, and the “inverse” approach in which the climate quantities are regressed on the proxies. An advantage of the inverse approach is that it is very tractable—it can proceed one climate quantity at a time, and it leads to a simple plug-in approach in which the historical proxy $x_0$ is used directly to predict the historical climate value $y_0$. The classical approach, on the other hand, is a joint reconstruction over several climate quantities, and requires more complicated methods to predict $y_0$ from $x_0$, such as numerical optimisation (or a Bayesian approach). In its favour, however, the classical approach respects the dominant causal direction (from climate to the proxies) and the statistical model can reflect known features of the ecological response function. The broad finding regarding these two approaches is unsurprising: the classical approach performs better in extrapolation. Given that historical climate reconstruction is clearly an extrapolation from the climate in the calibration period, and given that the proxies generally respond to multiple aspects of climate, the use of the inverse approach, as adopted by the authors and their forerunners, seems to me to sacrifice too much to tractability.
---
abstract: 'We present the first X-ray spectrum obtained by the Low Energy Transmission Grating Spectrometer (LETGS) aboard the Chandra X-ray Observatory. The spectrum is of Capella and covers a wavelength range of 5–175 Å (2.5–0.07 keV). The measured wavelength resolution, which is in good agreement with ground calibration, is $\Delta \lambda \simeq$ 0.06 Å (FWHM). Although in-flight calibration of the LETGS is in progress, the high spectral resolution and unique wavelength coverage of the LETGS are well demonstrated by the results from Capella, a coronal source rich in spectral emission lines. While the primary purpose of this letter is to demonstrate the spectroscopic potential of the LETGS, we also briefly present some preliminary astrophysical results. We discuss plasma parameters derived from line ratios in narrow spectral bands, such as the electron density diagnostics of the He-like triplets of carbon, nitrogen, and oxygen, as well as resonance scattering of the strong Fe XVII line at 15.014 Å.'
author:
- 'A.C. Brinkman, C.J.T. Gunsing, J.S. Kaastra, R.L.J. van der Meer, R. Mewe, F. Paerels , A.J.J. Raassen , J.J. van Rooijen'
- 'H. Bräuninger, W. Burkert, V. Burwitz, G. Hartner, P. Predehl'
- 'J.-U. Ness, J.H.M.M. Schmitt'
- 'J.J. Drake, O. Johnson, M. Juda, V. Kashyap, S.S. Murray, D. Pease, P. Ratzlaff, B.J. Wargelin'
title: 'First Light Measurements of Capella with the Low Energy Transmission Grating Spectrometer aboard the Chandra X-ray Observatory'
---
Introduction
============
The LETGS consists of three components of the Chandra Observatory: the High Resolution Mirror Assembly (HRMA) [@Spe97], the Low Energy Transmission Grating (LETG) [@Brink87; @Brink97; @Pre97], and the spectroscopic array of the High Resolution Camera (HRC-S) [@Mur97]. The LETG, designed and manufactured in a collaborative effort of SRON in the Netherlands and MPE in Germany, consists of a toroidally shaped structure which supports 180 grating modules. Each module holds three 1.5-cm diameter grating facets which have a line density of 1008 lines/mm. The three flat detector elements of the HRC-S, each 10 cm long and 2 cm wide, are tilted to approximate the Rowland focal surface at all wavelengths, assuring a nearly coma-free spectral image. The detector can be moved in the cross-dispersion direction and along the optical axis, to optimize the focus for spectroscopy. [^1]
An image of the LETG spectrum is focused on the HRC-S, with zeroth order at the focus position and dispersed positive and negative orders symmetric on either side of it. The dispersion is 1.15 Å/mm in first spectral order. The spectral width in the cross-dispersion direction is minimal at zeroth order and increases at larger wavelengths due to the intrinsic astigmatism of the Rowland circle spectrograph. The extraction of the spectrum from the image is done by applying a spatial filter around the spectral image and constructing a histogram of counts vs. position along the dispersion direction. The background is estimated from areas on the detector away from the spectral image and can be reduced by filtering events by pulse-height.
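The extraction step described above can be sketched schematically; everything below (event coordinates, filter width, binning) is made up for illustration, with only the 1.15 Å/mm first-order dispersion taken from the text, and only the product of diffraction order and wavelength is recovered:

```python
import numpy as np

DISPERSION = 1.15  # Å/mm, first spectral order

rng = np.random.default_rng(1)
# Fake event list: dispersion coordinate (mm from zeroth order) and
# cross-dispersion coordinate (mm), with a narrow "line" near 16.5 mm.
disp = np.concatenate([rng.uniform(-40.0, 40.0, 5000),   # flat background
                       rng.normal(16.5, 0.03, 800)])     # line events
cross = rng.normal(0.0, 0.3, disp.size)

# Spatial filter around the spectral image, then a histogram of counts
# versus position along the dispersion direction.
in_spectrum = np.abs(cross) < 1.0
edges_mm = np.arange(-40.0, 40.0, 0.05)
counts, _ = np.histogram(disp[in_spectrum], bins=edges_mm)

# Convert bin centers to (order x wavelength) via the dispersion relation.
wavelength = DISPERSION * 0.5 * (edges_mm[:-1] + edges_mm[1:])
peak_lambda = wavelength[np.argmax(counts)]  # ~19 Å for the fake line
```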
First Light Spectrum
====================
Capella is a binary system at a distance of 12.9 pc consisting of G8 and G1 giants with an orbital period of 104 days [@Hum94]. It is the brightest quiescent coronal X-ray source in the sky after the Sun, and is therefore an obvious line source candidate for first light and for instrument calibration. X rays from Capella were discovered in 1975 [@Cat75; @Mew75] and subsequent satellite observations provided evidence for a multi-temperature component plasma (e.g. @Mew91a for references). Recent spectra were obtained with EUVE longward of 70 Å with a resolution of about 0.5 Å [@Dup93; @Sch95].
The LETG First Light observation of Capella was performed on 6 September 1999 (00h27m UT – 10h04m UT) with LETG and HRC-S. For the analysis we use a composite of six observations obtained in the week after first light, with a total observing time of 95 ksec. The output was processed through standard pipeline processing. For LETG/HRC-S events, only the product of the wavelength and diffraction order is known because no diffraction order information can be extracted. Preliminary analysis of the pipeline output immediately revealed a beautiful line-rich spectrum. The complete background-subtracted, negative-order spectrum between 5 and 175 Å is shown in Fig. \[spec\_img\]. Line identifications were made using previously measured and/or theoretical wavelengths from the literature. The most prominent lines are listed in Table \[tab1\].
The spectral resolution $\Delta \lambda$ of the LETGS is nearly constant when expressed in wavelength units, and therefore the resolving power $\lambda / \Delta \lambda$ is greatest at long wavelengths. With the current uncertainty of the LETGS wavelength scale of about 0.015 Å, this means that the prominent lines at 150 and 171 Å could be used to measure Doppler shifts as small as 30 km/sec, such as may occur during stellar-flare mass ejections, once the absolute wavelength calibration of the instrument has been established. This requires, however, that line rest-frame wavelengths are accurately known and that effects such as the orbital velocity of the Earth around the Sun are taken into account. Higher-order lines, such as the strong O VIII Ly$\alpha$ line at 18.97 Å, which is seen out to 6th order, can also be used.
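The quoted velocity sensitivity is simply $\Delta v = c\,\Delta\lambda/\lambda$ evaluated at the current 0.015 Å wavelength-scale uncertainty; a one-line check:

```python
C_KM_S = 2.998e5  # speed of light, km/s

def doppler_velocity(delta_lambda, lam):
    """Velocity shift (km/s) corresponding to a shift delta_lambda at wavelength lam (Å)."""
    return C_KM_S * delta_lambda / lam

v_150 = doppler_velocity(0.015, 150.0)  # ~30 km/s at 150 Å
v_19 = doppler_velocity(0.015, 18.97)   # the same shift at O VIII Lyα
```

The same wavelength-scale uncertainty corresponds to a much larger velocity at short wavelengths, which is why the long-wavelength lines at 150 and 171 Å set the Doppler sensitivity.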
Diagnostics
===========
A quantitative analysis of the entire spectrum by multi-temperature fitting or differential emission measure modeling yields a detailed thermal structure of the corona, but this requires accurate detector efficiency calibration which has not yet been completed. However, some diagnostics based on intensity ratios of lines lying closely together can already be applied. In this letter we consider the helium-like line diagnostic and briefly discuss the resonance scattering in the Fe XVII 15.014 Å line.
Electron Density & Temperature Diagnostics
------------------------------------------
Electron densities, $n_e$, can be measured using density-sensitive spectral lines originating from metastable levels, such as the forbidden ($f$) $2^3S\to 1^1S$ line in helium-like ions. This line and the associated resonance ($r$) $2^1P\to 1^1S$ and intercombination ($i$) $2^3P\to 1^1S$ lines make up the so-called helium-like “triplet” lines [@Gab69; @Pra82; @Mew85]. The intensity ratio $(i+f)/r$ varies with electron temperature, T, but more importantly, the ratio $i/f$ varies with $n_e$ due to the collisional coupling between the $2^3S$ and $2^3P$ levels.
The LETGS wavelength band contains the He-like triplets from C, N, O, Ne, Mg, and Si ($\sim$ 40, 29, 22, 13.5, 9.2, and 6.6 Å, respectively). However, the Si and Mg triplets are not sufficiently resolved and the Ne IX triplet is too heavily blended with iron and nickel lines for unambiguous density analysis. The O VII lines are clean (see Fig. \[OVII\]) and the C V and N VI lines can be separated from the blends by simultaneous fitting of all lines. These triplets are suitable for diagnosing plasmas in the range $n_e$ = 10$^8$–10$^{11}$ cm$^{-3}$ and $T$ $\sim$ 1–3 MK. For the C, N, and O triplets the measured $i/f$ ratios are $0.38\pm 0.14$, $0.52\pm 0.15$, and $0.250\pm 0.035$, respectively, which imply [@Pra82] $n_e$ (in $10^9$ cm$^{-3}$) = $2.8\pm 1.3$, $6\pm 3$, and $\la$ 5 (1$\sigma$ upper limit), respectively, for typical temperatures as indicated by the $(i+f)/r$ ratios of 1, 1, and 3 MK, respectively. This concerns the lower temperature part of a multi-temperature structure which also contains a hot ($\sim$6–8 MK), and dense ($\ga$ 10$^{12}$ cm$^{-3}$) compact plasma component (see Section \[resonance\]). The derived densities are comparable to those of active regions on the Sun with a temperature of a few MK. Fig. \[OVII\] shows a fit to the O VII triplet measured in the –1 order. The He-like triplet diagnostic, which was first applied to the Sun (e.g., @Act72 [@Wol83]), has now for the first time been applied to a star other than the Sun.
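The inversion from a measured $i/f$ ratio to a density can be sketched with the standard two-level parametrization $f/i = R_0/(1 + n_e/N_c)$ going back to @Gab69; the atomic parameters below are illustrative placeholders, not the values used in the analysis above:

```python
def density_from_if_ratio(i_over_f, r0, n_c):
    """Invert f/i = r0 / (1 + n_e / n_c) for the electron density n_e.

    r0  : low-density limit of the f/i ratio (ion-dependent)
    n_c : critical density in cm^-3 (ion-dependent)
    Returns 0.0 when the measured ratio is at or below the low-density limit.
    """
    n_e = n_c * (i_over_f * r0 - 1.0)
    return max(n_e, 0.0)

# Placeholder O VII-like parameters (illustrative, not the paper's values):
R0, N_C = 3.9, 3.4e10
n_e = density_from_if_ratio(0.25, R0, N_C)  # measured O VII i/f of ~0.250
# -> at the low-density limit, i.e. only an upper limit on n_e
```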
The long-wavelength region of the LETGS between 90 and 150 Å contains a number of density-sensitive lines from $2\ell$–$2\ell'$ transitions in the Fe-L ions Fe XX–XXII which provide density diagnostics for relatively hot ($\ga$ 5 MK) and dense ($\ga$ 10$^{12}$ cm$^{-3}$) plasmas [@Mew85; @Mew91b; @Brick95]. These have been applied in a few cases to EUVE spectra of late-type stars and in the case of Capella have suggested densities more than two orders of magnitude higher than found here for cooler plasma [@Dup93; @Sch95]. These diagnostics will also be applied to the LETGS spectrum as soon as the long-wavelength efficiency calibration is established.
The 15–17 Å region: resonance scattering of Fe XVII? {#resonance}
----------------------------------------------------
Transitions in Ne-like Fe XVII yield the strongest emission lines in the range 15–17 Å (cf. Fig. \[spec\_img\]). In principle, the optical depth, $\tau$, in the 15.014 Å line can be obtained by applying a simplified escape-factor model to the ratio of the Fe XVII 15.014 Å resonance line with a large oscillator strength to a presumably optically thin Fe XVII line with a small oscillator strength. We use the 15.265 Å line because the 16.780 Å line can be affected by radiative cascades [@Lie99]. Solar physicists have used this technique to derive the density in active regions on the Sun (e.g., @Sab99 [@Phi96; @Phi97]).
Various theoretical models predict 15.014/15.265 ratio values in the range 3.3–4.7 with only a slow variation ($\la$ 5%) with temperature or energy in the region 2–5 MK or 0.1–0.3 keV [@Bro98; @Bha92]. The fact that most ratios observed in the Sun typically range from 1.5–2.8 (@Bro98, and references above), significantly lower than the theoretical ratios, supports claims that in solar active regions the 15.014 Å line is affected by resonant scattering. The 15.014/15.265 ratio which was recently measured in the Livermore Electron Beam Ion Trap (EBIT) [@Bro98] and ranges from 2.77–3.15 (with individual uncertainties of about $\pm~0.2$) at energies between 0.85–1.3 keV, is significantly lower than calculated values. Although the EBIT results do not include probably minor contributions from processes such as dielectronic recombination satellites and resonant excitation, this may imply that the amount of solar scattering has been overestimated in past analyses. Our measured ratio Fe XVIII 16.078 Å/Fe XVII 15.265 Å gives a temperature of $\sim$6 MK and the photon flux ratio 15.014/15.265 is measured to be 2.64$\pm 0.10$. If we compare this to the recent EBIT results we conclude that there is little or no evidence for opacity effects in the 15.014 Å line seen in our Capella spectrum.
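The quoted uncertainty on the 15.014/15.265 ratio is consistent with first-order error propagation on the two measured line fluxes; a sketch using the fluxes listed in Table \[tab1\] for these lines, with hypothetical 1$\sigma$ flux errors:

```python
import math

def ratio_with_error(f1, s1, f2, s2):
    """Flux ratio f1/f2 with its first-order propagated 1-sigma error."""
    r = f1 / f2
    return r, r * math.hypot(s1 / f1, s2 / f2)

# Fluxes from Table 1 for Fe XVII 15.01 Å (3C) and 15.27 Å (3D);
# the error bars are hypothetical, chosen only for illustration.
r, sr = ratio_with_error(44.2, 1.2, 16.7, 0.5)
```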
Conclusion
==========
The Capella measurements with LETGS show a rich spectrum with excellent spectral resolution ($\Delta\lambda \simeq $0.06 Å, FWHM). About 150 lines have been identified of which the brightest hundred are presented in Table \[tab1\]. The high-resolution spectra of the Chandra grating spectrometers allow us to carry out direct density diagnostics, using the He-like triplets of the most abundant elements in the LETGS-band, which were previously only possible for the Sun. Density estimates based on C, N and O He-like complexes indicate densities typical of solar active regions and some two or more orders of magnitude lower than density estimates for the hotter ($>$5 MK) plasma obtained from EUVE spectra. A preliminary investigation into the effect of resonance scattering in the Fe XVII line at 15.014 Å showed no clear evidence for opacity effects. After further LETGS in-flight calibration it is expected that relative Doppler velocities of the order of 30 km/s will be detectable at the longest wavelengths.
The LETGS data as presented here could only be produced after dedicated efforts of many people for many years. Our special gratitude goes to the technical and scientific colleagues at SRON, MPE and their subcontractors for making such a superb LETG and to the colleagues at many institutes for building the payload. Special thanks goes to the many teams who made Chandra a success, particularly the project scientist team, headed by Dr. Weisskopf, the MSFC project team, headed by Mr. Wojtalik, the TRW industrial teams and their subcontractors, the Chandra observatory team, headed by Dr. Tananbaum, and the crew of Space Shuttle flight STS-93.
JJD, OJ, MJ, VK, SSM, DP, PR, and BJW were supported by Chandra X-ray Center NASA contract NAS8-39073 during the course of this research.
Acton, L. W., Catura, R. C., Meyerott, A., & Wolfson, C. J. 1972, Solar Phys., 26, 183

Bhatia, A. K. & Doschek, G. A. 1992, At. Data Nucl. Data Tables, 52, 1

Brickhouse, N. S., Raymond, J. C., & Smith, B. W. 1995, , 97, 551

Brinkman, A. C., et al. 1987, Astroph. Lett. & Comm., 26, 73–80

Brinkman, A. C., et al. 1997, SPIE, 3113, 181–192

Brown, G. V., Beiersdorfer, P., Liedahl, D. A., & Widmann, K. 1998, , 502, 1015

Catura, R. C., Acton, L. W., & Johnson, H. M. 1975, , 196, L47

Dupree, A. K., Brickhouse, N. S., Doschek, G. A., Green, J. C., & Raymond, J. C. 1993, , 418, L41

Gabriel, A. H. & Jordan, C. 1969, , 145, 241–248

Hummel, C. A., Armstrong, J. T., Quirrenbach, A., Buscher, D. F., Mozurkewich, D., Elias, N. M. II, & Wilson, R. E. 1994, , 107, 1859

Liedahl, D. A. 1999, private communication

Mason, H. E., Bhatia, A. K., Kastner, S. O., Neupert, W. M., & Swartz, M. 1984, Sol. Phys., 92, 199

Mewe, R. 1991, , 3, 127

Mewe, R., Gronenschild, E. H. B. M., & van den Oord, G. H. J. 1985, A&ASS, 62, 197

Mewe, R., Heise, J., Gronenschild, E. H. B. M., Brinkman, A. C., Schrijver, J., & den Boggende, A. J. F. 1975, , 202, L67

Mewe, R., Kaastra, J. S., & Liedahl, D. A. 1995, Legacy, 6, 16 (MEKAL)

Mewe, R., Lemen, J. R., & Schrijver, C. J. 1991, , 182, 35

Murray, S. S., et al. 1997, SPIE, 3114, 11

Phillips, K. J. H., Greer, C. J., Bhatia, A. K., Coffey, I. H., Barnsley, R., & Keenan, F. P. 1997, , 324, 381

Phillips, K. J. H., Greer, C. J., Bhatia, A. K., & Keenan, F. P. 1996, , 469, L57

Phillips, K. J. H., Mewe, R., Harra-Murnion, L. K., Kaastra, J. S., Beiersdorf, P., Brown, G. V., & Liedahl, D. A. 1999, A&ASS, 138, 381

Pradhan, A. K. 1982, , 263, 477

Predehl, P., et al. 1997, SPIE, 3113, 172–180

Saba, J. L. R., Schmelz, J. T., Bhatia, A. K., & Strong, K. T. 1999, , 510, 1064

Schrijver, C. J., Mewe, R., van den Oord, G. H. J., & Kaastra, J. S. 1995, , 302, 438

Van Speybroeck, L. P., Jerius, D., Edgar, R. J., Gaetz, T. J., & Zhao, P. 1997, SPIE, 3113, 89

Wolfson, C. J., Doyle, J. G., & Phillips, K. J. H. 1983, , 269, 319
6.65 & & 6.65 & 7.00 & 5.1 & Si & XIII & He4w\
6.74 & & 6.74 & 7.00 & 2.9 & Si & XIII & He6z\
8.42 & & 8.42 & 7.00 & 4.6 & Mg & XII & H1AB\
9.16 & & 9.17 & 6.80 & 6.2 & Mg & XI & He4w\
9.31 & & 9.32 & 6.80 & 3.1 & Mg & XI & He6z\
11.54 & b & 11.55 & 6.60 & 3.5 & Ne & IX & He3A\
... & & 11.53 & 6.85 & & Fe & XVIII & F22\
12.14 & b & 12.13 & 6.80 & 16.8 & Ne & X & H1AB\
... & & 12.12 & 6.75 & & Fe & XVII & 4C\
12.27 & b & 12.26 & 6.75 & 6.6 & Fe & XVII & 4D\
... & & 12.29 & 7.00 & & Fe & XXI & C13\
12.43 & & 12.43 & 6.70 & 3.5 & Ni & XIX & Ne5\
12.84 & b & 12.83 & 7.00 & 4.9 & Fe & XX & N16\
... & & 12.85 & 7.00 & & Fe & XX & N15\
13.46 & & 13.45 & 6.60 & 9.7 & Ne & IX & He4w\
13.53 & b & 13.52 & 6.90 & 11.5 & Fe & XIX & O1-68\
... & & 13.51 & 6.90 & & Fe & XIX & O1-71\
... & & 13.55 & 6.90 & & Ne & IX & He5xy\
13.71 & & 13.70 & 6.60 & 6.6 & Ne & IX & He6z\
13.82 & b & 13.83 & 6.70 & 7.5 & Fe & XVII & 3A\
... & & 13.84 & 6.90 & & Fe & XIX & O1-50\
14.07 & & 14.06 & 6.70 & 4.2 & Ni & XIX & Ne8AB\
14.22 & & 14.21 & 6.80 & 18.0 & Fe & XVIII & F1-56,55\
14.27 & & 14.26 & 6.80 & 5.3 & Fe & XVIII & F1-52,53\
14.38 & b & 14.38 & 6.80 & 6.2 & Fe & XVIII & F12\
... & & 14.36 & 6.80 & & Fe & XVIII & F2-57,58\
14.56 & b & 14.54 & 6.80 & 5.3 & Fe & XVIII & F10\
... & & 14.56 & 6.80 & & Fe & XVIII & F9\
15.02 & & 15.01 & 6.70 & 44.2 & Fe & XVII & 3C\
15.18 & b & 15.18 & 6.60 & 3.4 & O & VIII & H3\
... & & 15.21 & 6.90 & & Fe & XIX & O4\
15.27 & & 15.27 & 6.70 & 16.7 & Fe & XVII & 3D\
15.46 & & 15.46 & 6.70 & 3.1 & Fe & XVII & 3E\
15.64 & & 15.63 & 6.80 & 6.2 & Fe & XVIII & F7\
15.83 & & 15.83 & 6.80 & 4.3 & Fe & XVIII & F6\
15.88 & & 15.87 & 6.80 & 4.4 & Fe & XVIII & F5\
16.02 & b & 16.01 & 6.60 & 14.6 & O & VIII & H2\
... & & 16.00 & 6.80 & & Fe & XVIII & F4\
16.08 & b & 16.08 & 6.80 & 16.0 & Fe & XVIII & F3\
... & & 16.11 & 6.90 & & Fe & XIX & O2\
16.30 & b & 16.34 & 6.70 & 2.2 & Fe & XVII & E2L\
... & & 16.31 & 6.80 & & Fe & XVIII & F3-62\
16.78 & & 16.78 & 6.70 & 27.9 & Fe & XVII & 3F\
17.05 & & 17.06 & 6.70 & 30.5 & Fe & XVII & 3G\
17.10 & & 17.10 & 6.70 & 29.5 & Fe & XVII & M2\
17.62 & & 17.63 & 6.80 & 4.4 & Fe & XVIII & F1\
18.62 & b & 18.63 & 6.80 & 2.0 & Mg & XI & He6z(2)\
... & & 18.63 & 6.30 & & O & VII & He3A\
18.96 & & 18.97 & 6.50 & 28.7 & O & VIII & H1AB\
21.61 & & 21.60 & 6.30 & 6.5 & O & VII & He4w(r)\
21.82 & & 21.80 & 6.30 & 1.1 & O & VII & He5xy(i)\
22.11 & & 22.10 & 6.30 & 4.5 & O & VII & He6z(f)\
24.79 & & 24.78 & 6.30 & 4.4 & N & VII & H1AB\
28.78 & & 28.79 & 6.20 & 1.1 & N & VI & He4w\
29.52 & & 29.53 & 6.20 & 0.9 & N & VI & He6z\
30.02 & & 30.03 & 6.70 & 1.8 & Fe & XVII & 3C(2)\
33.74 & & 33.74 & 6.10 & 4.9 & C & VI & H1AB\
34.10 & & 34.10 & 6.70 & 1.5 & Fe & XVII & 3G(2)\
34.20 & & 34.20 & 6.70 & 1.1 & Fe & XVII & M2(2)\
36.40 & b & 36.37 & 6.70 & 1.4 & Fe & XVII & 4C(3)\
... & & 36.40 & 6.30 & & S & XII & B6A\
37.94 & & 37.95 & 6.50 & 1.1 & O & VIII & H1AB(2)\
44.03 & b & 44.02 & 6.30 & 3.3 & Si & XII & Li6A\
... & & 44.05 & 6.10 & & Mg & X & Li2\
44.16 & & 44.17 & 6.30 & 4.9 & Si & XII & Li6B\
45.03 & & 45.04 & 6.70 & 4.2 & Fe & XVII & 3C(3)\
45.68 & & 45.68 & 6.30 & 1.9 & Si & XII & Li7A\
50.31 & & 50.35 & 6.50 & 5.3 & Fe & XVI & Na6A\
50.55 & b & 50.53 & 6.20 & 2.2 & Si & X & B6A\
... & & 50.56 & 6.50 & & Fe & XVI & Na6B\
51.15 & & 51.17 & 6.70 & 2.7 & Fe & XVII & 3G(3)\
51.27 & & 51.30 & 6.70 & 2.9 & Fe & XVII & M2(3)\
54.12 & & 54.14 & 6.50 & 2.9 & Fe & XVI & Na7B\
54.71 & & 54.73 & 6.50 & 4.4 & Fe & XVI & Na7A\
56.89 & & 56.92 & 6.50 & 1.8 & O & VIII & H1AB(3)\
60.04 & & 60.06 & 6.70 & 1.3 & Fe & XVII & 3C(4)\
62.84 & & 62.88 & 6.50 & 2.0 & Fe & XVI & Na8B\
63.68 & & 63.72 & 6.50 & 2.9 & Fe & XVI & Na8A\
66.37 & & 66.37 & 6.50 & 2.7 & Fe & XVI & Na9A\
68.20 & & 68.22 & 6.70 & 1.0 & Fe & XVII & 3G(4)\
68.40 & & 68.40 & 6.70 & 1.2 & Fe & XVII & M2(4)\
75.06 & & 75.07 & 6.70 & 0.8 & Fe & XVII & 3C(5)\
75.87 & & 75.89 & 6.50 & 0.9 & O & VIII & H1AB(4)\
85.24 & & 85.28 & 6.70 & 0.8 & Fe & XVII & 3G(5)\
85.44 & & 85.50 & 6.70 & 0.6 & Fe & XVII & M2(5)\
90.08 & & 90.08 & 6.70 & 1.0 & Fe & XVII & 3C(6)\
93.91 & & 93.92 & 6.80 & 12.4 & Fe & XVIII & F4A\
94.84 & & 94.87 & 6.50 & 0.4 & O & VIII & H1AB(5)\
101.55 & & 101.55 & 6.90 & 2.5 & Fe & XIX & O6B\
102.30 & & 102.33 & 6.70 & 0.8 & Fe & XVII & 3G(6)\
102.57 & & 102.60 & 6.70 & 0.4 & Fe & XVII & M2(6)\
103.94 & & 103.94 & 6.70 & 4.4 & Fe & XVIII & F4B\
108.35 & & 108.37 & 6.90 & 6.1 & Fe & XIX & O6A\
113.79 & & 113.84 & 6.50 & 0.5 & O & VIII & H1AB(6)\
117.14 & & 117.17 & 7.10 & 1.2 & Fe & XXII & B11\
118.69 & & 118.66 & 7.00 & 1.4 & Fe & XX & N6C\
119.99 & & 120.00 & 6.90 & 1.8 & Fe & XIX & O6D\
121.86 & & 121.83 & 7.00 & 2.0 & Fe & XX & N6B\
128.74 & & 128.74 & 7.00 & 1.6 & Fe & XXI & C6A\
132.86 & b & 132.85 & 7.00 & 4.0 & Fe & XX & N6A\
... & & 132.85 & 7.10 & & Fe & XXIII & Be13A\
150.09 & & 150.10 & 5.50 & 0.5 & O & VI & Li5AB\
171.06 & & 171.08 & 5.80 & 2.2 & Fe & IX & A4\
[^1]: Further information on LETGS components is found in the AXAF Observatory Guide (<http://asc.harvard.edu/udocs/>) and at the Chandra X-ray Center calibration webpages (<http://asc.harvard.edu/cal/>).
|
{
"pile_set_name": "ArXiv"
}
|